PyTorch Autograd Explained - In-depth Tutorial
Elliot Waite
In this PyTorch tutorial, I explain how the PyTorch autograd system works by going through some examples and visualizing the graphs with diagrams. As you perform operations on PyTorch tensors that have requires_grad=True, you build up an autograd backward graph. Then when you call the backward() method on one of the output tensors, the backward graph gets traversed, starting at the node that the output tensor's grad_fn attribute points to, and moving backwards from there, accumulating gradients until the leaf nodes are reached. The final gradients are stored on the grad attribute of the leaf tensors.
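The traversal described above can be sketched in plain Python. This is not PyTorch's actual implementation, just a toy illustration of the idea: each operation records a grad_fn node, and backward() walks those nodes, accumulating gradients into the grad attribute of the leaf tensors. All class and function names here are made up for the sketch.

```python
# Toy sketch of the autograd backward graph (NOT PyTorch's real internals).

class Tensor:
    def __init__(self, value, requires_grad=False):
        self.value = value
        self.requires_grad = requires_grad
        self.grad = 0.0       # gradients accumulate here on leaf tensors
        self.grad_fn = None   # backward-graph node that produced this tensor

def mul(a, b):
    """Multiply two tensors and record a backward-graph node."""
    out = Tensor(a.value * b.value, requires_grad=True)

    def backward_node(grad_out):
        # Local derivatives: d(a*b)/da = b, d(a*b)/db = a.
        for t, local_grad in ((a, b.value), (b, a.value)):
            if t.grad_fn is not None:
                t.grad_fn(grad_out * local_grad)   # keep traversing backwards
            elif t.requires_grad:
                t.grad += grad_out * local_grad    # leaf reached: accumulate

    out.grad_fn = backward_node
    return out

def backward(out):
    # Seed the traversal with d(out)/d(out) = 1, like out.backward() in PyTorch.
    out.grad_fn(1.0)

a = Tensor(2.0, requires_grad=True)
b = Tensor(3.0, requires_grad=True)
c = mul(a, b)
backward(c)
print(a.grad, b.grad)  # 3.0 2.0
```

In real PyTorch the same example would be three lines: create a and b with requires_grad=True, compute c = a * b, and call c.backward(); the gradients then appear in a.grad and b.grad.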
This is my first PyTorch tutorial video. If you'd like to see more PyTorch related videos, let me know in the comments. And if you have anything specific about PyTorch that you would like me to make videos about, let me know.
The draw.io flowcharts shown in the video: https://drive.google.com/file/d/1bq3akhmA5DGRCiFYJfNPSn7il2wvCkEY/view?usp=sharing (Note: There are tabs along the bottom of the draw.io page for all the different graphs shown in the video.)
Join our Discord community: https://discord.gg/cdQhRgw
Connect with me:
Twitter - https://twitter.com/elliotwaite
Instagram - https://www.instagram.com/elliotwaite
Facebook - https://www.facebook.com/elliotwaite
LinkedIn - https://www.linkedin.com/in/elliotwaite
Music: ksolis - Nobody Else (https://youtu.be/RiiSXmH509c) ... https://www.youtube.com/watch?v=MswxJw-8PvE