New GPU-Acceleration for PyTorch on M1 Macs! + using with BERT
James Briggs
GPU-acceleration on Mac is finally here!
Today's deep learning models owe a great deal of their exponential performance gains to ever-increasing model sizes. Those larger models require more computation to train and run.
These models are simply too big to run on CPU hardware, which works through computations in large sequential steps. Instead, they need massively parallel computation. That leaves us with either GPU or TPU hardware.
Our home PCs aren't coming with TPUs anytime soon, so we're left with the GPU option. GPUs use a highly parallel structure, originally designed to process images for visually intensive workloads, and they became essential components in gaming for rendering real-time 3D graphics.
GPUs are essential for the scale of today's models. Using CPUs makes many of these models too slow to be useful, which can make deep learning on M1 machines rather disappointing.
Fortunately, this is changing: PyTorch v1.12 brings GPU support to M1 machines through Apple's Metal Performance Shaders (MPS) backend. In this video we explain the new integration and how to implement it yourself.
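As a quick preview of what the video walks through: the Apple-silicon GPU is exposed as the "mps" device, and selecting it looks just like selecting a CUDA device. The snippet below is a minimal sketch, not the exact notebook code (that is linked further down):

```python
import torch

# MPS requires PyTorch 1.12+, macOS 12.3+, and an ARM-native (not Rosetta) Python build
print(torch.backends.mps.is_built())      # PyTorch compiled with MPS support?
print(torch.backends.mps.is_available())  # MPS device usable on this machine?

# Pick the Apple-silicon GPU if available, otherwise fall back to CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tensors move to the GPU exactly as they would with CUDA
x = torch.rand(3, 4).to(device)
y = torch.rand(4, 3, device=device)
print((x @ y).device)  # mps:0 on a supported M1 Mac
```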
Article: https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1
Friend Link (free access): https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1?sk=a88acd35f600858093c177b97d690b03
Code notebooks: https://github.com/jamescalam/pytorch-mps
70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates: https://jamescalam.medium.com/subscribe | https://medium.com/@jamescalam/membership
Discord: https://discord.gg/c5QtDB9RAP
00:00 Intro
01:34 PyTorch MPS
04:57 Installing ARM Python
09:09 Using PyTorch with GPU
12:14 BERT on PyTorch GPU (see the sketch below)
13:51 Best way to train LLMs on Mac
16:01 Buffer Size Bug
17:24 When we would use Mac M1 GPU ...
https://www.youtube.com/watch?v=uYas6ysyjgY
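For the "BERT on PyTorch GPU" chapter, here is a minimal sketch of running a Hugging Face BERT model on the MPS device. The checkpoint name and input text are placeholders; the exact code from the video is in the notebooks linked above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# "bert-base-uncased" is a placeholder checkpoint; the notebook may use another model
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").to(device)

# Tokenize some text and move the input tensors onto the same device as the model
inputs = tokenizer("GPU acceleration comes to PyTorch on M1 Macs", return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```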