CUDA graph trees are the internal implementation of CUDA graphs used in PT2 when you say mode="reduce-overhead". Their primary innovation is that they allow memory to be reused across multiple CUDA graphs, as long as the graphs form a tree of potential execution paths. This greatly reduces the memory usage of CUDA graphs in PT2. There are some operational implications to using CUDA graphs, which are described in the podcast.
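A minimal usage sketch of the mode the episode discusses, assuming a CUDA-capable machine; the model and tensor shapes are illustrative, not from the episode.

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 128)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().cuda()

# mode="reduce-overhead" asks PT2 to record and replay CUDA graphs
# (implemented internally via CUDA graph trees), amortizing kernel
# launch overhead across iterations.
compiled = torch.compile(model, mode="reduce-overhead")

x = torch.randn(32, 128, device="cuda")
for _ in range(3):
    # Early calls compile and warm up; subsequent calls replay the
    # recorded CUDA graph.
    out = compiled(x)
```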