You may have read about some of the recent impressive advances in deep learning applications, and are now eager to build your own neural networks.
But which of the many libraries should you use? In this article, we present PyTorch and TensorFlow, the two most commonly used frameworks for deep learning.
Read on to find out which one offers the best conditions for you to realize your projects.
Recap: Defining Deep Learning
Let’s briefly recall what we mean by the term deep learning. It refers to a group of algorithms from the large family of machine-learning models, all built on artificial neural networks. The “deep” in the name refers to the many stacked layers of interconnected nodes that make up these networks and serve as the building blocks for deep-learning models.
The recent advances in computational resources and access to large data collections have facilitated a surge in deep-learning applications in many areas, from machine translation to automated driving. If you have a complex task that conventional machine-learning algorithms find hard to solve, chances are that a neural network will improve the performance — provided that you have the data to train it.
PyTorch vs. TensorFlow: The Key Facts
PyTorch and TensorFlow lead the list of the most popular frameworks in deep learning. Let’s look at some key facts about the two libraries. PyTorch was released in 2016 by Facebook’s AI Research lab. As the name implies, it is primarily meant to be used in Python, but it has a C++ interface, too. Python enthusiasts love it for its imperative programming style, which they see as more “pythonic” than that of other, more declarative frameworks. Given its pythonic nature, PyTorch fits smoothly into the Python machine learning ecosystem.
TensorFlow, on the other hand, has interfaces in many programming languages. But the high-level Keras API for TensorFlow in Python has proven so successful with deep learning practitioners that the newest TensorFlow version integrates it by default.
The Keras interface offers ready-made building blocks which significantly improve the speed at which even newcomers can implement deep-learning architectures. TensorFlow, which is named after the multidimensional arrays, called tensors, that “flow” through a neural network, was developed at Google Brain and has been around a little bit longer than PyTorch, since 2015.
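To illustrate those ready-made building blocks, here is a minimal sketch of a small image classifier assembled from standard Keras layers. The input shape (28×28 grayscale images) and the number of classes (10) are assumptions chosen for illustration, as in the classic MNIST setting:

```python
import tensorflow as tf

# A small classifier built entirely from Keras's ready-made layers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # e.g. 28x28 grayscale images
    tf.keras.layers.Flatten(),                        # flatten to a 784-vector
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A handful of lines yields a complete, trainable network; calling `model.fit(...)` on labeled data is all that remains.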
A Deeper Dive: The Main Differences
The most obvious difference between PyTorch and TensorFlow lies in their definition of graphs. In classic TensorFlow, a graph is defined statically, meaning that you outline its entire structure — the layers and connections, and what kind of data gets processed where — before running it.
The graph is embedded in a TensorFlow session, through which the user communicates with the network. Most notably, the graph cannot be modified after compilation. This is how you would calculate the sum of two tensors in TensorFlow up until recently:
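A minimal sketch of the old session-based style, written against the TF1 compatibility API so it also runs under TensorFlow 2.x:

```python
# Static-graph style, as in classic TensorFlow 1.x.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build the graph: nothing is computed yet, these are just graph nodes.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
total = a + b

# Only inside a session does the graph actually run.
with tf.Session() as sess:
    result = sess.run(total)
print(result)  # [4. 6.]
```

Note the two distinct phases: first defining the graph, then executing it inside a session.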
In contrast, a graph in PyTorch is defined dynamically, meaning that the graph and its input can be modified during runtime. This is referred to as eager execution. It offers the programmer better access to the inner workings of the network than a static graph does, which considerably eases the process of debugging the code. So how would we compute our sum of two tensors in PyTorch?
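In PyTorch, the same computation is a single eager step, with no graph-compilation or session phase:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
total = a + b  # evaluated immediately -- no graph compilation, no session
print(total)   # tensor([4., 6.])
```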
However, the developers of the two libraries have continually been integrating popular features from their competitors, resulting in a process of gradual convergence. As a result, TensorFlow 2.0, which was released in October 2019, uses eager execution by default, too.
Our tensor-sum computation in TensorFlow 2.0 therefore looks very similar to the PyTorch implementation. Since the static graph architecture could present a conceptual challenge for beginners, the new execution model is one of the features that considerably improve access to TensorFlow.
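In TensorFlow 2.x, the sum of two tensors is computed eagerly, with no session in sight:

```python
import tensorflow as tf  # TensorFlow 2.x

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
total = a + b  # executed eagerly, just like in PyTorch
print(total.numpy())  # [4. 6.]
```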
Given that we conceptualize a neural network as a graph with nodes and edges, wouldn’t it be nice to actually look at these connections? This is the purpose of TensorBoard, TensorFlow’s visualization feature.
In addition to drawing the computational graph, it allows you to observe the behavior of your training parameters over time, by logging so-called summaries of the network at predefined intervals. This makes TensorBoard a valuable device for debugging.
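A minimal sketch of how such summaries are logged with the `tf.summary` API; the log-directory name and the synthetic loss values are placeholders for illustration, not part of a real training run:

```python
import tensorflow as tf

# Log a scalar "loss" summary at every step; the log directory name
# is arbitrary and chosen here for illustration.
writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for step in range(100):
        loss = 1.0 / (step + 1)  # stand-in for a real training loss
        tf.summary.scalar("loss", loss, step=step)
    writer.flush()

# Inspect the logged curve with: tensorboard --logdir logs/demo
```

TensorBoard then renders the logged scalars as interactive curves in the browser.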
PyTorch, on the other hand, doesn’t come with a native visualization feature. Instead, it relies on regular Python packages like matplotlib or seaborn for plotting the behavior of certain functions. Graph-visualization packages for PyTorch (e.g. Visdom) are available, too, but they do not offer the same versatility as TensorBoard.
Let’s say you have successfully trained your neural network. How do you make it available to other people? The migration of your model to production is referred to as deployment. For years, TensorFlow has been clearly superior in this regard, as it offers native systems for deploying your models.
TensorFlow Serving makes it easy to offer and update your trained models on the server-side. TensorFlow Lite, on the other hand, allows you to compress your trained model so that it can be used on mobile devices.
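To illustrate the mobile path, here is a minimal sketch of converting a model to the TensorFlow Lite format. The tiny untrained model is a placeholder standing in for your real trained network:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this would be your trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the model to TensorFlow Lite's compact flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The resulting file can be bundled with a mobile app and run on-device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```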
Until recently, PyTorch did not have a comparable feature. However, in March 2020 Facebook announced the release of TorchServe, a PyTorch model serving library.
We have seen that PyTorch and TensorFlow are steadily erasing most of the differences between them by integrating popular features from the competing framework. When it comes to the libraries’ user bases, however, the divide between the two is still very much alive.
PyTorch has long been the preferred deep-learning library for researchers, while TensorFlow is much more widely used in production.
PyTorch’s ease of use, combined with its default eager execution mode for easier debugging, makes it a natural fit for fast prototyping and smaller-scale models. TensorFlow’s extensions for deployment on both servers and mobile devices, combined with the lack of Python overhead, make it the preferred option for companies that run deep-learning models in production.
In addition, the TensorBoard visualization feature offers a nice way of showing the inner workings of your model to, say, your customers.
So… Which Library Should You Use?
So, for your first deep-learning model, should you use PyTorch or TensorFlow? There is no definitive answer to this question. As a rule of thumb, if you are already a Python programmer, want to build a model just for yourself, or for use in research, you can follow the general recommendation and use PyTorch.
On the other hand, if you are planning to use your model in production, you should give TensorFlow a shot. At the moment, nobody can say whether this divide will be upheld, or whether one of the two frameworks will triumph over the other (there are widely contrasting predictions regarding this question).
In either case, we encourage you to try and understand as much as possible about your neural networks regardless of which framework you choose. In the end, it will just be a tool to help you build your models. Happy learning!
Both TensorFlow and PyTorch have their advantages as starting platforms to get into neural network programming. Traditionally, researchers and Python enthusiasts have preferred PyTorch, while TensorFlow has long been the favored option for building large-scale deep learning models for use in production.
However, the latest releases have seen the two libraries converge towards a more similar profile. As long as you stick to either TensorFlow or PyTorch as your deep learning framework, you can hardly go wrong.
Looking to get started with deep learning? Sign up for our AI Nanodegree.