TensorFlow and PyTorch: Which is the best development framework?

Published by DIY MAKER on 03/07/2020
The Gradient recently published a blog post charting the rise and adoption of PyTorch in academia (based on the number of papers published at CVPR, ICLR, ICML, NIPS, ACL, ICCV, and other venues). The data shows that PyTorch was clearly in the minority in 2018; by 2019, it had been widely embraced by academic researchers.
TensorFlow and PyTorch: History of development
Both libraries are open source and carry licenses suitable for commercial projects.
TensorFlow was first released by the Google Brain team in 2015 and is currently used at Google for both research and production purposes.
PyTorch, on the other hand, was originally developed by Facebook on top of the popular Torch framework, initially as a GPU-accelerated alternative to NumPy. In early 2018, Caffe2 (Convolutional Architecture for Fast Feature Embedding) was merged into PyTorch, effectively extending PyTorch's focus from research prototyping toward production deep learning. PyTorch is one of the newer deep learning frameworks and is popular for its simplicity and ease of use, as well as for its dynamic computation graphs and efficient memory usage. Dynamic graphs are ideal for certain use cases, such as processing text. PyTorch is easy to learn and easy to code with.
TensorFlow and PyTorch: growing in popularity
TensorFlow is widely used and enjoys strong community and forum support. The TensorFlow team has also released TensorFlow Lite, which runs on mobile devices.
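As a minimal sketch (assuming TensorFlow 2.x and a hypothetical SavedModel directory named "my_model"), converting a trained model into the TensorFlow Lite format looks like this:

```python
import tensorflow as tf

# Convert a trained model (exported as a SavedModel) into the TensorFlow Lite
# flatbuffer format used on mobile and embedded devices. "my_model" is a
# placeholder directory name for illustration.
converter = tf.lite.TFLiteConverter.from_saved_model("my_model")
tflite_model = converter.convert()

# The resulting .tflite file is what ships inside an Android or iOS app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```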
To speed TensorFlow up further, you can use dedicated hardware such as the Tensor Processing Unit (TPU), accessible on Google Cloud Platform, and the Edge TPU, an ASIC chip designed to run TensorFlow Lite models on edge devices.
TensorFlow is Google's open-source machine learning library and is currently the most popular AI library. It is valued for its distributed training support, scalable production deployment options, and support for a variety of devices (such as Android). One of TensorFlow's best features is TensorBoard visualization. During training you usually need multiple runs to tune hyperparameters or identify potential data problems, and TensorBoard makes it easy to view training progress and spot issues.
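For illustration, here is a minimal sketch of hooking TensorBoard into a Keras training run (the toy model and random data are placeholders):

```python
import numpy as np
import tensorflow as tf

# A toy regression model on random data, purely to illustrate TensorBoard logging.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The TensorBoard callback writes losses, metrics, and the graph to log_dir.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")

x, y = np.random.rand(256, 10), np.random.rand(256, 1)
model.fit(x, y, epochs=5, callbacks=[tensorboard_cb])
```

Training curves can then be compared across runs by launching `tensorboard --logdir logs`.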
PyTorch, TensorFlow's challenger, feels familiar to most Python developers.
PyTorch is native to Python and integrates easily with other Python packages, which makes it a natural choice for researchers. Many researchers use PyTorch because its API is intuitive and easy to learn, letting you experiment quickly without constantly consulting the documentation.
It can also serve as a replacement for NumPy, the industry-standard general-purpose array processing package. Since PyTorch and NumPy have very similar interfaces, Python developers adapt to it easily.
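A small sketch of how closely the two interfaces mirror each other:

```python
import numpy as np
import torch

# The same reduction, written in each library's idiom.
a_np = np.ones((2, 3))
a_pt = torch.ones(2, 3)
print(a_np.sum(axis=0))   # NumPy
print(a_pt.sum(dim=0))    # the PyTorch equivalent

# Conversion in both directions; on CPU these share the same underlying memory.
from_np = torch.from_numpy(a_np)
back_to_np = a_pt.numpy()

# Unlike NumPy arrays, tensors can move to a GPU when one is available.
if torch.cuda.is_available():
    a_pt = a_pt.to("cuda")
```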
TensorFlow and PyTorch: technical differences
1) Dynamic computation graphs
The real highlight of PyTorch is that it uses dynamic rather than static (as in TensorFlow) computation graphs. Deep learning frameworks use computation graphs to define the order of computations performed in a neural network.
A static graph must be compiled before the model can be used for testing, which is incredibly tedious and ill-suited to rapid prototyping. With TensorFlow, for example, the entire computation graph must be defined before the model can be run.
But with PyTorch, graphs can be defined and manipulated dynamically. This greatly improves developer productivity and is especially useful for variable-length inputs in recurrent neural networks (RNNs); the sketch below shows the idea. For its part, TensorFlow added support for dynamic computation graphs with the TensorFlow Fold library, released in 2017 (and in TensorFlow 2.x, eager execution makes dynamic behavior the default).
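A minimal sketch of the idea: because PyTorch builds the graph as the code runs, ordinary Python control flow (here, a loop whose length depends on the input) becomes part of the model.

```python
import torch

# An RNN-style loop over a variable-length sequence. No sequence length is
# fixed in advance; the graph is rebuilt on every call.
def forward(seq, w):
    h = torch.zeros(1)
    for x in seq:                      # loop length comes from the input itself
        h = torch.tanh(w * x + h)
    return h

w = torch.randn(1, requires_grad=True)
out = forward(torch.randn(7), w)       # builds a 7-step graph on the fly
out.sum().backward()                   # gradients flow through exactly those steps
print(w.grad)
```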
2) Saving and loading models
Both libraries handle saving and loading models well. PyTorch has a simple API that saves a model's weights for easy replication.
TensorFlow also handles save/load well. The entire model can be saved as a protocol buffer, including parameters and operations. This makes it possible to save a model in one language and load it in another (such as C++ or Java), which is critical for deployment stacks where Python is not an option.
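A minimal sketch of both patterns, assuming PyTorch and TensorFlow 2.x (the tiny models and file names are placeholders):

```python
import tensorflow as tf
import torch
import torch.nn as nn

# PyTorch: persist only the weights via the state dict, then restore them
# into a model with the same architecture.
net = nn.Linear(10, 1)
torch.save(net.state_dict(), "weights.pt")
net.load_state_dict(torch.load("weights.pt"))

# TensorFlow: the SavedModel format bundles parameters *and* operations,
# which is what allows reloading from another language or from TF Serving.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.save("exported_model")
reloaded = tf.keras.models.load_model("exported_model")
```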
3) Deployment method
The traditional interface to an AI/ML model is a REST API. For most Python models, a simple Flask server is set up to provide convenient access.
Both libraries can easily be wrapped in a Flask server; a sketch follows below. For mobile and embedded deployments, however, TensorFlow is by far the better option: with tools such as TensorFlow Lite, it integrates easily into Android and even iOS applications.
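As a sketch, a minimal Flask wrapper around a model might look like this (the SavedModel directory is the hypothetical export from the previous section; any object with a .predict() method would do):

```python
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("exported_model")  # hypothetical export

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[0.1, 0.2, ...]]}.
    features = np.array(request.get_json()["features"])
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```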
TensorFlow Serving is another great feature. Models become stale over time and must be retrained on new data; TensorFlow Serving lets you swap in a new model without shutting down the entire service.
So what are the latest developments in TensorFlow and PyTorch?
TensorFlow 2.1.0: Coming to Windows?
In the latest version of TensorFlow, the tensorflow pip package includes GPU support for Linux and Windows by default (the same as tensorflow-gpu), and it runs on machines with or without an NVIDIA GPU. tensorflow-gpu is still available, and users who care about package size can download the CPU-only tensorflow-cpu package instead.
To take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag, the officially released tensorflow pip packages are now built with Visual Studio 2019 version 16.4. Whether TensorFlow becomes popular on Windows remains to be seen.
An AI chip designed to run TensorFlow Lite at the edge
The Edge TPU is an ASIC chip designed specifically to run TensorFlow Lite ML models at the edge. It serves a growing number of industrial use cases, such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition, across fields including manufacturing, on-premises deployment, healthcare, retail, smart spaces, and transportation. Small and energy-efficient yet high-performing, it makes it possible to deploy high-accuracy AI at the edge.
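On the software side, running a model on the Edge TPU is a small variation on ordinary TensorFlow Lite inference. A sketch, assuming the tflite_runtime package is installed and the model file has been compiled for the Edge TPU ("model_edgetpu.tflite" is a placeholder name):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU; the delegate routes supported
# operations to the accelerator.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the right shape and dtype, then run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```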
Model Play: the first global AI model platform based on the Google Edge TPU chip
Model Play is an AI model sharing and trading platform for users worldwide. It provides a rich and diverse set of ready-made models for machine learning and deep learning, and it supports mainstream smart terminal hardware such as Titanium (TiOrb) AIX, helping users quickly create and deploy models, significantly improving model development and application efficiency, and lowering the barrier to AI development.
The AI models on the Model Play platform are compatible with the mainstream edge-computing AI chips on the market, including the Google Coral Edge TPU, Intel Movidius, and NVIDIA Jetson Nano. For the Google Coral Edge TPU in particular, a downloaded AI model can run directly on TiOrb AIX.
The Titanium AI Market, launched by Gravitylink, a global distribution partner for the Google Edge TPU, is also live. It is a global marketplace for AI algorithms and solutions that aims to connect outstanding AI service providers and customers around the world more efficiently and to accelerate the real-world adoption of AI across industries.
