
Easier, faster: The next steps for deep learning

If there is one subset of machine learning that spurs the most excitement, that seems most like the intelligence in artificial intelligence, it’s deep learning. Deep learning models, also known as deep neural networks, power complex pattern-recognition systems that provide everything from automated language translation to image identification.


Deep learning holds enormous promise for analyzing unstructured data. There are just three problems: It’s hard to do, it requires large amounts of data, and it uses lots of processing power. Naturally, great minds are at work to overcome these challenges.  


What’s now brewing in this space isn’t just a battle for supremacy between competing deep learning frameworks, such as Google’s TensorFlow and Baidu’s Paddle. Rivalry among software frameworks is a given in almost any corner of IT.


The newest part of the story is about hardware versus software. Will the next big advances in deep learning come by way of dedicated hardware designed for training models and serving predictions? Or will better, smarter, and more efficient algorithms put that power into many more hands without the need for a hardware assist? Finally, will deep learning become accessible to the rest of us, or will we always need computer science PhDs to put this technology to work?


Microsoft Cognitive Toolkit: More tension with TensorFlow

Any time a major technology comes along to show the world a better way, you can count on the biggest names in tech to try to seize a slice of the pie. It happened with NoSQL, with Hadoop, and with Spark, and now it’s happening with deep learning frameworks. Google’s TensorFlow has been promoted as a powerful, general solution, but also as a way to tie deep learning apps to Google’s cloud and to Google’s proprietary hardware acceleration.


Leave it to Microsoft to assume the role of rival. Its pushback against Google on the deep learning front comes in the form of the Cognitive Toolkit, or CNTK for short. The 2.0 release of CNTK challenges TensorFlow on multiple fronts. CNTK now provides a Java API, allowing more direct integration with the likes of the Spark processing framework, and supports code written for the popular neural network library Keras, which has served mainly as a front end for TensorFlow. Thus Keras users can transition gracefully away from Google’s solution and toward Microsoft’s.
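That transition works through Keras’s pluggable backend mechanism: pointing an existing Keras script at CNTK is, in principle, a one-line configuration change rather than a rewrite. A minimal sketch, assuming both Keras (2.0.5 or later) and CNTK 2.0 are installed, and where `train.py` stands in for any unmodified Keras training script:

```shell
# Run an existing Keras script against the CNTK backend instead of
# the default TensorFlow backend, for this invocation only.
# (train.py is a hypothetical, unchanged Keras script.)
KERAS_BACKEND=cntk python train.py
```

The same switch can be made permanent by setting `"backend": "cntk"` in the `~/.keras/keras.json` configuration file, so the model-definition code itself never has to mention which framework is doing the work.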


But Microsoft’s most direct and meaningful challenge to TensorFlow was making CNTK faster and more accurate, and providing Python APIs that expose both low-level and high-level functionality. Microsoft even went so far as to draw up a list of reasons to switch from TensorFlow to CNTK, with those benefits at the top.


Speed and accuracy aren’t just bragging points. If Microsoft’s system is faster than TensorFlow by default, it gives users options beyond simply throwing more hardware at the problem, such as accelerating TensorFlow with Google’s custom (and proprietary) TPU processors. It also means third-party projects that interface with both TensorFlow and CNTK, such as Spark, will gain a boost. TensorFlow and Spark already work together, courtesy of Yahoo, but if CNTK and Spark offer more payoff for less work, CNTK becomes an appealing option in all the places Spark has already conquered.
