There were a lot of interesting announcements coming out of Google I/O today, particularly if you're an Android fan. But perhaps the most interesting initiative is something the vast majority of Google's users will never touch directly: its big push to speed up machine learning tasks with improved TensorFlow performance.
The announcement included new hardware (which you can only use by tapping Google's cloud services): the second generation of what Google has dubbed its Tensor Processing Unit, a custom accelerator chip that delivers up to 180 teraflops of floating-point performance. If you're in a real hurry (and who isn't?), Google boasted that it will offer pods combining the computational power of 64 TPUs, letting you tap up to 11.5 petaflops of computation for a given machine learning task.
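The pod math checks out, taking the per-chip and per-pod figures at the values Google quoted. A quick sanity check:

```python
# Figures as quoted in Google's announcement.
TFLOPS_PER_TPU = 180   # second-gen TPU, peak floating-point throughput
TPUS_PER_POD = 64      # TPUs combined in one pod

pod_tflops = TFLOPS_PER_TPU * TPUS_PER_POD
pod_pflops = pod_tflops / 1000  # 1 petaflop = 1,000 teraflops

print(f"{pod_tflops} teraflops is about {pod_pflops} petaflops")
# 11520 teraflops is about 11.52 petaflops
```

That 11.52 petaflops rounds to the 11.5 Google cited, so the pod figure is just 64 chips at full advertised throughput, with no overhead counted.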
How fast is that? The Verge reports that Google was happy to brag:
The edge this gives Google over competitors’ offerings is the speed and freedom to experiment, says Jeff Dean, a senior fellow on the Google Brain team. “Our new large-scale translation model takes a full day to train on 32 of the world's best commercially available GPU's,” Dean told a group of reporters in a press briefing this week. “While one eighth of a TPU pod can do the job in an afternoon.”
Google is giving away TensorFlow, its highly regarded open-source machine learning toolkit, in the hopes that its custom-made hardware and cloud services will keep customers coming back again and again for more.
There's an interesting machine learning land grab going on. I've been playing around with a variety of the offerings from Google, Microsoft, Amazon, and others, and the rate at which they keep improving, and how much easier they become to tap into, is pretty incredible. I'd love to hear whether you've found ways to integrate ML into your business, and whether you see opportunities to drive your company forward with the latest tools.