Google claims its Tensor Processing Unit is 15 to 30 times faster than GPUs and CPUs

10 Apr 2017


In a paper published Wednesday, Google laid out the performance gains its Tensor Processing Unit (TPU) demonstrated over CPUs and GPUs, both in terms of raw speed and performance per watt of power consumed.

According to Google, TPUs ran the tested machine learning inference tasks 15 to 30 times faster on average than a comparable server-class Intel Haswell CPU or Nvidia K80 GPU.

The TPU's performance per watt was also 25 to 80 times better than what Google measured for the CPU and GPU.
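Performance per watt here is simply throughput, such as inferences per second, divided by power draw. As a rough illustration of how such a comparison is derived, the short Python sketch below uses hypothetical placeholder figures (not numbers from Google's paper) to compute a raw speedup ratio and a performance-per-watt ratio between two chips.

```python
# Illustrative sketch only: the chip figures below are hypothetical
# placeholders, not measurements from Google's TPU paper.

def perf_per_watt(inferences_per_second: float, watts: float) -> float:
    """Throughput delivered per watt of power consumed."""
    return inferences_per_second / watts

# Hypothetical accelerator vs. hypothetical CPU
cpu_throughput, cpu_power = 5_000.0, 150.0    # inferences/sec, watts
tpu_throughput, tpu_power = 100_000.0, 75.0   # inferences/sec, watts

speedup = tpu_throughput / cpu_throughput
efficiency_gain = (perf_per_watt(tpu_throughput, tpu_power)
                   / perf_per_watt(cpu_throughput, cpu_power))

print(f"Raw speedup: {speedup:.0f}x")                  # 20x with these made-up numbers
print(f"Perf/watt advantage: {efficiency_gain:.0f}x")  # 40x with these made-up numbers
```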

According to experts, this sort of performance increase is important for Google's efforts to build machine learning applications. They add that the results validate the company's focus on custom machine learning hardware at a time when performance gains from conventional silicon are becoming harder and harder to achieve.

Google has been using TPUs in its data centres since 2015 to improve the performance of applications including translation and image recognition. The chips are also seen as holding much promise for energy efficiency, an important metric in the cost of running hardware at massive scale.

Google said it started looking into how it could use GPUs, field-programmable gate arrays (FPGAs) and custom application-specific integrated circuits (ASICs), which is essentially what the TPU is, in its data centres back in 2006.

At the time, though, there were few applications that needed such special hardware, because most heavy workloads could simply make use of the excess capacity already available in the data centre.

"The conversation changed in 2013 when we projected that DNNs (deep neural networks) could become so popular that they might double computation demands on our data centers, which would be very expensive to satisfy with conventional CPUs," the authors of Google's paper write.

"Thus, we started a high-priority project to quickly produce a custom ASIC for inference (and bought off-the-shelf GPUs for training)." The goal here, Google's researchers say, "was to improve cost-performance by 10x over GPUs."
