Technology and the future move in parallel: as we advance, new hardware and new avenues of technological exploration will continue to unfold.
Our future looks bright in terms of new hardware. At its I/O conference in Mountain View, California, in May 2016, Google revealed its latest custom chip, the Tensor Processing Unit (TPU).
The TPU pushes past current hardware limitations and promises a major boost to automation and machine-learning capabilities. Below, we look in more detail at Google's plans for future technological advancement.
The TPU is only one component in Google's broader push to dominate machine learning, the technology that powers voice recognition, chatbots, and various other Google services.
This news was disclosed at Google's I/O conference, where the company also discussed TensorFlow, its open-source library for building machine-learning software.
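To make "machine-learning software" concrete: the models TensorFlow describes are largely built from tensor operations such as matrix multiplies, which are exactly the workload the TPU accelerates. Here is a minimal sketch in plain NumPy of one such operation, a single fully connected neural-network layer (the function name and toy sizes are illustrative, not part of any Google API):

```python
import numpy as np

def dense_layer(x, weights, bias):
    """One fully connected layer: y = relu(x @ W + b).

    The matrix multiply x @ W is the kind of tensor operation that
    machine-learning frameworks like TensorFlow express, and that
    accelerator chips are designed to speed up.
    """
    return np.maximum(x @ weights + bias, 0.0)  # ReLU activation

# Toy example: a batch of 2 inputs with 4 features, mapped to 3 outputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
w = rng.standard_normal((4, 3))
b = np.zeros(3)

y = dense_layer(x, w, b)
print(y.shape)  # (2, 3)
```

A full network is essentially many such layers chained together, which is why a chip optimized for dense matrix arithmetic pays off across so many models.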
It is undebatable that current-generation processors will keep improving in speed and versatility, but until now there had been no breakthrough in processing units with the potential to change the face of computing.
Unlike a conventional processing unit, the TPU is built for one purpose alone: to squeeze the highest possible ratio of processing speed to power out of every inch of silicon.
As numerous experiments have shown, a device's processing speed is tightly bound to its power budget. The TPU manages to perform immensely fast within the limited power budgets available to today's hardware.
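One way the TPU squeezes more work out of each watt is by doing arithmetic at reduced precision: Google has described the original TPU as operating on 8-bit integers rather than 32-bit floats, since narrower numbers need less memory, less silicon, and less power per operation. A minimal NumPy sketch of that idea (the function names and single-scale scheme here are illustrative, not Google's actual implementation):

```python
import numpy as np

def quantize_int8(x):
    """Map float values into int8 [-127, 127] using a single scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the 8-bit representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 3.0, -0.1], dtype=np.float32)
q, scale = quantize_int8(weights)

# int8 storage uses a quarter of the memory of float32, and integer
# multiply-accumulate units need far less silicon and energy, at the
# cost of a small rounding error per value.
restored = dequantize(q, scale)
print(np.max(np.abs(weights - restored)))
```

The rounding error is bounded by about half the scale factor per value, which neural networks tolerate well in practice; this is the trade that buys the performance-per-watt gains described above.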
To dig deeper into Google's new chip and understand exactly how powerful it really is, we interviewed Google hardware engineer Norm Jouppi about the true potential of this processing marvel. He was cautious about discussing the TPU's specifications, but the conversation is enough to give an initial sense of what the chip can do.
Interviewer: Tell us about this processing unit.
Norm Jouppi: The Tensor Processing Unit (TPU) is a state-of-the-art custom accelerator chip built specifically to run machine-learning algorithms.
The chip delivers extreme processing speed while keeping power efficiency in mind when executing our TensorFlow software. Software can only be as good as the hardware it runs on, so we had to customize the hardware for it.
Interviewer: What’s the main difference between the TPU and current-gen processors?
Norm Jouppi: The TPU is developed specifically to handle machine learning expressed in TensorFlow. To put it in context: running machine learning on conventional CPUs and GPUs consumes a great deal of power.
Power being the key variable here is what urged Google to create a customized processor for machine learning.
Interviewer: Does the TPU operate any differently than general CPUs?
Norm Jouppi: Yes, the TPU is all about efficiency. It uses only as many bits of precision as a computation requires, and it is very economical in how much silicon it devotes to processing, which ultimately saves power.
Interviewer: Can you convey how fast the TPU is by giving a real-life example of its processing capabilities?
Norm Jouppi: Setting specifications aside, I can share a real, tested example from our servers. We used servers equipped with TPUs to apply machine learning to Google Maps and navigation, improving the accuracy of recognizing images and text.
Using an array of TPUs, every piece of street text, along with its coordinates, was processed within five days. We could not achieve the same speed using conventional processing units.
Interviewer: Is it possible that technology like the TPU could appear in our everyday devices anytime soon?
Norm Jouppi: At this moment, the TPU is made to handle Google's data-center servers. We can't say anything about ordinary devices, but with TensorFlow open source, we hope to announce our plans in the times to come.