What is the future chip that Apple, Qualcomm, and Google are all betting on? | Hard Philosophy

During my time with the Pixel 6 Pro, apart from taking pictures, I rarely felt what made it "the smartest Pixel phone," as Google calls it, until one morning when my alarm woke me up.

Unlike a typical phone, where you swipe to dismiss the alarm, the Pixel 6 Pro prompts me to say "Snooze" or "Stop" to control it. When I tentatively said "Stop," the blaring phone fell silent.

It's a trivial little feature, but it puts me in a good mood every morning I wake to the alarm.

At last I no longer have to fight off drowsiness while fumbling around for my phone; a single sentence silences the infernal ringing. For the first time, I felt that a phone could "understand" me.

The secret to this "understanding" is hidden in the unassuming TPU.

Ubiquitous AI Computing

On a smartphone SoC, the NPU has always had a lower profile than the CPU and GPU.

This processor dedicated to neural-network workloads does not even have a unified name: Huawei's Kirin chips call it an NPU, Apple's A-series Bionic chips a Neural Engine; Google names it the TPU, and MediaTek, reasoning that it handles AI computation, calls it an APU…

Although these chips go by different names and differ in architecture and design, their purpose is largely the same: to accelerate machine learning and boost the phone's AI computing power.

If you follow phone processors closely, you will notice that whether on the iPhone's A-series chips or Android's flagship Snapdragons, CPU performance gains have been very limited over the past two years, and the "toothpaste-squeezing" pace of incremental improvement has grown increasingly obvious.

In contrast, AI computing power has become a metric more and more manufacturers are eager to talk about. Take the A-series chips: Apple's A14 Bionic nearly doubled peak AI throughput over its predecessor, reaching 11 trillion operations per second.

A year later, the A15 Bionic improved on that by more than 40%, reaching 15.8 trillion operations per second.

The Android camp's progress in AI compute is equally impressive. On the AI Benchmark rankings published by ETH Zurich, the Kirin 970, the first chip to ship with an NPU, scored 23,600 points. Four years later, Google's Tensor chip topped the list with 214,700, while the Kirin 9000 and Snapdragon 888 also reached around 160,000.

If AI compute is growing almost exponentially, why do we barely notice any change? Is the slightly lofty-sounding term "AI" simply too far removed from everyday life?

▲ Picture from: Gadgetmatch

In fact, every time you unlock your phone, wake the voice assistant, or even press the shutter, you come into close contact with AI computation.

The NPU is like a black box that makes AI computation all but invisible: you don't see the technology, yet you are surrounded by more natural human-computer interaction. The evolution of Google Assistant is a good example.

Since Siri gained the "Hey Siri" voice-activation feature in 2014, wake words have been all but inseparable from voice assistants. Every time we talk to one, we must dutifully call its name first: Siri, Xiao Ai, Xiao Bu, Xiaoyi… In a noisy environment, this awkward ritual may have to be repeated several times.

▲ A voiceprint used to recognize the wake word. Picture from: Apple

This is because, for power-consumption reasons, the phone's main processor cannot afford to spend compute parsing the user's every sentence in the background around the clock. Instead, a low-power audio receiver that recognizes only the wake word stays on permanently.

Once it detects the wake word, it rouses the main processor to listen for the user's next instruction.
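The two-stage design described above can be sketched in a few lines. This is a hypothetical illustration (the class names and string matching are invented for clarity); real keyword spotters run a tiny neural network over audio frames rather than comparing text:

```python
# Toy sketch of a two-stage wake-word pipeline (illustrative names only).

class KeywordSpotter:
    """Tiny always-on model: only checks whether a frame is the wake word."""
    WAKE_WORDS = {"ok google", "hey siri"}

    def matches(self, audio_frame: str) -> bool:
        return audio_frame.lower() in self.WAKE_WORDS


class MainAssistant:
    """Full speech pipeline: powered up only after the spotter fires."""
    def transcribe(self, audio_frame: str) -> str:
        return f"command: {audio_frame}"


def listen(frames):
    spotter, assistant = KeywordSpotter(), MainAssistant()
    results, awake = [], False
    for frame in frames:
        if not awake:
            awake = spotter.matches(frame)          # cheap check, every frame
        else:
            results.append(assistant.transcribe(frame))  # expensive path
            awake = False                           # go back to sleep
    return results


print(listen(["music", "hey siri", "stop the alarm"]))
# → ['command: stop the alarm']
```

The point of the split is energy, not accuracy: the cheap check runs on every frame, while the expensive path only wakes after a match.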

However, while this achieves low-power voice wake-up, it is still a step removed from the ideal AI assistant of science-fiction films. Imagine if Iron Man had to say "Hey, Jarvis" before every fight; the interaction would feel just as awkward.

Google's "Quick Phrases" feature on the Pixel 6 series brings this natural, sci-fi-style interaction into reality.

As mentioned at the start of this article, with Quick Phrases users can have Google Assistant perform specific tasks, such as silencing the alarm or answering a call, without first shouting a wake word like "OK Google."

▲ The VoiceFilter algorithm proposed by Google. Picture from: Google

To isolate a human voice in a noisy environment, the phone needs higher-precision voiceprint recognition, using more complex convolutional neural networks to accurately capture and recognize the user's spoken command.

Google's TPU, designed specifically for AI computation, meets exactly this demand, and so this natural voice interaction finally became reality on the Pixel 6 series.
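The idea behind VoiceFilter-style separation can be caricatured in a few lines: keep the parts of the signal where the target speaker's energy lives and suppress the rest. This toy sketch is purely illustrative; real systems predict a soft mask over a spectrogram with a neural network conditioned on a speaker embedding, and nothing here is Google's actual algorithm:

```python
# Toy speaker-conditioned masking (illustrative only, not VoiceFilter itself).

def separate(mixture, target_profile, threshold=0.5):
    """mixture, target_profile: per-frequency-bin magnitudes.

    Keep bins where the target speaker's profile is strong, zero the rest.
    """
    mask = [1.0 if t >= threshold else 0.0 for t in target_profile]
    return [m * g for m, g in zip(mixture, mask)]


mixture        = [0.9, 0.2, 0.8, 0.1]   # target voice + interference, mixed
target_profile = [0.8, 0.1, 0.9, 0.0]   # where the target's energy lives

print(separate(mixture, target_profile))
# → [0.9, 0.0, 0.8, 0.0]
```

A real model outputs fractional mask values rather than a hard 0/1 cut, which is what makes the heavier neural network (and the NPU to run it) necessary.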

An NPU is far more efficient than a traditional CPU at recognizing and processing images and speech. Phone makers can thus build features such as computational photography and text recognition to enrich the system's software.

On Apple's latest iOS 15, many new features are built on the Neural Engine: spatial audio and Portrait mode in FaceTime, live text extraction and translation, searching for text inside photos directly from the Photos app, offline Siri, and more.

Because these features demand a certain level of AI compute, Apple notes that on any SoC older than the A12 Bionic they remain unavailable, even after upgrading to iOS 15.

Other examples include MIUI 13's intelligent ID-photo recognition, watermarking, and face-verification privacy protection, while HarmonyOS's air gestures and head-turning controls are likewise built on AI image recognition and OCR.

AI features are becoming an important part of our daily phone experience, and the once-overlooked NPU has become integral to the system software experience.

Why do phones need AI chips?

Compared with other parts of the phone, the NPU arrived much later.

In September 2017, Huawei unveiled the Kirin 970 at the IFA show in Berlin, the first SoC with an integrated NPU. Around the same time, Apple released the A11 Bionic, its first chip with a Neural Engine. The two camps turned their attention to AI computing with surprising synchrony.

The emergence of AI features may seem abrupt, but it is actually the natural result of the smartphone's evolving form.

In an interview with Apple VP Tim Millet about the A-series chips, Wired mentioned that a few years before the iPhone X was released, some Apple engineers had proposed using machine-learning algorithms to make the iPhone's camera smarter.

It was this idea that made possible the iPhone X, the phone that defined the iPhone's shape for the following decade. Going all-screen meant a new security mechanism had to replace the Touch ID sensor that once occupied the chin, without falling behind it in accuracy or unlock speed. To achieve this, Apple turned to 3D structured-light face recognition.

Each time the iPhone wakes to unlock, the depth-sensing camera in the notch projects thousands of points to build a depth map, which is compared against the stored face data to complete the unlock. This whole capture-build-compare process must finish in the blink of an eye and, more importantly, keep power consumption low enough.

According to data released by Apple, iPhone users unlock their phones an average of 80 times a day. If every unlock required mobilizing the CPU or GPU for power-hungry computation, battery life would take a considerable hit.
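The comparison step can be illustrated with a toy sketch: reduce the stored face and the fresh capture to numeric vectors (embeddings) and unlock only if they are similar enough. All numbers, names, and the threshold below are invented for illustration; Apple's actual pipeline and matching criteria are not public in this form:

```python
# Toy face-match sketch: compare embeddings by cosine similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def unlock(enrolled, candidate, threshold=0.95):
    """Unlock only if the fresh capture closely matches the enrolled face."""
    return cosine_similarity(enrolled, candidate) >= threshold

enrolled    = [0.12, 0.87, 0.45, 0.33]  # stored at enrollment (made-up values)
same_person = [0.11, 0.88, 0.44, 0.35]  # a fresh capture, slightly different
stranger    = [0.90, 0.10, 0.05, 0.70]  # a very different face

print(unlock(enrolled, same_person), unlock(enrolled, stranger))
# → True False
```

The expensive part in practice is not this comparison but producing the embedding from the depth map with a neural network, which is exactly the step the Neural Engine accelerates.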

▲ The process of machine learning

With its multi-core architecture, the Neural Engine can run vast numbers of operations in parallel and, through deep machine learning, recognize and judge facial information much as the human brain does. Using it for face recognition gives no small advantage over a traditional CPU in both power consumption and performance.

"We couldn't have done this without the Neural Engine," Tim Millet said in the interview.

As core counts rise, the Neural Engine's compute grows substantially, and its applications keep broadening.

For example, the 8-core Neural Engine of the A13 Bionic brought Deep Fusion and Night mode to the iPhone 11 series, improving photo clarity and detail through multi-frame fusion; it also drives multiple cameras at once for a smooth zoom experience.

▲ Die shots of the A15 Bionic and A14 Bionic; the Neural Engine sits in the lower-left corner

In short, NPUs such as the Neural Engine relieve the CPU and GPU of computing pressure; by analyzing large volumes of data efficiently and in parallel, they extract meaningful results and improve our experience through more natural processing capabilities.
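What makes these workloads such a good fit for parallel hardware is easy to see: a neural-network layer boils down to many independent multiply-accumulate chains, one per output neuron, so dedicated hardware can compute them all at once. A minimal sketch in plain Python, illustrating only the arithmetic, not real NPU hardware:

```python
# A dense neural-network layer is just many independent dot products:
# each output neuron = dot(weights_row, inputs) + bias.
# Because the rows are independent, an NPU can compute them in parallel;
# a CPU would largely grind through these multiply-accumulates serially.

def dense_layer(inputs, weights, biases):
    return [
        sum(w * x for w, x in zip(row, inputs)) + b  # one MAC chain per neuron
        for row, b in zip(weights, biases)           # rows are independent
    ]

inputs  = [1.0, 2.0]
weights = [[0.5, 0.25],   # neuron 0
           [1.0, -1.0]]   # neuron 1
biases  = [0.0, 3.0]

print(dense_layer(inputs, weights, biases))
# → [1.0, 2.0]
```

The "trillions of operations per second" figures quoted earlier count exactly these multiply-accumulate operations.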

AI will redefine the smartphone

John Giannandrea, Apple's senior vice president of machine learning and AI strategy, said in an interview that he believes that within the next few years, machine learning will change every part of iOS and Apple's software ecosystem.

I think Apple has always represented the intersection of creativity and technology. When you think about building smart experiences, vertically integrating apps, frameworks, and chips is really important… I think it's a journey; this is the future of the computing devices we have: they become intelligent, and then that intelligence becomes invisible.

The original iPhone redefined the mobile phone with touchscreen interaction and always-available internet access; from then on, phones split into "feature phones" and "smartphones."

As smartphone capabilities converge, the so-called "smart" functions (messaging on WeChat, playing music, taking pictures, reading news) have, in a sense, become mere features again.

▲ Picture from: Gadgetmatch

Smartphones need to be redefined, and the new "smart" should mean a phone that understands people: one that recognizes the world you see, understands your every command, and adapts to its surroundings. All of this requires the deep involvement of AI chips.

As the phone hardware supply chain grows ever more transparent, the differences in core components among mid-to-high-end phones keep shrinking, and software has drawn more and more attention from manufacturers. As in cooking, the supply chain provides the basic ingredients for a good dish; to achieve a distinctive flavor, an excellent software experience is the key seasoning.

Today we already have screens sharp enough and cameras that can capture scenery a hundred meters away, but the smartphone experience need not stop at conventional display and photography.

A phone should let you capture photos at multiple focal lengths at once, so you never miss a scene while scrambling to focus; it should preview night-mode or HDR effects in real time in the viewfinder, with no more waiting for processing; it should even be a translator that accompanies you on your travels, completing real-time translation offline when the network is poor.

AI is the best way to deliver these features. To customize software more deeply, more phone makers, Google and OPPO among them, have begun designing their own NPUs to catch up with forerunners such as Apple and Huawei.

Meanwhile, strong AI compute is no longer the preserve of companies with in-house chips. Qualcomm's Snapdragon 8 Gen 1 and MediaTek's Dimensity 9000 both make AI compute a focus of improvement, and both have surpassed Google's Tensor in AI benchmark scores. Samsung's recently released Exynos 2200 also emphasizes NPU performance, claiming a twofold improvement.

The chip giants' concentrated push on AI performance makes it look as if mobile AI chips are living through a new Moore's Law.

Beyond the pace of performance growth, the spread of AI chips is also striking. According to research firm Counterpoint, phones with built-in AI chips accounted for only 3% of the market in 2017; by 2020 the figure had reached 35%.

In the future, still more phones will support AI-accelerated computing, which means that using machine learning to build mobile apps will become the new normal. In fact, ubiquitous apps such as Douyin and WeChat already use machine learning for AI features like background blur and one-tap video editing.

With phone makers and third-party developers on board, AI applications will keep deepening, and the smartphone's form may change with them, becoming a symbiosis of software and hardware built for a pleasant experience.

By then, the battle over who defines the smartphone's form will gradually shift from control of the supply chain to control of user data.

