iPhone sales plummet in China; a comeback may hinge on this year’s “biggest update in history”

At the beginning of 2024, the iPhone, last year's best-selling smartphone in the world, suffered a rare and sharp drop in sales in China.

According to data from research firm Counterpoint, iPhone sales in China fell sharply in the first six weeks of 2024, down 24% year-on-year. Huawei's recovery is one reason: its sales surged 64% over the same period.

Tianfeng Securities analyst Ming-Chi Kuo previously said that Apple has lowered its 2024 iPhone shipment forecast to about 200 million units, a 15% drop from the previous year. This may be the largest decline among the world's major mobile phone brands.

Ming-Chi Kuo further predicted that shipments of the iPhone 15 series and the new iPhone 16 series are expected to decrease by 10-15% year-on-year in the first half of 2024 and the second half of 2024 respectively.

He believes the iPhone's sluggish sales may stem from high-end demand gradually shifting toward AI features and foldable phones. Kuo even argues that if Apple fails to launch generative AI services that exceed expectations this year, Nvidia's market value is likely to overtake Apple's.

Not long ago, Apple's strongest rival, Samsung, used AI functions as the biggest selling point of its new flagship Galaxy S24 series mobile phones, touting the concept of "AI phones".

Before that, Google and Microsoft had announced high-profile plans to put AI large language models on phones. Chinese manufacturers such as Huawei, OPPO, vivo, and Xiaomi have also unveiled their own AI strategies, with OPPO going so far as to declare the arrival of the era of AI phones.

Phone makers have reached a rare consensus that "AI will bring freshness back to smartphones." The last time the industry was this unanimous was probably the all-screen wave seven years ago.

But by 2024, Apple no longer plans to be an "outsider."

Recently, Cook has, uncharacteristically, spoken repeatedly about Apple's ambitions in AI. He said the company will "break new ground" in the field of generative artificial intelligence in 2024:

"We firmly believe that this will bring revolutionary opportunities to our users."

When Apple decided to cancel its decade-long car project, it also shifted more resources to AI. Some employees on the car team will be transferred to the machine learning and AI division led by John Giannandrea and moved onto generative AI projects.

In the official press release for the new MacBook Air launched this week, Apple went so far as to call it the "World's Best Consumer Laptop for AI."

Combined with earlier reports from Bloomberg's Mark Gurman, Apple's new generative AI features are likely to appear in iOS 18, which will be unveiled in June; Gurman has also said iOS 18 could be the "most significant" software update in the iPhone's history.

At present, consumers have not fully accepted AI phones. Xiaomi brand general manager Lu Weibing recently said that "AI phones are gimmicks," which triggered a lot of discussion.

What is certain is that generative AI (AIGC) will be integrated into hardware devices faster and faster. For years, smartphone updates have been dismissed as incremental, "toothpaste-squeezing" micro-innovations; in the next few years, the one variable that could change that is AI.

Will iOS 18 turn Siri into ChatGPT?

The most telling hints about iOS 18 have come not from the technology reporters we are used to, but from Cook himself.

On a recent earnings call, Cook said that Apple has been paying close attention to generative AI and has done a great deal of exploration and experimentation internally.

He emphasized that Apple's consistent approach is to ensure that the work reaches a certain standard before publicly discussing the relevant results. Cook also mentioned that Apple will share some exciting new developments later this year.

Earlier reports indicated that Apple is building its own large language model and already uses it internally as an AI assistant for answering questions. Put together, these signs suggest Apple may integrate a large language model into iOS 18 to improve Siri and introduce new features.

A recent paper from Apple's machine learning team, "LLM in a flash: Efficient Large Language Model Inference with Limited Memory," lends further weight to these reports.

In the paper, the researchers examine how memory constraints limit running large language models on devices such as phones and tablets, and propose two techniques to ease the bottleneck, paving the way for deploying large models on-device.
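The gist of that idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's actual windowing or row-column bundling scheme; the file name, layer sizes, and cache are assumptions made up for demonstration, but the snippet shows the core trick of keeping a weight matrix in slower storage and pulling into memory only the rows needed for the currently active neurons.

```python
# Illustrative sketch only -- not Apple's implementation.
import numpy as np

HIDDEN, FFN = 1024, 4096            # small, made-up layer sizes
WEIGHT_FILE = "ffn_weights.npy"     # hypothetical file standing in for flash storage

# Create a dummy weight file once so the sketch runs end to end.
np.save(WEIGHT_FILE, np.random.randn(FFN, HIDDEN).astype(np.float16))

# "Flash": the full matrix stays on disk; rows are read lazily via memory mapping.
flash_weights = np.load(WEIGHT_FILE, mmap_mode="r")

dram_cache = {}                     # rows already resident in "DRAM"

def load_active_rows(active_neurons):
    """Pull only the rows for active neurons into memory, reusing cached rows."""
    for idx in active_neurons:
        if idx not in dram_cache:
            dram_cache[idx] = np.array(flash_weights[idx])  # one read from flash
    return np.stack([dram_cache[idx] for idx in active_neurons])

# A sparsity predictor (not shown) would decide which neurons fire for this token.
partial = load_active_rows([3, 17, 256, 1024])
print(partial.shape)                # (4, 1024): only a sliver of the matrix is in RAM
```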

Reading through the papers Apple's machine learning research team has published over the past year, you will find the team particularly focused on combining large language models with speech and natural-language understanding.

For example, a paper published in December, "Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR," points out that although large language models perform well across many natural language processing tasks, their performance on spoken-language understanding still depends on accurate automatic speech recognition (ASR) transcriptions or built-in understanding modules.

The research team therefore proposed new approaches to improve the model's accuracy in understanding recognized speech.

In another paper, "Leveraging Large Language Models for Exploiting ASR Uncertainty," the Apple team studies how large language models can work with imperfect ASR output to improve the accuracy and quality of the generated responses.

Although these research results may not necessarily be directly applied to product design, they are enough to show that Apple has invested a lot of effort in combining large language models and Siri.

Before Siri, human-computer interaction on the iPhone relied mainly on touch. Siri added a new dimension to interaction, but accurate speech recognition has always been a challenge.

At the time, immature speech recognition made it hard to deliver a natural, smooth conversational experience. Smartisan TNT, billed as a "revolutionary product," was a cautionary example.

Now, the emergence of large language models may be able to solve some "technical obstacles."

You may have seen videos on social platforms of people chatting with the ChatGPT mobile app. In them, ChatGPT not only shows strong reasoning and answering ability but also mimics human tone and intonation well enough to nearly pass the Turing test, hinting at the real potential of voice assistants.

ChatGPT's strong comprehension gives people further room for imagination. Recently, "Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception," a paper from researchers at Beijing Jiaotong University and Alibaba, began exploring whether AI can operate a phone on its own.

They built a multimodal agent called Mobile-Agent on top of GPT-4V that can carry out operations from natural-language instructions, such as searching for videos, posting comments, and even playing the card game Dou Dizhu ("Fight the Landlord").

Mobile-Agent relies on GPT-4V's visual recognition to locate on-screen elements, so recognition errors often cause an operation to fail. If that step were replaced by a system-level automation mechanism, the success rate would rise sharply, and this is precisely the direction Apple is researching.
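To make that loop concrete, here is a minimal perceive-decide-act sketch in the spirit of such agents. It is not the Mobile-Agent code: take_screenshot, ask_multimodal_model, and tap are hypothetical stand-ins for a device screenshot API, a GPT-4V-class model call, and an input injector, and the JSON action format is invented for illustration.

```python
# Minimal sketch of a screen-driven agent loop (illustrative, not Mobile-Agent itself).
import json

def run_agent(instruction, take_screenshot, ask_multimodal_model, tap, max_steps=10):
    """Perceive-decide-act loop: the model looks at the screen and picks the next tap."""
    for _ in range(max_steps):
        screen = take_screenshot()                        # perceive: capture the UI
        reply = ask_multimodal_model(                     # decide: ask the vision model
            image=screen,
            prompt=f"Task: {instruction}. "
                   'Reply as JSON: {"action": "tap" or "done", "x": int, "y": int}',
        )
        decision = json.loads(reply)
        if decision["action"] == "done":                  # the model says the task is finished
            return True
        tap(decision["x"], decision["y"])                 # act: inject the tap it chose
    return False                                          # give up after max_steps
```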

According to Mark Gurman, Apple is considering combining Siri with shortcuts to provide more flexible automated operations. This shows that Apple not only wants to create a smarter voice assistant, but may also change the voice interaction model and bring a new interactive experience to users.

If the above revelations come true, then iOS 18 is likely to be the big move that Apple has been waiting for since it reorganized the Siri team in 2018.

Use AI to change human-computer interaction again

Now look at the competition. Google, the industry's AI front-runner, has completed its AI rollout on Android within a year.

Since releasing its in-house large model Gemini last year, Google has been looking for ways to apply its latest AI technology across its entire product line, and phones are no exception.

The first phones to use Gemini are the Pixel 8 series. Built on the on-device Gemini Nano model, they gained a number of new features.

For example, the keyboard can suggest replies automatically based on the current conversation, and the recorder can transcribe audio to text in real time and summarize it automatically.

Recently, with the launch of Samsung's latest flagship, the Galaxy S24 series, Samsung and Google announced a partnership to build the Galaxy AI experience, once again showcasing Google's in-house model.

These features are no longer limited to transcribing recordings: conversations during phone calls can be converted to text in real time and then translated, so if you are talking with someone who speaks another language, the system can render the call in your own.

Google's strong suit, AI image editing, has also come to Samsung devices. When a photo needs recomposing, AI can generate content beyond what was originally captured, completing the picture at a new perspective or aspect ratio.

You can also erase passers-by or move objects within an image, and the AI fills in the gaps based on your selection.

Google also launched a powerful new feature, Circle to Search. When an object or a piece of text in an image catches your interest, long-press the home button to bring up the circling interface, draw a circle around it, and the system automatically runs a search and returns relevant information and purchase links.

This revolutionizes the previously cumbersome image search and online shopping experience.

Search is Google's core business; Circle to Search pairs AI capabilities with Google's search technology and amounts to an attempt at self-reinvention.

In the past, searches relied on input boxes. Now, users can search by simply drawing a circle on the phone screen, without the need for complex keyword input and filtering processes.

This not only shortens the path between users and the service they want; it also marks a shift in search itself, one driven by the combination of AI and device hardware that the pure Internet era could not deliver.

It also means that although AI may look like a mere add-on today, as more applications open interfaces to AI and pair its understanding and image-recognition capabilities with automated operations, AI is likely to take off at some point, bringing genuinely new experiences and faster, more convenient interaction.

This is the real potential of AI applications on mobile phones, and it also provides ideas for new human-computer interaction models in the AI ​​era.

AI will eventually become the “new infrastructure” of smartphones

The sudden AI boom has given the phone industry plenty of room to imagine. Hardware vendors such as Qualcomm and MediaTek have made AI compute the new battleground, kicking off a fresh computing-power race, while software developers are racking their brains to replicate the viral breakout of ChatGPT and Miaoya Camera.

2024 will undoubtedly be a big year for mobile phone systems.

But when it comes to the hard question of "what does AI actually give users?", few manufacturers can offer a convincing answer. Do users really need to chat with a bot on their phone, or generate a different AI selfie every day? Transplanting past viral hits into the phone industry's narrative may not work.

The answer from Apple and Google is to return to user experience: adding AI only makes sense when it makes the smartphone itself more usable.

Before ChatGPT set off the AI wave and major phone makers announced their AI strategies, AI had already quietly worked its way into everyday phone use. It is at work every time you unlock your phone, pay with your face, or simply raise the camera to take a photo.

When the Huawei Mate 60 series first launched, its smart payment feature, which recognizes a payment scanner and brings up the pay code directly, quickly went viral on short-video platforms, with users sharing it enthusiastically.

Under the hood, the feature calls the NPU in Huawei's Kirin chip, which is dedicated to recognizing objects and triggering the quick jump.

Such spontaneous sharing shows that users care about the direct improvement AI brings to the experience, not the computation behind it; the real payoff of AI lies in the potential lift to the overall system experience.

As screen and camera hardware hit bottlenecks, smartphone competition will soon shift from the hardware level to the system level, with innovative interaction and better user experience becoming manufacturers' core competitiveness.

After smartphones became "smart," the next leap in experience may be an upgrade in how well they understand people. That kind of understanding will matter as much as graphics computing and is poised to become a new baseline for smart devices, and at the core of it all may be the large language model boom we are now living through.

Huawei announced in August that HarmonyOS 4 would be deeply integrated with its Pangu model. Two months later, Xiaomi announced that HyperOS would integrate its "MiLM-6B" model; vivo said the vivo X100 would ship with its BlueLM model built in; and OPPO later confirmed that the Find X7 series comes with AndesGPT built in.

In the past, on-device photo recognition and subject cutout were the "signature tricks" of a handful of manufacturers; now, with large-model support, such AI features have become standard on flagship phones. As large-model applications spread and integrate more deeply, more efficient compute hardware and better-optimized algorithms will follow, raising compute utilization and performance and driving down the unit cost of computing.

In the coming years, chip vendors such as Qualcomm and MediaTek will keep raising on-device AI compute, while phone makers keep improving model compression and quantization to shrink model size and runtime resource requirements. Through this co-evolution of software and hardware, on-device large models are expected to cross an inflection point in capability and set off an explosion in usable computing power.
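For a sense of what compression buys, below is a minimal sketch of symmetric int8 weight quantization, an illustrative example rather than any vendor's actual pipeline: weights are stored as int8 plus a single scale factor, roughly quartering memory versus float32 at the cost of a small approximation error.

```python
# Toy example of symmetric per-tensor int8 quantization (illustration only).
import numpy as np

def quantize_int8(weights):
    """float32 -> (int8 weights, scale): map the largest magnitude to 127."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at compute time."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int8(w)
print(w.nbytes // q.nbytes)                       # 4: int8 storage is ~4x smaller
print(float(np.abs(w - dequantize(q, s)).max()))  # small reconstruction error
```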

By then, a voice assistant may become a lifelike personal assistant, taking a photo may become a one-tap 3D spatial capture, and recognizing an image may enable network-wide price comparison… AI will eventually become an inseparable part of our lives, just like mobile communications and location-based services.

AI as a marketing buzzword will one day cool down, but the experience innovation brought by AI will profoundly change the way people use mobile phones.


