Why is Apple Intelligence, the target of Musk's criticism, closer to the ideal form of AI?

This year's Apple WWDC was an AI event announced well in advance, and it was rare to see Cook talking it up on one occasion after another starting months beforehand.

But if you watched last month's events from OpenAI, Google, and Microsoft, you would hardly recognize this as an AI conference.

Apple did not even officially release a large model. It did not compare parameter counts or multimodal capabilities with competitors, did not talk about the future of AGI, and did not launch a buzzy product like Copilot for the media to hail as an application that "subverts everything".

Instead, Apple coined the term Apple Intelligence, whose initials happen to spell AI. The pun tells people that what Apple is releasing is not a piece of software or hardware, but a new user experience.

Cook believes that AI must be user-centered and needs to be seamlessly integrated into your daily experience:

It has to know you and be grounded in your personal context, such as your daily routine, your relationships, your communications, and more. All of this goes beyond artificial intelligence. This is personal intelligence, and it is Apple's next big move.

Apple has chosen to seep AI capability into the entire ecosystem, which means you will hardly see a crowd-cheering demo like GPT-4o's. But this "lack of surprise" is different from last year's iPhone 15 launch, which read like a portrait of the smartphone at the end of its grand narrative.

Apple Intelligence is paving the way for a new narrative in AI applications, and that may be a new chapter for more than just Apple.

Personal Intelligence and Personal Privacy

When Apple Intelligence was unveiled, Apple summed it up with five characteristics: powerful, easy to use, deeply integrated, personal, and private.

For AI to blend into the details of life without even being noticed, it has to know you better than you know yourself, which means the more personal data it holds, the more achievable this becomes. The question that must then be faced: is all of this built on surrendering personal privacy?

This is also why Musk criticized Apple after today's keynote.

He even declared, right under Cook's tweet, that "all Apple devices will be banned from entering my company's offices", questioning how user privacy and security would be protected once Apple and OpenAI joined forces.

This is indeed a worry users will have, especially with Apple's 2.2 billion active devices in play; it also sits uneasily with Apple's consistently cautious stance on privacy protection.

In the past, AI-adjacent features on Apple devices were mostly built on local machine learning that needed only small amounts of data, a choice driven largely by personal privacy considerations.

Apple did not address these questions one by one in the keynote, but media sessions held afterwards revealed more about how Apple Intelligence handles privacy, enough for us to try to clarify some of the issues currently being argued over online.

APPSO has learned that Apple offers two new approaches to data processing that involves the cloud.

First, users do not have to send all of their data, every email, message, photo, and document, to someone else's cloud to be stored there so that a server-side model can reach into it when needed.

Instead, Apple Intelligence on the user's device works out which small pieces of information are relevant to answering the question. The query sent to the cloud therefore carries only that small slice of data, and even that slice is kept confidential.
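The logic is straightforward to picture in code. Here is a minimal sketch of that data-minimization idea, assuming a hypothetical on-device relevance scorer; none of these names are Apple's actual API:

```swift
// Hypothetical sketch of on-device data minimization, not Apple's real API:
// score everything locally, send only the most relevant snippets to the cloud.
struct LocalItem {
    let source: String   // e.g. "Mail", "Messages", "Photos"
    let text: String
}

/// Returns the small slice of local data worth attaching to a cloud query.
/// `relevance` stands in for the on-device model's query-vs-item scoring.
func cloudPayload(for query: String,
                  items: [LocalItem],
                  relevance: (String, String) -> Double,
                  limit: Int = 3) -> [LocalItem] {
    items
        .map { (item: $0, score: relevance(query, $0.text)) }
        .sorted { $0.score > $1.score }
        .prefix(limit)
        .map { $0.item }   // everything below the cut stays on the device
}
```

The architectural point is that the server never sees the corpus, only the handful of snippets the device itself judged relevant.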

Second, Apple built a cryptographic system so that an iPhone, for example, will only communicate with servers bearing a designated signature. In other words, if the software on a server changes in any way, its signature changes, and the device can refuse to communicate with it.
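A rough sketch of that refusal rule, assuming a simple signed-measurement scheme rather than Apple's actual protocol, might look like this:

```swift
import CryptoKit
import Foundation

// Hypothetical sketch, not Apple's real protocol: the server presents a signed
// measurement (hash) of its software image, and the device refuses to talk
// unless the signature verifies and the measurement is a build it recognizes.
struct ServerAttestation {
    let softwareMeasurement: Data   // hash of the server's software image
    let signature: Data             // signature over that measurement
}

func mayCommunicate(with attestation: ServerAttestation,
                    trustedKey: Curve25519.Signing.PublicKey,
                    knownGoodBuilds: Set<Data>) -> Bool {
    // Any change to the server's software changes its measurement, so a
    // modified server no longer matches a known-good build and is refused.
    guard trustedKey.isValidSignature(attestation.signature,
                                      for: attestation.softwareMeasurement) else {
        return false
    }
    return knownGoodBuilds.contains(attestation.softwareMeasurement)
}
```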

This extends to user data handled in the partnership with OpenAI: users' IP addresses are obscured when they use the service, and OpenAI is not permitted to log user requests.

This may ease public doubts to a degree, and Apple does need to disclose more on this front. An Apple Intelligence that stakes so much on "personalization" must deal with this problem if the blueprint from the keynote is to be realized.

Moreover, Apple's partnership with OpenAI is likely not exclusive; there is clearly room to work with other large models in different scenarios and regions.

Nor is this a problem Apple faces alone as a manufacturer: whoever builds this must choose partners carefully. As AI seeps into daily life through huge numbers of end devices, the tug-of-war between privacy and convenience will only grow fiercer, however tolerable it is judged to be. The same goes for the Chinese market.

A keynote without hardware, but with a huge impact on hardware

From the naming of Apple Intelligence to the way it is implemented, it is clear that Apple wants to define AI hardware on its own terms and push AI capability throughout the entire ecosystem, rather than launch some killer app or feature.

This is the biggest difference from the mass of AI hardware on the market today. Hardware makers kicked off the AI-hardware trend last year, and many of them simply equate AI hardware with "large model + end device". The result is often a semi-finished product shipped with one headline feature and experimental updates.

That is a big part of why internet-famous AI gadgets such as the Ai Pin and the Rabbit R1 came crashing down after a wave of hype.

The thinking behind Apple Intelligence is similar to how machine learning has long been applied in Apple products: Apple rarely says "AI", yet it is already woven into many small, everyday features. The Adaptive Audio mode of the AirPods Pro, for instance, is implemented with machine learning.

Many people say Apple has fallen behind in the era of large models. Judged purely on individual technologies, that is probably true, but what Apple needs has never been a model more powerful than ChatGPT; it needs to turn computing power into a holistic experience rather than a piecemeal one.

Although the system and software were the protagonists of this keynote, hardware played an unstated but crucial part. APPSO has learned that Apple Intelligence runs a roughly 3-billion-parameter model on-device.

Apple is low-key but confident about this: its engineers reportedly consider it the best on-device model available today.

For comparison, Phi-Silica, the small on-device model Microsoft released not long ago, has 3.3 billion parameters, while the on-device models that Chinese phone makers use in most scenarios run between 7 billion and 13 billion parameters.

More parameters generally mean better performance, but if the same performance can be reached at a smaller scale, that matters far more for marrying large models to mobile devices.
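A back-of-envelope memory estimate shows why. The numbers below are our own rough arithmetic (weights only, ignoring activations and the KV cache), not vendor figures:

```swift
import Foundation

// Rough weight-memory estimates: parameters times bytes per parameter.
let models: [(name: String, parameters: Double)] = [
    ("Apple on-device (reported)", 3.0e9),
    ("Microsoft Phi-Silica", 3.3e9),
    ("Typical phone-vendor model", 7.0e9),
]

for model in models {
    let fp16GiB = model.parameters * 2.0 / 1_073_741_824  // 2 bytes per param
    let int4GiB = model.parameters * 0.5 / 1_073_741_824  // 4-bit quantized
    print(String(format: "%@: ~%.1f GiB fp16, ~%.1f GiB 4-bit",
                 model.name, fp16GiB, int4GiB))
}
// A 3B model quantized to 4 bits fits in roughly 1.4 GiB, plausible alongside
// a running OS on a phone; a 13B model at fp16 would need about 24 GiB.
```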

Moreover, plenty of industry research has shown that fine-tuned small models can hold their own against large ones in specific scenarios. OpenELM, the open-source small-model family Apple revealed earlier, spans 270 million, 450 million, 1.1 billion, and 3 billion parameters.

Apple may insist that users care about experience rather than parameter counts, but the on-device model is likely where Apple is quietly concentrating its effort.

If it goes well, Apple could well drive a new wave of hardware, from the Vision Pro to camera-equipped AirPods to the rumored home robot. With its strength in design, manufacturing, and supply chain, Apple can once again use software to shape hardware.

This keynote, which launched no new hardware, may be the one with the greatest impact on Apple hardware in years.

Siri will become Apple's true operating system

When Apple wants to integrate AI capabilities into its operating system, Siri becomes an important bridge.

At today’s media sharing session, John Giannandrea, Apple’s senior vice president of machine learning and artificial intelligence strategy, said:

Siri is no longer just a voice assistant; it is actually a device system.

In our WWDC preview we predicted that the ultimate goal of Apple's AI is a scenario like this: you wake up in the morning, wake Siri with a single "Siri", and have it open the "Aifan'er" WeChat official account and read the latest articles aloud, letting you hear Ai Faner's morning report without lifting a finger.

What makes Siri smarter is really a leap in semantic understanding: it can grasp the meaning of all this data much as a human would, and that understanding will grow richer over time.

Since the rise of large models, natural-language interaction has been widely tipped to replace the graphical user interface (GUI) of today's devices. Behind that expectation is a dramatic improvement in computers' ability to understand natural language.

Interaction grounded in natural language will not only change our portable devices; the form of applications will be completely reshaped as well. Siri, for example, invokes specific capabilities through APIs to carry out all kinds of tasks. Apps may no longer be needed at all, or they may reappear in an entirely new form.
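On Apple platforms this already has a concrete shape: the App Intents framework is how an app exposes its actions for Siri (and now Apple Intelligence) to invoke. A minimal sketch, where the feed-reading action and `FeedStore` are our own invented example rather than anything Apple demoed:

```swift
import AppIntents

// One app capability exposed to Siri via App Intents. The intent structure is
// real framework usage; FeedStore is a hypothetical stand-in for an app's
// own data layer.
struct ReadLatestArticleIntent: AppIntent {
    static var title: LocalizedStringResource = "Read Latest Article"

    @Parameter(title: "Feed")
    var feedName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let article = try await FeedStore.shared.latestArticle(in: feedName)
        return .result(dialog: "Now reading: \(article.title)")
    }
}
```

Once a capability is described at this level, the caller no longer has to be a human tapping an icon; it can be a system-level model deciding which intents to chain together.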

Andrej Karpathy, the OpenAI co-founder who has since departed, expressed a similar view. He called this the most exciting part of Apple's announcement and gave seven reasons:

  1. Multimodal I/O: reads and writes text, audio, images, and video. These are, so to speak, humanity's native I/O APIs.
  2. Agentic: every part of the operating system and every app can interoperate through "function calls", and an LLM in a kernel process can schedule and coordinate work across them based on the user's query.
  3. Frictionless: these capabilities are integrated in a highly seamless, fast, always-on way. No copying and pasting information around, no prompt engineering; the UI adapts as needed.
  4. Proactive: it does not just follow prompts; it anticipates them, offers suggestions, and acts on them proactively.
  5. Delegation hierarchy: run as much intelligence as possible on-device (Apple Silicon is well suited to this), while still allowing work to be delegated to the cloud (a toy version is sketched after this list).
  6. Modular: the operating system can access and support an entire growing LLM ecosystem (such as the ChatGPT integration).
  7. Privacy: <3
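Point 5 is worth dwelling on. A toy version of that delegation decision, with thresholds and names that are purely illustrative, might look like this:

```swift
// Toy sketch of a delegation hierarchy, not Apple's actual routing logic:
// prefer the small on-device model, escalate to a private cloud model only
// when the request exceeds what local hardware can handle.
enum ModelTier {
    case onDevice      // ~3B-parameter model on Apple Silicon
    case privateCloud  // larger server-side model
}

func route(promptTokens: Int,
           needsWorldKnowledge: Bool,
           onDeviceContextLimit: Int = 2048) -> ModelTier {
    if needsWorldKnowledge || promptTokens > onDeviceContextLimit {
        return .privateCloud
    }
    return .onDevice
}
```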

The details of this year's WWDC all point toward that future. But Apple also knows it will most likely take more than a few years to arrive, so for now it simply tells you that, at the very least, Siri has become much easier to use.

In fact, the one thing the past two years have not lacked is the "amazing" that AIGC delivers. What is still missing, as with the mobile phone and the internet before it, is any sign of technology or products embedding themselves deep in the fabric of daily life.

Nourishing everything silently and invisibly is the endgame of technological innovation and the ideal form of AI. It is also the most anticipated thing about Apple Intelligence.

