When you press the shutter, your phone is solving math problems

"If you're short on money, refine the aerodynamics; if you're bold enough, raw thrust will get anything airborne." So goes a joke that circulates in aviation circles.

▲ The MiG-25: a trip to the heavens on a slab of stainless steel.

It refers to the story of the Soviet Union flying 32 tons of stainless steel at Mach 3.2, which shocked the United States at the time. Compared with the SR-71 Blackbird reconnaissance aircraft, which was packed with exotic technology, the MiG-25's approach was direct and effective: bolt on two enormous turbojet engines and brute-force the problem.

In the unconventional thinking of Soviet engineers, even a "brick" of an airframe can fly supersonic if you push it hard enough.

▲ SpaceX's Starship prototype SN6. Picture from: cnet

Today, the best example of "making a brick fly" is probably SpaceX's Starship prototype. Like the MiG-25, it works miracles through sheer force.

Carry this aviation in-joke over to the field of imaging, and it still fits.

In the digital age, image quality is roughly proportional to sensor size: the larger the sensor, the greater its natural advantage. Shackled by traditional physical optics, a small sensor finds it hard to stage a "counterattack."
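A back-of-envelope calculation shows why size matters so much. In the ideal case a pixel's noise is dominated by photon shot noise, which grows as the square root of the signal, so SNR scales with the square root of the light collected, and the light collected scales with sensor area. The sketch below is illustrative only: the photon counts are made up, and real sensors add read noise, lens differences, and more.

```python
import math

def shot_noise_snr(photons):
    """Idealized SNR of a pixel limited only by photon shot noise:
    signal / sqrt(signal) = sqrt(signal)."""
    return photons / math.sqrt(photons)

# A 1-inch sensor has roughly 4x the area of a typical small phone
# sensor, so all else being equal it collects ~4x the photons.
phone_pixel, large_pixel = 1000, 4000   # photons per pixel (made up)
print(shot_noise_snr(phone_pixel))      # ~31.6
print(shot_noise_snr(large_pixel))      # ~63.2, i.e. double the SNR
```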

A smartphone sensor is only about the size of a fingernail, and that size also boxes in the lens design. Seen through traditional imaging thinking, phones are besieged on all sides: breaking through the physical ceiling is hard enough, let alone delivering genuinely welcome innovations.

▲ Marc Levoy, widely considered the father of "computational photography".

That changed when the first-generation Google Pixel was released and took the DxOMark crown in one stroke. People gradually came to appreciate the charm of "computational photography."

At the iPhone 12 series launch, Apple spent a long stretch introducing the advances and new features that "computational photography" brought to the iPhone.

It is fair to say that computational photography has become the trend. Compared with the stagnation of traditional physical optics, it seems to hold far greater potential; with algorithms alone, one might even reach the realm of "making bricks fly."

In the past, algorithms were perhaps just icing on the cake: colors got better, night scenes got cleaner, backlit shots got clearer. But recently, the image processing in the MIX 4 and in iOS 15 has gone beyond merely "looking good."

▲ Xiaomi MIX 4.

The biggest highlight of the Mi MIX 4 is arguably the true full screen made possible by its under-screen camera. The full-screen concept is gradually becoming reality, and beyond breakthroughs in display technology, computational photography plays a large part in it.

At present, the camera-under-panel (CUP) region of the screen relies on techniques such as "smaller pixels" and "transparent wiring" to suppress diffraction and improve light transmission, but compared with an ordinary OLED panel its transmittance is still below 20%.

▲ Prototype of OPPO's under-screen camera.

Before the Xiaomi MIX 4, OPPO had also shown an under-screen camera prototype of its own, saying its next goal was to raise transmittance to 40%. That still leaves plenty of distance to the traditional 90%.

And this gap was handed over to "computational photography," which turns physical optics into a series of math problems.

According to Xiaomi, every MIX 4 camera is calibrated against its own screen, one by one, on the production line. A "diffraction model" is then applied to suppress diffraction artifacts, followed by multi-frame HDR, noise reduction, dehazing, and a series of detail and color optimizations.
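Xiaomi has not published the internals of its diffraction model, but conceptually this is a deconvolution problem: per-unit calibration characterizes how the display smears incoming light (a point spread function, or PSF), and the algorithm partially inverts that smear. Below is a minimal sketch of classic Wiener deconvolution, assuming such a calibrated PSF exists; every name and parameter here is illustrative, not Xiaomi's actual pipeline.

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_balance=0.01):
    """Partially invert the blur described by a calibrated PSF.

    image:         2D grayscale frame captured through the display.
    psf:           point spread function from per-unit calibration
                   (hypothetical; the real calibration data is not public).
    noise_balance: regularizer that keeps noise from being amplified.
    """
    # Pad the PSF to the image size and move its peak to the origin
    # so the FFT-based filtering lines up with the image grid.
    kernel = np.zeros_like(image, dtype=np.float64)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_balance)  # Wiener filter
    restored = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(restored, 0.0, 1.0)

# Toy usage: a small Gaussian stands in for the measured diffraction PSF.
x = np.linspace(-2, 2, 9)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
psf /= psf.sum()
frame = np.random.rand(256, 256)   # stand-in for a raw selfie frame
sharpened = wiener_deconvolve(frame, psf)
```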

▲ The Mi MIX 4's front camera invoked in QQ.

In our earlier hands-on testing, this processing gets the selfie quality to roughly 80% of a conventional front camera's. For now, however, these algorithms can only be invoked from Xiaomi's own camera app; they are not yet open to third parties, and third-party apps have made no targeted optimizations, so the front camera is easily "exposed" when called from a third-party app.

In iOS 15, Apple likewise used "algorithms" to eliminate an optical defect in the iPhone's camera.

▲ Shot on iPhone. Picture from: Reddit

Starting with the iPhone 11 series, the design of the lens group produces "ghost images" when shooting point light sources, especially at night, as if a translucent mirror were hanging in the scene.

There was plenty of discussion about it, on Weibo and Reddit alike. Even by the release of the iPhone 12 series, Apple had not changed the design, and the ghosting remained.

▲ The picture on the right shows iOS 15 Beta 4. Picture from: Reddit

In the iOS 15 Beta 4 update, Apple used algorithms to remove the ghost images in some photos. Beyond simply erasing them, the feature in iOS 15 Beta 4 also automatically recognizes ghost images based on the image's content.

As a result, scenery photos are easier to judge correctly, while indoor shots may fare less well, with ghost images still present.
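Apple has not said how the recognition works, but lens-flare ghosts have a telltale geometry: an internal reflection of a bright point source lands roughly at the source's position mirrored through the optical center of the frame. The sketch below flags such mirrored pairs; the thresholds and function names are my assumptions for illustration, not Apple's method.

```python
import numpy as np

def find_ghost_candidates(gray, bright_thresh=0.95, ghost_thresh=0.3):
    """Flag dim blobs that mirror a bright light source through the
    image center -- the classic geometry of an internal-reflection
    ghost. Threshold values are illustrative guesses.

    gray: 2D float image with values in [0, 1].
    """
    h, w = gray.shape
    ys, xs = np.where(gray > bright_thresh)   # saturated point sources
    candidates = []
    for y, x in zip(ys, xs):
        my, mx = h - 1 - y, w - 1 - x         # mirrored through center
        # A ghost is dimmer than its source but brighter than its
        # surroundings; probe a small patch at the mirrored location.
        patch = gray[max(my - 2, 0):my + 3, max(mx - 2, 0):mx + 3]
        if patch.size and patch.max() > ghost_thresh:
            candidates.append(((y, x), (my, mx)))
    return candidates

# Toy usage: one bright "street lamp" plus a faint mirrored ghost.
img = np.zeros((200, 200))
img[40, 60] = 1.0      # the point light source
img[159, 139] = 0.5    # its reflection, mirrored through the center
print(find_ghost_candidates(img))   # [((40, 60), (159, 139))]
```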

▲ The picture on the right shows iOS 15 Beta 4. Picture from: Reddit

Moreover, judging from the replies under the related Reddit posts, the ghost removal works on models from the iPhone XS all the way to the iPhone 12 Pro Max, not just the latest hardware. With the official release of iOS 15 expected in September, the "ghosting" problem should be handled better still.

In traditional imaging, "ghost images" can only be mitigated; optical design can rarely eliminate them entirely. Ultra-wide-angle lenses resort to exaggeratedly large front elements and special glass inside, such as various aspherical elements, just to work around the "defect."

Apple's use of algorithms to fix an optical defect thus opens a new path, jumping out of the traditional manufacturers' inertia of piling ever more glass into the lens.

▲ Jon McCormack.

I remember that in an interview after the iPhone 12 series launch, Jon McCormack, Apple's vice president of software in charge of the iPhone's imaging system, said that "Apple's way of thinking is not the same as that of professional traditional imaging manufacturers" and that "our goal is to let everyone tell their own stories."

That is to say, compared with tool-like products such as mirrorless and DSLR cameras, smartphone imaging starts more from the user's perspective: optimizing colors with algorithms, synthesizing "night scenes" from multiple frames, HDR, and so on.
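Multi-frame night mode is a good illustration of the idea: several short exposures of the same scene are aligned and averaged, and because random sensor noise is uncorrelated between frames, the merged result's noise drops by roughly the square root of the frame count. The sketch below assumes the frames are already aligned; real pipelines also handle hand shake and moving subjects.

```python
import numpy as np

def stack_night_frames(frames):
    """Average N aligned short exposures. Uncorrelated noise shrinks
    by ~sqrt(N), the core trick behind smartphone night modes.
    Alignment and deghosting are omitted for brevity."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy usage: 8 noisy captures of the same dim, flat scene.
rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.1)                    # ground truth
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]
merged = stack_night_frames(frames)
print(frames[0].std(), merged.std())   # ~0.05 vs ~0.018 (~sqrt(8) lower)
```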

At the same time, the smartphone is currently the only device that combines capture, editing, processing, and sharing in one, which gives it a special status. This is the soil that nurtured "computational photography," a path the traditional photography world regards as unorthodox.

Meanwhile, professional imaging manufacturers, phone makers included, have all been exploring the revolutionary leap to Camera 3.0. (Camera 1.0 was the film era; Camera 2.0 the digital era, the move from the realm of chemistry to the realm of physics.)

▲ Light L16. Picture from: theVerge

The Light L16 multi-lens camera was called the "future of cameras" by the Wall Street Journal, and National Geographic went so far as to call it a revolutionary camera.

Its capture principle looks, in hindsight, much like a smartphone's multi-camera system: 16 lenses (wide-angle, medium-telephoto, telephoto, and so on) image the scene together and finally output a single "flawless" photo.

▲ Different lenses used in the final imaging of the Light L16.
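What "imaging together" means in the simplest case is fusing a telephoto capture into the matching region of a wide capture to add detail there. The sketch below shows that minimal idea under strong assumptions (frames registered, no parallax, matched exposure); it is not the L16's actual algorithm, which, as testing later showed, amounted to simple cropping.

```python
import numpy as np

def fuse_wide_tele(wide, tele, zoom=2):
    """Replace the central crop of a wide frame with detail from a
    registered telephoto frame -- a crude stand-in for multi-lens
    fusion. Real fusion must also correct parallax, match color,
    and blend the seams.

    wide: (H, W) frame covering the full field of view.
    tele: (H, W) frame covering only the central 1/zoom of that view.
    """
    h, w = wide.shape
    ch, cw = h // zoom, w // zoom
    top, left = (h - ch) // 2, (w - cw) // 2
    # The tele frame samples the central crop at zoom-times the
    # density; block-average it down to the wide frame's pixel grid.
    tele_small = tele.reshape(ch, zoom, cw, zoom).mean(axis=(1, 3))
    fused = wide.astype(np.float64).copy()
    fused[top:top + ch, left:left + cw] = tele_small
    return fused

# Toy usage with synthetic 2x-zoom frames.
wide = np.random.rand(128, 128)
tele = np.random.rand(128, 128)   # same pixel count, half the view
print(fuse_wide_tele(wide, tele).shape)   # (128, 128)
```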

As soon as the Light L16 concept appeared, it became the darling of the media, and it also drew 30 million US dollars of investment from Google Ventures. But the product itself was repeatedly delayed.

The Light L16 that finally went on sale did not live up to the original vision. Beyond the mess of its hardware and software, Ki_min, a camera editor at Mobile01 in Taiwan, found after testing it that the so-called multi-lens algorithmic synthesis was in fact simple digital cropping, and that many promised functions had never been implemented.

Most importantly, the Light L16 had no onboard computing power for "one-stop" processing in the camera itself; it still needed the help of a PC-class computing platform to produce the final output.

Looking back now, the Light L16 should be counted as a half-finished product, and what it lacked was precisely the computing platform and the algorithms. Its success in drawing so much media attention rested on its imaging concept, the algorithmic approach we now call "computational photography."

"Computational photography" first appeared to make up for the shortcomings of phones' small sensors. As chip computing power has grown, smartphone imaging has become a long competitive track, and "computational photography" has improved across the board.

Now, with algorithmic refinement, the CUP full screen has matured and the "ghost images" that once plagued lens designers are disappearing. Computational photography no longer plays a mere supporting "optimization" role; it has stepped into the foreground and become key to raising image quality.

If the Light L16 had had today's "algorithms," I think it might indeed have been one possible future for the camera. But Camera 3.0 is not that device; Camera 3.0 is "computational photography."
