Why can’t you photograph the real Beijing sandstorm?

When it comes to Beijing's abnormal weather, WeChat Moments and Weibo are often faster and more accurate than the weather forecast.

Whether the sky is clear blue, haze gray, or the recent apocalyptic yellow, I can see the unusual weather inside and outside the capital's Fourth Ring Road through Moments and Weibo.

▲ The corner turret in the sandstorm (the effect is exaggerated). Picture from: WeChat Moments

Take the recent sandstorm as an example. I woke up in the morning and scrolled through Moments; although I was in hot and humid Guangzhou, the pictures on the screen still let me feel the plight of people pinned to street corners by the yellow sand.

Billed as the "worst sandstorm in a decade," it counts as genuinely severe weather by recent standards, and as a moment worth recording. The advance of smartphone imaging has given every ordinary person the chance to become a recorder of "history."

Ten years ago, no matter how big the dust storm, without images and the Internet it would have been hard for me to feel any of it.

Looking through the photos in Moments, though, the sandstorm they present varies noticeably in color. Some shots show the orange-red of a "doomsday twilight," others a "white mist" straight out of Silent Hill, as if people were living in parallel worlds, each enjoying a different sandstorm.

▲ Picture from: Weibo @奶爸小张

Photography enthusiasts in my Moments took the opportunity to post a tutorial on shooting the "real Beijing" with a phone. The key step is switching to the professional (manual) mode and adjusting the color temperature yourself, which is to "break free of the AI algorithm's control" and take the reins back.

Following the tutorial, I raised the color temperature to 8000K and took a picture. If the air had been a little worse (or a heavy fog had rolled in), it would absolutely have reeked of sandstorm.
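To see what that step actually does, here is a minimal Python sketch (using Pillow and NumPy; the gain values are illustrative guesses, not a real 8000K calibration). Telling the camera the light is bluer than it really is makes it compensate with warmer, more orange tones:

```python
# Toy illustration, not any phone's actual pipeline: dialing the white
# balance up to a high color temperature makes the camera assume bluish
# light and compensate with warm per-channel gains.
import numpy as np
from PIL import Image  # pip install pillow

def warm(img: Image.Image, r_gain: float = 1.25, b_gain: float = 0.75) -> Image.Image:
    """Mimic a warmer white-balance setting with simple channel gains.

    The gains are illustrative, not calibrated to 8000K.
    """
    arr = np.array(img.convert("RGB"), dtype=np.float32)
    arr[..., 0] *= r_gain  # boost red
    arr[..., 2] *= b_gain  # cut blue
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    warm(Image.open("street.jpg")).save("street_warm.jpg")
```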

People have always wanted taking pictures to be easy

Whether in the film era or the digital era, tools for recording images have kept moving toward automation: from mechanical to electronic, from 35mm film to CMOS sensors, with automatic film advance, autofocus, automatic exposure, and so on. These technologies emerged to free people from the complexities of metering and exposure, so all their energy could go into recording, creation, and expression.

Even before smartphones, algorithm-based scene recognition had appeared on cameras once they went fully electronic. Sony's Superior Auto mode (the gold camera icon), for example, performed very basic scene recognition to optimize the straight-out-of-camera image, and was later ported to Xperia phones. It's just that computing power was limited at the time, so the final images didn't look much different.

With the development of mobile imaging, the barrier to taking pictures dropped yet again, putting a camera at almost everyone's fingertips. But constrained by size and by physical optics, a smartphone that merely followed in the footsteps of traditional optics makers would find it almost impossible to overtake them in imaging.

So smartphones played to their strengths and avoided their weaknesses, leveraging their advantage in chips to develop a new genre: "computational photography." Its thinking doesn't run against the broader trend of imaging; it too follows the principle of "making photography easy."

Simply put, smartphones use the chip's AI computing power to optimize what the sensor captures: subject recognition, scene recognition, multi-frame noise reduction, multi-frame synthesis, and so on. As the technology has advanced, AI color grading has grown ever more capable and its control over photos ever deeper, going from heavy-handed at first to smooth and imperceptible, more and more refined.
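As one concrete piece of that pipeline, here is a toy multi-frame noise reduction sketch in Python. Real phone pipelines also align frames, reject ghosting, and merge in RAW space; none of that is shown here.

```python
# Toy sketch of multi-frame noise reduction: averaging N already-aligned
# burst frames cuts random sensor noise by roughly sqrt(N).
import numpy as np

def merge_burst(frames: list) -> np.ndarray:
    """Average a list of aligned HxWx3 uint8 frames into one frame."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Demo on synthetic noisy frames of a flat gray scene.
rng = np.random.default_rng(0)
clean = np.full((100, 100, 3), 128.0, dtype=np.float32)
frames = [np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)
          for _ in range(8)]
merged = merge_burst(frames)
print("single-frame noise std:", np.std(frames[0] - clean))  # ~20
print("merged noise std      :", np.std(merged - clean))     # ~20/sqrt(8) ≈ 7
```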

Nowadays, with just a little understanding of composition and light, it's easy to take great photos with a smartphone. The more AI is involved in image processing, the easier getting a good photo becomes, and the more people get to experience the joy of it.

Will "AI Enhancement" be a double-edged sword?

However, as AI gains greater authority over our photos, it has also stirred up controversy.

As for why the true color of the sandstorm can't be photographed, here is my personal observation. The AI adjusts the image according to the algorithms built into the software: when it recognizes sky, it invokes its stock "blue sky" routine, which, simply put, makes the sky bluer and the air look clearer. And since blue is the complementary color of yellow, whenever the algorithm meets an orange-yellow sandstorm, it "optimizes" it into a pale white fog.
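No vendor publishes its "blue sky" routine, but a textbook gray-world white balance reproduces the effect described above: it assumes the scene should average to neutral gray, so a frame drowned in orange sand has its blue channel boosted until the cast cancels into exactly that pale white fog. A minimal sketch, purely as a stand-in for the proprietary algorithms:

```python
# Gray-world white balance: a classic stand-in for the "blue sky"
# correction described above, NOT any phone vendor's real algorithm.
import numpy as np
from PIL import Image  # pip install pillow

def gray_world(img: Image.Image) -> Image.Image:
    arr = np.array(img.convert("RGB"), dtype=np.float32)
    means = arr.reshape(-1, 3).mean(axis=0)  # per-channel averages
    gains = means.mean() / means             # gains that equalize them
    return Image.fromarray(np.clip(arr * gains, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    gray_world(Image.open("sandstorm.jpg")).save("sandstorm_neutralized.jpg")
```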

In fact, this is a fairly rudimentary intervention: it changes only hue and white balance, leaves the content of the picture untouched, and the same behavior exists on some cameras too. The truly controversial forms of AI enhancement are the earlier "moon photo incident" and the "beauty algorithms" built into all kinds of apps.

The core of the dispute: is it photographing or painting, real or fake, conjuring something out of nothing or merely icing on the cake?

▲ Picture from: Zhihu @小城

Take the moon incident as an example. To make sure the telephoto camera delivers the moon everyone loves, the AI applies "excessive force": it automatically optimizes whenever it recognizes something moon-like, even adding a few strokes of its own so that craters appear in your photo.

Shooting the moon is actually much like the "beautification" found in so many apps in recent years. Excessive beautification algorithms can turn middle-aged aunties into young girls and burly men into delicate maidens, and this kind of beauty filter has also been linked to plenty of "scams."

Both are examples of AI overreach, jarring enough to rise to the level of "deception." And yet, faced with the same scenes, I still turn the AI on and let it hand me a "moon," or a face retouched without a trace.

Faithfully restoring "what you see" is a bottom line in photography: whatever you create should start from the truth, and piling on too many elements in post gradually pushes a work outside the category of photography. Many world-class photography contests forbid excessive post-processing; what they demand is a sense of documentation.

For ordinary people, what we need is the simplest equipment and the simplest steps to get a good-looking picture. Most of those photos end up shared on social networks, and when you are sharing the "what you see" you want others to see, whether it is strictly true hardly matters.

For photographers and photography enthusiasts, perhaps, AI enhancement meddles too much with the picture: you lose overall control of the photo, and the result rarely matches the scene you set out to record.

But for ordinary people, AI enhancement on smartphones is an assistive tool that lowers the threshold of photography, letting more people take part and turning it from a niche pursuit into mass entertainment.

Whether the rapidly advancing AI camera mode is a double-edged sword depends on how we wield it. It is like the Mother Box in the DC universe: in Darkseid's hands it is a weapon of destruction, while in Cyborg's hands it becomes the key to resurrecting Superman.

And like the Justice League heroes, who understood very little about the Mother Box, we are still feeling our way around the AI camera mode.

Why can't AI deliver "what you see"?

The simple answer is that AI is not strong enough.

For blue skies, flowers, and sunsets, the AI optimizes according to specific algorithms, and whether that is icing on the cake or conjuring from nothing, the end result is the clear, gorgeous, richly colored picture we hold in our minds. But for extreme weather like a sandstorm, the AI has no standard answer to optimize toward, and so "negative optimization" appears.

When haze was rampant, major manufacturers, and even Adobe's Photoshop and Lightroom, shipped dehaze features that gave you back a blue sky in your photos. If sandstorms strike a few more times, I expect "de-sand" or "Blade Runner" filters will follow soon.
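Those dehaze features are commonly associated with ideas like the dark channel prior (He et al., CVPR 2009). Below is a simplified sketch of that classic method, assuming a float RGB image in [0, 1]; Adobe's and the phone makers' actual implementations are proprietary and considerably more refined (e.g., guided filtering of the transmission map).

```python
# Simplified dark channel prior dehazing (after He et al., CVPR 2009).
# A sketch of the idea behind "dehaze" filters, not any product's code.
import numpy as np
from scipy.ndimage import minimum_filter  # pip install scipy

def dehaze(img: np.ndarray, patch: int = 15, omega: float = 0.95,
           t_min: float = 0.1) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]. Returns a dehazed estimate."""
    # Dark channel: per-pixel minimum over channels, then a local min filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, floored so dense haze isn't over-amplified.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Recover scene radiance: J = (I - A) / t + A.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```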

Beyond being unable to optimize for specific environments, AI camera modes are still in their infancy: the algorithms resemble one another, the output styles resemble one another, and there is no way to express a distinctive style.

What's more, AI camera modes are far from smart enough: for now they cannot learn on their own, picking up each user's shooting habits and editing preferences and optimizing accordingly from that accumulated data. For example, I like a high-contrast, low-saturation, slightly underexposed look; ideally, shooting in AI camera mode would directly produce something similar. That should be a general direction for AI and computational photography going forward; in other words, producing different results from the same equipment is the advanced play.
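If such a personalized mode ever arrives, its output could boil down to something as simple as a learned preset. This sketch hard-codes the style I just described using Pillow's enhancers; the parameter values are made up, standing in for whatever the system would actually learn:

```python
# Toy "learned style preset": high contrast, low saturation, slightly
# underexposed. Values are invented for illustration, not learned.
from PIL import Image, ImageEnhance  # pip install pillow

def apply_style(img: Image.Image, contrast: float = 1.3,
                saturation: float = 0.7, exposure: float = 0.9) -> Image.Image:
    img = ImageEnhance.Contrast(img).enhance(contrast)    # raise contrast
    img = ImageEnhance.Color(img).enhance(saturation)     # mute the colors
    img = ImageEnhance.Brightness(img).enhance(exposure)  # pull exposure down
    return img

if __name__ == "__main__":
    apply_style(Image.open("shot.jpg")).save("shot_styled.jpg")
```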

Everyone's "what you see" carries a strong subjective will. The same scene, in a different mood, may call for a different picture. Maybe one day you will find that your phone's AI camera mode can guess the style you want to present; then, scrolling through Moments, perhaps no one will be passing off stills from Blade Runner as scenes of the Beijing sandstorm.
