AI face-swapping cons a multinational company out of nearly 200 million — beware of these new scams

A new year brings a fresh start. Besides courting the God of Wealth, it is just as important to guard your wallet.

The rollout of any new technology often doubles as an evolutionary history of porn and fraud.

While the holiday lasts, teach your elders to watch out for the new AI scams, so that the "loving family" group chat can achieve "common prosperity."

A bumper crop of AI bloggers, even more deceptive than "fake Jin Dong"

Journey to the West says that humans beget humans and demons beget demons. AI, born of code, has no parents, yet it masquerades as flesh and blood so that you will mistake it for a human being.

Recently, many "Russian beauties" have appeared on WeChat video accounts. All of them are fair-skinned and pretty and speak fluent Mandarin, though their phrasing and pronunciation are slightly off, with odd pauses and nasal sounds. Then again, given their foreign status, an accent would only be natural.

They are enthusiastic and love Chinese culture. They tag their videos with labels such as "Russian Girls in China" and "Sino-Russian Friendship," and bring viewers "hometown specialties": beef tendon sausage, big pork ribs, goat milk powder, pickled cucumbers, chocolate, and handmade soap.

They have brows that seem to frown yet do not, and eyes that seem to smile yet do not. Besides selling goods, they share opinions: love knows no borders, and plain living is true happiness. They marvel at the prosperity here and want to marry in, bringing their parents and sisters over to build a life too.

They might be long-lost sisters: Yelena and Elena look exactly alike, and both their IP addresses trace to Shandong. Gathered, they are a ball of fire; scattered, a sky full of stars, spread across the country. Lina is in Shanghai, Nina in Anhui, Irene in Hebei, Katya in Liaoning, and Alyssa in Fujian.

These "Russian beauties" are all AI-generated, and the platform thoughtfully labels them as such. You can tell from the exaggeratedly blurred backgrounds, the hair that never falls out of place, the same few recycled expressions and gestures, and lines that try a little too hard to sound down-to-earth.

But the middle-aged and elderly viewers in the comments cannot tell, and sincerely urge them to stay in China. When replying, the "Russian beauties" care only about their mission, quietly completing a soft scam.

Being seduced by beauty is human nature, regardless of age. Xiaohongshu's AI handsome-guy bloggers use trendy, internet-native personas that slay young and old alike: "elite man in a suit," "sporty student in white socks," "pure-desire goddess"… At first glance you would simply take them for real people with heavy photo retouching.

For now, checking the hands remains the most reliable way to spot AI: faces are relatively flat and easy for models to learn, while hands have a three-dimensional structure that is far harder to get right. Extra fingers go without saying, and so do twisted knuckles; an account that carefully avoids showing hands is itself suspicious. Details such as muscles, limbs, and clothing may likewise show obvious flaws.

The remaining identification methods grow ever more metaphysical and intuitive: a greasy art style, vapid captions, no whiff of everyday life, unnatural lighting and shadows, lifeless eyes, a face too delicate and perfect, only static images and never a video. One picture alone is hard to judge, but look at a few more and you will find every pose wears the same expression…
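The tells listed above are heuristics, not hard rules, but they can be sketched as a toy checklist scorer. This is purely illustrative: the sign names and weights are invented for the example, and each flag is a human judgment ticked off by eye, not real image analysis.

```python
# Toy checklist scorer for the AI-image "tells" described above.
# The signs and weights are illustrative assumptions, not a real detector.

SUSPICIOUS_SIGNS = {
    "extra_or_twisted_fingers": 3,        # hands are the classic giveaway
    "hands_always_hidden": 2,
    "identical_expression_in_every_pose": 2,
    "overly_blurred_background": 1,
    "waxy_overly_perfect_face": 1,
    "static_images_only_no_video": 1,
    "unnatural_lighting": 1,
}

def suspicion_score(observed: set) -> int:
    """Sum the weights of the signs an observer ticked off."""
    return sum(w for sign, w in SUSPICIOUS_SIGNS.items() if sign in observed)

def verdict(observed: set, threshold: int = 3) -> str:
    """Cross the threshold and the profile is flagged; below it, keep looking."""
    return "likely AI-generated" if suspicion_score(observed) >= threshold else "inconclusive"

# Example: warped hands plus the same expression in every pose
print(verdict({"extra_or_twisted_fingers", "identical_expression_in_every_pose"}))
```

A single weak sign (say, a blurred background) stays "inconclusive," which matches the article's point that one picture alone is hard to judge.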

Xiaohongshu AI blogger @cyberAngel_ is a software engineer by profession; AI painting is a hobby done for love rather than money. He makes no secret of using AI: his post titles state "AI painting," and his profile reads "Just an emotionless robot."

▲ Image from: Xiaohongshu @cyberAngel_

At the same time, @cyberAngel_ believes future AI paintings will become ever harder to tell from real ones. AI generation and AI detection co-evolve: creators know better than anyone where AI falls short of real people, so they work hard to make exactly those parts look "real." An AI-generated beauty looks too perfect? Blur her slightly and add some "film grain."

Even though AI beauty bloggers are not yet fully convincing, scenes like the "Russian beauties" are already playing out. Despite the platform's clear label "suspected AI-generated content," people still shower the comments with praise and even ask for links to the outfits. It seems the question of real versus fake inevitably yields to "beauty is justice."

Once the follower count grows, AI beauty bloggers can monetize in many ways. Some rely on themselves, converting traffic into advertising; others sell knowledge, courses, and custom models. As @cyberAngel_ puts it, "New media has new media's ways of monetizing, and technology has technology's."

▲Real-life model.

The AI "Russian beauties" and "good-looking bloggers" somewhat resemble "fake Jin Dong," who previously scammed middle-aged and elderly women out of affection and money, but they are scarier and leave more to the imagination.

"Fake Jin Dong" involved no real technology: the celebrity videos and photos were downloaded or bought in bulk, and the affectionate voice was forged with voice-changing software, so crude it was "obviously fake at a glance." AI operates at a higher level, and at times looks genuinely real; it is not so easy to tell apart.

It can only be said that when we see handsome guys and pretty girls online, it is best neither to assume they are real people nor to get too invested. Influencer agencies would rather not pay real KOLs, and audiences burn through batches of KOLs ever faster. Good looks at low cost fits the internet's laws of propagation perfectly.

AI imitates your face, and strikes fear into your heart

Besides casting wide nets with "good-looking bloggers" and "Russian beauties," AI can also strike with precision.

In other words, AI can weave a gentle honey trap, and it can just as well play the PUA master.

A common type of AI fraud: the scammer spoofs the caller ID and an acquaintance's voice, then tells the middle-aged or elderly person on the line that their child or grandchild is in trouble — either they have done something that money can fix, or they are in danger and must be ransomed.

Several such phone scams have been reported abroad, with similar plots: kidnappings, injuries, drunk-driving rear-end collisions… Similar cases have occurred in China too, especially ones exploiting the time-zone gap to fabricate the kidnapping of a student studying abroad and defraud the parents.

The routine itself is not new and has circulated for years. But technological progress has lowered the bar for compute and sample data, made the effect more convincing, and made the scam easier to pull off.

ElevenLabs, the leader in AI voice cloning, needs only one dollar and one minute of high-quality audio to let you instantly "speak" 29 languages in multiple tones, while retaining your own accent, intonation, and rhythm.

ElevenLabs now repeatedly assures users that you may only clone your own voice, with a verification step to prove the voice is yours, but that was a remedial measure. When ElevenLabs first launched its beta in 2023, cloned voices of celebrities such as Taylor Swift were already everywhere. Even Taylor Swift does not get to decide what AI Taylor Swift says.

Still, even if the other side can fake caller ID and voice, a clear-headed target who hangs up, re-dials the number by hand, and contacts the relative directly will usually expose the lie on its own.

Compared with voice calls, video calls may leave middle-aged and elderly people who hold that "seeing is believing" questioning life itself.

The age-old joke that you never know whether it is a human or a dog on the other side of the screen will never go out of style, and technically it is not hard to realize. A common operation is to chat with the other party through virtual-camera software and a real-time AI face swap.

A Shenzhen technology company that provides technical support to public security agencies told CCTV that for real-time face swapping in a video chat, whether the source is an avatar or a photo from someone's WeChat Moments, feature extraction takes only about 30 seconds once the image is uploaded; the AI then builds a model, and when modeling is complete, it converts in real time.

You might still be wary of a one-on-one video call, but what about a one-against-many "professional team"?

Recently, the Hong Kong branch of a multinational company was defrauded of US$25 million via AI. The victim, a finance employee, received an email from the "CFO" at the UK headquarters inviting him to a video conference about a "confidential transaction." On the call were not only the "CFO" but also several familiar "colleagues."

Those colleagues were never actually present. The scammers had downloaded publicly available video, used Deepfake to forge the real people's faces and voices, and played them into the video conference. To avoid exposure, the "CFO" issued orders one-way only; the "colleagues" never interacted with the victim, the call ended quickly, and the scammers kept contact afterward through email and other channels.

▲ Police demonstrate how Deepfake can be used to fake a multi-person video conference.

Although the case awaits the investigation's findings, some netizens suspect it was simply an inside job. How well multinationals manage such risks is anyone's guess, but one thing is certain: AI face swapping extends to many scenarios, from the earliest face swaps in porn to fake celebrities hawking goods in livestream rooms.

Fortunately, AI is not yet perfect, and the identification methods remain simple and practical.

In an AI face-swapped video, converting the raw camera feed through several layers consumes significant compute, so the audio and video often lag. You can also ask the other person to perform actions — open their mouth, turn their head sharply, wave a hand back and forth in front of their face — and under a face swap, the "face" will likely warp and glitch.

Beyond poking at technical bugs, one trick works every time: ask private questions that you know, they know, and no one else knows; or deliberately make something up and watch how the other party responds.
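The two tactics above — unpredictable liveness actions and shared-secret questions — amount to a simple challenge-response check. A minimal sketch follows; the challenge list and the example question are invented purely for illustration:

```python
import random
import secrets

# Actions that are cheap for a human but hard for a real-time face swap
# to render without warping (the tells described in the article).
LIVENESS_CHALLENGES = [
    "open your mouth wide",
    "turn your head fully to the side",
    "wave a hand back and forth in front of your face",
    "press a finger against your cheek",
]

def pick_challenges(n: int = 2) -> list:
    """Pick n distinct random actions so responses cannot be pre-recorded."""
    return random.sample(LIVENESS_CHALLENGES, n)

def verify_shared_secret(answer: str, expected: str) -> bool:
    """Compare a private-question answer (case/whitespace-insensitive),
    using a constant-time comparison out of habit."""
    return secrets.compare_digest(answer.strip().lower(), expected.strip().lower())

# Issue two random challenges to the person on the video call
challenges = pick_challenges()
assert len(set(challenges)) == 2

# A question only the real relative could answer (values are made up)
print(verify_shared_secret("  The Blue Lantern ", "the blue lantern"))  # True
```

The point is not the code but the protocol: randomness defeats pre-recorded footage, and a shared secret defeats a cloned face and voice that know nothing of your private history.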

Of course, exploiting technical shortcomings only buys time. We may despise the threat strategically, but we must take it seriously tactically, and never assume we cannot be fooled — perhaps the high-end play simply has not arrived yet.

AI scams are not new, but a new digital divide has emerged

2023 has been called the first year of generative AI. But technology spreads with a lag; what most ordinary people actually notice is how technology that seems old news, yet has only recently become widespread, reshapes daily life.

Everyone understands that technology is a double-edged sword. ElevenLabs lets people with speech impairments speak; HeyGen lets Taylor Swift lip-sync Mandarin without a translator; Miaoya Camera produces polished ID photos without leaving home. The bright and dark sides of technology are neither right nor wrong; they simply exist.

But for many middle-aged and elderly people, smartphones and the internet were already baffling, and now AI is stirring up trouble again.

In the AI era, text, audio, images, and video can all be faked, even in combination. Scammers' forged identities are more concrete, and their frauds more targeted and lifelike, making middle-aged and elderly people all the more likely to be deceived.

Fighting magic with magic — attacking and defending technology with technology — is a cat-and-mouse game that middle-aged and elderly people may not digest quickly. We might instead start from the nature of fraud itself to reduce their chances of being fooled.

However technology develops, many fraud formulas stay the same: steal private information; spin stories out of fear, greed, and emotional need; impersonate acquaintances or dress yourself up to win trust; with money as the ultimate goal.

After the AI image of the "fashionable Pope" went viral, an X (formerly Twitter) influencer with nearly 13 million followers lamented: "I thought the Pope's puffer jacket was real and didn't think twice. There's no way I'll survive the future of technology."

For middle-aged and elderly people, it is harder to adapt to the new rules — that seeing is not necessarily believing, and a picture is not necessarily the truth. We can show them some simple, traditional, but still effective methods.

One category is defensive caution: do not readily trust online content, do not answer harassing calls, do not click unfamiliar links, and try not to over-expose biometric information such as your face, voice, or fingerprints on the internet.

Especially when you receive a suspicious call or text, do not take anyone's word at face value. Whoever the other party claims to be, once money is involved, verify their identity through multiple channels. If they say something has happened to a family member, hang up and call that person back directly to confirm.

The other category is proactive preparation. Middle-aged and elderly people have their own preferred media, so family members can forward them anti-fraud content from government departments' WeChat official accounts and similar sources. Where conditions allow, you can also show them how to use AI tools such as ChatGPT.

Of course, for elderly people who do not understand smartphones, it is best to solve the problem at the source: avoid binding bank cards to WeChat or Alipay, and keep only a small amount of change in the balance.

A successful AI fraud is only the final outcome; preventing AI fraud can start at any moment.

Technology should serve everyone: lead the way, light beacons in the digital world for those who follow, and chart the hidden reefs beneath the water. Only then can we move together toward a future where technology is not feared but used rationally.

It is as sharp as autumn frost, and can ward off evil disasters. Work email: [email protected]
