On May 22, Chinese state media’s China Fund Report revealed that AI fraud is becoming rampant across the nation. A typical case of telecom fraud leveraging Artificial Intelligence (AI) was reported by the Baotou City police. In this instance, Mr. Guo, the legal representative of a tech company in Fuzhou City, was swindled out of 4.3 million yuan (US$608,520) within 10 minutes.
The artificial intelligence fraud incident: A high-end deception
Around 11:40 a.m. on April 20, Mr. Guo was suddenly contacted by his friend via a WeChat video call. After a brief conversation, his friend told him that he had another friend bidding on a project in another city who needed 4.3 million yuan as a bid bond. The money needed to be transferred via a business-to-business bank account, and they wanted to use Mr. Guo’s company’s account for this transaction. His friend asked Mr. Guo for his bank account number, claiming that he had already transferred the money to Mr. Guo’s account. He even sent Mr. Guo a screenshot of the bank transfer receipt via WeChat.
Trusting his friend based on the video chat, Mr. Guo didn't verify whether the money had actually been credited to his account. At 11:49 a.m., he transferred the 4.3 million yuan to the other party in two transactions. Afterward, Mr. Guo sent his friend a message on WeChat saying the matter had been taken care of. To his surprise, the friend replied with only a confused question mark.
After calling his friend who claimed to know nothing about the money transfer, Mr. Guo realized he had fallen victim to a high-end scam. The fraudster had used AI deepfake technology to impersonate his friend and commit the fraud.
The disturbing realities of AI-enabled fraud
“The entire conversation did not mention borrowing money. They just said they would first transfer the money to me, and then I would transfer it to his friend’s account. Plus, they video-called me, and I confirmed the face and voice, so I let down my guard,” said Mr. Guo. What’s even more incredible is that the scammer didn’t use a fake WeChat account to video chat with Mr. Guo, but initiated the video chat directly from his friend’s real account. This suggests that the friend’s WeChat account was also unknowingly hijacked by the scammer, another key to their successful scam.
After investigating the matter, local police found that the fraud group had used AI to impersonate both the face and voice of Mr. Guo's friend, and had hijacked the friend's WeChat account to carry out the scam. Police have intercepted 3.36 million yuan (approximately US$510,000) of the stolen funds; the remaining 930,000 yuan (approximately US$132,000) had already been transferred onward, and efforts to recover it are ongoing.
Preventive measures and the changing landscape of fraud
Cybersecurity experts remind individuals to avoid posting personal information, such as ID numbers, phone numbers, and bank card numbers, online. Artificial intelligence fraud can take several forms, including voice synthesis and AI face swapping. Fraudsters can obtain someone’s voice through harassing phone calls, short videos on social media, or by hacking into voice databases to steal various voice files (including dialects). They then fake these voices for fraud and use AI to swap faces during video calls to gain trust. Scammers also use AI technology to select victims and target specific individuals for fraud.
People need to be vigilant and verify the identity of the other party through various channels before transferring money. Additionally, they can establish a “security password” with family and friends. Even if the person appears to be familiar in the video, they still need to answer a pre-agreed password or mention a private matter known only to both parties to confirm the identity of the person in the conversation.
AI fraud and the spread of online rumors
Artificial intelligence scams in China are used not only for financial fraud, but also for spreading online rumors, causing distress to those targeted. In March 2023, a photo of a naked woman on a subway platform went viral, accompanied by numerous derogatory comments about her. However, someone found the original picture, showing the same pose on the same platform, in which the woman was fully dressed. The nude photo had been generated using AI “one-click undressing” technology. Since then, the woman in the photo has been plagued by rumors in her daily life.
In another case, an individual used artificial intelligence software to generate an article entitled “Shanghai Model Kindergarten Teacher Arrested for Prostitution” simply by inputting keywords, without any verification. The article sparked widespread discussion among netizens. The Qingpu Public Security Bureau later confirmed that the news was entirely fabricated, and the rumormongers and spreaders were criminally detained by the police.
AI technology is changing the landscape of fraud. We must be more cautious about information we encounter online, lest we become victims of fraud — or even perpetrators, by spreading false information.
Translated by Audrey Wang