A man from Worcester was scammed out of £600 by someone using “deepfake” technology on a dating app.
James Klair, 33, was led to believe he was speaking to a US soldier after matching with her on the dating app Tinder. Mr Klair said: “I felt an instant connection with her.”
The “relationship” soon turned sour after Mr Klair sent the fraudster hundreds of pounds for medical treatment and never heard from them again.
Mr Klair said: “After several weeks of messaging, photos and videos, they told me their brother needed money for an operation. As they were away at war, they apparently had no access to funds.
“My heart made the foolish decision over my head to send them £600, believing they would give me back the money upon their return home. I’m sure you can probably guess the rest, but I still feel ashamed to this day that I fell for this, to be honest.”
He added: “I realised afterwards that it was just all made-up rubbish after they started ignoring me – they had used clever computer tricks to fool me. I don’t use dating apps anymore.”
Deepfake scams use artificial intelligence (AI) to create hyper-realistic videos and audio to impersonate people and defraud unsuspecting victims.
The rise in cases is alarming. New data from anti-fraud platform Sumsub showed that three-quarters of UK dating app users have encountered deepfakes.
As many as 19% have personally been deceived by one, and 22% say someone close to them has been misled by AI.
Of the survey's 2,000 respondents, 79% of those who had been deceived by a deepfake said they were "confident" in their ability to spot one.
Contrary to common assumptions, younger people are more often misled by AI content: 22% of those aged 25 to 34 admitted to being misled, with the figure falling to 17% for those aged 45 to 54 and 16% for those aged 55 and above.
Pavel Goldman-Kalaydin, head of AI at Sumsub, said: "Without meaningful action, deepfakes and synthetic content generated by AI represent a threat to the users of all digital services.
“Online dating is particularly at risk – as shown by the level of ID fraud it faces – more than all other sectors, even compared to finance or online media. Malicious actors can bypass the often unsophisticated verification measures these apps have, sign up with fake information and images, and deceive people – often to scam for monetary gain or worse.”
To avoid falling victim, always verify unexpected requests through a separate, trusted channel.
Alexey Antonov, data science team lead at Kaspersky AI Technology Research Centre, said: "If you get a suspicious voice message from a friend or family member asking for money, check in with other family members or friends and try to contact them on their usual phone number.
"If a politician or celebrity suddenly contacts you, be vigilant and don't trust offers that seem too good to be true.
“In video calls, watch for unnatural movements, robotic-sounding speech or facial glitches.”
He added: “Be cautious of urgent financial demands and never share personal details unless you’re certain they are who they say they are. If in doubt, trust your instincts and take extra care.”
Three deepfake scam red flags
- Unexpected communications: If a message or call seems unusual or from an unknown number, always take a minute to verify it before acting upon the request.
- Time pressure: Scammers rely on urgency to give you less time to think. Make sure to take your time and assess the situation carefully.
- Unnatural sounds or sights: AI-generated videos and audio can often be glitchy. Watch for strange movements or a peculiar voice.