The cases of Taylor Swift, Tom Hanks and more: the dangers behind easy-to-create deepfakes


Artificial intelligence tools that are increasingly simple to use, deepfakes created in less than 45 minutes, voices and faces of stars synthetically recreated: the latest case emerged in recent days, with Taylor Swift as its involuntary protagonist. In videos circulating on social media, which according to AFP fact-checkers have collected thousands of views on Facebook, what appears to be the singer’s voice can be heard stating that “due to a problem with the packaging we cannot sell three thousand sets of Le Creuset cookware, so I’m giving them away to my loyal fans.” The links accompanying these posts, however, led to a series of questionnaires requesting personal information and payments supposedly meant to cover shipping costs: as confirmed by McAfee, it is a deepfake scam generated by artificial intelligence.

The previous Tom Hanks case

In this case, experts contacted by the New York Times highlighted how artificial intelligence made it possible to create a synthetic version of Taylor Swift’s voice, superimposed on images of the singer and of Le Creuset’s products. The company itself confirmed that it is not involved in any consumer giveaway campaign featuring Taylor Swift. As mentioned, this is not the only case in which the voice or image of a celebrity has been exploited for such purposes using artificial intelligence: in October we reported on the case of Tom Hanks, reproduced with AI in a dental insurance promotion. It was the actor himself who flagged the incident on Instagram, underlining that the image was not really him and that he had nothing to do with the promotion.

AI tools increasingly accessible

The development of artificial intelligence in recent times has made it much easier to create such content, even for people with little technical knowledge. In particular, it has become rather simple to create synthetic voices and rather difficult to identify them as such, as Professor Siwei Lyu, who directs the Media Forensic Lab at the University at Buffalo, told the New York Times: “These tools are becoming very accessible”, he said, explaining that it is possible to generate “decent quality” videos in under 45 minutes. Another case, similar to those described above, involved the American television presenter Gayle King: the journalist herself denounced in an Instagram post how her voice had been manipulated with artificial intelligence to make people believe she was promoting a product.

The possible dangers of deepfakes

The arrival of artificial intelligence tools that are increasingly simple to use, and therefore within the reach of a growing number of people, can however lead to dangers beyond deepfakes that exploit the images and voices of famous people. According to several senior representatives of law enforcement and the justice system in the United States, these tools could facilitate hacking, scams and money laundering. Manhattan U.S. Attorney Damian Williams said artificial intelligence could help non-English speakers create credible-sounding messages to scam people, Reuters reported, while Brooklyn U.S. Attorney Breon Peace said deepfake images and videos could be used to bypass the security systems banks rely on to verify their customers’ identities. And for Rob Joyce, director of cybersecurity at the NSA, easily usable artificial intelligence tools could allow people otherwise incapable of carrying out hacking operations to do so. However, Joyce also underlined that there is another side to the coin: AI developments could also help the authorities detect crimes.

The AI4TRUST project

Artificial intelligence can also be used to counter online misinformation. This is the premise of the AI4TRUST project, a European initiative involving 17 partners (including SkyTG24) from 11 countries and financed by the European Union’s Horizon Europe program: the objective is to combat online disinformation by exploiting the tools that artificial intelligence can make available to people.