Deepfake Dilemmas: How to Spot and Combat Synthetic Media
Discover the extent of threat and misuse posed by deepfakes, and strategies for detecting and contending with synthetic media.
If you have seen the images FLUX AI generated in 2024, you know it's time to get serious: it has left even Midjourney behind. Far from Will Smith and his tragic spaghetti, AI is getting better at blurring the line between fact and fiction with every passing day, and deepfakes have emerged as one of the most provocative and dangerous threats to our perception of truth.
But what exactly is a deepfake?
According to the Cambridge Dictionary, a deepfake is a video or sound recording that replaces someone's face or voice with that of someone else, in a way that appears real.1 Seeing is no longer believing; neither is hearing. Yet that has not stopped people around the world from creating deepfakes, with consequences ranging from personal harm to disruption in global politics. Nor has it stopped others from believing them.
People may recall hearing the voice of US President Joe Biden telling New Hampshire residents not to vote in Tuesday's presidential primary, but to save their vote for the general election in November. In another instance, an AI rendering of Ukrainian President Volodymyr Zelenskyy appeared to order his soldiers to surrender the fight against Russia. Worse yet, many political candidates in India are seeking out deepfake creators. As India's elections got under way, Divyendra Singh Jadoun, a prominent Indian deepfake creator, said hundreds of politicians had requested his services, with more than half asking for "unethical" things.2 Candidates asked him to fabricate audio of competitors making campaign blunders, to superimpose rivals' faces onto pornographic images, or to create low-quality AI videos of the candidates themselves, kept on hand as cover in case any genuine video caused them discord and damnation.
And deepfakes get personal, too: there is an epidemic of nude deepfakes of girls as young as 14 being shared in schools. A 2019 report by Sensity, a company that detects and monitors deepfakes, found that 96% of deepfakes were non-consensual sexual deepfakes, and that 99% of those featured women.3 In 2023, Twitch streamer Brandon Ewing admitted to buying and watching deepfake porn of his female colleagues. And when nude deepfakes of global phenomenon Taylor Swift went viral on social media, her fans reported them en masse, finally bringing the issue proper attention and propelling lawmakers into tackling it seriously.
It's particularly alarming that deepfake porn can be 'made to order', featuring anyone you like. "A creator offered on Discord to make a five-minute deepfake of a 'personal girl,' meaning anyone with fewer than 2 million Instagram followers, for $65," NBC reports.4
The gravity of the situation is even more distressing when one considers how many pictures of ourselves are publicly available: profile photos on work websites, pictures taken at parties, and selfies posted with friends can all be fed into deepfakes. With this technology, even an ordinary public photo can become raw material.
Whether strict regulations are implemented or not, in this rapidly evolving age where AI improves by the day, it is crucial to be able to tell deception from reality. Deepfake fraud has surged worldwide, with notable regional differences: growth of 1740% in North America, 1530% in Asia-Pacific, 780% in Europe, 450% in the Middle East and Africa, and 410% in Latin America.5 Stronger regulatory laws on AI generation and sharing are needed, along with greater stress on AI identification: only 38% of students have received guidance from their schools on how to identify AI-generated images, text, or videos, even though 71% of students say they want such training.6 And the more frequently we encounter fake content, the more likely we are to recall it as real. "Through repetition, content becomes ingrained in people's heads," says Steven Sloman, a cognitive psychologist at Brown University.7
To be able to differentiate, we first need to know how deepfakes are formed.
How are deepfakes generated?
Deepfakes are a form of AI-generated content, created using advanced deep learning techniques.
Any machine learning or AI model requires extensive training. To create a deepfake, a large dataset of images, videos, and/or sound of the target individual is collected. The more data used in training, the more convincing the deepfake will be.
The core of deepfake technology involves training a deep learning model, often a Generative Adversarial Network, GAN for short. A GAN consists of two neural networks: the generator and the discriminator. These neural networks work against each other as follows:
● The generator tries to create realistic media of the target person, using the data that is fed to it.
● The discriminator evaluates this generated media and distinguishes between real and fake content.
● Over time, the generator improves, producing increasingly realistic deepfakes as it learns from the feedback provided by the discriminator.
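The adversarial loop above can be sketched in a toy setting. The example below is an illustrative sketch only, not real deepfake code: a one-parameter "generator" (all variable names and numbers are invented here) learns to mimic a simple data distribution by descending the classic GAN losses against a tiny logistic "discriminator".

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 4.0   # stand-in for the "target person's" data distribution
theta = 0.0       # generator parameter: shifts random noise toward the data
a, b = 1.0, 0.0   # discriminator parameters: D(x) = sigmoid(a*x + b)
lr = 0.02

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

history = []
for step in range(5000):
    x_real = REAL_MEAN + rng.normal()   # a genuine sample
    x_fake = theta + rng.normal()       # the generator's forgery

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # i.e. gradient descent on -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    b -= lr * (-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (fool the discriminator),
    # i.e. gradient descent on -log D(fake) with respect to theta.
    d_fake = sigmoid(a * x_fake + b)
    theta -= lr * (-(1 - d_fake) * a)

    history.append(theta)

# Average the last few hundred steps to smooth out training noise
estimate = float(np.mean(history[-500:]))
print(f"generator learned mean ~ {estimate:.2f} (real data mean {REAL_MEAN})")
```

The discriminator's feedback drags the generator toward the real data distribution, mirroring on a tiny scale how a GAN generator learns to produce media the discriminator can no longer tell apart from real footage.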
The trained model can then be used to perform a 'face swap', in which it takes a source video and replaces the face with that of the target individual, adjusting the target's face to match the movements, expressions, and angles of the source. The model can also perform a 'synthesis', generating entirely new content: images or videos of the target person saying or doing things they never actually did.
After generating the deepfake, additional editing and refinements are applied to make the deepfake appear more convincing.
The final output is a video or image that appears to show the target individual, but is actually a synthesised version created by the AI model.
To further improve AI, AI-generated media is being repeatedly used to train new models. With each iteration, the content produced by AI becomes increasingly realistic. This increasingly realistic data will again be used to train AI, creating even more realistic data, which will then be fed into AI…
This cycle of using ever-more convincing synthetic data to train AI models leads to the creation of even more lifelike content. Ultimately, the differences between real and synthetic data will become nearly impossible to discern.
How can we detect deepfakes?
Deepfakes can be detected in two ways: manually, or using AI technology.
How to detect deepfakes manually
While no single detection method is perfect, using multiple manual techniques together can often help determine if a multimedia file is likely a deepfake.
Text is the big giveaway
Focus on small text, such as the print on a lanyard or text in the background. Patterns and textures can also look strange on close inspection, and elements may be out of proportion.
Facial and body movement
For images and video files, deepfakes can still often be identified by closely examining the participants' facial expressions and body movements.
Lip-sync detection
When a video is matched with altered audio, the spoken words and lip movements are often out of sync. Pay close attention to the lips, which may reveal these discrepancies.
Irregular reflections or shadowing
Deepfake algorithms often do a poor job of recreating shadows and reflections. Look closely at the reflections or shadows on surrounding surfaces, in the backgrounds, or even within the participants' eyes.
Pupil dilation
In most cases, AI does not alter the diameter of pupils, leading to eyes that appear off. If subjects' pupils aren't dilating naturally, that is a sign the video may be a deepfake.
Metadata Analysis
Check the metadata of digital media files, such as timestamps, camera settings, and software used, for any signs of tampering.
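As a concrete illustration, the stdlib-only sketch below (function and constant names are invented here) walks the chunk list of a PNG file. Metadata lives in chunks such as tEXt/iTXt, which often name the editing software, and tIME, the last-modification time, so an unexpected entry there is a hint the file was altered. A real investigation would also reach for dedicated tools such as exiftool.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def list_png_chunks(data: bytes):
    """Return (chunk_type, length) pairs for a PNG byte stream.

    Metadata chunks like tEXt (key-value pairs such as 'Software: <editor>')
    and tIME (last modification time) appear in this list; an unexpected
    editor entry or timestamp can be a sign of tampering.
    """
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    chunks, pos = [], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        chunks.append((ctype, length))
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return chunks
```

Running this over a suspicious download and finding, say, a tEXt chunk naming an image editor is not proof of a deepfake, but it is exactly the kind of inconsistency metadata analysis looks for.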
Audio Artefacts
Listen for discrepancies in audio quality, such as unnatural voice modulation or mismatched audio with visual cues.
Source Verification
Cross-check the origin of the media with reliable and reputable sources, and verify whether the content has been published or validated by trusted organisations.
Reverse Image, Video, and Audio Search
Use reverse search tools like Google Images or TinEye to trace the origins of images and videos, identifying any altered or fake versions. For audio, consider tools like Shazam or Google Assistant's "What's this song?" feature to identify and trace audio clips to their original source, helping to detect any potential manipulations.
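Reverse-search engines match perceptual fingerprints rather than exact bytes. The sketch below is an illustrative "average hash" (all names are invented here): it reduces a grayscale image to a 64-bit signature that survives small edits, so near-duplicates of an original can still be matched.

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Reduce a grayscale image to size*size bits that survive small edits."""
    h, w = gray.shape
    bh, bw = h // size, w // size
    # Block-average down to a size x size thumbnail, then threshold at the mean
    small = gray[: bh * size, : bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Count differing bits; a small distance suggests the same source image."""
    return int(np.count_nonzero(h1 != h2))
```

A brightness or contrast change leaves the hash untouched, while an unrelated image differs in roughly half its bits. Real services such as TinEye use far more robust fingerprints, but the matching principle is the same.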
Fact-Checking Services
Consult fact-checking websites and services to verify the authenticity of suspicious media.
Stay Informed
Keep up with the latest developments in deepfake technology and detection methods through industry reports, research papers, and news updates to ensure you know the advancement level of technology, and to keep your detection tools up-to-date.
How to detect deepfakes using AI-detection tools
AI can also be used to detect AI-generated deepfakes, so even as deepfakes evolve, so too will AI-powered deepfake detection technologies.
Use AI Detection Software such as:
● Sensity AI: An AI-powered deepfake detection platform for videos, images, and audio, with a multilayer approach to reducing the risks and consequences of AI-powered cyber threats.
● Deepware Scanner: Provides tools to identify deepfakes by examining subtle visual and audio anomalies.
● Microsoft Video Authenticator: Analyses media for signs of tampering and provides a confidence score.
Cross-Check Results
To ensure accuracy, use multiple AI detection tools to analyse the same media. This can help confirm the findings and reduce the risk of false positives or negatives.
How can we combat deepfakes?
1. Advanced Detection Technologies
● AI Detection Tools: Continuously develop and improve AI-based detection tools that can identify deepfakes, so that identification technology keeps pace with generation technology.
● Blockchain for Authentication: Implement blockchain technology to track and verify the origin of digital content, ensuring that any alterations are recorded and traceable.
● Digital Watermarking: Use digital watermarks or metadata that can verify the authenticity of media. These watermarks should be embedded in the content in such a way that they cannot easily be removed with photo editors, and can be used to check for signs of tampering.
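To make the watermarking idea concrete, here is a deliberately naive sketch of the fragile least-significant-bit kind (function names invented here). Production schemes embed the mark redundantly, often in the frequency domain, precisely so that, unlike this toy, it survives re-encoding and editing.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    flat = pixels.flatten().astype(np.uint8)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSB
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out; returns an array of 0s and 1s."""
    return pixels.flatten()[:n_bits] & 1
```

Because only the lowest bit of each pixel changes, the image looks identical to the eye, yet the hidden bits can later be extracted and checked against the expected mark; any edit that rewrites pixels destroys them, which is why robust schemes go far beyond this.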
2. Public Awareness and Education
● Media Literacy Programs: Educate the public on how to identify deepfakes and their potential harms. By improving media literacy, individuals become better equipped to verify and validate the authenticity of any content they encounter.
● Awareness Campaigns: Run campaigns to inform the public about the dangers of deepfakes. Highlighting real-world examples can help people understand the potential impact. These campaigns should be targeted at all age groups.
● Educational Resources by Tech Companies: All companies involved in the advancement of generative AI should publish resources educating the public on how their technology works, its benefits and misuses, and how to detect the content it produces.
3. Regulation and Legal Measures
● Legislation Against Malicious Use: Implement laws and regulations that penalise the malicious creation and distribution of deepfakes, especially when they are used for fraud, harassment, or misinformation.
● Platform Responsibility: Encourage social media platforms and content-sharing sites to detect and label deepfakes, as well as to report any harmful ones. Platforms can also implement stricter policies against the spread of manipulated media.
4. Collaboration and Research
● Industry Collaboration: Encourage collaboration between tech companies, academic institutions, and governments to stay ahead of deepfake technology and to create more effective detection and prevention methods.
● Open Research Initiatives: Support open-source projects and research initiatives focused on deepfake detection and prevention, allowing a broader community to contribute to combating the issue. Hold monthly hackathons aimed at developing AI-detection programs.
5. Ethical AI Development
● Ethical Standards: AI developers should be aware of the potential misuse of their technologies and work to implement safeguards and ethical standards.
● Transparency in AI: Encourage transparency in AI model training processes and data usage to ensure that AI technologies are not contributing to the escalation of deepfakes.
6. Critical Response Strategies
● Fact-Checking and Verification: Strengthen fact-checking organisations and promote the use of verification tools for content, especially in news and media outlets.
● Rapid Response to Deepfakes: Develop protocols for quickly addressing and debunking deepfakes when they are identified, especially in situations where they could cause significant harm, such as during elections.
References
1 ‘deepfake, n.’ (Cambridge Dictionary, online) <dictionary.cambridge.org/dictionary/english/deepfake> accessed 13 August 2024.
2 Pranshu Verma and Cat Zakrzewski, ‘AI deepfakes threaten to upend global elections. No one can stop them.’ (The Washington Post, 23 April 2024) <www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/> accessed 13 August 2024.
3 Arwa Mahdawi, ‘Nonconsensual deepfake porn is an emergency that is ruining lives’ (The Guardian, 1 April 2023) <www.theguardian.com/commentisfree/2023/apr/01/ai-deepfake-porn-fake-images> accessed 13 August 2024.
4 Arwa Mahdawi, ‘Nonconsensual deepfake porn is an emergency that is ruining lives’ (The Guardian, 1 April 2023) <www.theguardian.com/commentisfree/2023/apr/01/ai-deepfake-porn-fake-images> accessed 13 August 2024.
5 Tony Petrov, Andrew Novoselsky, Pavel Goldman Kalaydin and Vyacheslav Zholudev, ‘Sumsub Expert Roundtable: The Top KYC Trends Coming in 2024’ (Sumsub, 27 December 2023) <sumsub.com/blog/sumsub-experts-top-kyc-trends-2024/#ai-generated-fraud-and-deepfakes-to-grow> accessed 13 August 2024.
6 Maddy Dwyer, Kate Ruane and Aliya Bhatia, ‘Just Released Research: Student Demands for Better Guidance Outpace School Supports to Spot Deepfakes’ (Center for Democracy & Technology, 20 March 2024) <cdt.org/insights/just-released-research-student-demands-for-better-guidance-outpace-school-supports-to-spot-deepfakes/> accessed 12 August 2024.
7 Sara Reardon, ‘How to spot a deepfake—and prevent it from causing political chaos’ (Science, 29 January 2024) <www.science.org/content/article/how-spot-deepfake-and-prevent-it-causing-political-chaos> accessed 12 August 2024.