In a study by MIT researchers Soroush Vosoughi, Deb Roy, and Sinan Aral, false news stories were found to spread about six times faster than true stories on Twitter. As we move through election season, the problem worsens as campaigns are pushed through social media algorithms more intensely than ever before. So if you feel overwhelmed by the volume of political campaigns and posts online, that feeling is entirely valid. Staying politically informed is crucial to understanding the candidates and their parties, but the spread of misinformation on social media often overpowers the truth, blurring the line between fact and deceit. Distinguishing false political ads is already difficult for many, and with the integration of AI into the social media world, the problem has become far more serious. It is now more important than ever for society to take the initiative and build media literacy and critical thinking skills.
Although fake news tends to spread concerningly fast on social media, research indicates that people aren't easily manipulated or brainwashed. Panelist Musa Al-Gharbi, a sociologist at Stony Brook University, shared in a discussion for "Democracy in the Digital Age" that fake news mostly "serves up content to people who want to affirm what they already believe." But fake news in 2024 is different: 2024 is the first presidential election with AI technology in play.
The depth of the issue doesn't lie with AI platforms like ChatGPT or Grammarly; it runs deeper than that, literally. Deepfake AI is a form of artificial intelligence that can be used to create convincing hoax images, audio, and videos. We've already seen several examples in the past year:
- In August, Donald Trump posted an AI-generated image appearing to show Taylor Swift endorsing him.
- In July, Elon Musk posted a video on X (formerly Twitter) that cloned Kamala Harris's voice, making her appear to say things she never said.
Many X users saw no real harm in it, believing it was just another form of political humor and memes not meant to be taken seriously. After receiving backlash, Musk pinned the post to his profile, emphasizing his point that "parody isn't a crime." But while many AI-generated images are easy to distinguish from real photographs, the more concerning side of deepfake AI lies in audio and video that imitate the exact mannerisms and tone of real people.
Earlier this week, U.S. intelligence and cybersecurity experts reported that Russian operatives created and posted AI-enhanced deepfake videos portraying vice presidential candidate Tim Walz in an "unfavorable light." One video included a deepfake of an alleged former student claiming that Walz had sexually abused him. The real former student, a man who attended a school where Walz taught, came forward to say the two had never even met, and that he was dismayed by the claims being made under his name.
Microsoft, which tracks cybersecurity threats, traced these campaigns to a Russian group called Storm-1516. The group has consistently spread scandalous claims through fake reporters, fake journalists, and fake whistleblowers to fuel political discord in the United States. Media like this makes it much harder for users to distinguish fake news from real news. The reality is that some users never go out of their way to fact-check, leaving them susceptible to false narratives and slanderous claims.
So although the spread of misinformation can't be stopped entirely, the best way to slow it is to build the skills to recognize it. The North Carolina State Board of Elections offers the following tips for combating misinformation in the media:
- Check the date to ensure it is recent
- Make sure the content matches the headline
- Consider whether arguments are supported by facts and research
- Check to see if any other news sources are reporting the information
- Check the author’s sources
- Check the site’s sponsors
To test your critical thinking skills on misinformation, here's a quick exercise from The Washington Post:
Does the following tweet include misleading information?
If you answered yes, you're correct! As The Washington Post explains, the chart is misleading because it makes GDP growth appear to take a larger jump in 2021 than it really did. The vertical axis is inconsistent: it rises in increments of one percentage point until near the top, where it abruptly switches to half-percentage-point increments, making economic growth look more dramatic during Joe Biden's first year as president.
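The distortion comes down to simple arithmetic: if the gridlines near the top of the chart represent half as much as the gridlines below them, the same numeric change is drawn roughly twice as tall. A minimal sketch of that effect (the pixel spacing and growth figures here are illustrative, not the actual chart's data):

```python
# How inconsistent axis increments exaggerate a change.
# Assume gridlines are drawn 40 pixels apart regardless of what they represent.
PIXELS_PER_GRIDLINE = 40

def drawn_height(change_in_points, points_per_gridline):
    """Vertical distance (in pixels) that a change occupies on the chart."""
    return change_in_points / points_per_gridline * PIXELS_PER_GRIDLINE

# Honest axis: every gridline represents 1 percentage point.
honest = drawn_height(1.5, points_per_gridline=1.0)      # 60 pixels tall

# Misleading axis: gridlines near the top represent only 0.5 points,
# so the same 1.5-point change spans three gridlines instead of 1.5.
misleading = drawn_height(1.5, points_per_gridline=0.5)  # 120 pixels tall

print(misleading / honest)  # prints 2.0: the change looks twice as large
```

The numbers are hypothetical, but the mechanism is exactly the one the quiz illustrates: the data never changes, only the visual space each percentage point is given.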
In an era where technologies like AI have the power to shape public opinion more than ever, a heavier responsibility falls on us to combat present and future storms of manipulation and distorted information. This election season, it's important to commit to practicing that responsibility by taking the extra steps to make sure the sources of information we use are valid and credible. AI's continuing, complex development is inevitable, but holding ourselves and others accountable for false information is the best way to protect ourselves and our democracy from further political polarization.
Sources:
- https://www.washingtonpost.com/technology/interactive/2024/election-misinformation-quiz-ai-fake-real/
- https://www.ncsbe.gov/about-elections/election-security/combating-misinformation
- https://apnews.com/article/parody-ad-ai-harris-musk-x-misleading-3a5df582f911a808d34f68b766aa3b8e
- https://www.verifythis.com/article/news/verify/tim-walz-verify/tim-walz-misconduct-while-teaching-claim-video-isnt-real-elections-2024-fact-check/536-8a0fbe5c-7de3-458a-be0e-4f3eb0606e2d
- https://www.washingtonpost.com/investigations/2024/10/21/tim-walz-matthew-metro-video/