Indian Elections 2024

SORA Makes It Hard To #KeepItReal In Indian Elections 2024

OpenAI recently unveiled Sora, a text-to-video tool that has sparked significant interest and concern among AI enthusiasts, researchers, and journalists. Its ability to generate realistic videos from text prompts has raised questions about its potential impact on journalism, democracy, and the spread of misinformation.

Key Points from the Poynter Analysis:

  • Realism and Potential for Misuse: Sora's demonstrations have showcased its ability to create highly realistic videos. However, there are noticeable flaws, such as missing limbs or unnatural movements, which, while not immediately jarring, highlight the technology's current limitations. Despite these imperfections, the tool's advancements raise concerns about its misuse for creating fake news, deepfakes, or misleading content, especially in sensitive contexts like reporting on war zones or verifying user-generated content.
  • Implications for Journalism and Media Literacy: The ease of generating believable videos complicates the verification process for journalists and increases the burden on consumers to discern the authenticity of the content they encounter. This development could exacerbate the post-truth era, where fabricated narratives gain traction more easily, and the authenticity of real videos is questioned, contributing to the "liar's dividend" phenomenon.
  • Ethical and Societal Concerns: The conversation among Poynter experts highlights the ethical dilemmas and societal implications of Sora and similar technologies. There is a fear that these tools could be used to create content that undermines public trust in media, spreads misinformation, and manipulates public opinion. The potential for AI to generate content that crowds out legitimate news and the challenge of distinguishing between real and AI-generated content are significant concerns.
  • The Need for Preparedness and Ethical Guidelines: The discussion emphasizes the importance of preparing for the challenges posed by AI in journalism. This includes developing ethical guidelines for using such technologies, investing in skills to evaluate the authenticity of content, and advocating for safeguards against misuse. News organizations are encouraged to experiment with AI tools to understand their capabilities and integrate them into their workflows responsibly.
AI-generated video content, including deepfakes, can pose several risks in the context of the upcoming Indian national elections in 2024:
  1. Misleading Voters: AI can generate realistic videos that can mislead voters by impersonating candidates or spreading false information. This could undermine the democratic process and affect election results.
  2. Deepfakes: Deepfakes are AI-generated videos that can convincingly mimic real people. They can be used to create false narratives or smear campaigns against political candidates. This could influence public opinion and sway voters.
  3. Spread of Misinformation: AI can rapidly produce and disseminate misleading content at a scale and speed not seen before. This could lead to widespread confusion and misinformation among voters.
  4. Influence on Social Media: AI-generated videos can be spread through social media platforms, reaching a large number of people in a short time. Given the popularity of social media in India, this could have a significant impact on voters’ perceptions.
  5. Threat to Democracy: The deceptive use of AI in elections can undermine trust in the democratic process. It can create doubts about the authenticity of information and the credibility of candidates.
To combat these risks, tech giants announced an initiative titled the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” at the Munich Security Conference (MSC), pledging to tackle the proliferation of harmful AI-generated content meant to deceive voters. In addition, Meta has announced a dedicated fact-checking helpline on WhatsApp for users in India, launched in partnership with the country’s Misinformation Combat Alliance (MCA), to help assess media that may have been generated by artificial intelligence.

However, it’s crucial for voters to stay informed and critically evaluate the information they receive. Media literacy and awareness of the potential misuse of AI are key to mitigating these risks.

Our #KeepItReal initiative focuses on exactly that. Through our RAC triangle offence methodology, we are battling misinformation in India: extensive Research to understand the problem, Awareness-generation campaigns to explain the problem at scale, and Capacity-building initiatives to solve the problem. If you would like to collaborate with us, give us a shout.
Copyright © 2024 Social Media Matters. All Rights Reserved.