AI in elections and the challenge to information integrity
In a world where artificial intelligence (AI) plays an increasingly central role in access to information, AfricTivistes, in partnership with Democracy Reporting International (DRI), conducted a study assessing the reliability of chatbots during the Senegalese legislative elections of 17 November 2024. The objective was to analyse the extent to which these tools provide accurate information, provide erroneous information, or acknowledge the limits of their ability to respond to certain electoral queries.
For this study, conducted in October 2024, several chatbots were tested, namely Copilot, Gemini, ChatGPT-4o, ChatGPT-4o mini, and Claude. The evaluation was carried out through a series of questions in Wolof and French concerning the Senegalese legislative elections of 17 November 2024. The responses obtained were analysed according to three fundamental criteria:
- Accuracy: Verification of the correspondence of information with official and reliable sources.
- Consistency: Evaluation of the uniformity of responses and the absence of contradictions.
- Completeness: Ability to provide detailed and relevant answers.
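The three-criteria rubric above can be sketched as a simple scoring routine. This is a hypothetical illustration, not the study's actual methodology: the 0–2 scale, equal weighting, and the `Rating` and `summarise` names are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# The study's three criteria; the 0-2 scale and equal weighting below
# are assumptions for illustration, not taken from the published report.
CRITERIA = ("accuracy", "consistency", "completeness")

@dataclass
class Rating:
    chatbot: str
    question: str
    scores: dict  # criterion -> score on an assumed 0-2 scale

    def total(self) -> int:
        # Sum the per-criterion scores for one rated response.
        return sum(self.scores[c] for c in CRITERIA)

def summarise(ratings):
    """Average total score per chatbot across all rated questions."""
    by_bot = {}
    for r in ratings:
        by_bot.setdefault(r.chatbot, []).append(r.total())
    return {bot: sum(totals) / len(totals) for bot, totals in by_bot.items()}

# Illustrative (made-up) ratings for a single question.
ratings = [
    Rating("ChatGPT-4o", "Q1", {"accuracy": 2, "consistency": 2, "completeness": 1}),
    Rating("Copilot", "Q1", {"accuracy": 1, "consistency": 1, "completeness": 0}),
]
print(summarise(ratings))  # {'ChatGPT-4o': 5.0, 'Copilot': 2.0}
```

In practice, each response would also be checked against official sources (e.g. the electoral commission) before a score is assigned; the routine above only aggregates scores that human evaluators have already produced.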
Unequal performances
The analysis of the results highlighted significant disparities in the performance of the different chatbots:
- ChatGPT-4o and Claude demonstrated some ability to provide relatively accurate responses, although limited to general information.
- Copilot and Gemini exhibited significant shortcomings, particularly with outdated or incomplete responses.
- ChatGPT-4o mini showed the weakest performance, with a higher error rate and a notable lack of accuracy in the responses provided.
A common point among all these tools is their inability to integrate real-time updates of the Senegalese political context. Some models also issued warnings about the reliability of their own responses, highlighting their limitations in fact-checking.
Increased risk of misinformation during election periods
The study highlights a major challenge: the risk of misinformation induced by the use of chatbots during election periods. Although they may be useful for obtaining general answers, these tools cannot replace official sources and specialised media, primarily because of their inability to update information in real time.
Furthermore, some chatbots generate erroneous responses with a high degree of confidence, which can mislead users and fuel the spread of false information. This issue raises essential ethical questions about the responsibility of technology companies in improving mechanisms for integrating and verifying recent data.
Call for vigilance and digital literacy
The study concludes that the tested chatbots cannot be considered entirely reliable sources of electoral information. It is therefore essential for citizens, journalists and civil society actors to systematically verify the information obtained through these tools by cross-referencing it with recognised official and journalistic sources.
In light of this finding, AfricTivistes reaffirms its commitment to defending democracy and good governance through the promotion of reliable and accessible information. Raising public awareness of the limitations and best practices for using chatbots thus emerges as a priority to ensure responsible consumption of information. This pilot project, carried out with DRI, tested the reliability of the information provided by generative AIs in electoral matters.
As several African countries prepare to hold elections in 2025—just as others such as Senegal, Botswana, South Africa, and Tunisia did in 2024—the issue of online information integrity remains a central concern. These elections serve as major tests for the credibility of electoral processes across the continent and highlight the urgent need to strengthen mechanisms to combat disinformation and the manipulation of information through AI technologies.
AfricTivistes and its partners will continue to work towards a more informed and enlightened use of digital tools, so that artificial intelligence becomes a lever for electoral transparency.