The AI Election: How Fast Intelligence Threatens Democracy

Image prompt: "Politician campaigning for votes, large crowd" (Midjourney v6)

In 2024, the US and the UK will each hold a significant election: a Presidential election in the US and a General Election in the UK. Politicians and technologists are already concerned about AI's role in creating, targeting, and spreading misinformation and disinformation, but what can be done to keep democracy free?

If politicians are serious about preventing AI from interfering with elections, they need to start with the source of misuse, as AI could have a far more damaging impact on democracy than social media or foreign powers like Russia.

This is because AI is exceptionally good at creating believable narratives, whether true or false, and in our age of Fast Intelligence, where an answer is just a voice prompt away, we seldom take the time to check or verify convincing stories. We now regularly read stories of professionals who misused AI to produce business reports, court documents, or news articles. Either these professionals failed to check the hallucinated story the AI created or, worse, they lacked sufficient knowledge to recognise that their fast intelligence was false.

Examples include lawyers who presented court papers containing fictitious case references, academics who submitted evidence to government inquiries citing incidents that never occurred, and politicians who deliberately used deepfake technology on themselves to gain publicity.

Our desire to generate and consume fast intelligence to save time is leading to a lazy acceptance of false information.

Generative AI, which uses large models and transformers to create textual and visual narratives that mimic human creativity, is particularly good at generating convincing content. Trained on the content of the World Wide Web and optimised with specific data, GenAI is the epitome of fast intelligence. It is also the acme of manufacturing trust.

We are sceptical about an unsourced internet page, especially if we are using that information for a weighty decision. It is human nature to mistrust the unknown. Equally, if we label something "Generated by a Computer" or "Written with AI", people become more sceptical.

Yet an advert does not need to be labelled as "AI generated". Filter that same unsourced page through a GenAI transformer, make it sound convincing by adding specific phrases and facts relevant to the reader, tune the language to an individual's preferences, distribute it in a way that will attract that person's attention, and then follow it up with further, similar, convincing stories, and you have a compelling pattern with which to influence a decision. Repeat this constantly, every minute, every day, for every individual.

GenAI allows a genuinely individual and effective marketing campaign to be generated at meagre cost.

This is where fast intelligence far exceeds the excesses, corruption, and fakery of recent elections. Governments were rightly investigated when personal information and data were used to distribute political messages, targeting specific groups and demographics to influence an election. That targeting, whilst more specific than anything previously experienced, was still quite broad and required both specialist skills and significant crafting to be effective. The individuals at the heart of such scandals were richly rewarded for the uniqueness of their skill set, yet they could only influence groups, not individuals.

No longer. Fast intelligence can now deliver optimised messages targeting individuals and, with the proper access to data, deliver those messages far more effectively than previously witnessed. It can deliver those messages at greater volume, faster pace, and significantly lower cost.

Anyone with an internet connection and a willingness to experiment with GenAI can mass-distribute highly impactful information more cheaply, quickly, and effectively than ever before. This gives any politically minded individual the disruptive potential previously reserved for nation-states, prominent political parties, or global social media organisations.

This year, GenAI will generate previously unseen levels of misinformation and disinformation.

For a democracy, most fake cases will fall into the misinformation category, where information has been wrongly sourced, wrongly evidenced, or is just plain wrong. The intent may have been fair, but the facts used to support that intent were false. Misinformation is also the category people are most likely to witness during the 2024 elections, as GenAI creates misinformation simply because it is flawed.

We see regular cases of individuals trusting AI-generated material because it appears compelling and evidentially supported. A recent personal case occurred when I asked an AI to write a 250-word response to a question. The answer was 311 words, but the AI insisted it was 250. Eventually, after a long pause, the AI admitted it was 311 and that it "will be better at counting words in the future".
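The lesson from that exchange generalises: never accept a model's self-reported facts when they can be checked independently. Below is a minimal Python sketch of that habit; `model_reply`, `check_word_count`, and the figures are stand-ins mirroring the incident above, not output from or code for any real tool.

```python
# A minimal sketch: never trust a model's self-reported statistics.
# `model_reply` is a stand-in for text returned by any GenAI tool;
# the claimed word count is checked independently before the text is used.

def check_word_count(text: str, claimed: int) -> bool:
    """Return True only if the claimed word count matches reality."""
    actual = len(text.split())
    if actual != claimed:
        print(f"Model claimed {claimed} words but actually wrote {actual}.")
        return False
    return True

# Mirrors the incident above: the model insisted its 311-word answer was 250.
model_reply = " ".join(["word"] * 311)  # stand-in for the 311-word reply
check_word_count(model_reply, claimed=250)  # flags the mismatch
```

The check is trivial, which is precisely the point: the verification costs seconds, while publishing the unverified claim costs credibility.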

If we use GenAI to generate election campaign materials, then, due to GenAI's flawed nature, we will see an increase in misinformation, where false facts are used to support a political narrative. Most politicians and political parties remain broadly honest in their public engagements with the electorate, and these cases of misinformation can be resolved honestly.

Disinformation, where false facts are distributed deliberately to influence or persuade a decision, is far more worrying. Disinformation used by politicians seeking to win at all costs or foreign states intending to sway a political outcome can be highly damaging. Disinformation can immediately influence a decision, perhaps swaying a crucial seat or electoral count.

Generating disinformation with GenAI is also increasingly easy, despite the controls introduced into these tools. If you ask tools like Google Gemini or OpenAI's ChatGPT to create a disinformation campaign plan, they will initially reply, "I'm sorry, but I can't assist with that request."

However, using a few simple workarounds, a malicious actor can create a campaign and then target individuals with personalised products, and that is without resorting to building their own GenAI tool sets, which would be even more effective.

If used this way, GenAI will not just influence swing seats and states or push specific demographics to vote against their interests. The long-term damage is far more profound, as GenAI disinformation harms democracy itself. Even when discovered, disinformation erodes public trust in politicians and politics. It fosters the view that all politicians are dishonest, or the belief that all elections are rigged, not just a very few. Left unchecked, it creates a culture in which all information is disinformation and therefore nothing can be trusted, or in which only information from a specific person or group can be trusted.

GenAI disinformation damages trust in our democratic institutions.

Politicians are looking at GenAI with fear, and as a result, some are seeking to control how or when it can be used during political activities. This movement will gain little traction before the 2024 elections, but assuming a spotlight is shone on GenAI disinformation after the elections, we can expect more vigorous calls for control in 2025. Sadly, that may be too late.

In 2024, the UK Electoral Commission will be able to ask political parties how much they spent on AI-generated materials after the election, but not during it. There will be no legislation or compulsion to declare that a political message, image, or advert has been created using AI, including through the use of deepfakes.

Some voluntary Codes of Practice on Disinformation have been introduced in the EU, and the Digital Services Act forces large online platforms to prevent abuses like disinformation on their systems. The DSA also prevents the micro-targeting of minors with AI-generated campaigns, though minors are too young to vote anyway. Where campaigns are distributed in direct messages rather than in bulk, the DSA has limited controls.

More recently, the EU AI Act requires foundation model providers (like Google, Microsoft, and OpenAI) to ensure robust protection of fundamental rights, democracy, the rule of law, health, safety, and the environment. It is an extensive list, and nobody wants foundation model creators to wilfully damage these fundamental rights.

Negotiations continue in the UK and the EU on how technology companies will prevent their products from being used for illegal activities and, in the UK, for the "legal but harmful" category. This needs to be resolved quickly, yet it is unlikely to be agreed before 2025.

Yet the honest politicians negotiating and legislating for these changes are missing the key issue: AI cannot, by itself, resolve these challenges to democracy or elections. AI is a tool like any other software, hardware, device, or vehicle. A criminal who uses a car to rob a bank, or a hacker who uses a computer to defraud money, has no defence that it was the tool's fault for not stopping them from committing the crime. Any judge would give short shrift to such a defence and convict the criminal on the evidence of the crime.

Honest politicians must act on this principle before dishonest ones seize an advantage and our democracies are damaged beyond repair. We need to bring three aspects together:

  1. Using AI to support democracy. AI can enable greater access to, and awareness of, political processes and content. It can monitor trends across elections and predict results, enabling the identification of discrepancies or deliberate manipulation; a toy sketch of this discrepancy-flagging appears after this list. With proper training and development, AI can also be used to detect the use of other AI. Bodies like the Electoral Commission could use AI to build trust, visibility, and confidence in democracy.

  2. Punishing criminal activity at the source of the crime. The source of election fraud is the person committing the fraud, not the digital printer that produced fake voting slips. Crimes that damage democracy must face the harshest punishments. When discovered, a politician elected using GenAI disinformation should be removed from office. Political parties using GenAI disinformation to wrongly change opinions should be removed from ballot papers. These are stiff punishments, harsher than those the foundation model builders are facing, yet our democratic institutions demand strong protection. We have waged bloody, painful world wars to protect democracy and ensure it can flourish. Punishing corrupt politicians who abuse that democracy is a small price in comparison.

  3. Improving AI awareness. Start campaigns now to highlight how GenAI disinformation could be used to damage democracy. Punishing politicians and monitoring AI exploitation will improve elections, but hostile actors seeking to damage our institutions will care little about criminal punishments. Increasing the electorate's awareness of how AI will be misused helps reduce the damage it can cause and, hopefully, will inoculate the electorate against its worst abuses.
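To make point 1 concrete, here is the toy Python sketch of discrepancy-flagging mentioned above: compare each constituency's reported result against a forecast and flag large deviations for human review. The constituency names, figures, and the 5-point threshold are all illustrative assumptions, not real data or a recommended methodology.

```python
# A toy sketch of point 1: flag constituencies whose reported results
# deviate sharply from a forecast, so humans can investigate further.
# All names, figures, and the threshold below are illustrative only.

THRESHOLD = 5.0  # deviation in percentage points that triggers review

# (constituency, predicted vote share %, reported vote share %)
results = [
    ("Northtown", 42.0, 41.5),
    ("Southvale", 38.0, 38.9),
    ("Eastport", 51.0, 50.2),
    ("Westfield", 45.0, 58.3),  # a suspiciously large swing
]

for name, predicted, reported in results:
    swing = reported - predicted
    if abs(swing) > THRESHOLD:
        print(f"{name}: reported {reported}% vs predicted {predicted}% "
              f"({swing:+.1f} pts) -- flag for human review")
```

A real system would need far richer data and statistical care, but the principle holds: AI-assisted forecasting should surface anomalies for human scrutiny rather than decide outcomes, which is exactly the trust-building role described in point 1.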

It may sound extreme to bar candidates and remove politicians from office. It is also probable that dishonest politicians will seek to shift blame onto others to avoid punishment. Yet if we do not take this situation seriously now, democracy will not be fit enough to address these concerns later. These are topics politicians need to address, as they are best placed to resolve the issues and create energy around the required resolutions. If we allow GenAI disinformation to destroy our trust in democracy, we will never recover that lost trust.
