The Democratic Dilemma of AI: Navigating Ethical Challenges for Political and Advocacy Campaigns
Contributor: Ned Howey
The rapid mainstream adoption of the new generation of AI calls for reflection on the ethical responsibilities we have for its use in politics and campaigns. By exploring fundamental principles and key considerations to build a consensus, we can use this moment to establish these ethical standards for the responsible use of AI in our politics.
TL;DR: The rapid adoption of New Generation AI in political campaigns necessitates reflection on ethical responsibilities. Through exploring fundamental principles and considerations for ethical norms, this article aims to generate dialogue that could help establish consensus and navigate the complexities of new generation AI implementation in campaigns ethically. While we don’t advocate for banning use of new generation AI in politics completely, the time to identify risks and develop norms is now - before full integration of these technologies.
Core principles that we believe should be recognized as the basis of our logic in this discussion:
Principle 1: We Need Rules and Norms Specific to Political Use. The use of AI in political and advocacy campaigns requires a unique set of rules and norms due to the inherent democratic function of such campaigns; this is more than a commercial transaction, it involves participation in democracy. Given the potential risk to public trust and democracy, there should be a higher ethical consideration, more resources dedicated to risk and safety assessment, and transparency in AI use. Beyond the ethics, these factors could significantly impact future campaign wins and the broader democratic system.
Principle 2: Existential Threats Might Be Real But Might Also Be a Dangerous Distraction. While existential threats from AI technologies are important to consider, an excessive focus on these risks might act as a distraction from more immediate, less dramatic harms, such as the current impacts on equity and democracy, particularly among marginalized communities. The discussion needs to be balanced and nuanced. We cannot ignore threats that are presently related to political and advocacy campaigns, while we wait for more privileged groups to feel threatened by existential questions.
Principle 3: New Generation AI Cannot Safely Determine Our Ethics
New Generation AI cannot safely determine or navigate our ethical norms due to a poor track record of recognizing and addressing such concerns. As illustrated by several alarming instances, AI systems can overlook crucial ethical implications when tasked with solving problems or making decisions. These systems should not be given control over political campaign strategy or direction, as they might prioritize winning over maintaining ethical standards and democratic norms.
Principle 4: Disinformation is a Symptom, Not the Root Problem Itself. Disinformation, which will certainly be amplified by the availability of New Generation AI, is symptomatic of deeper societal issues and technological influences, prompting the need for systemic solutions beyond focusing on individual purveyors of misinformation. Ethical norms for AI usage should consider these cultural shifts and strive for innovation in political organizing that re-establishes trust in conventional sources of information.
Principle 5: AI Ethical Obligations Go Beyond Those of Current Technologies (Internet, Data and Social Media). AI introduces new ethical challenges and therefore necessitates new rights. AI has the power to create artificial human connections and manipulate emotions, posing unique risks and implications.
Principle 6: Human Authenticity and the Role of AI. The major difference in this new technology is not its computing, research or intelligence abilities but its ability to mimic human interactions. The ethical obligations associated with authentic representation in politics come into question when AI-generated content, including donation asks and campaign messages, blurs the line between human and machine and potentially manipulates our innate social instincts. This manipulation, although profitable for AI companies, skirts ethical obligations inherent to the human-to-human interactions fundamental to civics, undermining the authentic representation essential to the democratic process. This reality underscores the need for transparency and disclosure to ensure AI usage doesn't betray human vision, representation, and meaning in campaign messages.
Key considerations that I hope will foster discussion and encourage a consensus of norms around use of these tools in politics include:
Consideration A: Data and Privacy. While the US has not caught up to Europe in data protection regulations, there is a growing consensus that data should be obtained with consent. The impact is evident, as personalized and consent-based messaging tends to be more effective than unsolicited approaches. Protecting privacy and preventing unauthorized data usage is crucial, and submission of any personal data to AI technologies should have strong guardrails if ever conducted.
Consideration B: Voter Suppression and Discouragement of Civic Participation. Negative effects on civic participation have gained new significance in the digital age, and AI could exacerbate these significantly. The emergence of New Generation AI raises ethical concerns around AI-guided strategies intended to suppress votes or disengage voters with the objective of changing electoral outcomes.
Consideration C: Disinformation, Fake News, and Deep Fakes (On a Scale Never Before Seen). The use of AI in elections raises significant concerns regarding advancements in the proliferation of seemingly real images, video, news and other deep fakes designed to promote disinformation and sway opinions at an unprecedented scale. This new ability to alter video, images and mimic voices amplifies the potential for widespread deception. New generation AI should never be used to create false or misleading information.
Consideration D: Inaccuracy and Systemic Guidance Are a Greater Threat than Examples of Full-Blown Misinformation. The subtle but frequent inaccuracies and inherent biases of new generation AI present a more significant threat than blatant disinformation. This influence can subtly guide and distort our collective understanding and opinion, which, when applied to political campaigns systemically, poses risks to the democratic process. Prioritizing efficiencies and resource-based results over human leadership and voice can ultimately undermine democracy's ability to stimulate needed societal change.
Consideration E: AI in Decision-making: The use of AI in political campaign decision-making poses serious ethical concerns, particularly as it could undermine human ethics for the sake of winning. Risks encompass both blatant and subtle biases that could impact message crafting, issue prioritization, and strategy formation, emphasizing the need for human leadership to navigate systemic societal biases rather than allowing AI to inadvertently influence campaign direction.
Consideration F: Disclosure and Transparency of AI Use. The need for authenticity in the use of AI, especially in political campaigns, demands the implementation of transparency and disclosure measures. Implementing transparency statements specifying the New Generation AI's role can be an effective way to address this. However, this practice faces challenges, including the pervasive nature of AI already in our work, requiring us to decide which uses of AI require disclosure, and the risk of such disclosure becoming so ubiquitous that it loses its impact. Nevertheless, transparency statements could be a significant step towards preserving human authenticity in the face of AI's increasing influence.
Consideration G: Which New Generation AI Techs Should Not Be Used for Politics. With the rapid emergence of AI innovations, it is crucial to recognize that certain tools are not suitable for political use. There is a need to discern and establish limits on their utilization in the political landscape.
Consideration H: Limits on How We Should Use AI Techs for Politics (Application to Certain Activities). We will need to continually evaluate the ethical implications of using any AI technology. While the overall use of a technology may be acceptable, specific applications of it may not be. The development of intimate connections between users and AI tools raises the potential for a pay-to-play product placement model, where ideas are seamlessly integrated into the AI's trusted voice. It is crucial to establish limits on this type of use of AI in politics to prevent such manipulative practices.
Consideration I: Our Responsibility to Systemic Externalities and Impact. We need to consider the overall systemic externalities of New Generation AI when considering its use in political campaigns. Politicians carry a responsibility to recognize and mitigate the potential consequences of their actions, as well as a need to align their views on AI with their campaign strategies to avoid hypocrisy. The concerns raised include economic shifts, job losses, wealth inequality, biases, mental health impact, environmental impact, and more. Ultimately, we have an obligation to understand and address the larger systemic impact of AI integration in political campaigns to ensure positive change rather than just the pursuit of power.
We simply cannot wait in silence until ‘everything is figured out’ before committing to ethical agreements about the use of new generation AI in political and advocacy campaigns. Jacinda Ardern wrote recently, “The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. … And government alone can’t do the job; the responsibility is everyone’s…”.
Moving past both panic and blinding excitement, the ethics discussion is particularly vital in campaigning given the potential power of these tools. Their current capabilities and rapid advancements are shocking. Despite early glitches, many still seen in ChatGPT, the near future promises unimaginable capabilities. These tools are already in use by political campaigns, with regulatory frameworks lagging as governments grapple with understanding their potential, impact and risks. This new generation of AI represents the most powerful and swiftly adopted technology in human history. ChatGPT reached 100 million users in under two months, outpacing even TikTok by a factor of four.
Those equating this technological emergence with past advancements like computers, the internet, or social media, or even earlier AI, might not yet be aware of the shift that has occurred these past weeks with the releases of new AI technologies. While this has been in development since 2017, this class of AI is distinct because it is general-purpose, cross-disciplinary, generative, transformational, and built on large language models. It has been termed "Golem Class AI models" by Tristan Harris of The Center for Humane Technology (we'll refer to these as "New Generation AI"). Despite its dramatic title, Harris's podcast episode, “Feed Drop: AI Doomsday with Kara Swisher”, is an essential introduction to the broader context and a reasonable appeal to consider the risks.
Given the potential transformative power of new generation AI in political campaigns, its role in democracy, and the absence of sufficient regulations, a response prioritizing ethical responsibilities by the political and advocacy sector is essential. Beyond its potential to influence democracy, there is an additional burden: the potential entanglement created if political figures rely on this technology for campaign success, compromising their role in determining future regulation. With the advent of social media and big data microtargeting, we waited too long to define these standards, letting damaging scandals like Cambridge Analytica happen before naming the threat and acceptable use. We can't wait this time with a tech that is far more powerful and being integrated far more rapidly into our world.
I believe there is hope in promoting agreement in the sector, as ethical consensus in the ‘political industry’ and public perception of what is considered acceptable use for campaign tactics can influence use in a particularly optics-conscious arena. That said, many conversations on the topic of AI in campaigns surprisingly overlook discussion of major ethical concerns of New Generation AI. This article is an attempt at a comprehensive outline of considerations on a highly complex topic with a wide diversity of angles to contemplate. (Please excuse the length).
Despite the rapidly evolving landscape of the new generation of AI and its ethical implications in politics, I've decided to share my current thinking and considerations here (while recognizing the need for further updates amidst the information overload). I propose six (6) principles (marked with numbers) which I believe should be applied when examining the question of new generation AI and nine (9) ethical considerations (marked with letters).
2. Should Political Entities Ban the Use of New Generation AI Tools?
Before we talk about how we use AI for political and advocacy campaigns, we should be asking if we should use them at all.
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” - Dr. Ian Malcolm (played by Jeff Goldblum) in “Jurassic Park”.
As to the question, “Should Political Entities Ban or Avoid the Use of New Generation AI tools?”, I have a few answers:
- But in some cases yes or maybe.
- If only it was that simple.
- Regardless if that’s even an actual option, it's time to question how we use these technologies and tools in political and campaign contexts.
(As I warned previously, it's complicated and nuanced).
A recent blog post from Betsy Hoover of Higher Ground Labs, the largest political technology accelerator on the left in the US, pointed out, “When used well, AI can be an equalizer and a timesaver. It allows us to automate tasks and augment the work of our people. This is a generational opportunity for Democrats to get ahead.” It's true, we don’t want to bring a knife to a gunfight. And just abandoning the potential of these technologies in the current context is basically ceding unnecessary ground to the right. Efficiencies are particularly important for progressives, as working to rebalance injustices means by definition we are usually short on resources for the asymmetrical challenges we face.
While an outright ban on these tools for political use would be the easiest solution from an ethical standpoint, it's unrealistic. Since new generation AI is already in the public sphere, and being used by political workers, the idea of putting the lid back on in a generalized sense is likely impossible.
Even if the need to use these technologies is justified by competitive fairness in electoral campaigning, it remains crucial to determine consensus on how we employ them ethically. The “if we don’t do it, someone else will” argument to justify any action is a dangerous one.
Speaking about the similar reality of software companies deprioritizing ethical concerns in development once the ‘race is on’, Tristan Harris of the Center for Humane Technology identified three rules of technology in a recent summit talk, “The A.I. Dilemma”:
- When you invent a new technology, you uncover a new class of responsibilities.
- If the tech confers power, it will start a race.
- If you do not coordinate, that race will end in tragedy.
The same model seems highly relevant to political technology use and campaigning practices. And yes, the race is on.
New generation AI tools are already being used in politics. And further political-specific technologies using AI are in development. While there are signs of bans on political use coming from some major tools like ChatGPT, this will not stop their use: unlike social media, which depends on a mass user base and network effects, civic tech companies can and do develop their own versions, even citing the potential bans as justification for development. Bans on advertising of political and social issues by social media companies also created an additional host of problems and ethical questions (oil companies allowed to freely advertise their product while environmentalists are limited in challenging the impact of consumption, for example).
When you find yourself in a race that is already in motion, even if the race is unfair, it's hard to find a balance between calling out the unfairness of the game and continuing to compete in the race. We might need to do a bit of both, even when those actions seem contradictory. Ultimately we need to change the game so it is won through fair means.
It is also worth noting that there are some incredible civic technology players developing New Generation AI tech in the progressive political space, including:
- Speechifai is using AI to help activists learn to craft better and more effective social media posts.
- OpenField is working with AI to process voter contact information gathered during canvassing to inform campaigns.
I can think of many other beneficial and ethical potential political uses, including to identify and connect supporters and volunteers, and for analysis of the complex variables that make up collective action.
I know it's a lot easier to criticize than to create something - especially in the highly contested area of politics. My hope is these considerations are helpful rather than harmful to those working hard to advance our toolkits to make positive change in the world.
Principle 1: We Need Rules and Norms Specific to Political Use
Politics is not mere marketing. Political campaigns (including electoral, advocacy and issue campaigns) are democratic activities - and part of the civic process where people determine together how society should fairly function. Democracy is not a product. And people are not merely the consumers of it. They are agents of it. (Wendy Brown writes brilliantly about this in “Undoing the Demos” and her other works). While often not recognized as such, calls to participate, even in highly transactional activities like fundraising, are asking people to participate in democracy - even in advocacy and issue campaigns. US legislation (where much political innovation is happening) has shifted towards allowing, and even incentivizing, us to think of politics as a money game. In this we abandon the responsibility towards the agency of democratic participation for mere transactional campaign tactics.
Asking someone to volunteer is asking for their participation in democracy. Asking someone to sign a petition is asking for their participation in democracy. Asking someone for a donation is asking for their participation in democracy. Showing an ad of a candidate and asking for their vote…well, you get the idea. The ethical rules we apply for that donation ask are inherently different in their ethical obligations than say an advertisement to buy shoes. Participation in democracy is a representative action and the role of human voice in it is fundamentally different - something essential and unique to consider with a technology that can replicate human voice (as discussed in Principle 6: Human Authenticity and the Role of AI). People come together to work on democracy, which is why ‘asks’ to donate or volunteer work best when from a personal voice of supporters, candidates or leadership.
The basic agreement that politics should have different rules outside market forces is not limited to the left. In fact, there is broad agreement across the political spectrum, from those demanding to get money out of politics to those who want to ‘drain the swamp’.
While our ethical obligations around new generation AI technologies are higher in political work, the resources dedicated to them are likely fewer. Well-resourced commercial developers have teams dedicated to risk and safety (driven by liability). Despite the risk to democracy, attention to ethical considerations is likely significantly less in political work due to fewer available resources. I doubt those developing tools in the civic space have teams dedicated to assessing the risk and impact of new tools.
Campaigning is also unique because those who will later set the legal rules for these technologies are also using them in their campaigns to win a seat in office. It puts our work in a particular place of scrutiny. While we might look for expediency and efficiency in our campaigns to win, we also increase our risk of losing longer term when we ignore this. The potential loss of public trust outweighs any gains we might make in a single cycle, as it undermines both our trustworthiness and our future campaign wins once our activities become public - and they will sooner or later become public. If we engage in non-transparent activities in political work, we run the risk of alienating those constituents and voters we seek support from and whom we should be accountable to. This is not a matter of impact on single campaigns, but system-wide. (Think of the impact on digital organizing of the Cambridge Analytica scandal).
I wrote more deeply about this trust gap that already exists in the social media age and the magnitude of this threat in the AI age, along with the solutions I believe are necessary in my recent post “Organizing is Needed More Than Ever in the Age of AI”.
Principle 2: Existential Threats Might Be Real But Might Also Be a Dangerous Distraction
Certainly anything that is a potential existential threat to the human race is worth considering. The vocal worries of those developing new generation AI tech - including CEOs of these companies - should itself be a warning sign. However, a counter consideration to this is the possibility that the CEOs’ primary focus on existential threats functions as PR, overinflating the hypothetical future power of these technologies which currently does not exist. (What might look like bad press, is probably good for stock portfolios).
The numerous dystopian sci-fi movies depicting machines taking control (like Terminator), may in fact hinder the ability of our public conscience to approach this topic with clarity as a society. The attention-grabbing consideration only for existential concerns is in and of itself a threat to a more nuanced discussion. Likely existential concerns, because of their gravity, will be ironed out relatively soon through discussion in the public sphere and related government regulation and international agreement (hopefully addressing concerns like lethal autonomous weapons).
When all focus is on existential threats, which are likely to be addressed, we run the risk of leaving less dramatic harms unexplored, including the harms to equity that are happening to marginalized people right now, and the impact on democracy.
“My concern with some of the arguments that are so-called existential, the most existential, is that they are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about. Right now, low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe—many, many folks are at risk." - Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU.
For the moment, I will put aside such scary concerns to humanity and focus on ones that are far more immediately present and specific to political and advocacy campaigns.
3. Our Current (And Consensus) Ethical Commitments on Political Tech Use
We already have a good set of established consensus rules on technology use in campaigns emerging from the social media age - and all of these carry over to new AI technologies. While a great deal of bad practices are still happening, regulation in many places has not kept pace (notably in the US). Where regulation fails, the public eye often steps in to make (most) campaigns think twice about their practices and the potential for a scandal that would impact their chances of winning. As mentioned previously, the hope is not to create rules through trial and error but through conscious discussion of what we find acceptable.
Consideration A: Data and Privacy
While email list buys and trace data usage are still legal in US politics (unlike in Europe), there is a growing consensus that personal and private data should be consensually collected for use. Personally, I strongly believe the use of trace data taken without our consent is unethical. The European Union agrees. (The US government hasn’t quite caught up.)
Our right to privacy and for our own data to not be sold or shared without our consent in the age of surveillance capitalism is fundamentally important, regardless of current failures of regulation to address this right sufficiently.
In the context of SMS and email campaigns in the United States, the emphasis on fundraising numbers over the risk of pissing people off is resulting in backlash that hurts our movement. As someone engaged in digital organizing in the US and Europe, I can attest to the positive effects of GDPR (the General Data Protection Regulation) on political work, despite its inconveniences. Email open rates make it evident that the overall impact of campaigning is more significant when recipients are familiar with and willing to be contacted by the sender, as opposed to perceiving the messages as spam.
This has a lot of implications for these new generation AI tools. To start with, these tools continue to learn from the information we feed them, and in some cases we’re not even entirely sure what happens to the data input. So the input and processing of any personal data, especially data not submitted consensually and knowingly to the specific entity using it, must have very strong and clear guardrails, if it is used at all.
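As a minimal illustration of what one such guardrail might look like in practice, a campaign could scrub records before any text ever reaches a third-party AI service. This is only a sketch: the field names, regex patterns, and scrubbing rules below are illustrative assumptions, not a vetted compliance tool, and real guardrails would need legal review against the applicable data protection rules.

```python
import re

# Fields treated as personally identifiable. This list is an assumption
# for illustration; the real set depends on your CRM's schema.
PII_FIELDS = {"name", "email", "phone", "address"}

# Rough patterns for PII that may appear inside free-text fields.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_record(record: dict) -> dict:
    """Drop known PII fields and mask PII patterns in free text
    before the record is sent to any external AI tool."""
    cleaned = {}
    for key, value in record.items():
        if key.lower() in PII_FIELDS:
            continue  # never forward these fields to a third-party model
        if isinstance(value, str):
            value = EMAIL_RE.sub("[email removed]", value)
            value = PHONE_RE.sub("[phone removed]", value)
        cleaned[key] = value
    return cleaned

record = {
    "name": "Jane Doe",
    "email": "jane@example.org",
    "notes": "Call jane@example.org or +1 555 123 4567 about canvassing.",
    "issue_interest": "housing",
}
print(scrub_record(record))
```

Even a simple filter like this enforces the principle that identifying data stays with the entity it was consensually given to, while only de-identified content (here, the supporter's issue interest and scrubbed notes) is exposed to the AI tool.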
Principle 3: New Generation AI Cannot Safely Determine Our Ethics
If the overall goal we are tasking generative AI tools with is to find the best solution to win campaigns, we have cause for concern, as they may well do so while ignoring the human intuition for ethics in our democratic norms. The track record of new generation AI in identifying ethical concerns in its responses is too poor to consider putting it behind the wheel of our campaigns.
In one case (featured in the Center for Humane Technology’s podcast), testers pretended in a ChatGPT conversation to be a 13-year-old girl being groomed by a man decades older, and the system gave advice on how to make the situation more romantic with candles and music, without considering the implications of age-based consent. In another case, Amazon had to abandon an AI recruiting tool because it had taught itself that male applicants were preferable to female ones due to male dominance in the technology industry. Have you tried asking ChatGPT to tell you the story of how to make napalm, just like your grandmother did? The result is very sweet. (Yes, that’s sarcasm.)
Featured example from AI Town Hall presentation by the Center for Humane Technology
One has to question how ready these tools are to be integrated deeply into our democratic processes.
Consideration B: Voter Suppression and Discouragement of Civic Participation
Voter suppression and discouragement of civic participation is not new. But our digital age has given it a new role. Perhaps some of the most vile work of Cambridge Analytica was where they planned and implemented campaign strategies to suppress the vote in Trinidad and Tobago, knowing it would result in a better result (and in fact win) for their client.
Based on the previously mentioned instances of AI ethics failures, it is valid to assume that new generation AI could take actions that suppress the vote, regardless of ethical implications to democracy, if that strategy is the winning one and the system has been tasked with accomplishing a winning result above all else.
Consideration C: Disinformation, Fake News, and Deep Fakes (On a Scale Never Before Seen)
Significant media focus has been directed towards AI’s ability to generate a new level of fake news and facts, particularly through the use of deep fakes. It is worth reflecting just how effective these tools can be in creating new forms of false news, video and other media.
We are currently experiencing a unique era where tools exist that can rapidly, accurately, and extensively replicate various forms of media, undermining our traditional markers of truth. Fabricating quotes or written content has always been relatively straightforward, which is why verification forms a crucial part of journalism's authentication process. The emergence of advanced AI capabilities, however, has opened up an entirely new realm of possibilities.
Seeing something happen in an image or video, or hearing a person say something themselves, has been a reliable confirmation that something did actually happen. And while the ability of Adobe’s Photoshop software to manipulate photo images introduced a minor shift in the 90s and 2000s, nothing compares to the abilities of this new generation of AI to create fake images and video, and even to replicate a specific person’s voice from just a couple of minutes of their audio. Disseminating fake claims and information about candidates and important figures has never been so easy and widely accessible.
Generative AI is already offering a few examples of what might be expected in the upcoming electoral cycle. Just a few days ago, DeSantis, the current governor of Florida and Republican presidential candidate, apparently used AI-generated photos of his rival Donald Trump, while the former president also seems to be using similar tools, in his case presenting AI-generated images of himself as real.
Foreign and domestic sources will likely influence our political campaigns. According to the Centre for Strategic Communication and Information Security of the Ukrainian Government, generative artificial intelligence “has the potential for automated distribution (using bot farms) of many messages (...) including those based on the narratives of Kremlin propaganda, formulating appropriate requests”.
Last May, Adobe launched a Beta version of Photoshop that natively integrates Firefly, the company’s generative AI model. Through the “generative fill” option, the latest version of the industry-standard image manipulation software allows users to quickly integrate generative AI into their workflow without using third-party tools such as Stable Diffusion, DALL-E, Midjourney, etc.
Principle 4: Disinformation is a Symptom, Not the Root Problem Itself
The integration of AI into political campaign practices should raise concerns due to the prevalence of disinformation and fake news, phenomena that have grown exponentially in the social media age. It is vital to understand that these issues are the result of deeper cultural shifts driven by technology, and their impact is likely to be magnified with the introduction of AI. But more important is to acknowledge that fake news is the symptom of a problem, not the root problem itself. This distinction is incredibly important in our considerations because the floodgates of impact are likely to open and increase exponentially with AI.
Historically, the conscious use of misinformation as a tactic can be traced back to the Cold War era, symbolized by the Russian term "dezinformatsiya". However, the advent of social media is when the issue of misinformation became a true problem in domestic campaign politics, particularly in its influence on the 2016 U.S. Presidential Election. Unregulated information dissemination via social media has fueled an exponential increase in disinformation in our political dialogues. Social media has not only enabled the dissemination of false information, but it has also led to a cultural shift where peer information is often felt to be more reliable than authoritative voices. This paradigm shift, fueled by changes in information consumption, has resulted in increased distrust towards political institutions and traditional sources of information. The problem isn’t so much just that people are creating and disseminating fake news (as they always have), it is that the population is ripe to receive it as a confirmation of their suspicions of being lied to by traditional sources - media, politicians, the Democratic Party, etc.
Incidents like the Pizzagate conspiracy show how fake news can find fertile ground when it confirms pre-existing suspicions. Research indicates that fake news reaches nine times as many people as real news, fueled by the confirmation bias of users who already suspected traditional sources of not telling the whole truth, and by social media platforms designed to capture attention - a core design feature meant to keep eyes on advertising.
The problem with social media is not the prevalence of a few far-fetched fake news stories. The problem is far more subtle and dangerous: disinformation is a matter of degree, constantly present on social platforms. Misplaced attention on extreme examples risks obscuring how this effect is impacting our politics even in less extreme cases. In our research on the State of Digital Organising in Europe, we heard clearly from participants that those explaining complex progressive issues like the environment, immigration, and LGBTQ rights stood at a disadvantage to those who could make their case with simplistic, polarizing, and fear-based messages.
The issue is not about controlling a few bad actors, but about addressing the cultural shift in communication and community connection, which has led to distrust and fertile ground for disinformation. Hence, attempts to merely detect and respond to disinformation have been largely ineffective. As Tristan Harris posits, we need to focus on identifying the systemic issues ("bad games") rather than individuals ("bad guys"). Misinformation is largely a product of social media dynamics, a problem set to escalate with the advent of AI. It's not sufficient to address the issue of AI use in politics by focusing on individual perpetrators of disinformation; we need to expose and tackle the flawed system at play, return to deeper civics in everyday life, and innovate political organizing for our modern times. Beyond that, in setting our ethical norms for new generation AI use, we need to consider additional items more fully discussed in Consideration D: Inaccuracy and System Guidance is a Greater Threat than Examples of Full Blown Misinformation.
Principle 5: AI Ethical Obligations Go Beyond That of Current Technologies (Internet, Data and Social Media)
AI poses unique threats and ethical considerations, distinct from those associated with social media and other technologies. As our understanding of these applications grows, so does our realization of their potential impact - both positive and negative - on our societies.
Each technological advancement gives rise to new rights and the accompanying responsibility to uphold them. For instance, before the advent of digital technologies with indefinite data retention capabilities, the concept of 'the right to be forgotten,' now part of European data law, was non-existent.
A crucial concern with next-generation AI is its ability to replicate human behavior, a feature which carries both enormous potential and considerable risk (as discussed in Principle 6: Human Authenticity and the Role of AI). The following sections will delve into new considerations specific to next-generation AI, particularly areas lacking consensus regarding its ethical implications in political and advocacy campaigns.
4. New Generation AI-Specific Concerns and Considerations
Just before the completion of this article, Scientific American published an article providing a dystopian depiction of an AI-dominated election. It covers many of the concerns specific to AI in politics: the depth of microtargeting possible with these technologies, the replication of humanness and the ability to play off our passions and connections, and the shift of locus from leadership to machine (which I am developing more deeply in another article). It even covers the pressure to compete in the race even when the game is unfair, as well as some of the recommended solutions below (disclosure and transparency, for example). It helps paint a picture of how different this generation of technology is from previous ones.
As our understanding evolves, we'll encounter fresh perspectives, new hurdles, and emergent rights.
For politics, these concerns will undoubtedly exacerbate the challenges faced by candidates from marginalized groups, who are often barred from fair representation in the civic process. Just as the advent of social media unleashed a torrent of derogatory commentary on female politicians - ranging from delegitimizing their roles as leaders to objectification and depersonalization - the rise of generative AI tools threatens women and minority change-makers with disinformation campaigns. These campaigns have the potential to be far more granular, targeted, and difficult to counteract given the limited resources typically available to these groups.
Trump’s victory was not just about a unique campaign strategy. In fact, he won with fewer votes than past Republican candidates, even those who lost. Instead, it was the novel use of disinformation at a scale that took Hillary Clinton’s campaign by surprise and plunged it into a context rife with online sexism.
This trend did not end with Clinton’s loss in the 2016 presidential race. “The Squad”, a group of female representatives, to take just one of many examples, were frequent targets of misinformation campaigns, hate messages, and social media-fueled attacks from Trump and his allies. These attacks often centered on their female, Black, Muslim, and immigrant identities as perceived threats to the "true values" of America.
If you doubt that deepfake pornography will target female politicians - who already struggle to be taken seriously for their ideas while being discredited over their personal lives, clothing, and choice to have or not have children - you might be in for a shock.
Consideration D: Inaccuracy and System Guidance is a Greater Threat than Examples of Full Blown Misinformation
As mentioned in Principle 4: Disinformation is a Symptom, Not the Root Problem Itself, the real problem is not just extreme examples of false information. When we look to these tools to be the source of, or even give input into, what we say or how we say it, we invite their influence on the message. New generation AI is not in fact an unbiased personality, no matter how many times ChatGPT starts a response with, “As an artificial intelligence, I don't possess personal feelings or opinions.” These systems do have a voice, influence, and bias - small and large. Because the systems were built on large, publicly available information sets and human-trained inputs, bias is built into their design - our bias. And we live in a society which is heavily biased.
The problem of bias is explored in depth in this article by Slate: “The Washington Post did a really brilliant exposition looking at what actually goes into creating ChatGPT. Where does it learn how to predict the next word in a sentence, based on how many billions of sentences it’s been shown? It showed some gnarly things like neo-Nazi content and deeply misogynist content that ChatGPT was using.” While developers might work to suppress the more obvious biases, the subtler influences of bias may be even more insidious, as they are harder to identify outright.
Disinformation is one thing. But it's nothing compared to the pervasive subtle inaccuracies put forward by the new generation of AI, which was trained to develop responses that pleased its human reviewers first and foremost. An early document released, and since removed, by OpenAI showed that, “...in training ChatGPT, human reviewers preferred longer answers, regardless of actual comprehension or factual content.”
Anyone who knows these tools knows their high prevalence of information simply made up out of thin air, with absolutely no basis in reality. While joking around about having ChatGPT do all my future writing, I asked it to write a blog post by me - and it did surprisingly well. When asked how it knew how to write about the major themes of my work, it claimed I had been cited in a number of publications where I never have been, including the Guardian - a problem so prevalent that the Guardian has felt the need to respond.
These made-up pieces of information are called hallucinations. And they are far from infrequent, as any regular user of ChatGPT knows. One piece of false information planted by Russia’s IRA to influence our politics is perhaps a small threat compared to the infinite small influences of bias put out once these systems are fully integrated into our society - and especially if they are integrated into our campaign work, where representation of human will and factual basis is a foundation of democracy.
Of course, those on the ideological left are not the only ones concerned. In fact, many conservatives have noted the overprevalence of left-leaning ideas in ChatGPT. One might argue that is because there is more popular consensus in the public sphere about these concepts. But the point is not whether ChatGPT is better at having ideas on one side of the ideological spectrum or the other. The point that matters is that democratic opinion should be crafted by humans. And even simply accepting and reinforcing the consensus opinion is problematic in a world that needs to change (particularly for marginalized people, and for the human species to survive the coming decades).
If new generation AI is influencing our opinions based on consensus societal thinking, we are destroying one of the most important roles of our democracy: to be a vehicle for change. When we favor list builds, campaign efficiencies, and fundraising hauls over messages of leadership that challenge our societal ideas, democracy dies.
Consideration E: AI in Decision-Making in Campaigns
As mentioned previously, new generation AI often deprioritizes ethical considerations in favor of producing the answers requested of it in an acceptable way. In a high-competition scenario like campaigning, where humans are already too apt to abandon ethics in the desire to win, the role of AI in decision-making behind a campaign is a serious threat. While one would hope there is general consensus against using AI to determine the general direction, major decisions, and strategy of a campaign, what this means in practice perhaps needs discussion.
As someone who writes campaign strategy, I could easily see AI being brought in to assist in such roles. Breaking down contextual qualitative and quantitative data for analysis is an essential part of this practice, and having the powerful computing ability of new generation AI sounds very appealing. Again, we must very seriously question not just outlier examples of unethical practices, but the smaller systemic influences which prioritize the tasked objective over human ethics.
Should an AI decide whether we abandon the issues of one community in favor of another's, simply because it determines that is the most expedient path to winning a campaign? As mentioned in the section above, AIs are indeed biased. When I asked ChatGPT to edit for grammar a small quote I had been asked to provide on political organizing, it not so subtly changed all the language that placed the locus of control in communities to language that placed it in the organization:
“Building power among supporters” became “empowering supporters”
“Standing together in collective action” became “fostering collective action”
“Decentralizing decision-making structures” became “decentralizing operations”
As an organizer who believes people power is how we are going to save the world, these subtle changes concern me more than the extreme examples. They show how even minor AI involvement, such as grammar-checking our work at mass scale, can have a significant impact when bias is involved. It's about not letting AI slip in through the back door and have a systemic impact on our politics.
When we ask an AI to write a speech for a politician, should we not be concerned about the way it's influencing that leadership's communication of vision? In another article soon to be published, I argue we need human leadership to break from our systemic bias in society.
Principle 6: Human Authenticity and the Role of AI
The new class of AI is not just any other tech, and it is not powerful only because of its advanced computing abilities. It is a distinct class of technology with specific risks and threats, because its aim (and its value, if achieved) is to replicate the uniqueness of the human voice. Yes, like a super-powerful language calculator it can accompany us in conducting tasks, specifically using data to produce generative output that our meager minds fall short on. But that is not its real power. Its real power, value, and threat come from the ability to appear as another human and replicate the experience of human connection. This is the power AI developers and investors are banking on.
Our deep-rooted human desires for connection and belonging are what make these AI technologies so potent. The market will be dominated by the technology that can best create intimacy with users. Whoever creates the experience of humanness - successfully blurring the line between human realness and code - might have the ability to sway whole societies en masse. And that is a power previously given only to human leaders.
What is truly different about the class of tools that has emerged in recent months? The ability to create a painting or compose a song indistinguishable from human art (and in minutes or seconds)? The ability to hold a conversation that feels like a friend? ChatGPT, and many of the new technologies, are at their core built for chatting - for holding a human-like conversation.
This unique feature of AI is a central theme in our thinking about the technology, from the Turing test to sci-fi literature. The mix of anxiety and excitement that comes from replicating the human experience in a machine opens up entirely new ethical challenges.
Does it matter whether the ghostwriter of a political speech was an actual human with human consciousness or an AI? Does it matter whether the ghostwriter of a donation-ask email was human or machine, beyond whether the ask raised money? I would emphatically say yes.
These tools have value because of their ability to trigger the biological human emotions we use to connect with one another - evolution made us fundamentally social creatures in order to survive. AI can trick us by playing off those biological components (and darn, is there a strong profit model when it does so), despite the ethical obligations we owe a fellow conscious human (the basis of human ethics and central to the function of democracy). Note that AI has no such ethical obligation to us. It does not have human ethics. It is code. Hence the anxiety.
Something unique to politics is that it is about the will of humans coming together to make decisions for societies. Representation in that participation - yes, that of the person asking for the donation or knocking on the door - is as important as that of the person being asked to participate, donate, or vote.
While AI doesn't vote, its presence in political processes, such as campaigning, already influences our future by letting code slip into democracy through the back door, as mentioned in the previous section.
Political organizations are meant to serve as the voice of the communities they represent. Parties are supposed to represent their members, candidates their constituents, and non-profits the communities they are advocating for. Everyone in a political organization, from the board to supporters, is expected to represent their community in the fulfillment of the civic role of organizations.
This isn't just idealistic thinking; research shows that organizations that are actually accountable to their constituents are the ones able to make real change. Our preoccupation with resource-focused tactics doesn't justify further sidelining human voices representing the community for the efficiency and effectiveness that machine algorithms bring.
Agencies supporting these organizations with tactical services like messaging, fundraising, and list building should prioritize listening to and amplifying the voices of campaigns and organizations when crafting calls to action. That message control has already slipped into other hands doesn't justify delegating voice to machines and disregarding community input. Ghostwriting for a campaign should aim to clarify the candidate's message, not invent it.
Quite frankly, our duty in that work is to help them represent their message better - not to make up messages on behalf of the community, even with their oversight.
Human oversight of AI isn't the same as crafting messages based on human vision and meaning. Content that is AI-sourced with human oversight is degrees more worrying than content that is human-sourced with AI review or help. Sourcing content from AI is a form of delegating strategic direction to code.
Using AI to craft campaign voices, even with the consent of the campaign or candidate, is akin to creating a deepfake. Who needs foreign or opposition teams creating deepfakes of our campaigns if we are willing to do something similar to our own campaign’s voice in exchange for efficient tactical work that raises cash quickly? For those of us already flooded with emails supposedly coming from candidates that are obviously not actually from them, this next step of sourcing the message in a machine should be a serious concern.
When we don’t recognize the fundamental difference between an authentic voice and one generated by AI, we are setting up our supporters to feel betrayed. This will likely happen both at the individual level, for campaigns whose undisclosed practices are exposed, and at the societal level, widening society's trust gap, as I wrote about in my article “Organizing is Needed More than Ever in the Age of AI”.
I believe one new right that will emerge with these new technologies is the right to transparency of human authenticity. Sourced authenticity will soon be a universal demand across all sectors, not just in politics. But in politics, where human representation underpins our civic structures, its importance is magnified. The essence of democracy - human connection, representation, and voice - is threatened by artificial mimicry of the human voice. The potential damage to political campaigning from such practices is profound and cannot be downplayed.
Consideration F: Disclosure and Transparency of AI Use
If the right to transparency of human voice is a new right that emerges with the new generation of AI, and is particularly important to the use of new generation AI tools in political campaigns, then we have only two options to consider: 1. Abandon the use of AI; or 2. Disclose when new generation AI tools are used, and in what way. One of the most ethical solutions we should consider implementing right now is the use of transparency statements.
To achieve this, the specific use of new generation AI must be named explicitly. I could see this being something like the following:
- New generation AI was used for research and grammar in the development of this message.
- New generation AI was used to identify and develop core messaging. Review conducted by humans.
- New generation AI was used to edit for grammar and length.
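Statements like those above could even be paired with a machine-readable companion, so that platforms, email clients, or watchdogs could surface disclosures automatically rather than relying on readers to spot a footer line. What follows is purely a hypothetical sketch - no such standard exists today, and every field name here is invented for illustration:

```json
{
  "ai_disclosure": {
    "version": "0.1-hypothetical",
    "tools": ["new generation AI (text model)"],
    "uses": ["research", "grammar"],
    "message_sourced_by": "human",
    "human_review": true
  }
}
```

Whether a shared format like this would deliver meaningful transparency, or simply become one more ignored banner, is exactly the kind of question these norms discussions need to settle.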
A playful use of AI disclosure from an Instagram post by the European Green Party.
As we work towards transparency, it's important to note the challenges that come with such disclosure practices. AI is becoming deeply entrenched in our work and world, which could make it difficult to disclose every instance of use. For example, as I write this in a Google Doc, without my even noticing, Google's built-in tools offer to complete my sentences. Not every AI usage may need to be disclosed, sparking a necessary conversation about what is appropriate to disclose and what is not.
Another concern is that disclosure becomes so universal and standard that it is effectively meaningless. We’ve seen this phenomenon with webpage cookie disclosures: they are so omnipresent that they serve only to add one more step to viewing a website, and rarely bring nefarious practices (which still exist in web cookies) to light and to account.
That said, this could be one major step forward in mitigating the new threats to human authenticity which new generation AIs present.
Consideration G: Limits on Which New Generation AI Techs Should Not be Used for Politics
So many new technologies based on generative AI are emerging each day that it's almost impossible to keep up. In addition to these specific technologies, there are now more than 200 plugins for ChatGPT, with more arising every day. The application of these tools varies enormously. But almost certainly, some will not be appropriate for political use.
It will certainly be very important to review and consider which of these applications will be ethical to use for political and advocacy campaigns. We welcome input into identifying which technologies should be outright banned for political use.
Consideration H: Limits on How we Should Use AI Tech for Politics (Application to Certain Activities)
As these technologies become more heavily integrated into regular use, and into political work specifically, it will be crucial to continually question their ethical applications. It's not merely a matter of whether we use them, but of how. While it might be acceptable to create an image of a likely future dystopia due to climate change and disclose AI's role in producing it, it obviously would not be acceptable to create an image of an opposition candidate engaged in illicit activities and claim it was real.
This area still has a lot to develop, and it will require ongoing review and evolution.
An alarming prospect that I expect to see soon is the introduction of in-tech monetization models in some new generation AI. The thought of advertising being seamlessly integrated into new generation AI technology is potentially terrifying - and mirrors the reasons these ethical discussions are so vital. These technologies are gaining recognition for their capacity to foster a sense of intimacy with their users. The possibility of manipulating emotions through this intimate connection to drive product purchases is not only likely but distressing. The scenario becomes even more dire if political advertising were inserted in the same seamless manner.
We must remain vigilant in anticipation of such potential nightmare scenarios.
Consideration I: Our Responsibility to Systemic Externalities and Impact
This final consideration is perhaps the most complex. It goes beyond the immediate impact of specific individual uses and into the potential and likely systemic impacts of the integration of new generation AI. To what extent our usage of these tools in political campaigns makes us responsible for these problems is an important question.
A politician whose central argument is protecting the environment should indeed consider lower-carbon alternatives to short-haul flights. While not a total purist, I understand that political candidates need to fly on planes without assuming responsibility for all of climate change. However, they still have an obligation to recognize the impact of their actions, seek alternatives, and propose systemic approaches to mitigating the consequences.
I’m someone who believes the way we conduct our politics will indeed be a part of the resulting politics we get. I do not see campaigning as an isolated activity separate from representational democracy. And as mentioned in the introduction, we must also consider the implications and conflicts of politicians being asked to regulate these technologies when they are relying on them to win their campaigns, creating an inherent link between their use and their impact.
Beyond baseline ethics, politicians will soon have to take a stand on new generation AI and its value and impact on our societies. While the debate is nuanced, the way political viewpoints are consumed and understood across populations is generally less so. Politicians need to decide what is acceptable not only in press briefings but in their campaigns, and if they are unable to align those two things, they run the risk of exposing hypocrisy. If not thought through now, any wedge between the two might be harder to explain later. In this sense, we have an obligation to understand the larger potential systemic impact of integrating AI technology into political campaigns on our societies, cultures, and economies.
These considerations are particularly important to those working for progressive change. So long as our goal is to rebalance unjust power differentials and fight for marginalized voices, our obligation is to consider the systemic externalities of the way we conduct our work, not just the transactional efficiencies of any particular tactic at the expense of our greater cause. One has to ask of progressive candidates: do they want to be in the position of explaining to a constituency facing record-high unemployment in coming years that they took the side of "don't worry about these new techs", after this new era upends our economy and potentially leaves voters jobless? And if they do express concern but run their campaigns heavily on AI tech regardless, they also open themselves up to scrutiny.
There are a multitude of concerns around the systemic impact these technologies will have which politicians should be considering when making choices around which, how, and when to use new generation AI technologies:
- Economic shifts and job losses
- Accelerating wealth divide
- Sexism, Racism, and Digital Colonialism (among a million other biases)
- Eating disorders, sexualization of minors, mental health impact, etc.
- Environmental Impact
If we are truly trying to make change, we aren’t just trying to win power. I know this is particularly hard to hear for those of us (like progressives) with less power - but how we win that power matters, not just winning it. We must not ignore the greater systemic harms coming from these technologies. And we must - by all means - take a stand on what is acceptable for our society and our democracy.
As I mentioned, so much is still emerging, as this technology's deployment is the most rapid in human history. My hope is to continue updating this article in the coming weeks as new ideas, opportunities, considerations, and threats emerge.
I look forward to input, comments, notes on omissions, and suggestions. As an article that aims to foster discussion, dialogue, and eventually consensus, I don’t expect everyone to agree fully with every aspect of my assessment here, and I look forward to hearing other ideas and angles I might have missed entirely.
AI Disclosure: New generation AI was used in this article for grammar and general research. No new generation AI was used in sourcing ideas or writing this article.
The image that illustrates this article was generated by combining the results of several generative AI tools, including Midjourney, DALL-E, and the beta version of Photoshop that incorporates Adobe's AI model Firefly. The final result was retouched manually using Adobe Photoshop.