Do Activist Androids Dream of Electric Voters?

Examining the frenemy that is AI's application in civics, the misconceptions of bias in AI use, and the unforeseen revolutionary potential of AI for participatory democracy


Contributor: Ned Howey 

This article marks the first installment of a two-part series exploring AI’s influence on politics and the pressing need for its intentional application. Stay tuned for the second part, where we dive into the roles of commercial forces, transformative politics and the potential for AI to catalyze positive change in our political practices.

Introduction

The sweeping advancements in recent AI technologies are nothing short of monumental, arguably representing one of the most significant technological leaps in modern human history. Comparable in impact to the Industrial Revolution, the full extent of AI's influence is yet to be imagined. However, this is less about the capabilities of the tools currently at our disposal and more about the rapid pace at which AI technology is evolving. For those deeply engaged with these technologies, the tremendous potential is not only palpable but imminent.

While the scope and implications of AI have been thoroughly examined, both in our previous writings (here and here) and in the broader discourse, there is an urgent need to focus on an insidious and often-overlooked aspect: the potential of AI to inflict systemic harm on our democratic processes. Although the existential threats and concerns about malicious use cases like deep fakes often dominate headlines, they might not be the most significant issues we need to address.

Bias poses a more immediate threat, but it's critical to understand that framing AI—or any creation—as either biased or unbiased is reductive. Every author, artist, or creator inherently infuses their work with their own bias. In the realm of AI, this concern takes on new dimensions. These tools, bereft of human agency but programmed with human voice, introduce unforeseen challenges and risks into our political processes. Their built-in biases, whether in language or output, fundamentally undermine the tenets of democratic leadership and community representation.

More than overtly malicious activities like deep fakes, what concerns me is the subtle, unseen influence of these tools on campaigners, organizers, and various stakeholders involved in democratic processes. Are we not, in essence, generating our own "deep fakes" of leadership and community voice when we use AI tools to craft speeches, campaign communications, manifestos, fundraising emails, digital ads, and more? These technologies are already finding their way into campaign manifestos and speeches, providing efficiencies and speed—crucial factors for political victories—but at what cost?

What's crucial to understand is that abstaining from using AI in our political systems is neither suggested nor a feasible solution. On the contrary, we should lean into the application of these powerful tools, using them strategically to achieve our political objectives. AI offers the potential for better, more effective outcomes in political campaigning and organizing. It's not a question of whether to incorporate AI, but how to do so responsibly and intentionally. To make the most of AI's transformative power, we urge everyone involved—ranging from grassroots activists and supporters to organizational leaders and the tech community—to actively participate in this crucial discourse about intentionality. In doing so, we can collectively shape a political landscape enriched and effectively guided by AI, one that not only serves our immediate goals but also upholds the foundational values of our democracy and civic life.

We must also emphasize that the power of AI offers incredible opportunities for positively transforming politics and civic engagement. It's not merely a tool for efficiency; it can be a tool for equity. By leveraging AI's capabilities, we have the potential to democratize access to complex civic and government information that has historically been reserved for a privileged elite. This could serve to level the playing field, offering individuals, smaller campaigns and grassroots organizations the resources they need to effectively engage in the political process.

Moreover, AI has the potential to break down barriers to participation by making information more accessible and understandable to all people, regardless of their educational or social background. This can help in creating a more inclusive political discourse and decision-making process. Perhaps even more exciting is the possibility for AI to help analyze and connect individuals in ways that can lead to organizing, action, and impact for change. (All of these possibilities will be more deeply explored in the coming second part of this series!) The opportunities are profound, but they require us to be intentional in how we integrate these tools into our systems of governance and activism.

AI is incredibly biased (of course)

To proclaim that artificial intelligence today is biased is an understatement of grand proportions. From its early days, AI has demonstrated a startling range of biases including but not limited to racism, sexism, ableism, and ageism. These aren't just glitches or anomalies; they stem from the societal biases that the machine learning algorithms soak up as they learn from the vast swaths of human-generated data available on the internet.

Disability rights advocate and consultant Jeremy Andrew Davis brought one such example of AI bias into the spotlight. Using Midjourney to request AI-generated images of autistic individuals, Davis discovered that out of 148 images produced, only two presented as female, five appeared older than 30, and none were non-white. Furthermore, none of the images featured smiling faces; instead, they were overwhelmingly dark, moody, and miserable, perpetuating harmful stereotypes about autistic people.

The gender bias is equally glaring. When one AI was asked to produce images representing various professions, the results were telling. For example, only 12% of generated teacher images were male, whereas 94% of CEO images were male, and an astonishing 99% of engineer images were male. This reveals a deeply entrenched set of societal stereotypes about gender roles and professions.


Yet another disconcerting example was an AI algorithm asked to generate headshots, which lightened the skin tones of people of color, and even changed a person’s race in one example, purportedly to make them appear more 'professional.' This is not mere representation; it’s amplification of existing biases. As Cennydd Bowles puts it: "AI could well be the largest force multiplier we’ve ever made. But we already feel society’s invisible, systemic forces acutely. Some people are elevated and empowered by these forces. Some are crushed. If we keep fostering the same values in technology that we do today, then I think these injustices will only increase."

Notably, the scope of AI's bias is not limited to issues of identity or social justice; it extends even to political biases. These biases also perpetuate inequalities through AI's implementation, including issues of access. Likewise, AI detection tools used to prevent cheating in academia have been found to be biased against non-English speakers, raising significant ethical questions.

My own early experiences with ChatGPT serve as a vivid example of how the language generated by AI can subtly shift the locus of control and power. In its initial translations of my writings, ChatGPT changed phrases like "building power among supporters" to "empowering supporters," and "standing together in collective action" to "fostering collective action." Similarly, "decentralizing decision-making structures" became "decentralizing operations." Each of these translations moved the agency of power from the community to a more centralized authority.

It is worth noting that artificial intelligence systems have recently made significant strides in reducing overt biases and eliminating extremist viewpoints. The extreme misogyny, racism, and neo-Nazi views that were sometimes produced in early iterations have largely been addressed through improved detection algorithms. Additionally, the hallucinations (where the AI simply invents facts) that were prevalent during the early months of the generative AI boom have become increasingly rare.

The customization of AI systems has also come a long way in mitigating biases. Tools like Quiller now allow for campaign-specific customization in their fundraising emails, aligning the language and messaging with the unique voice and platform of individual candidates. Hillary Lehr, CEO of Quiller, highlights the ethical advantages of the platform over ChatGPT for political use, “…because the technology is not trained specifically on your use case — your voice, your party, your talking points…”.

Similarly, ChatGPT's custom instructions have allowed for a reduction in language biases, including those I initially encountered in translations of my writings.
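To make this concrete, below is a minimal sketch of how such a standing instruction might be supplied programmatically. It uses the OpenAI Python SDK with a system message, the API analogue of ChatGPT's custom instructions; the instruction text, model name, and helper function here are illustrative assumptions of mine, not a vetted standard or any platform's actual method.

```python
# Minimal sketch: steering a model back toward agency-preserving language.
# Assumptions: the `openai` Python SDK (v1.x) is installed and OPENAI_API_KEY
# is set in the environment; the instruction wording is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A standing instruction, analogous to ChatGPT's custom instructions,
# asking the model to keep the locus of power with the community.
CUSTOM_INSTRUCTION = (
    "When editing or translating political writing, preserve the original "
    "locus of agency. Keep phrases like 'building power among supporters' "
    "rather than 'empowering supporters', and 'standing together in "
    "collective action' rather than 'fostering collective action'. "
    "Do not shift agency from the community to a centralized authority."
)

def edit_with_instruction(text: str) -> str:
    """Edit campaign copy while keeping community-centered language."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTION},
            {"role": "user", "content": f"Edit this for clarity:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(edit_with_instruction(
        "We are building power among supporters through collective action."
    ))
```

The point is not this particular wording, but that the corrective has to be stated by a human: left to its defaults, the model will not preserve community-centered agency on its own.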

However, it's important to understand that biases in AI are more endemic than they may appear on the surface. Bias is not merely a matter of identity or social representation; it's woven into the very fabric of how these systems create. Take, for example, the images of autistic individuals in Jeremy Andrew Davis' experiment. The bias wasn't just in how these individuals were represented in terms of identity—race, age, or gender—but also in the very medium of their portrayal. The images perpetuated harmful stereotypes through their moody, dark visual aesthetics, which goes to show that bias can be deeply entrenched in both content and form.

With generative AI, bias is the feature, not the fault

The notion that any voice—be it human or machine-generated—can be unbiased is fundamentally flawed. The concept of bias presupposes a single, universally accepted truth, when in reality, we live in a world composed of many co-existing truths. Society, individuals, and communities actively construct these truths, often shaped by dynamics of power. Especially in the realm of politics, the issue of truth is both contentious and malleable. Thus, the quest for a "non-biased voice" is not only unattainable but also based on a faulty premise.

This discussion extends far beyond the question of bias. Generative AI doesn't just compute or analyze; it replicates the human voice, bringing all the complexities and biases inherent in that voice. Unlike calculators, which simply advance our arithmetic abilities, generative AI tools are becoming increasingly proficient at mimicking human expression and perspective (which we explored further in our piece, “The Democratic Dilemma of AI: Navigating Ethical Challenges for Political and Advocacy Campaigns”). This distinction is crucial because the AI doesn't attribute its generated content to an individual source, even though the very essence of the "human voice" it replicates comes from an amalgamation of biased individuals. Therefore, when interacting with AI, we're not merely engaging with a tool but with a complex mimicry of human cognition and opinion. And in the realm of politics, we then inject that mimicry directly into our political dialogue through campaign practices. This is particularly problematic when we attribute its craft to real people, often candidates, without disclosure of its presence.

The bias question becomes even more nuanced when we consider that AI may depict most CEOs as men and most teachers as women. This portrayal is not a distortion of reality, given that the world does indeed skew this way (and unfairly so). According to Forbes, only 8.8% of Fortune 500 CEOs are female. Indeed, men make up a significant majority of CEOs, while women dominate the teaching profession. Labeling AI's representation of CEOs as "bias" misconstrues the issue. The problem is not that AI reflects the world as it is, but rather that it lacks the capability to envision a fairer world, a world as it ought to be: the world we want women and girls to see, as they envision their full potential and futures, in the visual communications we are responsible for.

We can urge AI systems to "do better," to be more balanced in how they represent various roles and identities. However, in doing so, we're essentially asking them to align with our own particular vision of a more equitable world. But who gets to decide what that vision should be? Certainly not the AI itself. This is where human leadership and voice become critical, especially in the context of political processes and social change. In reality, those who participate in civics, from everyday activists to politicians to tech developers, create this vision as part of the political processes they participate in, whether conscious of that role or not. As we tinker with that process, we must do so with intention. And replacing people with AI in those processes risks degrading the very exercise of that intention.

In the next section, we will delve deeper into the idea that even when we do not overtly perceive bias in what AI generates, it's already implicit in every creation. Bias is baked into the very fabric of AI, from its conceptualization to the meaning it produces, and addressing this requires more than algorithmic adjustments—it requires a rethinking of how we use and interpret these powerful tools.

Words matter (and images and authorship…)

As anyone well-versed in the philosophy of aesthetics, art, or literature will attest, art and authorship are fundamentally intertwined. This has been a subject deeply explored in philosophy, particularly since the late 20th century. While space constraints prevent a deep dive here, it is clear that understanding an author enriches the meaning of their work. This is especially pertinent given that an authentic voice is crucial to both civics and a healthy democracy. Democratic processes were intentionally designed to be driven by representative voices from the communities that elect them to govern. Contrary to the predominant paradigm in practice today: constituents are not sheep bound to leaders. Leaders are created by their constituent communities.

In the educational landscape, there is significant concern about students "cheating" with AI. Ironically, less attention is given to politicians potentially undermining their own authentic voices through AI. Democracy thrives on authentic representation. Elected leaders aren't merely individuals; they carry with them the collective voices of their communities, which should not be replaced or diluted by AI.

Even when political figures have teams that contribute to their communications, it's essential to remember that these are human voices, meticulously selected to represent a collective ethos. These voices should stem from lived experience, not from a systemic or machine-generated perspective like that provided by AI. In the world of transformative political campaigning, the narrative should center on activists, community leaders, organizers, and politicians who challenge systemic limitations through their individual experiences. Certainly a good first step to mitigate this impact, one that some political AI technology platforms are already taking, is the creation of our own language sets as the base inputs. It remains questionable, however, whether even in these cases the influence of the larger language training can be rooted out.

The complexity extends beyond eradicating bias. Every part of creation carries bias. As an experiment, we asked ChatGPT to draft a fundraising email around abortion rights, rallying those most impacted, and it defaulted to the term “woman,” despite the fact that many people who are not women can get pregnant, including transgender men and non-binary people, and despite the fact that many women cannot get pregnant. This is not simply a matter of trying to be the most progressive, ‘politically correct’, or ‘woke’. Several beloved friends and family members of this article's author are among those left behind when we define those who get pregnant as only being women, and usually assumed to be cis women at that. Real people’s lives suffer as a consequence.

While this specific experiment might not yield the same result for technologies developed for progressives, built on other data sets, or when ChatGPT is told the prompter is progressive, one has to wonder what other terms are assumed, and what the implications of that language are. With reproductive rights under attack, there is a live debate between inclusion on one hand and expediency and clarity of the call to action on the other. While AI may bring efficiency to the drafting process, it lacks the nuanced understanding to make sensitive language choices. Its assumptions remove the debate that is at the center of the entire issue. Uncomfortable, inefficient, and burdensome as it is, this conflict is also a primary function of transformational political work. Who gets to decide whether to use "pregnant people" versus "women"? Certainly it should not be AI. Those most impacted by the policies in question should be central to these decisions, not assumed away, as my own experiments with AI writing have shown these tools do.

Even less likely is AI to consider or incorporate some other term that might represent groups of people not currently seen by our language, or to push us to dream one up. Language is alive as part of our political processes, whereas AI is trained on a snapshot that cannot encompass the dynamic development of language and its related meaning.

AI is fundamentally programmed to be unchallenging, non-conflicting, and service-oriented. Its incorporation into political discourse can thus be problematic, since conflict is an essential element in creating the fundamental civic function of change. Political change is often born from challenge, not from efficiencies.

In redefining our understanding of civics, it's crucial to go beyond the well-trodden path of competing for resources, votes, or even volunteers in our campaigning. Neither should we confine ourselves to envisioning civics as a mere marketplace of ideas awaiting public adjudication. A transformative approach to civic engagement requires us to constantly amplify voices from marginalized or overlooked communities - a constant process of unveiling, thereby reshaping what constitutes the collective consensus. In doing so, we find a potent tool for rebalancing deeply ingrained power imbalances that perpetuate social, economic, and environmental injustices. 

This is not merely about eradicating overt acts of discrimination or surface-level prejudices. Instead, it's an acknowledgment of the inherent biases woven into the fabric of our civic dialogues and infrastructures. These biases, often arising from human conflict and difference, present us with an opportunity for profound awareness and change, thereby catalyzing a transformative impact on our democratic structures. To neglect this dynamic, human element of civics is to cede ground to other dominating forces—such as market dynamics—that are all too willing to fill the vacuum and perpetuate existing inequalities.

The true bias of AI lies in its transactional nature, a feature ingrained in its underlying design. As such, it is likely to yield conservative outcomes that affect politics on a systemic level—even when these tools are employed in progressive contexts. This could potentially divert our progressive work away from its role in driving transformational societal change unless we design and deploy these tools with a transformational intent, an opportunity that indeed exists within these systems.

The real threat of AI bias in political practice: Deepening our transactional politics

In the lexicon of community organizing, the distinctions between "transactional" and "transformational" have long been understood as different poles of political action. While messaging and mobilizing hold transactional value, organizing is distinct in its value for transformation.

At present, with political organizing in decline, we find ourselves steeped in transactional politics, a trajectory accelerated by technological advances—from mass media to the Internet—that have optimized scale and reach over meaningful, participatory engagement. While traditional schools of thought in campaigning have emphasized either persuasion or mobilization strategies, both these approaches fundamentally miss the potential of transformational politics—politics rooted in community-driven participation.

The current landscape of political campaigning often treats community participation as an afterthought or a mere necessity of resource constraints, valued for developing resources only, rather than as a cornerstone of political practice. For example, many campaigns employ a quantitative approach to canvassing, focusing on the number of doors knocked rather than the quality of the conversations held, as if it were a live version of mass messaging with value only for reach, turning volunteers into a function of advertising. This transactional mindset overlooks the transformative potential not just for those behind the doors, but also for those doing the knocking: those engaged in dialogues that could feed back into and enrich the campaign's objectives, as well as their interactions in their own communities.

This drift toward transactional politics is not just a deviation; it's a distortion of our democratic ethos. To paraphrase Wendy Brown on a topic we’ll explore in greater depth in the coming second half of this work, politics is not mere marketing, because people are not consumers of democracy; they are agents of it. Our European Report on the State of Digital Organising corroborated this trend towards the transactional, showing that the more decentralized and personalized an online campaign intervention, the less campaigns were featuring it in their work, despite campaigners acknowledging that such interventions are needed for successful campaign wins. This incongruence points toward a larger crisis in progressive politics, fueled by a vanishing landscape of everyday civics—be it in unions, churches, or community gatherings—that once acted as the everyday local connecting points for participatory democracy.


One systemic impact of this transactional shift, which is likely to accelerate with AI technologies, is the trust gap resulting from the erosion of everyday civics. While the more meaningful civics of everyday and community life has evaporated, official civic structures, such as governance and elections, have remained unreformed and, consequently, increasingly remote to the public. This void has served as fertile ground for the rise of populism and anti-democratic movements globally. From Brexit to Trump, people these days are often casting votes less on issues than as forms of protest against an entire system they don’t feel part of. AI, molded by existing mass consensus, threatens to further deepen this chasm by replacing even the remaining pockets of community participation with automated efficiency.

Through this lens, the issue of AI bias in politics is not just a question of ethical programming; it's an existential challenge to the very framework of democratic engagement. The disintegration of everyday civics and the resultant trust gap amplify this challenge, demanding not just algorithmic adjustments, but a fundamental reevaluation of how technology intersects with our democratic ideals.

The intrinsic biases in AI, sculpted by prevailing public opinion, could essentially lock in existing social inequities and accelerate discrimination. Far from being a neutral tool, AI has the potential to calcify our politics, narrowing the window for transformative change and reinforcing systemic injustices by removing what transformational potential is left in our political practices.

Coming Next

Part 2 can be found here

In the second part of this work, we explore the role of systemic commercial forces in driving the transactional politics of our day, and what stands to be lost without intentionality about the value of transformational politics. Additionally, we discuss how the integration of generative AI into political practices has the potential to further exacerbate unforeseen harm to our progressive work. We outline how transformational politics is essential to winning deep change, and how AI tools can be used and developed to promote participation, remove barriers to involvement, and fundamentally transform our very political practices for the better.

AI Disclosure: Generative AI was used to assist in the final editing of this article based upon drafted content and copy created by the author. The featured image was created using ChatGPT 4 plus Dall-e 3, and manually edited.