Navigating AI's Role in Politics, Campaigns, and Advocacy: The TON Reading List
September 08, 2023
Tectonica has compiled a comprehensive reading list on AI's impact on politics, campaigning, and advocacy, covering various perspectives and concerns, from the potential for AI to exacerbate trust issues in politics to the need for ethical guidelines in its use. This compilation encompasses practical applications and speculative pieces on how AI is poised to change our politics, as well as ongoing regulatory efforts.
AI technology is poised to rapidly reshape the landscape of politics, campaigning, workers' rights, and advocacy, and its emergence is both promising and dangerous in important ways. At Tectonica, we've attempted to understand the multifaceted impact of AI in these domains, regularly sharing our insights and key articles we’ve discovered in our TON newsletter over the past several months. Today, we're excited to share that we have compiled a reading list that gathers and categorises all the AI resources and articles we've shared, offering a comprehensive overview of the conversations currently taking place around AI.
Our previous newsletters have included an array of perspectives that cover the complexity of AI's integration into society. These articles have encompassed concerns about the technology's potential to amplify misinformation, the profound ethical considerations of AI use in campaigning, and the need for transparency and rules governing the use of AI.
Kicking off this compilation are two articles by Tectonica. The first dissects the effects AI will have on human connection and how it is poised to further exacerbate a lack of trust in our politics. The second article calls for consensus, offering guiding principles and thoughtful considerations to shape the ethical implementation of AI in politics and campaigning. These articles set the stage for the resources and ideas that follow.
From practical implementations of AI in our current reality to speculative pieces on its potential to reshape our political sphere, this compilation attempts to capture the complete spectrum of AI dialogue at the moment. We also provide an overview of the ongoing efforts and challenges in regulating this technology, as we grapple with determining which voices should lead these efforts and how we can effectively regulate AI, given its global ubiquity.
Current AI discourse inspires both fears and optimism, echoing worries of a future impacted by biased algorithms and, conversely, curiosity about how the technology could hold potential to advance progressive causes. We invite you to explore these readings and encourage you to let us know if there are any insightful pieces that we may have missed. We’ve categorised things here, with an eye towards those working in politics and campaigns, to try to ease the pain of finding information most relevant to your interests and needs. As the conversation evolves, so do our insights and our collective understanding, and we look forward to sharing more on this topic through our newsletter and upcoming events.
Index of AI Topics
1. Tectonica’s Take on AI: Organising and ethics in its application to politics
This blog post discusses the challenges faced by progressive movements in the social media age, emphasising the trust gap that has emerged due to social media. While we recognise the potential opportunities and benefits that AI can offer, there is a concern that it may worsen the erosion of human connection and participatory engagement within our movements, elements which are pivotal in building effective and sustainable power.
Amidst the rapid adoption of advanced AI in political campaigns, this article emphasises the need for deep dialogue on the novel ethical responsibilities of AI's use in politics, highlighting its transformative potential and advocating for discussions to establish ethical standards and responsible utilisation to navigate the evolving landscape.
2. Today’s Tools: Current tools and practical advice for using them in non-profits and progressive campaigns
This article provides a curated list of AI tools categorised by their utility, offering readers a glimpse into the evolving landscape of AI applications in progressive politics, while acknowledging the ethical and privacy considerations inherent in this fast-evolving domain.
This article explores the use of AI art generators like DALL-E 2 and Midjourney for political campaigns seeking cost-effective image solutions, highlighting their potential as supplementary campaign visuals while cautioning against relying on AI for central campaign imagery.
AI's current usage and impact on campaigns have thus far been limited: it excels at content generation but does not replace human consultants, and while it enhances efficiency, it hasn't yet revolutionised the field due to the nuanced nature of politics and public affairs.
A veteran campaigner examines the impact of AI on political campaigns, discussing the practical advantages of AI in terms of efficiency alongside ethical concerns, and offering his perspective on making the best use of AI, while providing additional resources on the topic.
This article explores the positive applications of AI in political campaigns, highlighting its role in improving audience targeting, tracking disinformation, and aiding the creative process for content creation, while emphasising the need for human supervision to ensure authenticity.
3. Ethical Implications: Thoughts on AI's ethical use in politics and transparency of use
This article explores the evolving role of AI in political advertising, highlighting its potential for personalisation and optimisation while discussing ethical concerns related to data privacy, bias, and misinformation.
The incorporation of AI tools into political email programs raises ethical concerns about non-human communication, while promising efficiency gains and potentially displacing jobs, underscoring the importance of transparency in disclosing AI-generated content.
A nonprofit's failed chatbot project offers lessons on ethical AI implementation, stressing a human-centred approach, cautioning against over-reliance on bots, and advocating for careful design, testing, human oversight, increased AI literacy, and specific use case identification.
4. Potential: Envisioning future potential uses in campaigns
This panel discussion focuses on the challenges and opportunities in political journalism for the upcoming 2024 EU election in light of AI, addressing voter preferences, discontent with institutions, AI's potential in trend identification, and the significance of tech advancements.
This podcast episode discusses two recent articles focused on the positive potential of AI in politics, emphasising its capacity to enhance democracy by reducing campaign costs and levelling the playing field for candidates running for office.
Hear from a digital strategist who contends that the positive transformative impact of AI in politics is underway, albeit subtly, citing AI-enhanced ads and other applications in the 2024 race, while also advocating for tech platforms to adopt a more adaptable stance.
This article outlines six potential milestones for a new era of AI-infused politics, envisioning scenarios like AI-generated testimony, novel legislative amendments, and AI-driven political parties, highlighting the complex possibilities AI could introduce to reshape democratic politics.
This article presents contrasting views on AI's influence on politics, discussing its potential to democratise campaigning and enhance accessibility, alongside apprehensions about AI misuse through deep fakes, misinformation, and election manipulation.
5. Threats & Bias: Dangers in using AI in political contexts, considerations when using AI technology, and the potential for malicious uses
Recent reports reveal China's AI-driven disinformation efforts, with concerns about potential interference in elections, as AI-enhanced disinformation becomes more sophisticated and divisive, posing a grave threat to democratic processes.
A recent poll shows that half of Americans fear AI-generated misinformation will affect the 2024 election, leading one-third to express reduced trust in its results, highlighting concerns about AI's influence on public opinion and elections, alongside scepticism about effective AI regulation.
Debate has arisen over political bias in AI tools: a research paper suggesting a notable progressive bias has drawn criticism for methodological limitations, underscoring how difficult it is to understand AI behaviour given the limited transparency from developers.
Prominent women in AI raised early concerns about AI risks due to the lack of diversity in development, noting biases and societal prejudices perpetuated by AI systems, with companies initially overlooking these concerns despite research exposing biased algorithms.
AI-generated professional headshots are becoming popular among young workers, but biases in the technology's training data are causing problems for women of colour by lightening skin tones, altering hairstyles, and changing facial features.
AI's progress raises concerns about disinformation threatening democracy prior to the 2024 US elections, as easy production of realistic content makes detection difficult, potentially eroding trust in news sources and exacerbating voter suppression campaigns.
A candidate for UK Parliament has utilised AI to formulate his election manifesto, integrating constituents' sentiments via crowdsourcing and machine learning to generate policies, raising concerns about diminishing the role of human representatives and oversimplifying complex matters.
Miami Mayor Francis Suarez, a 2024 Republican presidential candidate, has launched an AI chatbot for his campaign, responding in his voice and providing information about his agenda, but with a limited set of answers and noted shortcomings in addressing certain topics.
Campaign strategists anticipate AI-generated content in the 2024 elections, highlighting the need to educate voters to recognise and combat AI-powered misinformation due to limited AI regulations, while Democratic operatives express scepticism about pre-election regulations.
The rising integration of AI in society is prompting worries about its effects on mental well-being, as it enables disinformation, which could undermine trust and personal identity and impact critical thinking skills, requiring more research into the psychological consequences of AI.
This article imagines an AI-driven political campaign where a machine named Clogger employs personalised messaging and reinforcement learning to manipulate voter behaviour, raising alarms about democratic erosion while advocating for privacy protection and regulatory oversight.
Tech journalist Kara Swisher interviews Tristan Harris, co-founder of the Center for Humane Technology, in a podcast exploring AI risks honestly without instilling fear, offering insights on the complex topic and serving as an excellent introduction to the evolving AI landscape.
Experts warn of the harmful impact of AI on democracy due to its capacity for deep fakes and misinformation spread, highlighting the necessity for regulations, transparency, and testing, as well as a suggested pause in AI's use in UK political campaigns until a framework is established.
This article explores the possible effects of large language models on political campaigns, advocating for proper regulation due to risks such as privacy concerns, automated decision-making, and public opinion manipulation.
Meredith Whittaker, president of Signal and co-founder of the AI Now Institute, underscores concerns about the current dangers of AI controlled by profit-driven corporations, prioritising issues like data bias, accountability, and concentrated power over more existential concerns.
The article discusses the accelerating use of AI in political campaigns for tasks like predictive analysis and voter data patterns, while also highlighting the potential disruption caused by AI-driven disinformation campaigns that challenge the concept of truth.
Researchers caution that the rise of large language models, exemplified by ChatGPT, poses a democracy threat due to their potential to create high-quality content, automate an overwhelming volume of text, and drown out authentic public opinion.
6. Impact on Society: The way AI will change the landscape of our society and the new injustices we will likely need to fight
Billionaires like Peter Thiel, Elon Musk, Mark Zuckerberg, and Marc Andreessen are influencing a new reality through AI, transhumanism, and other radical ventures, causing concern about the power concentration in these techno-oligarchs and the potential for significant social disruption.
Labour unions could leverage generative AI for growth, communication, campaigns, and operations, while being mindful of ethical considerations, by utilising it for creating content, summarising meetings, analysing text, extending education, and improving member services.
AI is rapidly reshaping the future of work, and leadership should embrace and adapt to AI for organisational success, with the article noting effective leaders will be those who understand how to utilise AI tools responsibly and ethically, and invest in employee re-skilling.
The National Eating Disorders Association (NEDA) has replaced its helpline staff with a chatbot named Tessa shortly after staff unionised, leading to a controversy where NEDA claims AI will improve services while union members see it as union-busting.
The Writers Guild of America strike involves concerns about generative AI as it seeks to prohibit AI's involvement in writing, reflecting growing anxiety about automation's impact on the entertainment industry, leading to a greater emphasis on establishing rules for AI use.
7. Regulation: Ongoing efforts and challenges in regulating AI
a. European Regulation
Proposed EU AI regulations, designed to tackle issues like stereotype reinforcement and cognitive manipulation, could impact political campaigns by limiting targeted voter outreach and access to sentiment analysis data, and by imposing transparency demands on AI algorithms.
DigitalEurope's Pre-Regulatory Sandboxing Initiative assesses the impact of the proposed AI Act on European start-ups and SMEs, revealing support for regulatory clarity but concerns about innovation slowdown, compliance costs, and international competitiveness.
The European Parliament has advanced the AI Act draft law, a comprehensive attempt to mitigate AI risks with strict restrictions on facial recognition and increased data transparency for AI developers, showcasing Europe's leading efforts in AI regulation compared to other nations.
b. US Regulation
CEOs, including Musk, Gates, Zuckerberg, and Altman, stressed the need for government AI regulation in a closed-door Senate briefing in D.C., yet the secrecy and limits on questioning the CEOs prompted criticism, led by Senator Warren, who called for transparency.
The Federal Election Commission has unanimously agreed to consider a proposal by consumer advocacy group Public Citizen to extend anti-"fraudulent misrepresentation" laws to deceptive AI-generated campaign communications, including generative AI and deepfakes.
The US government is engaging with AI policies and regulations, as evidenced by Biden's announcements, safety commitments from AI companies, and the Senate’s proposed approach, spanning developer rules, regulatory enforcement, research funding, and workforce initiatives.
Senate Majority Leader Chuck Schumer intends to conduct a series of nine "AI Insight Forums" aimed at educating Congress about AI before regulating it, acknowledging the complexities of the technology, and thereby postponing comprehensive AI regulations until at least 2024.
Growing apprehension surrounds the use of deepfake images and videos in political campaigns, especially following Ron DeSantis’ recent use of a deepfake in his campaign, as advocacy groups call for federal intervention due to a deadlock at the FEC regarding regulations.
Congress aims to regulate AI yet faces difficulties in comprehending the swiftly evolving technology, while tech companies lobby for regulations balancing existential threats and benefits; concerns persist that this distracts from current fundamental and systemic AI threats.
The increasing use of AI in political campaigns is prompting debates over regulations, as politicians are using AI for generating campaign content, while concerns about disinformation and manipulation are driving efforts to establish safeguards such as disclaimers on political ads.
Missy Cummings discusses the need for policymakers to understand AI's concepts and impacts, outlining her course at George Mason University aimed at educating regulators about AI's risks and effects, urging politicians to become well-versed in AI for informed governance decisions.
Drafting legislation aimed at regulating AI and mitigating its potential negative consequences is complicated by challenges such as AI's rapid evolution, historical struggles to regulate emerging technologies, and gaps in computer science and legal expertise among lawmakers.
c. Global Approach to Regulation
Divergent approaches to AI regulation are emerging worldwide, with the US, EU, and China each promoting their distinct models. Achieving international coordination is essential to establish consistent standards and promote equitable AI access, but remains challenging.
The evolving global power dynamics, including the rise of new contenders like China and India, and the significant role of tech giants in geopolitical events are discussed in the context of global AI regulations, particularly Europe's AI Act, China's AI measures, and the US's regulatory lag.
This article envisions the year 2035, where AI's advancements coexist with substantial risks, underscoring AI’s potential while necessitating governance to mitigate its challenges using principles such as precaution, agility, inclusivity, and targeting, to create a global model.
OpenAI and Meta are advocating for global citizens' assemblies to regulate AI, funding deliberative processes to establish rules, but scepticism has arisen about their motivations and conflicts of interest, leading to debates about whether these initiatives are authentically democratic endeavours.
d. Non-governmental and Industry Regulation
Starting in November, Google will require political ads to prominently disclose their use of AI-generated content in a "clear and conspicuous" manner, reflecting growing concerns about the spread of AI-generated misinformation in political campaigns.
AI-generated content presents a challenge for social media platforms during elections, as the absence of regulations leads platforms to grapple with self-regulation and containment of misinformation, with transparency measures like those on TikTok falling short.
The UK has organised an inaugural global summit on AI safety, uniting countries, tech firms, and researchers to address AI risks, enable international cooperation, and advance responsible AI development, aligning with the nation's dedication to leadership in AI safety endeavours.
OpenAI has intervened to restrict Washington, D.C.-based company FiscalNote from using ChatGPT for political advertising, limiting its use to grassroots advocacy campaigns and implementing measures to monitor and classify text related to electoral campaigns.
Disclaimer: The image used in this article was partially generated by AI and augmented by our design team.