OpenAI, LLMs, Influence Operations & Epistemic Security: New report review

17 January 2023
by Elizabeth Seger, Giulio Corsi, Aviv Ovadya, Shahar Avin

Last week OpenAI, CSET, and SIO released a report - Generative language models and automated influence operations: Emerging threats and potential mitigations - investigating the impacts of Large Language Models (LLMs) like GPT-3, ChatGPT, BLOOM, and the much-anticipated GPT-4 on political influence operations: "covert or deceptive efforts to influence the opinions of a target audience". The report's findings were informed by a workshop in October 2021 convening 30 experts in AI, influence operations, and policy analysis, including three of this post's authors, who lead CSER's Epistemic Security research stream (now run in collaboration with GovAI and LCFI).

Epistemic security refers to a society's capacity to reliably avert threats to the processes by which reliable information is produced, distributed, assessed, and used to inform decisions and coordinate action. Effective influence operations pose a significant threat to epistemic security, and there is mounting concern about how advances in AI, specifically generative AI like LLMs, might exacerbate the spread of mis- and disinformation (see CSET's report on AI and disinformation for an overview). In this respect, we find the OpenAI/CSET/SIO LLM & Influence Operations report to be a timely contribution to an important subspace of the epistemic security research field.

Report strengths from our perspective

The LLM & Influence Operations report provides a deep and well-balanced investigation into the implications of current and future developments in LLM capabilities. The report neither catastrophizes the potential for LLMs to exacerbate adversarial influence operations nor is it overly optimistic about opportunities for mitigating negative repercussions where they arise.

The report utilizes a cost analysis model to demonstrate how LLMs can improve the cost-effectiveness of influence operations. The cost model allows the authors to offer critical evaluations of the ways in which LLMs are likely to have more or less significant impacts on influence operations. In turn, the approach helps elucidate the most critical intervention points. Such a cost analysis framework should be carried over into epistemic security research more generally, to investigate not only the costs to epistemic adversaries of sowing disinformation and manipulating public opinion, but also the costs to information consumers of accessing and appraising information and information sources.
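To make the intuition behind this kind of cost comparison concrete, here is a minimal sketch in Python. All of the figures (per-post costs, up-front overhead, campaign size) are invented for illustration and are not taken from the report.

```python
# Hypothetical cost comparison for producing influence-operation content.
# All figures are invented for illustration; they are not taken from the report.

def campaign_cost(posts: int, cost_per_post: float, fixed_cost: float = 0.0) -> float:
    """Total cost of producing a given number of posts."""
    return fixed_cost + posts * cost_per_post

POSTS = 10_000

# Scenario A: every post is written by a paid human operator.
human_only = campaign_cost(POSTS, cost_per_post=5.00)

# Scenario B: posts are drafted by an LLM and lightly reviewed by humans,
# with a fixed up-front cost for model access or fine-tuning.
llm_assisted = campaign_cost(POSTS, cost_per_post=0.25, fixed_cost=2_000)

print(f"Human-only:   ${human_only:,.2f} total, ${human_only / POSTS:.2f} per post")
print(f"LLM-assisted: ${llm_assisted:,.2f} total, ${llm_assisted / POSTS:.2f} per post")
```

Even with such toy numbers, the shape of the argument is clear: generation costs fall sharply while the remaining expense shifts to fixed overheads and human review, which is where several of the report's proposed interventions bite.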

The authors analyze a broad spectrum of innovative risk mitigation proposals intervening on LLM design and accessibility, how information is distributed online, and how information consumers appraise the content they consume. For example, the authors explore mitigation strategies such as limiting access to compute hardware, integrating "radioactive" training data into LLMs to help identify AI-generated content, using proof of personhood to mitigate the proliferation of "bot" profiles masquerading as humans, and organizing public media literacy education campaigns.

How our research relates

The Epistemic Security team at CSER, LCFI, and GovAI is engaged in several projects related to important topics raised by the LLM & Influence Operations report. Note that our research attends to epistemic threats beyond political influence operations and the impacts of LLMs specifically.

Measuring Epistemic Security

The LLM & Influence Operations report focuses on epistemic threats posed by a particular technology (LLMs) utilized with the intent of influencing public opinion, and it touches briefly on helping information consumers become more resistant to epistemic threats. On this theme, we are currently investigating methods of measuring the epistemic security of information consumer communities (i.e., their capacity for identifying reliable information and trustworthy information sources). Specifically, we are developing a measurement pipeline incorporating multiple quantitative indicators - such as information quality, community polarization, trust in expertise, and discourse incivility - to map the level of epistemic security in any issue domain. CSER and LCFI research associate Giulio Corsi leads on this research topic.
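As a rough illustration of the general idea only - not the team's actual measurement pipeline, whose indicators and weighting are still being developed - one could normalise several such indicators to a common scale and combine them into a single index, for example:

```python
# Illustrative composite of epistemic-security indicators for one issue domain.
# The indicator names follow the post; the values, weights, and scoring are hypothetical.

INDICATORS = {
    "information_quality": 0.72,     # higher is better
    "trust_in_expertise": 0.55,      # higher is better
    "community_polarization": 0.40,  # higher is worse
    "discourse_incivility": 0.30,    # higher is worse
}

WEIGHTS = {
    "information_quality": 0.35,
    "trust_in_expertise": 0.25,
    "community_polarization": 0.25,
    "discourse_incivility": 0.15,
}

# Indicators where a high value signals an epistemic harm get inverted.
HARM_INDICATORS = {"community_polarization", "discourse_incivility"}

def epistemic_security_index(indicators: dict, weights: dict) -> float:
    """Weighted average on a 0-1 scale, where 1 is most epistemically secure."""
    score = 0.0
    for name, value in indicators.items():
        contribution = 1.0 - value if name in HARM_INDICATORS else value
        score += weights[name] * contribution
    return score

print(f"Epistemic security index: {epistemic_security_index(INDICATORS, WEIGHTS):.2f}")
```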

Developing AI model publication norms & standards

The LLM & Influence Operations report discusses how greater AI model accessibility lowers the barrier to utilizing LLMs to facilitate influence operations. The authors note that the largest and most powerful models have so far tended not to be publicly released, which stands as an important barrier to significant malicious use of the technology.

However, there has recently been a push by some AI developers to encourage open-source (public) publishing of AI models. The concern is that limiting model access stifles innovation, places important decisions about the future of AI in the hands of a small group of tech leaders, and unfairly restricts the full benefits of AI to a lucky few. Much more work is needed to navigate this tricky question of AI governance: how and when can dual-use AI models (like LLMs) be safely and responsibly published to maximize benefits and minimize harms?

GovAI Researcher and CSER Research Affiliate Elizabeth Seger works on the subjects of AI model sharing practices and AI democratization. CFI Visiting Scholar Aviv Ovadya has also previously worked on publication norms and release standards (Nature Machine Intelligence).

Using AI capabilities to improve epistemic security

The LLM & Influence Operations report focuses heavily on the potential negative impacts of advanced LLM capabilities. The potential for various AI capabilities (generative AI, recommender systems, etc.) to help improve epistemic security is underexplored. For example, while the report clearly points to the role of large language models in decreasing the cost of influence operations, less is known about how such models may contribute to raising the cost (monetary and non-monetary) of spreading false content by facilitating the detection and moderation of misleading information. Moreover, language models can be used as part of "just-in-time" media literacy support and contextualization systems to help people quickly make sense of information they encounter online.
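As one hedged sketch of what "just-in-time" contextualization could look like, the snippet below assembles a prompt asking a language model to supply neutral background for a post a reader is about to see. The `generate` function is a stand-in for whichever model API one actually uses; it is not a real library call.

```python
# Sketch of a "just-in-time" contextualization helper.
# `generate` is a placeholder for whatever text-generation API is actually used.

def generate(prompt: str) -> str:
    """Placeholder: wire this up to a language model API of your choice."""
    raise NotImplementedError

def contextualize(post_text: str) -> str:
    """Ask a model for brief, neutral context a reader could see alongside a post."""
    prompt = (
        "A reader is about to see the social media post below.\n"
        "In two or three sentences, provide neutral background context that would "
        "help them judge its reliability. Note missing sources or common points of "
        "confusion, but do not tell the reader what to believe.\n\n"
        f"Post: {post_text}"
    )
    return generate(prompt)
```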

There is also significant potential, building on advances in AI, to address one of the most dangerous impacts of influence operations: the way they can create destructive divisions and derail communities and polities. Bridging-based ranking provides an alternative to standard engagement-based approaches and can decrease the incentives for producing destructively divisive content. A recent paper by Aviv Ovadya and co-author Luke Thorburn of King's College London alludes to how bridging systems can benefit from advances in language modeling, referencing recent work from DeepMind.
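As a simplified illustration of the contrast - not the actual algorithms discussed in that paper - the sketch below ranks items by the lowest approval they receive across perspective groups rather than by total engagement, so content that only one side rewards no longer rises to the top. The groups, items, and reaction figures are invented.

```python
# Toy contrast between engagement-based and bridging-based ranking.
# Groups, items, and reaction figures are invented; real bridging systems
# are considerably more involved than this minimum-across-groups heuristic.

# approvals[item][group] = fraction of that group reacting positively to the item.
approvals = {
    "divisive_post": {"group_a": 0.95, "group_b": 0.05},
    "bridging_post": {"group_a": 0.60, "group_b": 0.55},
}

# Raw engagement (likes + shares), which ignores *who* is engaging.
engagement = {"divisive_post": 12_000, "bridging_post": 4_000}

def engagement_rank(items):
    """Standard approach: whatever gets the most total engagement goes first."""
    return sorted(items, key=lambda item: engagement[item], reverse=True)

def bridging_rank(items):
    """Bridging heuristic: score each item by its *least* enthusiastic group."""
    return sorted(items, key=lambda item: min(approvals[item].values()), reverse=True)

items = list(approvals)
print("Engagement-based order:", engagement_rank(items))  # divisive_post first
print("Bridging-based order:  ", bridging_rank(items))    # bridging_post first
```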

Finally, Ovadya has proposed the use of democratic mechanisms for answering challenging AI and technology governance questions, including those around model release and the desired trade-offs between safety and access (New York Times, Belfer Center), and some of those potential democratic mechanisms can themselves benefit from advances in language modeling (NeurIPS FMDM, Collective Response Systems talk).
