Diminishing Damage in the Disinformation Age

By Casey Moffitt

The distribution and consumption of misinformation and disinformation pose a threat to many aspects of a free society, and the emergence of generative artificial intelligence multiplies that threat through the sheer volume of content it can produce.

To counter this threat, Kai Shu, Gladwin Development Chair Assistant Professor of Computer Science at Illinois Institute of Technology, has received a United States Department of Homeland Security grant to create new techniques that combat the effects of misinformation and disinformation.

"With the powerful capacity of generative AI such as ChatGPT to generate human-like content, it may pose more challenges and potentially be more harmful than human-written misinformation," Shu says. "Existing misinformation detection models that are heavily trained using human-written misinformation may be less effective when identifying misinformation generated by large language models."

Shu argues that we live in a disinformation age: false content has littered news feeds across social media platforms and infiltrated more traditional and mainstream media outlets. People act on the misinformation they absorb in areas such as health care, finance, and politics. Large language models (LLMs) could compound the issue because of the ease and vast scale at which they can generate misinformation.

"LLMs have shown promising capacities in generating human-like content," Shu says. "For example, we can ask ChatGPT to 'write a piece of news,' and it will generate a piece of news with possible false dates and locations due to the intrinsic generation strategies and the lack of up-to-date information in the training data. LLMs can follow user instructions and generate misinformation with different types, domains, and errors."

Shu's research could produce new techniques that advance misinformation detection and improve the attribution of misinformation to human-written and LLM-generated sources. The research will also emphasize explainability, ensuring that the developed models are transparent and understandable to facilitate public adoption.

The research will utilize some of the same LLM capabilities that are used to generate misinformation in the first place.

"LLMs have demonstrated strong capacities in various tasks such as summarization and question answering," he says. "We will investigate novel methods to differentiate human authors and 'AI authors' of misinformation."
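The article does not describe Shu's actual methods, but the attribution task he mentions can be framed, in its simplest form, as binary text classification. The toy sketch below (not the project's technique; all names and the nearest-centroid rule are illustrative assumptions) labels a text as "human" or "ai" by comparing its character-trigram profile against profiles built from two small training corpora:

```python
# Toy illustration only: human-vs-AI authorship attribution framed as
# binary text classification with character trigrams and a
# nearest-centroid decision rule. Real systems use far richer features.
from collections import Counter


def trigrams(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))


def centroid(texts: list[str]) -> Counter:
    """Aggregate trigram counts across a training corpus."""
    total = Counter()
    for text in texts:
        total.update(trigrams(text))
    return total


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def attribute(text: str, human_corpus: list[str], ai_corpus: list[str]) -> str:
    """Label a text by whichever training centroid it more resembles."""
    sample = trigrams(text)
    human_sim = cosine(sample, centroid(human_corpus))
    ai_sim = cosine(sample, centroid(ai_corpus))
    return "human" if human_sim >= ai_sim else "ai"
```

Actual attribution research replaces the hand-built trigram profiles with learned representations, but the structure is the same: build a signature for each author class, then score new text against both.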

Shu says the research presents many challenges, such as making detection more efficient and developing explanations for why information is believed to be false or misleading.

"The different and new characteristics of LLM-generated misinformation are understudied, as well as how we can potentially combat LLM-generated misinformation," he says. "The proposed research will systematically investigate the detection, attribution, and explanation of LLM-generated misinformation."

Misinformation and disinformation research is both important and deeply challenging, Shu says. The challenges range from human vulnerability to misinformation, to bias among information providers, to the arms race between how misinformation is generated and how detection techniques are developed.

"Misinformation in the age of LLMs remains an underexplored problem in humanity, though it is pressing to investigate with multidisciplinary research," Shu says. "I am excited about this research project because I see potentials of leveraging trustworthy AI techniques for social good, to detect and intervene in the misinformation that is written by human or even AI models themselves."