LARGE LANGUAGE MODELS, PROPAGANDA AND SECURITY CHALLENGES
DOI: https://doi.org/10.53477/1842-9904-24-21

Keywords: generative AI, large language models (LLMs), propaganda, influence activities, AI-related security challenges

Abstract
The present paper is a non-systematic narrative review of security challenges and solutions related to LLM-generated propaganda, considered in the context of influence activities. Its purpose is to synthesise the knowledge on this topic, drawing on research, opinion and regulatory documents published between 2017 and 2024. To that end, the research protocol was designed to take into account criteria related to the diversity, credibility and eligibility of primary and secondary sources. The synthesised knowledge is then illustrated and discussed as objectively as possible. We consider that the main findings can help researchers identify, justify and refine hypotheses, with attention to possible pitfalls and gaps, and can help the general public acquire a higher level of situational awareness, given the novelty of the topic. Moreover, the findings may contribute to identifying new avenues for research in the field.