Large language model (LLM) agents represent the next generation of artificial 
intelligence (AI) systems, integrating LLMs with external tools and memory 
components to execute complex reasoning and decision-making tasks. These agents 
are increasingly deployed in domains such as healthcare, finance, 
cybersecurity, and autonomous vehicles, where they interact dynamically with 
external knowledge sources, retain memory across sessions, and autonomously 
generate responses and actions. While their adoption brings transformative 
benefits, it also exposes them to new and critical security risks that remain 
poorly understood. Among these risks, memory poisoning attacks pose a severe 
and immediate threat to the reliability and security of LLM agents. These 
attacks exploit the agent’s ability to store, retrieve, and adapt knowledge 
over time, leading to biased decisions, manipulation of real-time behavior, 
security breaches, and system-wide failures. The goal of this project is to 
develop a  theoretical foundation for understanding and mitigating memory 
poisoning in LLM agents.

This position, funded by the Swedish Research Council (VR), offers an exciting 
opportunity to work at the forefront of AI security, tackling some of the most 
pressing challenges in the field.

Full information and link to apply: 
https://liu.se/en/work-at-liu/vacancies/27883
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]

Reply via email to