An analysis of Large Language Models' (LLMs) vulnerability to prompt injection attacks and potential risks when used in adversarial situations, like on the Internet. The author notes that, similar to the old phone system, LLMs are vulnerable to prompt injection attacks and other security risks due to the intertwining of data and control paths.
This post highlights how the GitHub Copilot Chat VS Code Extension was vulnerable to data exfiltration via prompt injection when analyzing untrusted source code.
Simon Willison explains an accidental prompt injection attack on RAG applications, caused by concatenating user questions with documentation fragments in a Retrieval Augmented Generation (RAG) system.