Review of vulnerabilities for LLM. Part 1. Attack
Large language models are used everywhere: they generate designs for cars, houses, and ships, summarize round tables and conferences, and draft outlines for articles, newsletters, and presentations. But for all the benefits of adopting AI, security must not be forgotten. Large language models are attacked in a variety of sophisticated ways, and the top news stories about neural networks feature multimillion-dollar investments in protection against prompt injections. So let's talk about what threats exist and why investors pay big money to build such businesses. In the second part of the article, I will explain how to defend against them.