§ AI Wiki / Glossary
One-line definitions, the AI dictionary.
§ Jailbreak
An attack that tries to bypass an LLM's safety restrictions through prompting.
§ JSON mode
An API setting that guarantees the model's output is valid JSON.
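A minimal sketch of what enabling JSON mode looks like, using OpenAI's chat completions request shape as an assumed example (the request dict is illustrative and not sent anywhere; the sample output stands in for a model response):

```python
import json

# Illustrative request payload: the "response_format" field is the
# JSON-mode setting in OpenAI-style chat completion APIs.
request = {
    "model": "gpt-4o-mini",  # hypothetical model choice for the example
    "messages": [{"role": "user", "content": "List three colors as JSON."}],
    "response_format": {"type": "json_object"},  # turns JSON mode on
}

# With JSON mode enabled, the returned message content is guaranteed
# to parse; a stand-in response demonstrates the contract:
sample_output = '{"colors": ["red", "green", "blue"]}'
parsed = json.loads(sample_output)  # would raise ValueError on invalid JSON
print(parsed["colors"])
```

Note that JSON mode only guarantees syntactically valid JSON; conforming to a particular schema typically requires a separate structured-output feature.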