§ AI Wiki / Glossary
One-line definitions: the AI dictionary.
Synthetic data: Training data generated by another model rather than (or in addition to) real-world data.
Voice cloning: Voice synthesis that imitates a specific person from a few seconds of sample audio.
Neural network: A computational structure of interconnected layers of artificial neurons whose weights are learned from data.
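The "interconnected layers" above can be sketched as a minimal forward pass (a toy pure-Python illustration with made-up fixed weights; real networks learn these values from data):

```python
def mlp_forward(x, weights, biases):
    """Forward pass through a minimal fully connected network:
    each layer computes activation(W @ x + b). The weights here are
    fixed toy values standing in for learned parameters."""
    for layer, (W, b) in enumerate(zip(weights, biases)):
        x = [sum(w * xi for w, xi in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if layer < len(weights) - 1:      # hidden layers get a nonlinearity
            x = [max(0.0, v) for v in x]  # ReLU
    return x

# Two inputs -> two hidden units -> one output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.1]
print(mlp_forward([2.0, 1.0], [W1, W2], [b1, b2]))
```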
Frontier models: The most capable AI models of their generation, often advancing the capability frontier while introducing novel risk profiles.
Sora: OpenAI's text-to-video model, which drew wide attention when it was previewed.
Stable Diffusion: Stability AI's open-source diffusion image model, released in August 2022, that reshaped the field.
Stochastic parrot: A critique that LLMs reassemble training-data patterns probabilistically without genuine understanding.
Speech-to-text: Technology that converts spoken audio into text.
System prompt: A special message at the start of a conversation that sets the model's persistent instructions and role.
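In practice this is often expressed as the first entry of a chat message list; a minimal sketch using the widely used OpenAI-style role/content convention (the wording of the prompts is illustrative):

```python
# The system message pins the assistant's role and standing
# instructions before any user turn arrives.
messages = [
    {"role": "system",
     "content": "You are a concise technical glossary bot. Answer in one line."},
    {"role": "user",
     "content": "Define 'temperature'."},
]
print(messages[0]["role"])
```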
Zero-shot: When a model performs a task with no examples, given only the instruction.
Cold start: The slow first response when a model or service has been idle and must initialise on demand.
Post-training: The stage after pre-training that turns a raw model into a helpful, safe, instruction-following assistant.
Speculative decoding: An inference speedup where a small draft model proposes multiple tokens that the big model then verifies in parallel.
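The propose-then-verify loop can be sketched with toy deterministic "models" (a simplified greedy version; real implementations verify all draft positions in one parallel forward pass and use probabilistic acceptance):

```python
def speculative_step(target, draft, prefix, k=4):
    """One round of greedy speculative decoding. `draft` and `target`
    each map a token sequence to its next token. The draft proposes k
    tokens; the target keeps the longest agreeing prefix, plus one
    corrected token at the first mismatch (or a bonus token if all
    k proposals are accepted)."""
    proposal, seq = [], list(prefix)
    for _ in range(k):
        t = draft(seq)
        proposal.append(t)
        seq.append(t)

    accepted, seq = [], list(prefix)
    for t in proposal:
        expected = target(seq)
        if t == expected:
            accepted.append(t)
            seq.append(t)
        else:
            accepted.append(expected)  # target's correction
            break
    else:
        accepted.append(target(seq))   # bonus token: all k accepted
    return accepted

# Toy models over integers: the target counts up; the draft agrees
# until it hits 3, then guesses wrongly.
target = lambda seq: seq[-1] + 1
draft = lambda seq: seq[-1] + 1 if seq[-1] < 3 else 0
print(speculative_step(target, draft, [0], k=4))
```

One target step can thus emit several tokens at once, which is where the speedup comes from.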
Server-Sent Events (SSE): A simple HTTP-based standard for one-way live streams from server to browser.
Continuous batching: A dynamic serving technique where new requests can join an in-flight batch and finished ones leave immediately.
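The scheduling idea can be shown with a toy step-by-step simulation (a sketch, not any real server's scheduler): each step is one decode iteration for the whole batch, finished requests free their slot at once, and queued requests fill free slots at the next step instead of waiting for the whole batch to drain.

```python
from collections import deque

def continuous_batching(requests, max_batch=2):
    """Simulate continuous (in-flight) batching.
    Each request is (name, tokens_to_generate); returns the decode
    step at which each request finishes."""
    queue = deque(requests)
    batch = {}          # name -> tokens still to generate
    finish_step = {}
    step = 0
    while queue or batch:
        # Admit waiting requests into any free batch slots.
        while queue and len(batch) < max_batch:
            name, tokens = queue.popleft()
            batch[name] = tokens
        step += 1
        # One decode step: every request in the batch emits a token.
        for name in list(batch):
            batch[name] -= 1
            if batch[name] == 0:
                finish_step[name] = step
                del batch[name]  # slot freed immediately
    return finish_step

print(continuous_batching([("a", 3), ("b", 1), ("c", 2)]))
```

Here "c" starts as soon as "b" finishes at step 1; under static batching it would have had to wait for "a" as well.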
SentencePiece: Google's language-agnostic tokeniser library that treats whitespace as just another character.
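The whitespace trick can be illustrated without the real library (a conceptual sketch, not SentencePiece's API): spaces are rewritten to a visible marker, conventionally `▁`, so the segmenter sees them as ordinary characters and detokenisation is a lossless string join.

```python
def encode_view(text):
    """Mark spaces with '▁' and (for illustration only) split at the
    marker; a trained model would instead split into learned subword
    pieces that may cross word boundaries."""
    marked = "▁" + text.replace(" ", "▁")
    return ["▁" + p for p in marked.split("▁") if p]

def decode_view(pieces):
    """Lossless inverse: concatenate, turn markers back into spaces."""
    return "".join(pieces).replace("▁", " ").strip()

print(decode_view(encode_view("hello wide world")))
```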
Temperature: The sampling parameter that controls how 'creative' (high values) or 'deterministic' (low values) a model's output is.
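Under the hood this is usually temperature-scaled softmax over the model's logits; a minimal pure-Python sketch (toy logits, not any particular serving stack's implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities, dividing by the
    temperature first. Low temperature sharpens the distribution
    (more deterministic); high temperature flattens it (more varied)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.5)  # peaky: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flat: choices more even
```

The ranking of tokens never changes; only how heavily the sampler favours the top choice does.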