Large language model
A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs...
156 KB (13,394 words) - 11:01, 4 October 2024
Language model
A language model is a probabilistic model of a natural language. In 1980, the first significant statistical language model was proposed, and during the...
14 KB (2,173 words) - 12:33, 16 July 2024
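The "probabilistic model of a natural language" in the entry above can be sketched with a toy bigram model: estimate P(next word | previous word) from counts. This is an illustrative sketch only; the corpus and function name are invented for the example.

```python
from collections import Counter

# Toy corpus; any tokenized text works the same way.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigram occurrences and the contexts (all tokens except the last).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def bigram_prob(prev, word):
    """Maximum-likelihood estimate of P(word | prev)."""
    if contexts[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / contexts[prev]

# "the" is followed by "cat" twice and "mat" once in this corpus.
print(bigram_prob("the", "cat"))  # 2/3
```

Real statistical language models add smoothing and longer contexts, and neural language models replace the count table with a learned network, but the object being estimated is the same conditional distribution.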
Llama (language model)
Llama (Large Language Model Meta AI, formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting...
35 KB (3,612 words) - 04:02, 29 September 2024
Claude (language model)
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023. Claude 3, released in March 2024, can...
12 KB (1,184 words) - 11:26, 5 October 2024
BLOOM (language model)
Large Open-science Open-access Multilingual Language Model (BLOOM) is a 176-billion-parameter transformer-based autoregressive large language model (LLM)...
4 KB (500 words) - 17:28, 8 September 2024
T5 (language model)
T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI introduced in 2019. Like the original Transformer model, T5 models are encoder-decoder...
17 KB (1,616 words) - 17:34, 6 October 2024
BERT (language model)
state-of-the-art models, and as an early example of a large language model. As of 2020[update], BERT is a ubiquitous baseline in natural language processing...
30 KB (3,401 words) - 08:30, 2 October 2024
Gemini (language model)
Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra...
44 KB (3,499 words) - 06:58, 5 October 2024
Chinchilla (language model)
Chinchilla is a family of large language models (LLMs) developed by the research team at Google DeepMind, presented in March 2022. It is named "chinchilla"...
7 KB (615 words) - 22:23, 4 October 2024
Foundation model
A foundation model, also known as large AI model, is a machine learning or deep learning model that is trained on broad data such that it can be applied...
46 KB (5,072 words) - 13:26, 5 September 2024
Generative pre-trained transformer (redirect from GPT (language model))
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It...
49 KB (4,368 words) - 18:33, 4 October 2024
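The "autoregressive" generation that GPT-style models perform can be illustrated with a toy decoding loop: pick the most probable next token, append it, and repeat. Everything here (the `NEXT` table, the `generate` function) is invented for the sketch; a real GPT conditions on the entire prefix through its transformer layers, not just the last token as this Markov toy does.

```python
# Toy next-token table standing in for the network's predicted distribution.
NEXT = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
}

def generate(start="<s>", max_len=10):
    """Greedy autoregressive decoding: repeatedly take the most probable
    next token given the current one, stopping at '<end>'."""
    tokens = [start]
    for _ in range(max_len):
        dist = NEXT.get(tokens[-1])
        if dist is None:
            break
        nxt = max(dist, key=dist.get)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens[1:]

print(generate())  # ['the', 'cat', 'sat']
```

Sampling from the distribution instead of taking the argmax gives the varied outputs seen in practice; the generate-append loop is the same.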
PaLM (redirect from Pathways Language Model)
PaLM (Pathways Language Model) is a 540 billion parameter transformer-based large language model developed by Google AI. Researchers also trained smaller...
12 KB (798 words) - 21:44, 30 June 2024
Prompt engineering (redirect from In-context learning (natural language processing))
generative AI model. A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query...
52 KB (5,786 words) - 19:27, 6 October 2024
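The prompt-as-task-description idea above, and the in-context learning the entry's redirect refers to, can be sketched by building a few-shot prompt: worked examples are placed in the text itself, and the model is expected to continue the pattern without any weight updates. The translation pairs and formatting are invented for the example.

```python
# A few-shot prompt: the task is demonstrated with in-context examples
# rather than training; the model completes the final line.
examples = [
    ("cheese", "fromage"),
    ("dog", "chien"),
]
query = "bread"

prompt = "Translate English to French.\n"
for en, fr in examples:
    prompt += f"{en} -> {fr}\n"
prompt += f"{query} ->"

print(prompt)
```

The same construction underlies zero-shot prompting (instruction only, no examples) and chain-of-thought prompting (examples include intermediate reasoning).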
Transformer (deep learning architecture) (redirect from Transformer model)
Later variations have been widely adopted for training large language models (LLM) on large (language) datasets, such as the Wikipedia corpus and Common Crawl...
98 KB (12,180 words) - 21:58, 5 October 2024
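The core operation of the transformer architecture in the entry above is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A minimal pure-Python sketch (toy dimensions and values, no batching or masking, as a real implementation would have):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # how much each position attends to each key
        out.append([
            sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))
        ])
    return out

# One query attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Each output row is a convex combination of the value vectors, weighted by query-key similarity; stacking many such heads with learned projections gives multi-head attention.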
Mistral AI (section Mistral Large 2)
in the AI sector. The company focuses on producing open source large language models, emphasizing the foundational importance of free and open-source...
21 KB (2,191 words) - 19:29, 24 August 2024
GPT-3 (redirect from GPT-3 (language model))
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020. Like its predecessor, GPT-2, it is a decoder-only transformer model of deep neural network...
54 KB (4,915 words) - 00:40, 2 October 2024
Modeling language
and distributed systems. A large number of modeling languages appear in the literature. Example of graphical modeling languages in the field of computer...
22 KB (2,837 words) - 08:09, 12 July 2024
GPT-4o (category Large language models)
under different names on Large Model Systems Organization's (LMSYS) Chatbot Arena as three different models. These three models were called gpt2-chatbot...
17 KB (1,787 words) - 13:48, 6 October 2024
open-source large language model developed in the United Arab Emirates and launched in August 2023. It was trained on both English- and Arabic-language data...
3 KB (260 words) - 10:33, 19 June 2024
GPT-4 (category Large language models)
Generative Pre-trained Transformer 4 (GPT-4) is a multimodal large language model created by OpenAI, and the fourth in its series of GPT foundation models. It was launched on March 14...
62 KB (5,931 words) - 20:28, 6 October 2024
works on a freemium model; the free product uses the company's standalone large language model (LLM) that incorporates natural language processing (NLP)...
14 KB (1,154 words) - 06:36, 5 October 2024
Stochastic parrot (redirect from On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?)
describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process. The term...
23 KB (2,441 words) - 23:36, 6 October 2024
Model collapse
found. In the context of large language models, research found that training LLMs on predecessor-generated text—language models are trained on the synthetic...
15 KB (2,332 words) - 12:41, 6 October 2024
ChatGPT (category Large language models)
chatbot developed by OpenAI. Launched in 2022 based on the GPT-3.5 large language model (LLM), it was later updated to use the GPT-4 architecture. ChatGPT...
198 KB (17,195 words) - 13:15, 5 October 2024
development. LLaMA is a family of large language models released by Meta AI starting in February 2023. Meta claims these models are open-source software, but...
6 KB (551 words) - 02:17, 9 September 2024
Retrieval-augmented generation (category Large language models)
retrieval capabilities. It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified...
11 KB (1,148 words) - 11:26, 1 October 2024
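The retrieval-augmented generation entry above describes answering "with reference to a specified set of documents"; the mechanics can be sketched as retrieve-then-prompt. This is an illustrative sketch: the documents, the word-overlap scoring (a stand-in for the embedding-based similarity search used in practice), and the function names are all invented.

```python
# Toy document store; real RAG systems index embeddings of many documents.
DOCS = [
    "Llama is a family of large language models released by Meta AI.",
    "BLOOM is a 176-billion-parameter multilingual language model.",
    "Chinchilla was presented by DeepMind in March 2022.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (a crude stand-in
    for vector similarity search)."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query):
    """Prepend retrieved context so the LLM answers with reference to
    the specified documents rather than from parameters alone."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Who released the Llama language models?"))
```

The prompt, not the model's weights, carries the reference material, which is why RAG can ground answers in documents the model never saw during training.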
MMLU (redirect from Measuring Massive Multitask Language Understanding)
Measuring Massive Multitask Language Understanding (MMLU) is a benchmark for evaluating the capabilities of large language models. It consists of about 16...
4 KB (393 words) - 18:00, 1 October 2024
Multimodal learning (redirect from Multimodal model)
(2023-01-01). "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models". arXiv:2301.12597 [cs.CV]. Alayrac...
7 KB (2,122 words) - 14:24, 1 June 2024
LangChain (category Large language models)
that helps facilitate the integration of large language models (LLMs) into applications. As a language model integration framework, LangChain's use-cases...
18 KB (742 words) - 19:36, 28 September 2024
Generative artificial intelligence (category CS1 Italian-language sources (it))
Improvements in transformer-based deep neural networks, particularly large language models (LLMs), enabled an AI boom of generative AI systems in the early...
140 KB (12,199 words) - 18:55, 6 October 2024