List of the Best 21 Large Language Models (LLMs) (September 2024)
Large language models are pre-trained on large datasets and use natural language processing to perform linguistic tasks such as text generation, code completion, paraphrasing, and more.
The initial release of ChatGPT sparked the rapid adoption of generative AI, which has led to large language model innovations and industry growth.
In fact, 92% of Fortune 500 firms have started using generative AI in their workflows.
As adoption continues to grow, so does the LLM industry. The global large language model market is projected to grow from $6.5 billion in 2024 to $140.8 billion by 2033.
With that, here is a list of the top 21 LLMs available in September 2024.
| LLM Name | Developer | Release Date | Access | Parameters |
| --- | --- | --- | --- | --- |
| GPT-4o | OpenAI | May 13, 2024 | API | Unknown |
| Claude 3.5 | Anthropic | June 20, 2024 | API | Unknown |
| Grok-1 | xAI | November 4, 2023 | Open-Source | 314 billion |
| Mistral 7B | Mistral AI | September 27, 2023 | Open-Source | 7.3 billion |
| PaLM 2 | Google | May 10, 2023 | Proprietary | 340 billion |
| Falcon 180B | Technology Innovation Institute | September 6, 2023 | Open-Source | 180 billion |
| Stable LM 2 | Stability AI | January 19, 2024 | Open-Source | 1.6 billion, 12 billion |
| Gemini 1.5 | Google DeepMind | February 2, 2024 | API | Unknown |
| Llama 3.1 | Meta AI | July 23, 2024 | Open-Source | 405 billion |
| Mixtral 8x22B | Mistral AI | April 10, 2024 | Open-Source | 141 billion |
| Inflection-2.5 | Inflection AI | March 10, 2024 | Proprietary | Unknown |
| Jamba | AI21 Labs | March 29, 2024 | Open-Source | 52 billion |
| Command R | Cohere | March 11, 2024 | Both | 35 billion |
| Gemma | Google DeepMind | February 21, 2024 | Open-Source | 2 billion, 7 billion |
| Phi-3 | Microsoft | April 23, 2024 | Both | 3.8 billion |
| XGen-7B | Salesforce | July 3, 2023 | Open-Source | 7 billion |
| DBRX | Databricks (Mosaic ML) | March 27, 2024 | Open-Source | 132 billion |
| Pythia | EleutherAI | February 13, 2023 | Open-Source | 70 million to 12 billion |
| Sora | OpenAI | February 15, 2024 (announced) | Proprietary | Unknown |
| Alpaca 7B | Stanford CRFM | March 13, 2023 | Open-Source | 7 billion |
| Nemotron-4 340B | NVIDIA | June 14, 2024 | Open-Source | 340 billion |
1. GPT-4o
Developer: OpenAI
Release date: May 13, 2024
Number of Parameters: Unknown
What is it? GPT-4o is OpenAI's latest and most advanced language model, succeeding GPT-4, GPT-3.5, and GPT-3. OpenAI claims that GPT-4o is 50% cheaper than GPT-4 Turbo while generating tokens twice as fast. This multimodal model packages text, image, video, and voice capabilities into one.
GPT-4o's biggest upgrade is its voice-to-voice capability, which reduces average response times to 320 milliseconds (compared to several seconds with GPT-4). This feature is expected to roll out in the coming weeks.
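For developers, GPT-4o is available through OpenAI's standard chat completions endpoint. Here is a minimal sketch, assuming the `openai` Python package (v1+) is installed and an `OPENAI_API_KEY` environment variable is set:

```python
# Minimal sketch: calling GPT-4o via OpenAI's chat completions API.
# Assumes the openai package (v1+) and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a multimodal model is."},
    ],
)
print(response.choices[0].message.content)
```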
2. Claude 3.5
Developer: Anthropic
Release date: June 20, 2024
Number of Parameters: Unknown
What is it? Claude 3.5 Sonnet is the first release in the new Claude 3.5 model family, an upgrade from the highly rated Claude 3. As with Claude 3, the family will also include Haiku and Opus models. Arguably the biggest competitor to GPT-4 and ChatGPT, Anthropic made even bigger improvements with this model, maintaining the 200,000-token context window at a lower cost. This is much larger than GPT-4's 32,000-token context window.
According to Anthropic's report, Claude 3.5 Sonnet outperformed GPT-4o on major benchmarks like coding and text reasoning. It is also Anthropic's most advanced vision model, with the ability to transcribe text from images and generate insights from charts.
Amazon has invested over $4 billion in Anthropic, bringing the startup's valuation to $15 billion. The Claude mobile app was also released in May 2024.
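Claude 3.5 Sonnet is likewise reachable through Anthropic's Messages API. Here is a minimal sketch, assuming the `anthropic` package is installed, an `ANTHROPIC_API_KEY` environment variable is set, and the June 2024 model snapshot name:

```python
# Minimal sketch: sending a message to Claude 3.5 Sonnet via the
# Anthropic Messages API. Assumes the anthropic package and an
# ANTHROPIC_API_KEY env variable; model name is the June 2024 snapshot.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[{"role": "user", "content": "Explain context windows in one paragraph."}],
)
print(message.content[0].text)
```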
3. Grok-1
Developer: xAI
Release date: November 4, 2023
Number of Parameters: 314 billion
What is it? Created by Elon Musk's artificial intelligence startup xAI, Grok-1 was the largest open-source LLM at the time of its release, with 314 billion parameters. Grok integrates directly with X (Twitter), and users must pay for an X Premium+ subscription to gain access.
Because of the model's size, Grok-1 uses a mixture-of-experts (MoE) architecture in which only 25% of its weights are active for any given input token, keeping compute costs manageable.
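To make the routing idea concrete, here is a toy top-k mixture-of-experts layer in PyTorch. This is a generic illustration of MoE routing, not xAI's implementation, and every name in it is hypothetical:

```python
# Toy sketch of mixture-of-experts routing (illustrative only, not
# Grok's actual code). A router scores the experts for each token and
# only the top-k experts run, so just a fraction of weights are active.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
        self.k = k

    def forward(self, x):                            # x: (tokens, dim)
        scores = self.router(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # pick k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(4, 64)
print(ToyMoE()(x).shape)  # torch.Size([4, 64]); 2 of 8 experts (~25%) active per token
```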
In August 2024, both Grok-2 and Grok-2 mini were released to X users in beta. According to xAI's reports, Grok-2 outperforms GPT-4o in numerous categories, such as GPQA, MMLU-Pro, and DocVQA.
4. Mistral 7B
Developer: Mistral AI
Release date: September 27, 2023
Number of Parameters: 7.3 billion
What is it? Mistral 7B is an open-source language model with 32 layers, 32 attention heads, and eight key-value heads. Despite running with fewer parameters, it outperforms the larger Llama 2 13B across nearly all benchmarks, including MMLU, reading comprehension, math, and coding.
Mistral 7B is released under an Apache 2.0 license, so customers are free to download it locally, deploy it in the cloud, or run it through Hugging Face. The Paris-based startup is close to securing a new $600 million funding round that would value the company at $6 billion.
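Because the weights are openly licensed, the model can be loaded straight from the Hugging Face Hub. A minimal sketch, assuming `transformers` and `torch` are installed and there is enough GPU memory for half-precision weights:

```python
# Minimal sketch: running Mistral 7B locally with Hugging Face
# transformers (the Apache 2.0 weights are hosted on the Hub).
# Assumes transformers, torch, and ~15 GB of GPU memory in fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```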
5. PaLM 2
Developer: Google
Release date: May 10, 2023
Number of Parameters: 340 billion
What is it? PaLM 2 is an advanced large language model developed by Google. As the successor to the original Pathways Language Model (PaLM), it was trained on 3.6 trillion tokens (up from 780 billion) and has 340 billion parameters (down from PaLM's 540 billion). PaLM 2 originally powered Google's first generative AI chatbot, Bard (rebranded as Gemini in February 2024).
6. Falcon 180B
Developer: Technology Innovation Institute (TII)
Release date: September 6, 2023
Number of Parameters: 180 billion
What is it? Developed and funded by the Technology Innovation Institute, Falcon 180B is an upgraded version of the earlier Falcon 40B LLM. Its 180 billion parameters make it 4.5 times larger than Falcon 40B's 40 billion.
Beyond Falcon 40B, it also outperforms other large language models like GPT-3.5 and LLaMA 2 on tasks such as reasoning, question answering, and coding. In February 2024, the UAE-based institute committed $300 million in funding to the Falcon Foundation.
7. Stable LM 2
Developer: Stability AI
Release date: January 19, 2024
Number of Parameters: 1.6 billion and 12 billion
What is it? Stability AI, the creator of the Stable Diffusion text-to-image model, is the developer behind Stable LM 2. The series includes Stable LM 2 1.6B (1.6 billion parameters) and Stable LM 2 12B (12 billion parameters). Released in April 2024, the larger 12B model outperforms much bigger models like LLaMA 2 70B on key benchmarks.
8. Gemini 1.5
Developer: Google DeepMind
Release date: February 2, 2024
Number of Parameters: Unknown
What is it? Gemini 1.5 is Google's next-generation large language model and a significant upgrade over its predecessor, Gemini 1.0. While it is only available for early testing, Gemini 1.5 Pro provides a one-million-token context window (about 1 hour of video, 700,000 words, or 30,000 lines of code), the largest of any LLM or chatbot to date. That is 35 times larger than Gemini 1.0 Pro's window and surpasses the previous record of 200,000 tokens held by Anthropic's Claude 2.1.
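Developers with early access can query the model through the `google-generativeai` package. A minimal sketch, assuming the package is installed, a `GOOGLE_API_KEY` environment variable is available, and the preview-era model identifier:

```python
# Minimal sketch: querying Gemini 1.5 Pro with google-generativeai.
# Assumes the package is installed and GOOGLE_API_KEY is set; the
# model name reflects the preview-era identifier.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("List three uses for a long context window.")
print(response.text)
```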
9. Llama 3.1
Developer: Meta AI
Release date: July 23, 2024
Number of Parameters: 405 billion
What is it? Llama 3, the predecessor to Llama 3.1, was available in 70B and 8B versions that outperformed other open-source models like Mistral 7B and Google's Gemma 7B on MMLU, reasoning, coding, and math benchmarks. The latest version brings major upgrades, including 405 billion parameters and an expanded context length of 128,000 tokens.
Users will also notice improved accuracy thanks to a knowledge base trained on over 15 trillion tokens, and Meta added support for eight additional languages. The model's size makes it the largest open-source model released to date.
Customers can still access the earlier Llama 2, which is available in three versions: 7 billion, 13 billion, and 70 billion parameters.
10. Mixtral 8x22B
Developer: Mistral AI
Release date: April 10, 2024
Number of Parameters: 141 billion
What is it? Mixtral 8x22B is Mistral AI's latest and most advanced large language model. This sparse mixture-of-experts (SMoE) model has 141 billion total parameters but uses only 39 billion active parameters per token, improving its performance-to-cost ratio.
The startup also recently released Mistral Large, a ChatGPT alternative that ranks second behind GPT-4 among API-based LLMs.
11. Inflection-2.5
Developer: Inflection AI
Release date: March 10, 2024
Number of Parameters: Unknown
What is it? Inflection-2.5 is the latest large language model (LLM) developed by Inflection AI to power its conversational AI assistant, Pi. Significant upgrades have been made: the model achieves over 94% of GPT-4's average performance while using only 40% of the training FLOPs. In March 2024, the Microsoft-backed startup reported more than one million daily active users on Pi.
12. Jamba
Developer: AI21 Labs
Release date: March 29, 2024
Number of Parameters: 52 billion
What is it? AI21 Labs created Jamba, the world's first production-grade Mamba-style large language model. It combines structured state space model (SSM) technology with elements of a traditional transformer to create a hybrid architecture. The model is efficient and highly scalable, with a 256K-token context window and support for 140K tokens of context on a single GPU.
13. Command R
Developer: Cohere
Release date: March 11, 2024
Number of Parameters: 35 billion
What is it? Command R is a series of scalable LLMs from Cohere that supports ten languages and a 128,000-token context length (around 100 pages of text). The model primarily excels at retrieval-augmented generation (RAG), code-related tasks like explanations and rewrites, and reasoning. In April 2024, Command R+ was released to support larger workloads and real-world enterprise use.
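Cohere's chat endpoint accepts a `documents` list, which is the retrieval-augmented pattern Command R is tuned for. A minimal sketch, assuming the `cohere` package and a `CO_API_KEY` environment variable (the document shown is made up for illustration):

```python
# Minimal sketch: grounding a Command R answer on supplied documents,
# the retrieval-augmented generation pattern the model is tuned for.
# Assumes the cohere package and a CO_API_KEY env variable; the
# document snippet below is invented for illustration.
import os
import cohere

co = cohere.Client(os.environ["CO_API_KEY"])
response = co.chat(
    model="command-r",
    message="What does the report say about Q3 revenue?",
    documents=[
        {"title": "Q3 report", "snippet": "Q3 revenue grew 12% year over year."},
    ],
)
print(response.text)
```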
14. Gemma
Developer: Google DeepMind
Release date: February 21, 2024
Number of Parameters: 2 billion and 7 billion
What is it? Gemma is a series of lightweight open-source language models developed and released by Google DeepMind. The Gemma models are built with similar tech to the Gemini models, but Gemma is limited to text inputs and outputs only. The models have a context window of 8,000 tokens and are available in 2 billion and 7 billion parameter sizes.
15. Phi-3
Developer: Microsoft
Release date: April 23, 2024
Number of Parameters: 3.8 billion
What is it? Classified as a small language model (SLM), Phi-3 is Microsoft's latest release with 3.8 billion parameters. Despite its smaller size, it was trained on 3.3 trillion tokens of data and competes with Mixtral 8x7B and GPT-3.5 on the MT-bench and MMLU benchmarks.
To date, Phi-3-mini is the only model available. However, Microsoft plans to release the Phi-3-small and Phi-3-medium models later this year.
16. XGen-7B
Developer: Salesforce
Release date: July 3, 2023
Number of Parameters: 7 billion
What is it? XGen-7B is a large language model from Salesforce with 7 billion parameters and an 8k context window. The model was trained on 1.37 trillion tokens from various sources, such as RedPajama, Wikipedia, and Salesforce's own Starcoder dataset.
Salesforce has released two open-source base versions, with 4,000- and 8,000-token context windows, under an Apache 2.0 license.
17. DBRX
Developer: Databricks' Mosaic ML
Release date: March 27, 2024
Number of Parameters: 132 billion
What is it? DBRX is an open-source LLM built by Databricks and the Mosaic ML research team. Its mixture-of-experts architecture activates 36 billion of its 132 billion total parameters on any given input. DBRX has 16 experts and chooses 4 of them during inference, providing 65 times more possible expert combinations than similar models like Mixtral and Grok-1.
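The 65x figure is straightforward combinatorics. A quick check, assuming the comparison is against a Mixtral-style model that activates 2 of 8 experts:

```python
# Worked check of the "65x more expert combinations" figure:
# DBRX routes each token to 4 of 16 experts, while a Mixtral-style
# model picks 2 of 8 (assumed here for the comparison).
from math import comb

dbrx = comb(16, 4)      # 1820 possible expert subsets
mixtral = comb(8, 2)    # 28 possible expert subsets
print(dbrx, mixtral, dbrx // mixtral)  # 1820 28 65
```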
18. Pythia
Developer: EleutherAI
Release date: February 13, 2023
Number of Parameters: 70 million to 12 billion
What is it? Pythia is a series of 16 large language models developed and released by EleutherAI, a non-profit AI research lab. There are eight different model sizes: 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. Because of Pythia's open-source license, these LLMs serve as a base model for fine-tuned, instruction-following LLMs like Dolly 2.0 by Databricks.
19. Sora
Developer: OpenAI
Release date: February 15, 2024 (announced)
Number of Parameters: Unknown
What is it? OpenAI's latest development is Sora, a text-to-video model that combines LLMs and generative AI to turn text prompts into realistic videos up to 60 seconds long. The model uses a transformer architecture that operates on "spacetime patches" of video and image data rather than the text tokens other LLMs use. No official release date for Sora has been announced, but OpenAI expects to open it to the public in late 2024.
20. Alpaca 7B
Developer: Stanford CRFM
Release date: March 13, 2023
Number of Parameters: 7 billion
What is it? Alpaca is a 7 billion-parameter language model developed by a Stanford research team and fine-tuned from Meta's LLaMA 7B model. Despite being much smaller, Alpaca performs similarly to OpenAI's text-davinci-003 (a GPT-3.5 model). However, Alpaca 7B is available for research purposes only; no commercial license is offered.
21. Nemotron-4 340B
Developer: NVIDIA
Release date: June 14, 2024
Number of Parameters: 340 billion
What is it? Nemotron-4 340B is a family of large language models for synthetic data generation and AI model training. These models help businesses create new LLMs without larger and more expensive datasets. Instead, Nemotron-4 can create high-quality synthetic data to train other AI models, which reduces the need for extensive human-annotated data.
The model family includes Nemotron-4-340B-Base (foundation model), Nemotron-4-340B-Instruct (fine-tuned chatbot), and Nemotron-4-340B-Reward (quality assessment and preference ranking). Thanks to the 9 trillion tokens used in training, which include English, multilingual, and coding data, Nemotron-4 matches GPT-4's high-quality synthetic data generation capabilities.
Conclusion
The landscape of large language models is rapidly evolving, with new breakthroughs and innovations emerging at an unprecedented pace.
From compact models like Phi-3 and Alpaca 7B to cutting-edge architectures like Jamba and DBRX, the field of LLMs is pushing the boundaries of what's possible in natural language processing (NLP).
We will keep this list regularly updated with new models. If you liked learning about these LLMs, check out our lists of generative AI startups and AI startups for more.