Open vs. Closed Large Language Models: Analyzing GPT-4, Claude, LLAMA-2, and Mistral
In its early days, OpenAI released its large language models (LLMs) as open source and operated as a non-profit. As the models improved, however, access became restricted, and OpenAI is now on pace to generate over $1 billion in revenue over the next 12 months.
Big corporations like Microsoft, Salesforce, Snapchat, and Morgan Stanley use OpenAI's models via its API. And while OpenAI leads the AI industry in many respects (accuracy, groundedness, latency), the developers integrating its models are limited in how far they can innovate because the models are closed source.
Relying on OpenAI also carries risk around pricing, uptime, and usage restrictions. This is a growing concern, and for many companies it disqualifies its LLMs outright. The battle between open and closed source is an old story, played out time and time again.
Open vs. Closed-Source Technologies
Open source makes software a public good that anyone can benefit from. This ethos has been a key driver of the United States' capitalist economy and, more specifically, a major reason Silicon Valley's dominance in software is unrivaled anywhere on earth.
Some of the greatest innovations were made possible by open source. PyTorch, for example, is an open-source machine learning framework used in Tesla's self-driving technology.
But there is a downside to open-source technologies: their quality often trails closed-source alternatives. Closed-source products are built by large corporations with billions in revenue and venture backing, which lets them attract the best engineers and researchers in the field and push their technology further, faster.
Still, a certain share of technology always ends up open-sourced, because different companies have different needs around privacy and control. The LLM space is still sorting itself out in this regard. The chart below gives an overview of some popular technologies that found a healthy mix of open and closed solutions.
Understanding Open vs. Closed-Source LLMs
Closed source dominates in terms of out-of-the-box results. OpenAI's GPT models and Anthropic's Claude have cemented themselves as numbers one and two in the space, with the open-source competitors, LLAMA-2 and Mistral-7B, trailing by a decent margin.
There is an important nuance to note with LLMs: open-source models can outperform closed-source models in industry-specific scenarios, thanks to the control, customization, and transparency they offer.
When there is a need to understand and audit a model's behavior, open source is the clear winner. This is particularly relevant where transparency, customizability, and community scrutiny matter, such as in research, auditing, and security-sensitive applications.
The quality gap between open-source models and the GPT models is expected to close over the coming years. This could ultimately commoditize LLMs, which would be a huge win for the open-source community.
Open-Source Advantages Summary:
- Customization and fine-tuning
- Enhanced data security and privacy
- Transparency and code accessibility
Closed-Source Advantages Summary:
- Broad knowledge out of the box
- Simple setup
- Low barrier to entry
The closed-source leaders
OpenAI and Anthropic stand as the dominant forces in the field. OpenAI has secured a $13 billion investment from Microsoft, while Anthropic has garnered $4 billion from Amazon. These leading entities are set to significantly outpace their competition.
Perhaps the biggest win for OpenAI and Anthropic in these deals was the significant amount of cloud compute they received, ensuring they have the infrastructure in place to keep improving their already gigantic LLMs. For reference, GPT-4 is reported to have about 1.8 trillion parameters, over 10 times larger than its predecessor, GPT-3.
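The "over 10 times larger" claim is easy to sanity-check. A minimal sketch, assuming GPT-3's published size of 175 billion parameters and the reported 1.8 trillion figure for GPT-4:

```python
# Compare the reported parameter counts of GPT-4 and GPT-3.
gpt4_params = 1.8e12  # ~1.8 trillion parameters, as reported for GPT-4
gpt3_params = 175e9   # 175 billion parameters, GPT-3's published size

ratio = gpt4_params / gpt3_params
print(f"GPT-4 is roughly {ratio:.1f}x the size of GPT-3")  # roughly 10.3x
```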
Although Claude is often regarded as the second-leading option, it has several advantages over GPT-4: superior performance in mathematics, more detailed explanations when solving problems, and greater aptitude in specialized fields such as law and medicine.
The open-source challengers
The open-source community was electrified by the launch of LLAMA-2, Meta's large language model, released in an open-source format. Weights for the original LLAMA had famously leaked earlier in 2023; with LLAMA-2, Meta made the weights fully and officially available to the public.
This move by Zuckerberg was not simply a gesture of goodwill but a strategic business decision for Meta. With OpenAI in the lead and Anthropic holding a firm grip on second place, open-sourcing is a common strategy for the player in third: it offers a unique advantage the leading competitors do not.
Among open-source models, Mistral currently leads by most metrics. However, Meta's announcement that LLAMA-3 is in training and will also be released as open source has many in the industry anticipating that Meta will reclaim the leading position.
The future of large language models
Future users of these models will span a broad spectrum, from enterprise applications and academic research to small businesses and individual creators. Each group will weigh considerations such as computational efficiency, model accuracy, and proficiency with complex human language.
AI will continue to improve at a rapid rate. Large language models will lead the way, but other areas of AI, such as computer vision, will also see great gains and creative applications.
GPT-5 is anticipated to be released this year and to deliver another significant jump in intelligence. The model is rumored to have up to 5 trillion parameters and to have been trained with roughly double the compute power, around 25,000 NVIDIA A100 GPUs.
The promise held by large language models is monumental, offering transformative benefits across numerous sectors. As we look to the future, there is every reason to remain optimistic about their potential as we keep pushing the boundaries of innovation.