Who Will Win the Battle for Generative AI?

OpenAI’s remarkable success with ChatGPT has sent shockwaves through the tech world, sparking a fierce race to build the next generation of large language models (LLMs).

Rivals are multiplying, aiming to challenge OpenAI’s early dominance. Securing a lead in the artificial intelligence (AI) race could allow a company to dominate a substantial segment of the commerce and payments industry, which is increasingly shaped by AI technologies.

Canadian startup Cohere is a major contender. Its reported $500 million funding round, which potentially values the company at $5 billion, underscores the intense investor interest in dethroning OpenAI. Co-founded by former Google researcher Aidan Gomez, Cohere has rapidly scaled its revenue, partnering with Oracle to expand the reach of its enterprise AI solutions.

But tech giants aren’t sitting idly by. Google, Meta, Anthropic and Mistral are aggressively developing their own LLMs. Microsoft, a key OpenAI investor, is independently building foundational models to ensure it remains at the forefront.

While leading tech firms have an advantage in the competition, the quest for supremacy in AI involves more than just the industry’s giants. Open-source projects, collaborations and a focus on ethics and accessibility are emerging as key factors in the race to dethrone OpenAI. Still, advancing the frontiers of artificial intelligence frequently requires enormous investments in computational power and research talent.

“The hurdle for building a broad foundational model is that training on increasingly large data sets is extraordinarily expensive,” Gil Luria, a senior software analyst at D.A. Davidson & Co., told PYMNTS. “The only reason OpenAI can afford to do so is the backing of Microsoft and the Azure resources it makes available to OpenAI. The broad models, such as the ones leveraged by ChatGPT, have ingested huge portions of human knowledge and continue to train on new content, which is what makes them so versatile in many domains of expertise.” 

The Battle for the Top LLM

The latest AI upstart, Cohere, is close to securing $500 million in funding, potentially raising its valuation to $5 billion, according to a Reuters report on March 21. The company, which develops foundational AI models similar to those behind OpenAI’s ChatGPT, has increased its revenue to $22 million this month, up from $13 million in December, after launching its new model, Command-R.

Valued at $2.2 billion in June following a $220 million funding round, Cohere is courting investors with its enterprise AI solutions and has partnered with Oracle to broaden its model accessibility.

Open Source?

The strategies companies choose to adopt could significantly impact the competitive landscape of LLM development. While some embrace open-source models, offering free access to their data and methodologies, others opt for a proprietary route, keeping their processes and information exclusive.

Karl Jacob, CEO at LoanSnap, highlighted the potential of open-source approaches. 

“Facebook is interesting because of its open-source approach, which we have seen flourish in other areas. Google is interesting because of its massive amount of training data and the fact that it has been working on AI for years,” he told PYMNTS. “Lastly, teams like Anthropic are interesting because they are very focused on speed of training.”

Dmytro Shevchenko, a data scientist at enterprise software firm Aimprosoft, also noted the importance of open-source initiatives such as Falcon, Vicuna and Llama 2. 

“More often than not, these projects present LLMs of different models that are ideal for both small tasks and large cases,” he said in an interview with PYMNTS. “They also solve the problem of data privacy that commercial models lack, even if they say they don’t use your data.”

However, the high cost of developing and training foundational LLMs remains a significant hurdle. This barrier to entry could lead smaller companies to specialize in niche solutions.

“For more specific applications, many software companies are building custom LLMs meant to work within a specific context,” Luria said. These targeted models require less data and computing power, providing an accessible avenue for innovation. 

Smaller AI models might offer businesses a more competitive option. The recent update to Inflection’s Pi chatbot exemplifies the shift toward lower-cost models that make AI more accessible to firms.

The new Inflection 2.5 model approaches the performance of OpenAI’s GPT-4 while requiring only 40% of the computational power for training. The model is crafted to support natural, empathetic and secure interactions, and has enhanced abilities in coding and mathematics compared to its predecessor.

The enhancement allows Pi chatbot users to explore a broader array of topics, illustrating that smaller LLMs can achieve high performance with greater efficiency.

AI Ethics for Success?

AI ethics is emerging as a potential key factor in the race for AI supremacy. While companies are exploring various approaches to developing ethical AI systems, there is currently little consensus on what constitutes “ethical AI.”

Some experts believe that as AI becomes more prevalent, customers may increasingly favor AI models that are perceived as more ethical. Additionally, future regulations may require companies to prioritize ethical considerations in their AI development efforts. As the field evolves, a company’s stance and track record on AI ethics could significantly influence its competitiveness and overall success in the AI market.

“Anthropic’s emphasis on ethics in the development of Claude may give it a competitive advantage over OpenAI, as it increases trust and security, which is particularly important to users and regulators,” Shevchenko said. “And a new generation of LLMs, such as Claude 3 Opus, outperforms GPT-4.”

As for the future of LLM supremacy, Luria predicted a split between a small number of broad foundational models and a proliferation of context-specific models.

“While there are likely to be only two to three broad foundational models in the future, we would expect there to be multiple LLMs by many companies on each of our devices as the technology becomes faster and more efficient,” he said. 

John Licato, an assistant professor of computer science and engineering at the University of South Florida, pointed to Google’s parent company, Alphabet, as a formidable contender to OpenAI due to its “institutional expertise” and “access to compute power and data.”

He noted in an interview with PYMNTS that Google’s Gemini models, particularly Gemini 1.5 Pro, offer a context window of up to a million tokens, allowing for far longer contexts than GPT-4’s 128,000-token limit.
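For a rough sense of what those limits mean in practice, here is a minimal Python sketch that checks whether a document fits within a given context window. It uses OpenAI’s open-source tiktoken tokenizer purely for illustration; Gemini and other models use their own tokenizers, so the counts are approximations rather than official figures.

# Illustrative sketch only: rough token counts using OpenAI's open-source
# tiktoken tokenizer. Other models tokenize differently, so treat the
# numbers as estimates, not official limits.
import tiktoken

CONTEXT_WINDOWS = {
    "gpt-4-turbo": 128_000,       # OpenAI's documented limit
    "gemini-1.5-pro": 1_000_000,  # up to one million tokens, per Google
}

def fits_in_context(text: str, model: str) -> bool:
    """Return True if the text's estimated token count fits the model's window."""
    encoding = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(encoding.encode(text))
    limit = CONTEXT_WINDOWS[model]
    print(f"{model}: {n_tokens:,} tokens (limit {limit:,})")
    return n_tokens <= limit

# Example: a book-length manuscript may overflow a 128,000-token window
# while fitting comfortably within a one-million-token one.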

Ultimately, Licato believes Google holds the prime spot in the AI race, citing its vast experience with transformers — the technology at the heart of ChatGPT — alongside unparalleled access to data that few entities globally can rival. He also noted that Meta and Anthropic are strong contenders, poised to continually challenge OpenAI’s position.

“At this point, perhaps the most significant factor is access to a tremendous amount of computing power,” he said. “Companies like Google and OpenAI have millions (perhaps billions) of dollars of GPU processors, as well as more advanced computing technologies like TPUs (tensor processing units).” 

As AI technology rapidly evolves and becomes more specialized, it remains an open question who might eventually surpass OpenAI as the leader in the field. The AI industry is moving at a breakneck pace, with the constant possibility of game-changing breakthroughs coming from unexpected places.