Top 10 AI Predictions for 2025: AI Agents Will Go Mainstream

As 2024 draws to a close, venture capitalist Rob Toews of Radical Ventures shares his 10 predictions for AI in 2025:
01. Meta will start charging for Llama models
Meta is the global standard-bearer for open AI. In a compelling case study in corporate strategy, Meta has chosen to make its state-of-the-art Llama models available for free, while competitors like OpenAI and Google have kept their cutting-edge models closed source and charged for access.
So the prediction that Meta will start charging companies to use Llama next year will come as a surprise to many.
To be clear: we are not predicting that Meta will close source Llama entirely, nor that everyone who uses Llama models will have to pay for them.
Rather, we are predicting that Meta will make the terms of Llama’s open source license more stringent so that companies above a certain size that use Llama in a commercial context will need to start paying to use the model.
Technically, Meta already does this today to a limited extent. The company does not allow the very largest players (cloud hyperscalers and other companies with more than 700 million monthly active users) to use its Llama models for free.
As early as 2023, Meta CEO Mark Zuckerberg said: “If you’re a company like Microsoft or Amazon or Google, and you’re basically reselling Llama, then we should get some revenue from it. I don’t think there will be a lot of revenue in the short term, but in the long term, hopefully there will be some revenue.”
Next year, Meta will significantly expand the set of companies that must pay to use Llama, bringing in more large and mid-sized enterprises. Keeping up with the cutting edge of large language models (LLMs) is extremely expensive: Meta needs to invest billions of dollars each year to keep Llama at or near parity with the latest frontier models from OpenAI, Anthropic, and others.
Meta is one of the largest and best-funded companies in the world. But it’s also a public company and ultimately accountable to shareholders.
As the cost of building frontier models continues to soar, it becomes increasingly untenable for Meta to spend so much to train the next generations of Llama with no expectation of revenue.
Over the next year, Llama models will continue to be available for free to enthusiasts, academics, individual developers, and startups. But 2025 will be the year Meta gets serious about monetizing Llama.
02. Questions about “scaling laws”
In recent weeks, the most discussed topic in the field of artificial intelligence has been scaling laws, and the question of whether they are about to end.
Scaling laws were first formalized in an OpenAI paper in 2020. The basic idea is simple and direct: when training an AI model, as the number of model parameters, the amount of training data, and the amount of compute increase, the model's performance improves in a reliable and predictable way (technically, its test loss decreases).
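As a rough sketch of the relationship reported in that paper (Kaplan et al., 2020), test loss falls off as a power law in each ingredient when the other two are not the bottleneck; the exact fitted values are not the point here, only the shape:

L(N) ≈ (N_c / N)^α_N      L(D) ≈ (D_c / D)^α_D      L(C) ≈ (C_c / C)^α_C

where N is the parameter count, D is the dataset size, C is the training compute, and N_c, D_c, C_c and the α exponents are constants fitted to empirical measurements.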
From GPT-2 to GPT-3 to GPT-4, the amazing performance gains are attributed to scaling laws.
Like Moore’s Law, scaling laws are not actually real laws, but just empirical observations.
In the past month, a series of reports have shown that major artificial intelligence labs are experiencing diminishing returns as large language models continue to scale up. This helps explain why OpenAI’s GPT-5 release has been repeatedly delayed.
The most common rebuttal to the stagnation of scaling laws is that the advent of test-time computation has opened up a whole new dimension in the pursuit of scaling.
That is, new reasoning models like OpenAI's o3 can massively scale compute at inference time, unlocking new AI capabilities by letting models "think longer," rather than by massively scaling compute during training.
This is an important point. Test-time computation does represent an exciting new avenue to achieve scaling and AI performance gains.
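To make the idea concrete, here is a minimal sketch of one widely discussed form of test-time compute scaling, best-of-N sampling. The generate_candidate and score functions are hypothetical stand-ins for a model and a verifier, not OpenAI's actual o3 mechanism.

import random

def generate_candidate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled model response
    # (in practice, a full reasoning trace from the model).
    return f"candidate answer {random.randint(0, 9)} for: {prompt}"

def score(candidate: str) -> float:
    # Hypothetical stand-in for a verifier or reward model that rates each candidate.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # More inference-time compute (a larger n) buys more candidates to choose from,
    # which can improve the final answer without any additional training.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))

The point of the sketch is the knob itself: spending more compute per query (a larger n, or longer reasoning chains) is a scaling axis that is independent of training-time scale.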
But another point about scaling laws is even more important, and one that is severely underappreciated in today’s discussion. Almost all discussions of scaling laws, starting with the original 2020 paper and continuing through today’s focus on test-time computation, have focused on language. But language is not the only data modality that matters.
Think about robotics, biology, world models, or networked agents. For these data modalities, scaling laws have not yet saturated; rather, they are just beginning.
In fact, rigorous demonstrations of scaling laws in these fields have not even been published to date.
Startups building foundation models for these new data modalities (for example, EvolutionaryScale in biology, Physical Intelligence in robotics, and World Labs in world models) are trying to identify and exploit scaling laws in these fields, just as OpenAI successfully exploited scaling laws for large language models (LLMs) in the first half of the 2020s.