Time to start assessing the return on the investments that have been made.

But if competition is shifting to price, is that perhaps a sign that radical
innovation has run out of momentum?
And will incremental innovations really turn products that so far seem
immature into profitable ones?

<https://www.theguardian.com/technology/article/2024/jul/26/why-zuckerbergs-multi-billion-dollar-gamble-doesnt-just-matter-to-meta>

Why Zuckerberg’s multibillion-dollar gamble doesn’t just matter to Meta

Dan Milmo

Spending on artificial intelligence could hit a staggering $1tn, according to 
analysts concerned about whether there will be a return on such a spree. Mark 
Zuckerberg’s answer this week to such jitters was to release his latest AI 
system for free.

Meta’s Llama 3.1 405B is its most powerful yet, it says, and one of the most 
capable in the world. While the tech company didn’t disclose how much it cost 
to train, Zuckerberg, its co-founder and chief executive, has previously 
disclosed a $10.5bn (£8.9bn) investment in just the chips required to power its 
AI data centres – with the rest of the electronics, the electricity itself, and 
the physical building an additional cost on top of that.

Yet despite the exorbitant outlay, the parent of Facebook and Instagram will 
charge you nothing for it. If you can get hold of a computer powerful enough to 
run it, you don’t have to pay Zuckerberg a cent.

Whether that gamble will pay off matters to more than just Meta, though: a big 
bet by investors and Meta’s tech peers hinges on the same question.

In June, analysts at Goldman Sachs published a note with the sceptical title: 
“Gen AI: too much spend, too little benefit?” It pointed to a $1tn investment 
over the next few years by the tech industry, other companies and utilities in 
AI infrastructure, including chips and power grids. That prompted the 
question: “Will this large spend ever pay off?”

The note covers a range of views on whether the spree will bring acceptable 
returns, including an economics professor at Massachusetts Institute of 
Technology, Daron Acemoglu, who argues that “truly transformative changes” 
brought about by AI “won’t happen quickly”. In other words, the beneficial 
economic return from this boom might take longer than investors expect.

The research also questions whether power supply can keep up with the demand 
related to training and operating AI systems – and asks the same about the 
chips needed to power those models.

There are also more optimistic takes from Goldman analysts, who argue that AI 
will ultimately automate 25% of all work tasks in the US (thus making the 
economy more productive but also creating new tasks and products) and that the 
spending boom is not out of whack with previous tech investment sprees.

However, Sequoia Capital, an early investor in ChatGPT developer OpenAI, has 
made it clear that AI companies need to work hard to pay back all that 
investment in infrastructure such as chips and data centres. A prior estimate 
that AI companies will need to earn $200bn to pay back their investment has 
risen to $600bn, wrote Sequoia partner David Cahn.

Cahn stressed that backing AI so strongly would “almost certainly” be 
worthwhile but “the road ahead is going to be a long one”.

Benedict Evans, a tech analyst, asked in a note this month whether large 
language models – the technology underpinning tools such as ChatGPT – “might 
also be a trap”, because while ChatGPT might look like a full-fledged product, 
it isn’t.

Evans draws an analogy between LLMs and both the first iPhone and the early 
internet – the potential is there, but other things need to happen as well.

“When most technologies first appear they aren’t ready yet and it’s not clear 
why they’re useful, and they need a bunch more work,” he says. “The iPhone 
didn’t have 3G and it didn’t have apps. The web arrived in the early 90s, but 
no one had a modem, let alone broadband. A whole bunch of stuff had to happen 
before the web could take off.”

Evans adds that chatbots have worked well for people who need time-saving fixes 
for coding and marketing, but they have yet to bring similar fixes in other areas.

“There are very specific cases where this already saves a huge amount of time, 
but it hasn’t generalised to everybody yet.” He adds: “The feeling that I think 
most people have looking at ChatGPT is, OK, this is amazingly cool … but what 
am I supposed to do with this?”

This week, OpenAI announced it is testing a search engine in the US, in what 
could become a direct challenge to search behemoth Google, which has launched 
AI-generated search answers, too. OpenAI has also hired a former Meta and 
Twitter executive, Kevin Weil, as its chief product officer to help answer the 
question: “What am I supposed to do with this?”

This week, tech news site the Information reported that OpenAI could lose as 
much as $5bn this year, based on analysis of internal financial figures and 
interviews with sources, as it amasses $8.5bn-worth of operating costs 
including $4bn a year on renting servers from Microsoft and $3bn a year on 
training models.

Full-year revenue could be anywhere between $3.5bn and $4.5bn, the Information 
reported, implying a loss of up to $5bn. San Francisco-based OpenAI 
makes money from charging companies to build consumer products using its 
models, or letting companies build in-house chatbots with its technology, as 
well as generating revenue from charging users $20 a month to access a more 
powerful version of ChatGPT. Google and Anthropic also sell subscriptions to AI 
systems for $20 a month, while Cohere sells its models to business customers 
for tasks such as coding or data analysis.
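
A quick back-of-envelope check of those figures (a minimal sketch in Python; 
the $8.5bn, $4bn, $3bn and revenue numbers are the Information's reported 
figures, and the residual "other costs" line is simply whatever is left over):

    # The Information's reported figures, in billions of US dollars
    server_rental = 4.0   # annual cost of renting servers from Microsoft
    training = 3.0        # annual cost of training models
    other_costs = 8.5 - (server_rental + training)  # remainder of the $8.5bn operating costs

    operating_costs = server_rental + training + other_costs   # 8.5
    revenue_low, revenue_high = 3.5, 4.5                        # reported full-year revenue range

    print(f"implied loss: ${operating_costs - revenue_high:.1f}bn "
          f"to ${operating_costs - revenue_low:.1f}bn")         # 4.0bn to 5.0bn -> "up to $5bn"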

The Information admitted its OpenAI calculations were “guesstimates” but if 
they are right, the company needs to raise cash over the next 12 months. OpenAI 
declined to comment, although with Microsoft as its main financial backer it 
could call upon a wealthy benefactor – albeit with regulators circling.

But even as tech companies have been fighting for domination at the “frontier” 
of AI, investing billions in building bigger and better systems, a second front 
has opened up in a more traditional battle: cost. OpenAI’s most recent release 
isn’t GPT-4o, the headline-grabbing “multimodal” system that sounded so much 
like Scarlett Johansson that it sparked a lawsuit.

Instead, it’s GPT-4o mini, a stripped-back version of the same AI that the 
company offers to third-party developers at less than 5% of the cost of its 
frontier system – undercutting the previous cheapest models, Google’s Gemini 
1.5 Flash and Anthropic’s Claude 3 Haiku.
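
For a developer, moving to the cheaper tier is little more than changing the 
model name in an API call. A minimal sketch using OpenAI's Python SDK (the 
prompt is illustrative, and an API key is assumed to be set in the environment):

    # pip install openai; expects OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the stripped-back model, billed at a fraction of the frontier price
        messages=[{"role": "user", "content": "Summarise the case for cheaper LLM tiers in one sentence."}],
    )
    print(response.choices[0].message.content)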

On Tuesday, Meta’s release of the latest version of its Llama system undercut 
OpenAI. Again, while the attention was on the frontier version of the model, 
dubbed 405B, the company was equally eager to push its smallest LLM, 8B. Unlike 
its competitors, Llama is available for anyone to download and run on their own 
systems, with cloud providers such as Groq offering it for a third of the price 
of OpenAI’s competitor systems. Llama 3.1 8B underperforms those OpenAI 
equivalents, per benchmarks from Artificial Analysis, but for the price, who 
can complain?
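
Running the 8B model yourself looks roughly like the sketch below, using the 
Hugging Face transformers library; the repository name, the licence-acceptance 
step on huggingface.co and the hardware needed (a GPU with roughly 16GB of 
memory for bf16 weights) are assumptions about the usual distribution route, 
not details from the article:

    # pip install transformers torch accelerate
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed hub id; gated behind Meta's licence

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the weights are free to download, but you supply the hardware
        device_map="auto",
    )

    prompt = "In one sentence, why would a developer pick a smaller model?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))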

Meta calls Llama “open source”, although critics dispute the claim: it’s 
possible to download the model and use it with a fairly free licence but the 
training data remains entirely closed, and the model is not free for anyone to 
use for any purpose. Some of the restrictions are practical: the copyright 
status of training data for large language models is, at best, controversial. 
Meta keeps the specific data it trained Llama 3.1 on a secret, and almost 
certainly lacks the licence to redistribute it free of charge.

Other restrictions have more commercial weight behind them. By keeping 
restrictions on Llama 3.1’s use in place, Meta places itself at the centre of 
any future sector that grows around its AI: even if it doesn’t charge directly 
for access to the model, it gets to control the direction of development, and 
can always close future avenues if competitors grow too big using its tech.

On Tuesday, Zuckerberg said: “I believe the Llama 3.1 release will be an 
inflection point in the industry… I hope you’ll join us on this journey to 
bring the benefits of AI to everyone in the world.” For investors and other 
tech companies, those benefits need to produce a meaningful return.
