Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on Elon Musk’s decision to lease the computing capacity at SpaceX’s Colossus 1 data center to Anthropic. I also look at what a new Atlantic exposé on David Sacks says about Silicon Valley’s alliance with Trump, and a benchmark that’s stumping top AI coding agents.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Why Grok is selling compute to Anthropic
While everybody else in the AI space scrambles to lock down computing power, xAI’s Grok models are apparently being used so little relative to peers that the company can sell off the capacity of entire data centers, “colossal” ones at that.
Anthropic said Tuesday it had signed an agreement with SpaceX to use all of the computing capacity in SpaceX’s Colossus 1 data center in Memphis. (SpaceX owns xAI.) The deal will give Anthropic access to more than 300 megawatts of computing capacity, or more than 220,000 NVIDIA GPUs. Anthropic says the additional capacity will be used to serve its Claude Pro ($20 per month) and Claude Max ($100 to $200 per month) subscribers.
SpaceX CEO Elon Musk says he gave his much-sought moral stamp of approval to Anthropic. “By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed,” Musk said in an X post. “Everyone I met was extremely competent and cared a great deal about doing the right thing. No one set off my evil detector.”
Musk says xAI had already shifted its training workloads to Colossus 2, freeing up Colossus 1 for Anthropic’s use. Anthropic says it will use the facility primarily for inference, or the processing required to respond to user prompts in real time.
The partnership may eventually extend beyond Earth. Anthropic says it has also been discussing plans with Musk and SpaceX to develop several gigawatts of orbital AI compute capacity. Space-based AI data centers hold obvious appeal because the cost of cooling servers would essentially disappear. But major technical hurdles remain, especially around reliably transmitting huge amounts of data between orbiting infrastructure and Earth.
Musk’s willingness to arm Anthropic with vital computing power may have something to do with his hatred of Anthropic rival OpenAI, and his dislike of OpenAI founder Sam Altman. Musk sued OpenAI, claiming the company’s leadership betrayed its original nonprofit mission to develop AGI for the benefit of humanity rather than for profit.
Trump’s bargain with Silicon Valley on AI may be weakening
The Atlantic’s George Packer, in a new article about former White House “crypto and AI czar” David Sacks, sheds more light on how and why Sacks and other Valley elites went full MAGA before the 2024 election. Now there are signs that the main thing Silicon Valley wanted in exchange for its support may be in jeopardy.
Silicon Valley’s preferred version of its MAGA conversion story is that influential VC Marc Andreessen met with representatives of the Biden administration and was told the administration meant to heavily regulate AI so that only a few big AI labs, and no startups, would be able to comply and survive. Andreessen said Biden wanted to “nationalize or destroy” Silicon Valley. He said Biden wanted to kill the entire cryptocurrency industry. He said he and his partner Ben Horowitz decided to support MAGA right after that meeting.
Biden officials dispute Andreessen’s account of what was said. But Andreessen’s version was enough to set a broader shift in motion among tech elites. Sacks held a fundraiser for Donald Trump in June 2024 in San Francisco’s wealthy Pacific Heights neighborhood. After talking with Trump at the event and on the All-In podcast, Sacks said: “All of his instincts are Let’s empower the private sector; let’s cut regulations; let’s make taxes reasonable; let’s get the smartest people in the country; let’s have peace deals; let’s have growth.”
What Sacks and others were really after was a promise of AI deregulation and more tax cuts. They got the tax cuts, and so far the Trump administration has worked hard to stifle government investigations or regulations targeting the tech industry. Some states have passed laws requiring government oversight, but the administration has been trying to preempt such laws or challenge them in court.
Packer suggests that Sacks, Andreessen, Horowitz, and other Valley elites may share something in common with much of MAGA: They’re white men witnessing a loss of status in society. “Andreessen was willing to pay high taxes and support liberal causes and candidates as long as he was seen as a hero,” Packer writes.
But Silicon Valley’s fall from grace isn’t the fault of Democrats, Biden, or “wokeism”; it’s the result of government and society slowly realizing that many Silicon Valley elites are not actually driven by idealistic notions of “making the world better.” Instead, they’ve repeatedly shown a willingness to unleash technologies they know may be harmful. The clearest example is Meta, which the government largely allowed to regulate itself while shielding it from many user lawsuits via Section 230, only to watch social media platforms contribute to disinformation, political polarization, and harms to children.
But nothing is permanent with Trump, as so many others have learned, and agreements that no longer provide immediate value can be quickly abandoned.
The White House announced this week that it’s considering a requirement that government officials “vet” new AI models before they can be released. Team Trump was apparently spooked by two things. Anthropic, a company the administration recently declared a supply-chain risk, developed a model called Mythos that can identify software vulnerabilities at scale and devise ways to exploit them. Meanwhile, the tech industry’s massive data center buildout is becoming increasingly unpopular with parts of the MAGA base and could become a major GOP liability in the midterms.
Maybe tech elites and MAGA don’t mix quite as well as either side once thought.
Meet the new benchmark that’s soundly defeating coding agents
Perhaps the most consequential application of generative AI models so far has been software engineering, where agents generate code and increasingly make high-level architectural decisions. But how do we tell how good an AI software engineer really is? Until now, the industry has largely relied on benchmark tests such as SWE-Bench, which evaluate models on relatively well-defined tasks like fixing bugs or implementing a single feature. Now the developers behind SWE-Bench have released a much harder test called ProgramBench.
The benchmark is difficult because the AI agent has to reason strategically about the optimal architecture and programming language needed to reproduce the behavior of each of the 200 test programs. Once an agent finishes building a codebase, the benchmark runs roughly 248,000 tests to measure how closely the recreated software matches the original behavior.
So far, all of the leading models tested on ProgramBench, including Anthropic’s Claude Opus 4.7, Google’s Gemini 3 Pro, and OpenAI’s GPT-5.4, have scored big fat zeros. In other words, none were able to fully complete the test builds. Several models, however, were able to complete portions of them.
The results suggest that current AI coding tools still aren’t advanced enough to make the kinds of architectural and systems-level decisions human software engineers routinely make when turning an idea into working software. The findings may indicate that AI agents still struggle to apply abstract principles learned during training to entirely novel problems.
More AI coverage from Fast Company:
- How a Texas vegan cheese-maker used Claude and Manus to fight back against a big shipping company
- AI power users are pulling away from everyone else, Microsoft says
- AI labels were supposed to help users spot fakes. Here’s why they’re failing
- OpenAI’s trillion-dollar AI bet is a study in ‘riskmaxxing’
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.