Week of November 3, 2025

Data Centers in Space

Starcloud successfully deployed its first satellite equipped with an Nvidia H100 GPU. The launch serves as a proof of concept for Starcloud's plan to offer data centers in space for AI compute workloads.
Just days later, Google unveiled Project Suncatcher, a plan to build a constellation of low Earth orbit satellites equipped with Google's TPU chips. In a recent interview, Jeff Bezos made a similar prediction: future gigawatt-scale training clusters will be better built in space because of the constant supply of solar power.

Too Big to Fail? OpenAI Revenue Commitments and Government Backstop

Last Friday, Brad Gerstner questioned how OpenAI plans to fund its $1.4 trillion of spend commitments on $13 billion of annual revenue. Sam Altman replied defensively, challenging Gerstner to sell his shares while pointing to OpenAI's revenue growth and future revenue opportunities from consumer hardware and AI that can "automate science." Altman also emphasized that the critical risk is an insufficient compute buildout.

This Monday, AWS announced a $38 billion compute partnership with OpenAI, less than one week after OpenAI and Microsoft renegotiated their own partnership. Then, on Wednesday, in a Wall Street Journal interview, OpenAI CFO Sarah Friar expressed support for a federal backstop for OpenAI's compute investments. Friar later walked back and clarified these comments on LinkedIn.

Parallel Launches Search APIs for Agents

Parallel, founded by former Twitter CEO Parag Agrawal, announced the general availability of its search API optimized for AI agents. Parallel reports that the API outperforms competitors such as Exa, Perplexity, and Tavily across a range of benchmarks.
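
To make the idea concrete, here is a minimal sketch of how an agent might call a search API of this kind. The endpoint URL, header, and request/response fields below are assumptions for illustration only, not Parallel's documented interface; consult Parallel's docs for the actual parameters.

import requests

# Hypothetical agent-side search call. The URL and JSON fields are
# placeholders for illustration, not Parallel's actual API surface.
API_URL = "https://api.parallel.ai/v1/search"  # assumed endpoint

def agent_search(query: str, api_key: str, max_results: int = 5) -> list:
    """POST a natural-language query and return a list of result dicts."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"query": query, "max_results": max_results},  # assumed fields
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])  # assumed response shape

for result in agent_search("Nvidia H100 satellite launch", api_key="YOUR_KEY"):
    print(result)

In a typical agent loop, the returned snippets are fed back into the model's context before its next reasoning step, which is why latency and result quality matter more here than for human-facing search.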

Generalist's Foundation Model for Robotics

Generalist announced GEN-0, an embodied foundation model for physical interaction. GEN-0 is trained on a corpus of 270,000 hours of real-world robotic action, growing at a rate of 10,000 hours per week. The team has observed clear scaling laws: model performance improves predictably with more data and compute.
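
Scaling laws of this kind are usually summarized as a power law in data (and compute), e.g. error ≈ a · N^(−b) for N hours of training data. As a hedged illustration of what a "clear scaling law" means in practice, the data points and coefficients below are invented, not Generalist's reported results:

import numpy as np
from scipy.optimize import curve_fit

# Invented data points for illustration only; not GEN-0's measurements.
hours = np.array([10_000, 30_000, 90_000, 270_000], dtype=float)
error = np.array([0.52, 0.41, 0.33, 0.26])

def power_law(n, a, b):
    # error = a * n^(-b): error falls predictably as data grows
    return a * n ** (-b)

(a, b), _ = curve_fit(power_law, hours, error, p0=(5.0, 0.3))
print(f"fit: error ≈ {a:.2f} * hours^(-{b:.3f})")
print(f"extrapolated error at 1,000,000 hours: {power_law(1e6, a, b):.3f}")

A straight-line fit of error against data on a log-log plot is what lets a team extrapolate performance at data scales it has not yet collected.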