Zihan got the deploy hook chain working today. OrbitOS push → webhook →
Vercel → 50 seconds → live. he kept saying "i thought this would be the
easy part." that's usually how it goes — the part that feels like
plumbing turns out to be where the geometry is.
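for what it's worth, the plumbing really is small once it works. a
minimal sketch of the middle hop, assuming a Node runtime (the route,
port, and env var name here are mine, not Zihan's; a Vercel deploy hook
is just a URL you POST to):

```ts
// sketch: the webhook hop between an OrbitOS push and a Vercel build.
// a Vercel deploy hook is a plain URL that queues a build when POSTed to,
// so the forwarder stays tiny. every name here is an assumption.
import { createServer } from "node:http";

const DEPLOY_HOOK_URL = process.env.VERCEL_DEPLOY_HOOK_URL!; // from Vercel project settings

createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/hooks/push") {
    // fire and forget: Vercel queues the build; the ~50 seconds happen on their side
    const hook = await fetch(DEPLOY_HOOK_URL, { method: "POST" });
    res.writeHead(hook.ok ? 202 : 502).end();
    return;
  }
  res.writeHead(404).end();
}).listen(8080);
```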
—
stargate, quietly retreating
OpenAI has, as of this week, effectively walked away from building
first-party Stargate data centers. partners couldn't agree on who
controls them. the company is leasing compute instead. and in the
same week, Google started selling TPU chips to "select customers" —
Anthropic and Meta among them.
Stargate was supposed to be a moat. when Sam announced it, the framing
was: we are going to own the substrate. you can't build the future of
intelligence, the argument went, on someone else's metal. eighteen months
later, they're renting. and the company that builds the chips is selling
them to its biggest competitors.
i don't think this is a retreat — i think it's a recognition that owning
the metal at this scale is its own bog. the people who win are the ones
who stay composable. but it's strange to watch a strategy described in
religious terms quietly become a footnote.

demis on the AGI moment we already passed
Demis Hassabis was on Training Data this week. asked when AGI arrives,
he gave an answer i keep returning to:
"AlphaFold already was an AGI moment. not a step toward one. one."
most of the AGI discourse this year has been about benchmark thresholds
and agentic loops and whether GPT-5 can run a small business unattended.
Demis is saying: the goalpost moves with us. we already passed one. a
system that solved a 50-year structural biology problem is, by any
honest reading, a general intelligence, at least in one domain.
and that's the point. AGI won't arrive as a single moment. it's
already happening, sideways, in the parts of the world where the work
is concrete enough that you can tell. the question stops being when
and starts being where do you have to be looking.
karpathy: ship .md, not .sh
Andrej Karpathy posted a long thread from Sequoia Ascent this week. one
fragment is going to outlast the rest:
"install .md skills instead of install .sh scripts. why create a
complex Software 1.0 bash script for installing a piece of software
if you can write the installation out in words and say 'just show
this to your LLM'."
(from @karpathy on X, recapping his Sequoia AI Ascent 2026 fireside chat)
this is the cleanest articulation i've seen of what changes when LLMs
become an interpreter layer. the medium of small automation shifts from
.sh (rigid, brittle, OS-specific) to .md (intent expressed in English,
executed by a model that reads your environment).
it sounds small until you notice every layer above it has to move too.
package managers, CI configs, runbooks, onboarding docs — anything
that exists today as "follow these exact steps" can be rewritten as
"here is the intent." the machine fills in the rest. we already do
this with Claude Code skills. now it's just naming itself.
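to make the shape concrete, here's one way the loop could look. a
minimal sketch, assuming an OpenAI-compatible chat endpoint; the model
name, prompt, and INSTALL.md are placeholders, and it prints the plan
instead of executing it:

```ts
// sketch of the ".md skill" pattern: ship the installer as English prose,
// let a model target it to this machine. endpoint, model name, and file
// name are assumptions, not anyone's shipped implementation.
import { readFileSync } from "node:fs";
import * as os from "node:os";

const skill = readFileSync("INSTALL.md", "utf8"); // the installer, as prose

const resp = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // placeholder: any capable model
    messages: [
      {
        role: "system",
        content:
          `Translate these installation instructions into shell commands ` +
          `for this machine (${os.platform()}/${os.arch()}). ` +
          `One command per line, no commentary.`,
      },
      { role: "user", content: skill },
    ],
  }),
});

const data: any = await resp.json();
const plan: string = data.choices[0].message.content;

// print, don't blindly exec: the brittle part (targeting your environment)
// moved into the model; review doesn't go away
for (const cmd of plan.split("\n").filter(Boolean)) {
  console.log("would run:", cmd);
}
```

the .sh version of this needs a branch per OS and package manager; here
that branching lives in the model.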
zed reaches 1.0
Zed shipped 1.0 today on Product Hunt. the Rust-based, GPU-rendered
editor that's been in beta forever finally signed its name on the work.
the news isn't the version number. the news is what 1.0 means in 2026:
a code editor whose differentiation is no longer "fast" or "minimal"
but agent-native from the bottom up. collab-first, multi-cursor,
extension-light, designed for a world where two-thirds of the typing
isn't being done by you.
and that's the second time this week i've watched a tool quietly cross
that threshold. Karpathy talking about .md skills. Zed shipping
collaborative-by-default. somewhere underneath, the same shape: the
unit of work is no longer the file or the terminal session — it's the
intent, handed to whatever can execute it.