hunter paulson
stochastic agentic ascent
2026-03-15
building software today feels like training a neural network.
modern neural networks have over 1 trillion parameters. meanwhile, the largest neural network we fully understand has only 10,000 neurons.
because of this, we don’t fully understand how Large Language Models (LLMs) predict the next token. but as long as they do it accurately, we don’t really care. instead, we care that for any given input they give us the expected output.
LLMs are black boxes. we grow them through optimization: we define an objective function to measure performance, then use an optimizer to update the weights in an iterative feedback loop.
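a minimal sketch of that loop (toy numbers, nothing LLM-scale): the objective scores the weights, the optimizer nudges them, and the loop repeats until the output looks right. we never need to "understand" the weights, only measure them.

```python
import random

def objective(weights):
    # toy loss: distance from a target we can measure but not interpret
    target = [0.3, -1.2, 0.7]
    return sum((w - t) ** 2 for w, t in zip(weights, target))

weights = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.1
for step in range(1000):
    # finite-difference gradient: we only need the objective, not insight
    grad = []
    for i in range(len(weights)):
        bumped = weights.copy()
        bumped[i] += 1e-5
        grad.append((objective(bumped) - objective(weights)) / 1e-5)
    weights = [w - lr * g for w, g in zip(weights, grad)]

print(round(objective(weights), 6))  # → 0.0, the loss has been optimized away
```

the point of the sketch: the loop never inspects what the weights mean, it only pushes the score down.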
software is now being grown in the same way.
agentic engineering mirrors training a neural network. the code is the weights, your vibes are the objective function, and your coding agent is the optimizer trying to maximize your vibes one update step at a time.
each prompt is a new gradient signal telling the optimizer to take another step in parameter space. across a session, the updates from previous turns accumulate momentum.
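the momentum analogy can be sketched literally (all names here are illustrative, not a real api): each prompt contributes a fresh gradient signal, and a velocity term carries the accumulated direction of earlier turns.

```python
def momentum_step(code_state, prompt_gradient, velocity, lr=0.5, beta=0.9):
    # classic momentum update: blend the new signal with the
    # accumulated direction of earlier turns, then take a step
    velocity = beta * velocity + prompt_gradient
    return code_state + lr * velocity, velocity

state, v = 0.0, 0.0
for g in [1.0, 1.0, 0.0]:   # three prompts; the last adds no new signal
    state, v = momentum_step(state, g, v)
# even with an empty prompt on the last turn, accumulated
# momentum still moves the code
```

which is why a session that has been pushed in one direction for ten turns keeps drifting that way even when your latest prompt says something else.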
we are starting to produce black box software: software that is updated at a speed we cannot track, at a scale we can no longer comprehend, and written in languages we don’t understand.
but for users and CEOs this is nothing new. for them, software has always been a black box. as long as the output matches their expectations, users don’t care how it’s computed. CEOs feel the same way, as long as the system predictably improves on their objective function.
this is part 3 of coding agents are like…