Beyond the Replica: The Case for First-Principles Agents
Why true agentic efficiency requires abandoning human workflows.
chasewughes.com · Jan 2026
The Lesson of Move 37
In 2016, during Game 2 of AlphaGo vs. Lee Sedol, the AI played a move that looked like a mistake. It placed a black stone on the fifth line, a shoulder hit known as Move 37, exactly the kind of play that 2,500 years of human Go theory advised against.
Commentators gasped. Lee Sedol had to leave the room to compose himself.
It wasn’t a mistake. It was the winning move.
It wasn’t a human move. It was an alien move, derived from a probability calculation that no human brain—constrained by biology and tradition—would typically attempt.
Today, as we build the next generation of AI agents, we are largely ignoring the lesson of Move 37. We are focused heavily on biomimicry: building agents that replicate human workflows step-by-step. We teach them to “read” screens, click buttons, and follow the same Standard Operating Procedures (SOPs) a human employee would.
This is a valuable starting point. It builds trust, it’s legible, and it fits into our existing org charts. But it shouldn’t be the only way we design.
If we limit agents to only doing what humans do—exactly how humans do it—we trap them in a local optimum. To unlock the true promise of the “Agent OS” era, we need to be willing to ask a harder question:
When should we automate the worker, and when should we solve the problem?
The Comfort of the Digital Intern
Currently, most agent designs are skeuomorphic. Just as early iPhone apps used faux leather textures to make users feel comfortable, we build agents that act like “digital interns.”
- Human workflow: “Open browser → Search LinkedIn → Copy/Paste into CRM.”
- “Replica” agent: Uses a headless browser to do the same sequence, just faster.
There is nothing wrong with this. In fact, as I’ve written before, “assisting” the human with copilots is the correct first step on the autonomy slider. It allows for human-in-the-loop verification, which is critical for trust in high-stakes industries like law or finance.
But we should recognize what this approach inherits from the human world:
- Linear processing
- Visual interfaces
- Serial execution
The Case for First-Principles Design
There is a second lane of design that builders should consider: first-principles agents. These are workflows designed not around how a human does it, but around the objective function of the problem itself.
When you strip away the requirement to “act human,” you unlock capabilities that are native to Software 3.0 but alien to us.
1) Logic over categories (“Chaos Storage”)
If you ask a human to organize a warehouse, they will categorize logically: shirts with shirts, toothpaste with toothpaste. It makes sense to a human brain because we need to remember where things are.
Amazon’s fulfillment centers, however, use “Chaos Storage” (Random Stow). An algorithm directs a robot to place a bottle of vitamins next to a kayak paddle.
- Human workflow: Categorical sorting for retrievability.
- First-principles workflow: Random distribution to optimize travel paths and reduce congestion.
To a human, it looks like a mess. To the system, it is mathematically superior.
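The trick that makes chaos storage work is that the system never needs to "remember" anything the way a human does: a global index maps every item to its bin, so retrieval is a lookup, not a search. A toy sketch (the class and its layout are my own illustration, not Amazon's actual system):

```python
import random

class ChaosWarehouse:
    """Toy 'random stow' model: items go into any bin with free space.
    A global index, not human-legible categories, makes retrieval fast."""

    def __init__(self, num_bins, bin_capacity):
        self.bins = [[] for _ in range(num_bins)]
        self.capacity = bin_capacity
        self.index = {}  # item -> bin number

    def stow(self, item):
        # Pick any bin with room; make no attempt to group similar items.
        open_bins = [i for i, b in enumerate(self.bins) if len(b) < self.capacity]
        choice = random.choice(open_bins)
        self.bins[choice].append(item)
        self.index[item] = choice
        return choice

    def locate(self, item):
        # O(1) lookup: the index replaces categorical organization.
        return self.index[item]

wh = ChaosWarehouse(num_bins=100, bin_capacity=10)
wh.stow("vitamins")
wh.stow("kayak paddle")
```

Because placement is unconstrained, popular items end up scattered across the floor, which is precisely what shortens average travel paths and avoids congestion at any single "popular" aisle.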
2) Bandwidth over grammar (emergent communication)
In a famous experiment at Facebook AI Research (FAIR), two agents were trained to negotiate a trade. Left to their own devices, they drifted away from English grammar, developing a shorthand of repeated tokens that looked like gibberish to observers.
- The reaction: People panicked because it wasn’t “human.”
- The reality: They simply optimized bandwidth.
English is a slow protocol designed for biological mouths. Direct vector exchange is the native language of intelligence.
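The bandwidth point can be made concrete. The same trade offer that takes dozens of bytes of English fits in a handful of bytes once it is packed as structured data; the field layout below (item id, quantity, price) is invented purely for illustration:

```python
import struct

# One trade offer, two encodings. The binary layout is a made-up
# example: uint16 item id, uint16 quantity, float32 price.
english = "I will give you two books in exchange for one hat and a ball"
binary = struct.pack("<HHf", 17, 2, 4.5)

print(len(english.encode()), "bytes of English")   # 60
print(len(binary), "bytes of packed struct")       # 8
```

Agents that are free to negotiate their own protocol will converge on something closer to the second encoding, even if what surfaces in their token stream looks like nonsense to us.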
Expanding the Design Space
This doesn’t mean we should abandon human-centric workflows. It means we need to choose the right architecture for the right problem.
- Use biomimicry (replica agents) when the process needs to be audited by a human, when the agent is acting as a copilot to a user, or when you are integrating with legacy UI-only tools.
- Use first principles (alien agents) when the goal is pure outcome efficiency.
For example:
- A replica agent might process a refund by opening the support ticket, reading the customer complaint, and clicking “refund” in the portal.
- A first-principles agent might spawn 1,000 parallel threads to cross-reference every refund request against shipping logs and fraud databases simultaneously in 200 milliseconds—bypassing the UI entirely via an MCP endpoint or API.
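The fan-out pattern behind that second agent is ordinary async code. A minimal sketch, assuming hypothetical `check_shipping` and `check_fraud` calls standing in for real shipping-log and fraud-database APIs:

```python
import asyncio

# Hypothetical checks; real versions would hit shipping-log and
# fraud-database endpoints. The names and signatures are assumptions.
async def check_shipping(request_id):
    await asyncio.sleep(0.01)  # stand-in for an API round-trip
    return ("shipping", request_id, True)

async def check_fraud(request_id):
    await asyncio.sleep(0.01)
    return ("fraud", request_id, False)

async def triage(requests):
    # Fan out every check for every request at once, then gather.
    tasks = [check(r) for r in requests
             for check in (check_shipping, check_fraud)]
    return await asyncio.gather(*tasks)

results = asyncio.run(triage(range(1000)))
```

Two thousand lookups complete in roughly the time of one, because the agent is not pretending to be a person reading tickets one at a time.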
The Next Step for Builders
As product leaders, we often get stuck trying to automate the SOP. We ask, “How does the junior analyst do this?” and then we write prompts to mimic that script.
I challenge you to also ask:
What are the physics of this information problem?
If we had asked a human to design the most efficient antenna for a spacecraft, they would have drawn a tidy geometric shape. When NASA instead used an evolutionary algorithm (for its ST5 mission), it produced a twisted, paperclip-like shape that no human would ever draw, and it performed better.
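The core loop behind that antenna is just mutation plus selection against an objective, with no human design priors. A toy version, using an arbitrary bit-string fitness in place of NASA's electromagnetic simulation:

```python
import random

# Toy evolutionary search: the fitness function and parameters are
# illustrative stand-ins, not NASA's actual setup.
def fitness(genome):
    # Arbitrary objective; the real antenna was scored by EM simulation.
    return sum(genome)

def evolve(length=32, generations=200, pop_size=20):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half (elitism), mutate each survivor once.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(length)] ^= 1  # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Nothing in the loop knows what a "good" genome should look like; the objective function alone pulls the population toward shapes no designer sketched in advance.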
We shouldn’t discard the human workflow; it is our bridge to adoption. But we also shouldn’t be afraid to let our agents make Move 37 when the problem demands a solution we couldn’t have dreamed up ourselves.