Why We Bet on Humans
Digital intelligence is getting a lot of attention. Human intelligence will always remain king.
April 28, 2026
If you are interested in testing our solution for yourself or your team, view our MCP here: Bicameral
We are betting on humans.
Philosophically, we believe that human intelligence is an inherent trait that cannot be taught or learned. It can be approximated, but never truly inherited, by any other physical or synthetic creature.
By April 23rd, 1616, the English language had 1,700 more words than it had just 52 years earlier, in 1564. The sole reason for this was William Shakespeare. He coined roughly 32 new words every year, among the best of them “lackluster”, “wild goose chase”, and “foregone conclusion”. To be a Shakespeare is to be unique, chaotic, and intelligent. In a word: human.
To teach a model how to be Shakespeare would necessitate giving the model examples of Shakespeare’s writing, teaching it to think like him, to be like him. In essence, providing it context. This type of “intelligence” is dead and empty. There is nothing deeper in it than imitation. Human beings, on the other hand, can take inspiration from each other and create something truly novel. Charles Dickens was a Shakespeare fanatic, but his own works are not simple imitations, nor are they complex and nuanced ones. Rather, Dickens’s writings are a thing all their own. He gave us words like “boredom” and “flummox”, and the phrase “devil may care”.
The same humanity is found in the most important language of today: code. Donald Knuth titled his work The Art of Computer Programming intentionally. The artistry of coding is inherently tied to the humanity of the engineer. Just like Shakespeare, an AI model can be taught to think like Knuth, to code like Knuth. But the models will only imitate, never inherit his intelligence. The model’s ability to approximate Knuth’s intelligence relies on the training data (context) we provide it and the aim we give it.
In 1990 Knuth quit using email. He said email was for people who want to stay on top of things, and that he was interested in staying on the bottom of things. That special something, that ability to make strange but correct decisions, is precisely what we at Bicameral believe is essential to keep in the loop. And crucially, we believe it is unique to humans.
I was asked to join Bicameral because of my experience in operations, and more importantly, because of my strong ability to understand people. I am not technical, but what I share with all engineers is curiosity. What made me curious about this problem is that it is essentially a communication issue. The two most important languages of our time, human semantics and coding syntax, have birthed a strange child: the Large Language Model. This technology is impressive and wildly helpful, and humanity is adopting it like no other before it. But underneath all the hype, one essential problem remains: humans are bilingual; our new technology is good at portraying itself as such, but fundamentally it is not.
Where do we see this in action? Nowhere more clearly than in software development. Engineers have been given the power to write more code at a faster pace. There are other AI tools now that help ensure generated code is safe to ship. Upstream of code generation, however, is where the problem reveals itself. AI promised to give us a thousand Donald Knuths who could write code faster and learn things quicker than the original. Alas, as the name suggests, the intelligence of these new Knuths is artificial.
Fred Brooks outlined the crux of this problem in his essay No Silver Bullet—Essence and Accidents of Software Engineering. AI is phenomenal at eliminating accidental complexity in software development, but it cannot eliminate essential complexity. Essential complexity is all those things that are so recognizably human: our forgetfulness, our tendency to ideate in nonlinear ways, our habit of defining goals only to continually tweak them as we go. The language of code cannot handle these ambiguities, and human beings will never change so drastically as to be rid of them. The result is that our strange new child does not understand our fundamental nature.
LLMs can speak our language. But ultimately they are grounded in syntax. Without context they will generate endless amounts of code that drifts further and further away from the essentially complex goals we set for them. In our own research we have found that engineers are frustrated by this communication issue. Product teams and managers have higher expectations because of AI’s ability to generate code quickly. Yet translating those semantic expectations into a syntax that harmonizes with the codebase and stays on track with feature requirements is becoming increasingly difficult.
There are two options. We can either bet on a humanless programming loop. Or we can bet on humans.
The first option views essential complexity as a bug. This bet assumes that human unpredictability and convoluted thinking can be removed, and that what will be left over is pure, unadulterated intellect. The idea is appealing. It is true that our slowness and our inability to be everywhere at once are a kind of bottleneck. The best engineer can only be in one place at a time, and they will never write code at the speed of an AI agent. It is likely that very soon the best engineer will be an AI agent. That is, if the things we care about most are speed and code quality. If the strange new child can be fast, accurate, and able to interact with the world without its human parent.
At Bicameral we have taken the opposite side of the bet. We think the strange new child will be fast, we believe it will be accurate, but we also think it will need its human parent. On the one side is what we want from the AI. On the other side is what the AI does. In between those two things is a world of ambiguity. Without a translation layer that continuously includes human feedback, the execution of an idea will inexorably drift away from its original intent. And as the architecture drifts slowly but surely, the corners human engineers have to cut multiply. That problem existed before AI. Software engineers call it technical debt.
Ironically, the solution to this is not pushing humans further into the margins, but anchoring them firmly in the center of decision making. What is needed is a tool that accounts for human entropy, incorporates the speed, volume, and scalability of AI, and loops it all back to the essential component: human intelligence. That is what we are building at Bicameral.
Software was built to solve human problems. We are betting that humans are still better than anyone else at using software.