The other kind of AI optimism
There’s a cohort of people in your life, and their day is quietly changing.
They used to spend hours every week doing work by hand — reconciling numbers across systems, chasing discrepancies, managing stakeholders who each have slightly different versions of the truth. The slow choreography of keeping the systems, the numbers, and the people all telling the same story.
Today they’re different. They orchestrate tools. They’re learning how their job can change, and how it will change. The work they’re doing now would have felt unimaginable a year ago — not just to them, but to the world.
You might be this person. You might know them. You might just be starting to recognize the shape in the people around you. Now is the time to think about it, and care about it, deeply.
Most of what I read about AI is about the models. Bigger, smarter, closer to something. It’s a genuinely interesting story, but it’s not the one I keep thinking about.
There’s a second promise floating around — that AI is going to solve the big things. Climate, cancer, education, productivity at the scale of whole economies. There’s something real in that promise. But the bigness is also what makes it paralyzing. When the upside is “solve civilization’s hardest problems,” it’s easy to stand at the edge of your own work and feel like the right move is to wait until someone else figures out how to dive in.
Neither story is the one I’m focused on. And neither, in this moment, is where I think the everyday person can actually feel the benefit of these tools.
The question I keep coming back to is what a curious person with some support and some time will do. I suspect the answer is a lot more than we think, and most of it will be good.
You don’t need to believe the models will become superintelligent. You don’t need to believe they’ll cure cancer, or replace most jobs, or any jobs. You only need to notice that the cost of turning an idea into a working thing has dropped by orders of magnitude in a few years, and to ask what happens when something that was rare and expensive becomes ordinary and cheap.
What usually happens is that a lot of people who were locked out of doing a thing start doing it.
The unit that matters most is the cost of turning an idea into a working thing.
Two years ago, a mid-complexity internal tool — the kind that reconciles two data sources, renders a dashboard, and emails someone when a number goes red — was a two-sprint ticket, assuming you could get it prioritized. Call it three weeks of an engineer’s time, plus the meeting tax. Today that same tool is a Thursday afternoon. The distance from “I wonder if” to “look at this” stopped being a quarter and became a coffee and a quiet hour.
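The Thursday-afternoon version of that tool is small enough to sketch. A minimal example, assuming the two systems each export their figures as plain dictionaries and that “goes red” means the values disagree beyond a tolerance — the source names, fields, and 1% threshold here are all illustrative, not from any real system:

```python
# Minimal sketch of the reconciliation half of the tool: compare the same
# metrics from two systems and flag any that disagree beyond a tolerance.
# System names, field names, and the 1% threshold are illustrative.

def reconcile(source_a, source_b, tolerance=0.01):
    """Return (key, a_value, b_value) for every metric whose values
    differ by more than `tolerance` as a fraction of the larger value,
    or that is missing from one side entirely."""
    discrepancies = []
    for key in sorted(set(source_a) | set(source_b)):
        a = source_a.get(key)
        b = source_b.get(key)
        if a is None or b is None:
            # present in one system only: always red
            discrepancies.append((key, a, b))
            continue
        denom = max(abs(a), abs(b)) or 1.0
        if abs(a - b) / denom > tolerance:
            discrepancies.append((key, a, b))
    return discrepancies

billing = {"invoices": 1204, "revenue": 98450.00}
ledger = {"invoices": 1204, "revenue": 97100.00, "refunds": 12}

red = reconcile(billing, ledger)
# the "email someone" step would hang off this list
```

The dashboard and the alert email are a few more lines on top; the point is that the whole loop fits in an afternoon, not a sprint.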
But the cost drop is only half the story. The subtler half: the kinds of ideas people are allowed to have are changing — and so are the kinds of things they’re allowed to do with them.
When the cost of acting on an idea was high, people filtered their ideas before ever saying them out loud. A thought like it’d be interesting to know X, or I wish we had Y, died in the hallway between two meetings, because nobody had time to actually go find out, or nobody knew how to close the gap.
Now that thought doesn’t die in the hallway. It gets answered. They go back to their desk, run it down, and come back thrilled to say look at this. Over months, that shifts what people notice, what they bring up, what they think is worth bringing up. It changes the dialogue of work, and the inner monologue underneath.
None of this is new. Every time a tool gets cheap enough, the same thing happens. The story isn’t “AI makes coding faster.” The story is that a whole group of people starts thinking once the thing they couldn’t do yesterday becomes a thing they can do today.
I see two shapes of this, mostly.
The first is the domain expert who finally builds. Someone who has cared about a problem for years — maybe decades — and has never been the one who could fix it. The gap wasn’t imagination. It was never imagination. It was the translation layer between what they knew and what the machine would accept. That layer got dramatically thinner. Now the person who understood the problem best is the person doing the solving, and the solution is often better than anything an engineer unfamiliar with the domain would have produced — because it’s built by someone who notices the things you only notice after ten years of staring at the problem.
The second is the non-technical person on a team — the product manager, the ops lead, the customer-facing generalist. What’s different is the texture of their work. Instead of walking into meetings with a Figma mockup in hand, they walk in with a working prototype. Instead of writing an RFC about how the data might be organized, they pull the data and look. Instead of waiting to know, they check. Instead of feeling helpless, they feel capable.
Put a few of these people on the same team and the team itself moves differently. The shift happens upstream of anything a productivity dashboard is built to see.
Building shippable production software isn’t the problem these people are trying to solve. That’s the thing that gets missed in most of the discourse about non-engineers using AI — or about “empowering others” more broadly. The question isn’t whether they can replace engineers. I won’t claim they can’t; I think some of them might, and the shape of engineering is already shifting either way. But most don’t need to, and most don’t want to.
The real question is whether they can close the distance between having an idea and seeing what it would look like. That distance — which used to be weeks of someone else’s time, or simply never got closed at all — is now closeable in an afternoon.
That’s a different kind of leverage, and it’s the one that compounds. Once a person has closed that loop, it becomes addictive, and they can start doing it consistently. Once a team has a few of those people, the quality of everyone’s thinking improves, because you can stop arguing about what something might look like and start looking at it.
Which brings me to the part I think most people are getting wrong. The question is not whether engineers are still needed. Of course they are. The question we should be asking is: what are engineers for?
Engineers used to hold the keys to the castle. That’s not just a metaphor; it was a structural fact of most companies. If you wanted something, you needed an engineer’s attention, and there were never enough of them. Their attention was the most expensive input companies had. Whole layers of organizational design existed to manage that scarcity — the entire choreography of asking engineers for something. That scarcity is easing now. Not gone — easing. The companies I watch most closely are starting to notice.
At companies figuring this out, engineers are becoming shepherds of organizational velocity. They’re not writing every line of code anymore. They’re building the paths along which other people contribute. They’re setting up the guardrails that let a product manager’s Thursday-afternoon prototype become something that ships safely. They’re teaching. They’re pairing. They’re declining to build what others can now build themselves — and building, instead, the thing no one else can: the substrate underneath everything.
It’s a different job, and it’s harder than the old one. It demands taste, judgment, patience, and a willingness to let other people put their hands on things that used to be only in yours. It rewards a kind of engineer who has always existed but was often underpriced in the old world. The one whose superpower was making other people better at their jobs rather than outshipping them.
The companies that get this right will move at a speed others cannot imagine. Not because their engineers are faster, but because their non-engineers stopped being bottlenecks. The dark observation is that most companies won’t get this right, because it requires their engineers to give up the cultural centrality that came with being the only people who could make things. It requires their leaders to stop measuring engineering by features shipped and start measuring it by organizational leverage.
The companies that do get this right will look obvious in retrospect. They always do.
The other kind of AI optimism isn’t about the models. Not that they’ll become superintelligent. Not that they’ll solve the big things — though they might. It’s smaller, and closer to your day. A cohort of people doing work they couldn’t do before. Engineers building the substrate that makes it possible. Teams moving differently because the distance from idea to thing finally collapsed. It’s about the people around the models, and the engineers learning to build fences, not barns.