Learning to Build in the Age of AI

By Iván González Sáiz (@dreamingechoes)
There's a question that keeps surfacing in my mentoring conversations this year. It comes from different people — career changers, junior engineers, bootcamp grads — but it always carries the same quiet weight.
They've been learning to code for months, building small things they're proud of, starting to feel like they might belong in this world. And then something shifts. A new model drops. A viral demo makes headlines. A thread on LinkedIn declares that "traditional coding is over." And suddenly the question changes.
"Is it still worth it — learning to do this — if AI keeps getting better every week?"
They ask it carefully, the way people do when they're afraid the answer might confirm what they've been dreading. And I notice the weight behind it every time — not just curiosity, but something closer to grief. The grief of wondering whether the door you just walked through is already closing behind you.
It's a real question, and it deserves a real answer — not hype, not dismissal, not a motivational platitude. So here's what I believe, after fifteen years of building software, the last few of them spent watching this wave arrive:
Yes. It is absolutely still worth it.
Not because AI is a fad — it isn't. Not because we're going back to the way things were — we're not. But because the heart of software engineering was never about typing code. It was always about something more human than that.
The pattern beneath the fear
Software has always evolved through shifts in leverage — each one changing how we build without changing why.
Programming once meant punch cards. Then interactive terminals. Then higher-level languages that let you express ideas in paragraphs instead of binary instructions. Libraries and frameworks arrived, and suddenly you could stand on years of other people's thinking instead of rebuilding every foundation yourself. IDEs brought autocomplete, refactoring, linting — feedback loops that caught mistakes before you even ran the code. CI/CD, cloud platforms, feature flags, observability — each layer made shipping safer, faster, more forgiving.
Every leap reduced friction. Every leap made some parts of the job feel less manual. And every leap triggered a version of the same quiet fear: if the tool does more, does the person matter less?
The pattern, though, has been remarkably consistent. Tools don't remove engineers — they redirect attention. The things that were once hard become easy, and new, harder, more human problems rise to the surface.
AI fits squarely inside this story. It's a powerful new kind of leverage — the ability to draft faster, explore options, generate scaffolding, explain unfamiliar code, reduce the weight of the blank page. Sometimes it's genuinely astonishing. And it will keep improving.
But the deeper question underneath every project remains untouched: what are we building — and is it the right thing?
That question was never answered by typing speed. It won't be answered by generation speed either. And answering it well requires understanding what the job has always been about.
The job beneath the code
One of the most common misconceptions among people who are learning — and I understand why it exists — is that software engineering is primarily a craft of output. Write code. Ship features. Move tickets. Repeat.
Tutorials reinforce this. "Here's a problem, here's the code, here's the solution." And when you're starting out, that's exactly what you need: concrete, repeatable, something you can hold. But the real work that happens inside teams is messy in a completely different way.
In most projects I've led or contributed to, the hardest part was rarely the implementation. It was understanding what the ticket meant. Noticing where the requirement was vague, or where two people on the same team had quietly different mental models of "done."
It was collaborating with Product and Design until the shape of the solution felt honest — not just technically correct, but genuinely useful. Making trade-offs under constraints nobody chose. Shipping something safe for production. And then living with it — maintaining it, evolving it, owning its consequences months after the pull request was merged.
The job includes code, but it's defined by judgment. By the ability to hold a problem long enough to understand it, to ask the question that reframes the whole conversation, to notice when something "works" in a technical sense but fails the human it was meant to serve. I've written before about owning your growth as an engineer — and this is the core of it: the craft lives in the thinking, not the typing.
AI can accelerate implementation. It can suggest ideas. It can reduce certain kinds of friction to almost nothing. But it cannot carry your product's context — the users, the constraints, the team dynamics, the organizational trade-offs, the lived reality that shapes every meaningful decision.
That reality is where engineering happens. And as code becomes cheaper to produce, the ability to make sense of that reality — not alone, but alongside the people closest to it — becomes more valuable, not less.
Software is built in conversations
Judgment doesn't develop in isolation — and neither does software.
It emerges from conversations — between Engineering, Product, and Design, between Support and Data, between the people closest to the problem and the people closest to the system. Those conversations aren't ceremony or status theatre. They're sense-making. They're the work of turning ambiguity into something a team can actually build together.
The engineers I've most admired — and the ones I've seen grow fastest — weren't necessarily the ones with the deepest technical knowledge. They were the ones who could sit in that ambiguity without rushing to resolve it. Who could write a clear summary of a trade-off in plain language. Who gave people context, not just conclusions. Who asked for clarification without apologizing for it. That kind of trust doesn't happen by accident — it requires environments where people feel safe enough to be wrong.
That ability to think with other people is one of the most underrated multipliers in this field. No model replicates it. No tool replaces the moment when someone asks the right question and the entire room shifts.
As code generation gets faster, this layer — the human layer of collaboration, clarity, and shared understanding — doesn't shrink. It becomes the thing that separates teams that ship fast from teams that ship well.
Shipping fast and building well are not the same thing. And the people who grasp that distinction earliest are often the ones who've already lived it somewhere else.
What career changers already carry
This part is for career changers specifically, because I see you underestimate yourselves all the time.
If you're coming from healthcare, education, logistics, finance, hospitality, law, or operations, you might feel like you're arriving late — like everyone else has an invisible head start you don't have. But there's something you carry that most computer science graduates don't have yet, and it's worth naming.
You've lived inside real systems:
- You know what work looks like on the ground, not in abstractions.
- You've felt what happens when a process breaks under pressure.
- You've seen the gap between what's "simple on paper" and what's simple in practice.
- You understand that constraints aren't theoretical — they're the air people breathe while trying to do their jobs.
That experience is not a footnote on your learning journey. It's context. And context is exactly what turns code into a product that helps someone.
In a world where AI makes it easier to produce code, the scarce ability is connecting that code to reality — understanding what should exist, why, and for whom. That kind of judgment — focusing on the inputs you control, not the outcomes you can't — is not something you develop only after years of engineering experience. It's something you can start practicing from day one, because you already know how to ask "but what does this actually look like for the person using it?"
The question, then, isn't whether to learn — it's how to learn in a way that lasts.
A calmer way to learn to code
If you're early in your journey, I don't think you need a plan built around keeping up with model releases. The internet rewards urgency, but learning doesn't — and trying to absorb a new craft while simultaneously tracking every breakthrough is a recipe for the kind of anxiety that makes people quit before they've given themselves a real chance.
What I'd suggest instead is something quieter. A learning stance that will still make sense in five years, regardless of what the tools look like.
The foundations are where that starts. Not the trendy kind — the durable kind:
- How programs work. How data moves through a system.
- How to debug when the output is wrong and you don't know why.
- How to reason about trade-offs when there's no obvious right answer.
- How to read unfamiliar code without spiraling.
These are the skills that compound. They don't expire when the next model drops, and they're the part of the craft you'll reach for every single time, no matter what tool you're holding. When someone tells you to "just use AI for that," you want to know enough to evaluate what it gave you — and that kind of understanding isn't a luxury. It's what keeps you from feeling fragile six months in.
Somewhere alongside the technical work, start asking why. Not after you feel ready — now. What problem is this solving? Who is it for? What changes in someone's day if this works? Those questions turn "someone who can implement" into someone who can build, and they're available from the first project.
The same instinct applies to working with other people — writing clearly, summarizing decisions, explaining trade-offs in plain language. These aren't soft skills bolted onto engineering. They're the medium engineering happens inside, and practicing them early makes everything else easier.
AI belongs in this picture, too. Let it help you draft, explore, get unstuck. Ask it "why" without embarrassment. But hold onto one quiet rule: if you can't explain what the code does in plain language, you're not done yet. Not because you should feel ashamed — but because building on top of things you don't understand creates a kind of debt that compounds in the wrong direction.
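To make that rule concrete, here's a minimal sketch in Python. The function and its comments are hypothetical, not from the article — but if an assistant handed you something like this, "done" means being able to narrate each step yourself, the way the comments do:

```python
def dedupe_keep_order(items):
    """Return the items with duplicates removed, preserving first-seen order."""
    seen = set()  # remembers every value we've already kept
    result = []
    for item in items:
        if item not in seen:  # first time we've encountered this value
            seen.add(item)
            result.append(item)
    return result

print(dedupe_keep_order([3, 1, 3, 2, 1]))  # prints [3, 1, 2]
```

If you can't write comments like those — in your own words, without guessing — that's the signal to slow down and ask "why" before building on top of it.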
And through all of it — protect your pace. A few sessions a week. Small projects. Plenty of debugging, plenty of "why did this break?" That kind of slow, honest repetition is where real confidence grows — not the performative kind, but the kind that holds when things get messy.
Final thoughts
The future of software isn't humans versus AI. It never was. It's people — carrying more leverage than any previous generation of engineers — still trying to do the same essential work: understand what matters, build it with care, and do it alongside other humans who are also figuring it out as they go.
AI will keep getting better at generating code. That's real, and there's no point pretending otherwise. But the craft of engineering — the judgment, the collaboration, the sense-making, the ability to hold messy reality and shape it into something useful and safe and human — that isn't going anywhere.
If you're starting now, you're not late. You're arriving at a moment where the things you bring — your curiosity, your context, your willingness to sit with hard problems — matter more than they would have five years ago. The door isn't closing. It's shifting. And there is room for you on the other side of it.
Not when you feel perfectly ready. Not when the fear goes away. Right now — with everything you already carry.