Your Job Changed. You Didn’t Notice.
The Cognitive Shift
NOTE: this is part of a series on AI agents. Previous posts covered AI memory and AI pre-game routines.
I opened my IDE last week and had one of those quiet realizations that rearranges how you see things. I hadn’t personally written a line of code in a couple of days. Not because I was stuck or because the project stalled. Simply because my AI agent was writing it.
My fingers were still on the keyboard every day. I was still putting in full sessions. But the actual work had transformed underneath me, so gradually that I almost missed it.
You Didn’t Stop Working. You Changed Roles.
Here’s what happened to me, and I suspect it’s happening to anyone who’s gone deep with AI agents: I went from being a developer to being a director¹. Code still gets written, features still ship, and best of all, bugs still get squashed. But I’m no longer the one typing the implementation. My role shifted to focus more on product thinking, creative direction, architecture decisions, and quality review². The hands-on coding execution moved to the bot.
For someone self-taught, without a computer science background, this changes everything³. I can experiment with approaches I never would have attempted before. Not because I suddenly learned distributed systems or compiler design, but because I don’t need to implement them from scratch. I describe what I want, evaluate what comes back, and iterate from there. The gap between “I have an idea” and “I have a working prototype” collapsed.
The output isn’t perfect. I want to be honest about that. But it gets me closer to what I want, faster, and it lets me push the boundaries of what I’m willing to try in the first place. That’s the real change. Not the quality of any individual output, but the expansion of what I’m willing to attempt.
The Generalist Advantage
The people seeing the biggest returns right now are generalists. If you’re the kind of person wearing five or so hats (a little frontend, some backend, some ops, product thinking, maybe some design, barista), AI agents start paying off in ways that compound fast.
Why do I believe this to be true? Generalists have the broadest surface area for time savings. A deep specialist doing one thing all day might see modest improvements in that one area. A generalist touching six different domains can offload pieces of each one. The gains stack up. And if you have ADHD, well, this is like a superpower. You might actually complete one thing you set out to do.
Here’s where it gets concrete: if I can compress four hours of work into one, that’s not a small optimization. That’s a fundamentally different relationship with my workday. The time you reclaim isn’t abstract either. You can put it toward learning something new, building something speculative, or (revolutionary concept) closing the laptop before dinner. Work-life balance becomes something real instead of something you put in your “my parakeet was murdered, here’s what it taught me about work-life balance” post on LinkedIn.
Nothing About This Is Magic
I want to say this clearly because the expectation gap is the single biggest source of frustration I see: none of this is instant and none of it is magic. We are not there yet. I don’t know when we will be, but we’re not there today.
What we have right now can feel magical. Watching an AI agent build a feature from a description, debug its own mistakes, and open a pull request is still wild to me every time. But getting to that point took real effort. Deliberate, sometimes tedious effort.
Think about it like hiring a personal assistant. When someone reaches a point in their career where they bring one on, that person doesn’t walk in on day one and seamlessly run your life. They need to learn your preferences. Your priorities. What “urgent” means to you versus everyone else. The things you care about and the things that drive you crazy. That context transfer takes time, and there’s no shortcut around it.
AI agents are identical in this respect. You are transferring context about who you are, how you work, and what you expect. That transfer is the work. If you expect to install a tool and immediately have a superhuman collaborator, you are going to be disappointed. Every time.
You Still Need a Plan
There’s a seductive idea floating around that an always-on AI agent means you have a collaborator working twenty-four seven. Technically true. But you probably don’t have twenty-four seven worth of meaningful work to delegate.
Sitting around trying to invent tasks for your agent is backwards. The plan comes first. What are you building? What are the milestones? What does “done” look like? An AI agent accelerates execution against a plan. Without the plan, you have an expensive engine with nowhere to drive.
The discipline of defining what you want, clearly enough for an agent to act on, turns out to be its own skill. Which brings me to the real point.
The Skill That Actually Matters Now
The skill separating the people who extract massive value from AI agents from the people who quit after a week is not technical. It’s managerial.
Can you set clear expectations? Can you decompose a complex goal into tasks something else can execute? Can you evaluate output critically and provide feedback that actually improves the next attempt? Can you stay patient when the learning curve is steep and persistent when results lag behind your hopes?
Those are the skills of a good teacher. A good coach. Even a good manager (yes, Todd, patience with your employees will make you a better manager). They are exactly the skills required to get consistent, high-quality work from an AI agent.
The irony is hard to miss. The most important skill for working with AI has almost nothing to do with technology. It is the deeply human capacity to teach, lead, and manage⁴. People who were already good managers are going to have a serious head start here, whether they’ve ever written a line of code or not.
That’s where things are heading. Not toward a world where everyone needs to learn to code, but toward a world where everyone needs to learn to direct. The question isn’t whether you can do the work. It’s whether you can lead something that does it for you.
1. Maestro, orchestrator, manager, coach, big cheese.
2. PLEASE review your code and the output, please! Do not trust these machines; they are far from perfect.
3. I’m self-taught. Started with hacking around AOL and learned through building. All my degrees are in psychology, so no CS degree, but I knew enough to build a lot of things over the years. Despite this, there was always a feeling of a ceiling I would never break through as someone self-taught (this is totally a personal feeling; lots of self-taught developers crush it). The ceiling doesn’t disappear with AI, but it gets a lot higher.
4. There’s an interesting inversion here. Most people talk about AI replacing teachers. What’s actually happening is that the most important skill for using AI effectively is being a good teacher and communicator.