Your AI Agent Deserves Its Own Email Address
They’re being exploited to generate more spam, so it might be better to let them join in on the fun.
NOTE: this is part of a series on AI agents. I suggest you start with the posts covering AI memory and AI pre-game routines.
I signed my AI bot up for its own Gmail account last week. Then a GitHub account, plus API keys for a handful of services it uses regularly. Somewhere between creating the third account and watching it push its first commit to a repo under its own name, something clicked: I wasn’t configuring a tool anymore. I was onboarding an employee.
That distinction matters more than it sounds like it should.
This Goes Beyond Automation
We’ve had automation for years. Zapier, IFTTT, cron jobs, CI/CD pipelines. They all follow the same concept: set a trigger, define an action, and let it run. Automation is excellent at repetitive, predictable work, and nobody needs AI for that.
What we’re talking about here is different in kind, not degree. This is essentially a personal employee. Something that interprets ambiguous instructions, makes judgment calls, (hopefully) learns from feedback, and can adapt to how you specifically work. Where automation follows a script, an AI agent collaborates on one.
That distinction changes how you approach the entire setup. You don’t “configure” an employee the way you’d set up a Zapier workflow; you onboard them.
Onboard It Like You’d Onboard Anyone
Using an AI agent as your primary collaborator requires the same approach you’d take with a new hire. More accurately, the same approach a coach takes with a new player, or a teacher with a new student1.
Start broad. Introduce the goals, the culture, the expectations. What does success look like? What are the non-negotiables? Then get progressively more specific, working down to the edge cases that only surface in practice.
Sometimes the learning curve is fast. You explain something once and the agent runs with it. Other times you’ll find yourself repeating the same correction for the fifth time, wondering if anything is registering. Both experiences are normal. These are still computer systems that need guardrails to produce the output you actually want. Without those guardrails, the agent defaults to its own assumptions, which may or may not have anything to do with yours2.
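One way to make “broad first, specific later” concrete is to keep the guardrails in layered instruction files and assemble them at session start, so accumulated corrections always ride on top of the broad defaults. A minimal Python sketch; the file names and layout are my own assumptions, not any particular framework’s convention:

```python
# Hypothetical layering of agent instructions, broadest first.
# File names are invented for illustration.
from pathlib import Path

INSTRUCTION_LAYERS = [
    "goals.md",            # broad: mission, culture, what success looks like
    "non_negotiables.md",  # hard rules the agent must never break
    "corrections.md",      # specific fixes accumulated from feedback
]

def build_system_prompt(workspace: Path) -> str:
    """Concatenate instruction layers in order, so later (more
    specific) layers can override earlier, broader defaults."""
    parts = []
    for name in INSTRUCTION_LAYERS:
        f = workspace / name
        if f.exists():  # missing layers are simply skipped
            parts.append(f"## {name}\n{f.read_text().strip()}")
    return "\n\n".join(parts)
```

The payoff of this shape is that “repeating the same correction for the fifth time” becomes “append one line to corrections.md,” and the fix persists.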
Patience matters here. So does knowing your use cases. Not every task benefits from delegation. Part of the onboarding process is figuring out where the agent adds real value and where you’re better off doing it yourself. That assessment is ongoing. It’s never a one-time decision.
Autonomy With Boundaries
Here’s where the employee analogy gets practical.
Good managers give their people autonomy. They trust them with real responsibilities, real tools, real access. Micromanagement produces fragile, dependent workers who can’t function without constant direction. The same principle applies to AI agents. You want yours to accomplish things independently, develop capabilities over time, and bring you results rather than asking for permission at every step.
But autonomy and blind trust are not the same thing.
I would never install an AI agent on my main computer. I would never give it direct access to my personal email, my files, my primary accounts. That’s not paranoia; it’s basic operational hygiene. You wouldn’t hand a new hire the keys to your house on their first day, no matter how good the interview went.
What I did instead: I gave it its own workspace. Its own email. Its own GitHub. Its own credentials for the services it needs. The bot operates under its own identity and works with me the same way a remote collaborator would, through shared repos, shared docs, and messaging.
Modern tools make this surprisingly easy. Almost every app worth using already has collaboration features, shared workspaces, and permission controls. The infrastructure for giving an AI agent its own bounded workspace already exists; there’s no need to build anything custom just to start.
Sure, this adds some cost: additional accounts, maybe a VPS or a separate environment. But you should be getting that investment back in productivity many times over. A few extra accounts and a modest server cost almost nothing compared to the value of an agent that can actually operate on its own.
Choose Your Model Like You’d Choose Your Hire
The model powering your agent matters, and not only for the reasons most people focus on.
Capabilities matter, obviously. Some models are better at coding, some at reasoning, some at following complex multi-step instructions. But the consideration people overlook is privacy.
If you’re routing your work, your code, and your business context through a model, you should care about where that data goes. Using a model from a Chinese company? It might benchmark beautifully. It might even be the best option for certain tasks. But think about what you’re sending through it and who has access on the other end. This concern isn’t limited to Chinese models either. The same question applies to every provider. Where does the data live? Who can see it? What are the terms?
Pick a model that fits both your capability needs and your privacy requirements. Neither one should be an afterthought.
Run Your Own Shop
If you use something like Claude Code or ChatGPT’s built-in agent features, you’re working within the constraints of a consumer product. Those companies are building for the broadest possible audience, which means keeping things simple, limiting integrations, and applying guardrails that make sense for the average user.
Those guardrails also cap what’s possible.
When you run your own agent setup (whether that’s OpenClaw, a custom framework, or something you built yourself), the constraints are yours to define. You decide what the agent can access, what tools it gets, how much freedom it has. You can let it explore, experiment, and get curious about problems in ways that a locked-down consumer product would never permit.
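Defining the constraints yourself can be as simple as an explicit allowlist with a human-approval tier for the risky stuff. A sketch under assumed tool names; real frameworks have their own permission mechanisms, so treat this as the shape of the idea, not an API:

```python
# Hypothetical permission gate: you decide which tools the agent may
# call freely and which need your sign-off. Tool names are invented.
ALLOWED = {"read_repo", "run_tests", "open_pr"}       # fully autonomous
NEEDS_APPROVAL = {"send_email", "deploy"}             # human in the loop

def authorize(tool: str, approved_by_human: bool = False) -> bool:
    """Deny by default: anything not explicitly listed is blocked."""
    if tool in ALLOWED:
        return True
    if tool in NEEDS_APPROVAL:
        return approved_by_human
    return False
```

The deny-by-default design is the point: the agent’s freedom grows by you adding entries, not by you remembering to forbid things.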
Setting this up takes some technical confidence (or naivety). If you don’t have it yourself, find someone who does. A friend, a colleague, someone willing to spend an afternoon helping you get the foundation in place. The initial setup is a one-time cost, yet the flexibility you get back is ongoing.
I’ve found things I never would have discovered using a consumer AI product3. Not because those products are bad, but because they’re designed to be safe and predictable. Sometimes that’s exactly what you want. Other times you want to let the bot follow a thread and see where it leads. Running your own setup gives you that choice.
Memory Is the Foundation
None of this works if your agent forgets everything between sessions. An employee who shows up every morning with no memory of yesterday isn’t an employee; they’re a temp. (And if this describes a human employee, please get them help immediately.)
The more you invest in your agent’s ability to learn and remember, the more that initial onboarding effort compounds. Every preference you teach, every correction that sticks, every pattern it picks up builds on the last one. That’s the whole point of treating it like an employee rather than a tool. Tools don’t improve with use, but employees do (well, not all of them, but you get the idea).
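A memory system doesn’t have to be elaborate to start compounding. Here’s a bare-bones sketch: an append-only log of lessons that the agent reloads at startup. The file format and names are assumptions for illustration, not a recommendation of any specific memory tool:

```python
# Bare-bones session-to-session memory: append lessons as JSON lines,
# reload them by topic at the start of the next session.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.jsonl")

def remember(lesson: str, topic: str) -> None:
    """Append one learned lesson; the log is never rewritten."""
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"topic": topic, "lesson": lesson}) + "\n")

def recall(topic: str) -> list[str]:
    """Return all lessons recorded under a topic, oldest first."""
    if not MEMORY_FILE.exists():
        return []
    lessons = []
    for line in MEMORY_FILE.read_text().splitlines():
        entry = json.loads(line)
        if entry["topic"] == topic:
            lessons.append(entry["lesson"])
    return lessons
```

Even something this crude turns a correction from a conversation you repeat into a record the agent consults, which is the difference between a temp and a hire.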
The Bottom Line
Give your agent its own workspace, its own accounts, and bounded autonomy. Onboard it the way you’d onboard anyone you expect to work with long-term. Be patient with the learning curve, deliberate about what you delegate, and thoughtful about the infrastructure choices that hold the whole thing together.
The era of firing up a chatbot and lobbing questions at it is already behind us. What comes next looks a lot more like management than engineering, and the people who figure that out early are going to have a real edge.
1. There’s an interesting inversion here. Most people talk about AI replacing teachers. What’s actually happening is that the most important skill for using AI effectively is being a good teacher.
2. The default behavior of most models is “be helpful and verbose.” Sounds fine until you realize that “helpful” and “what you actually wanted” can be very different things. Being sycophantic is not helpful.
3. Last month my agent stumbled into a completely novel approach to a caching problem because I gave it room to explore outside the obvious patterns. A consumer product would have given me the standard answer from the docs. My agent went sideways and found something better.



