A Day in the Life of a Principal Engineer in the Age of AI
I haven't written a line of code since the beginning of the year. I ship to production almost every day. Here's what a typical workday looks like when AI stops being a tool and starts being your team.
In one terminal, my AI is optimizing a CI/CD pipeline. In another, it’s rewriting four microservices at once. In a third, it’s building itself a new SKILL. In a fourth, it’s preparing a board report.
At the same time, I’m on a video call, discussing a completely different project — and using NotebookLM as a live reference assistant to keep up with the technical details being discussed.
This is not a demo. This is a Tuesday.
I’m a Principal Engineer at Allegro, working in the Fintech domain. Here’s how that day unfolds, hour by hour.
8:20 AM — The Bus
My workday doesn’t start when I walk through the office door. It starts on the bus.
Warsaw mornings are still cold enough to keep me off the bike, so it’s public transit — bus, metro, thirty minutes door to door. Enough time to scroll Slack, check email, and do the thing that actually structures my morning: talk to Gemini.
I ask it to summarize my day — what meetings I have, what’s on my task list, what I should prepare for. Sometimes I ask it to summarize a document I need to review before a 9 AM call. If I have a complex meeting coming up, I want the key points fresh in my head before I sit down. Gemini gives me that.
Sometimes — if I assigned overnight tasks to GitHub Copilot’s cloud agent the evening before — I’ll also check the results. Pull requests waiting for review, experiments that ran while I slept. But that’s not every morning. The AI assistant is.
By the time I step off the metro, I know what my day looks like.
Observation: At this stage of the day, AI is an executive assistant. Nothing fancy. But “what do I need to do today” is answered before I reach the office. That mental load — gone.
9:00 AM — The Desk
I walk in, open my laptop, plug in two external monitors.
And then the transformation happens.
On the two big screens — four terminal windows, each running a separate GitHub Copilot CLI session. On the laptop screen — Slack, video calls, documents.
The small screen stays human. The big screens belong to the machines.
This is where AI stops being an assistant and becomes a team of engineers.
I’m not part of a typical development squad. I don’t have a Jira board with my name on it. My role is cross-cutting — architecture, strategy, enablement. And yet, I open pull requests almost every day. Multiple ones.
Though “open” is the wrong word. “Supervise” is more accurate.
Copilot opens them on my behalf. What are they about? It varies. But over time I’ve settled into a pattern — a sweet spot of about five parallel workstreams that I juggle throughout the day:
One: Being human. Meetings, conversations, working with teams face to face. Being present. This is the part no AI can replace — and the part I genuinely enjoy about being in the office.
Two: AI for people. Building SKILLs — structured instruction sets that teach AI agents how to perform specific tasks in your environment — exploring agentic coding patterns, talking to teams and identifying their pain points. That’s how SKILLs for service catalog documentation were born. SKILLs for monitoring integration — Prometheus, Azure Application Insights. SKILLs for project boards — Azure DevOps, GitHub Projects and Issues. The goal: make AI useful for everyone, not just for me.
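To make “structured instruction set” concrete, here is a sketch of what such a file can look like. It follows the common SKILL.md convention (YAML frontmatter plus markdown instructions); the name, steps, and repository details below are illustrative placeholders, not our actual setup:

```markdown
---
name: service-catalog-docs
description: Generate and update service catalog documentation entries
---

# Service catalog documentation

## When to use
Use this skill when asked to document a service in the catalog.

## Steps
1. Read the service's README and deployment descriptor.
2. Fill in the catalog template: owner, SLOs, dependencies, runbooks.
3. Open a pull request against the catalog repository.

## Constraints
- Never invent owners or SLO values; ask when they are missing.
```

The value is in the constraints as much as the steps: a SKILL encodes not just how to do a task, but where the agent must stop and ask a human.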
Three: Experiments. The work that never had time before. CI/CD pipeline optimization. A multi-tenant canary release concept. Exploring approaches nobody tried yet because the cost of trying was too high. With AI handling the implementation, the cost of experimentation dropped to near-zero.
Four and five: Active projects. The ones where I’m the architect. Design reviews, technical decisions, implementation oversight. The bread and butter.
This isn’t a rigid split. Some days all four terminals are thrown at experiments and POCs. Other days, the projects eat everything. But the pattern holds — and it works because the machine handles the execution while I handle the direction.
And it’s not just code. Copilot CLI helps me with analysis too — identifying accounting schemas, reverse-engineering poorly documented processes, making sense of legacy systems that nobody fully understands anymore. Things that used to take days of reading and cross-referencing now happen in a conversation. For the most demanding tasks — complex architecture decisions, deep code analysis, multi-step reasoning — I switch the underlying model to Claude Opus or Sonnet. Copilot CLI supports multiple models, and I pick the right one for the job.
It doesn’t always work. Last week one of the Copilot sessions went down a rabbit hole — spent forty minutes refactoring a module that didn’t need refactoring. Another time, it started doing things that made no sense at all — outputs that were disconnected from the task, responses that ignored previous context. As it turned out, context compaction — the process where the agent summarizes earlier conversation to free up memory — had gone too far. The compression was too aggressive, and the agent literally forgot who it was and what it was doing. I caught both during quick checks between meetings. Rolled back. Clarified. Got things back on track.
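The failure mode is easier to see in code. Below is a deliberately minimal sketch of context compaction, not how any real agent implements it (real agents use model-generated summaries, not stubs, and count tokens rather than characters), but the mechanism and the failure are the same: squeeze too hard and the system prompt gets folded into the summary.

```python
def compact(messages, budget, keep_recent=2):
    """Replace older messages with a stub summary when total size
    exceeds `budget`. Keeps the last `keep_recent` messages intact."""
    total = sum(len(m) for m in messages)
    if total <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent

# Hypothetical session history for illustration.
history = [
    "system: you are a refactoring agent for service X",
    "user: rename module A to B across the repo",
    "assistant: done, 14 files changed",
    "user: now update the CI config",
]

# Generous budget: nothing is lost.
assert compact(history, budget=1000) == history

# Aggressive budget: the system prompt disappears into the summary,
# and the agent "forgets who it is" -- the failure described above.
compacted = compact(history, budget=50)
assert compacted[0].startswith("[summary")
assert not any(m.startswith("system:") for m in compacted)
```

That second assertion is exactly what I was seeing in the terminal: the task context survived, the identity didn’t.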
That’s the part people don’t see. The directing isn’t passive. It’s active supervision — like watching four junior engineers through a glass wall. You’re not writing their code, but you’d better be paying attention.
Observation: It’s not “AI writes code for me.” It’s closer to managing a small engineering team that never gets tired and never pushes back on a Friday afternoon task. The trade-off? They need clear instructions, well-defined scope, and constant quality checks. That’s not coding. That’s engineering management.
The Screenshot That Doesn’t Exist
I can’t show you what my desk looks like. Company policy, and that’s fair. But let me paint you a picture of one specific moment from last week.
Terminal 1: Copilot and I were discussing CI/CD pipeline optimization. Not a vague “make it faster” — a structured conversation about what’s slow and why. Within an hour, we had identified bottlenecks, proposed changes, implemented them, tested them, and deployed.
The result: GitHub workflow execution time dropped by 57%.
An hour. The kind of improvement that normally lives on a backlog for months because nobody has a free sprint to dedicate to “make CI faster.” It just… happened. Between other things.
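I can’t share the actual pipeline, but the changes were of this general kind. The snippet below is an illustrative sketch, not our configuration: two of the cheapest wins in a GitHub Actions workflow are cancelling runs that a newer push has made obsolete, and caching dependencies between runs. Job names, paths, and the build command are placeholders:

```yaml
# Illustrative only -- common GitHub Actions optimizations of this kind.
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true   # stop builds a newer push has superseded

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.gradle/caches
          key: gradle-${{ hashFiles('**/*.gradle*') }}
      - run: ./gradlew build --parallel
```

None of this is exotic. The point of the story isn’t the YAML, it’s that the conversation found the bottlenecks and shipped the fix in one sitting.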
Terminal 2: A cross-repository change spanning four services, with two more used as reference. Not four copy-paste changes — the agent understood the pattern across all repos, designed the changes to be consistent, tested them, and opened a pull request in each one. Four PRs, one conversation.
Terminal 3: Copilot was building itself a SKILL for integrating with our metrics and monitoring stack. Let that sink in — the AI was writing its own instruction manual for a tool it would later use.
Terminal 4: Preparing a status document for an executive review. The inputs: previous status reports, the team’s project board, meeting notes, and my direct input via conversation.
Laptop screen: I was on a video call with a team discussing a different project — and actively participating, not just listening. NotebookLM was open alongside, loaded with the project’s technical documentation. When someone referenced a requirement or a design decision, I could verify it in real time — cross-referencing against the docs while the conversation was happening.
Think of it like a lawyer’s assistant in a courtroom. The lead attorney does the talking, but the assistant is right there, flagging relevant passages, pulling up precedents, making sure nothing gets missed. That was NotebookLM for me in that meeting.
Five streams. One person. None of them interfering with each other.
Observation: A year ago, that CI/CD work alone would have taken three or four days. The cross-repo changes — two to three weeks end to end: a week or two just gathering knowledge and coordinating across teams, another week of actual development. The monitoring SKILL — a side project I’d never get to. All of it happened in one afternoon. Not because I worked harder. Because I worked differently.
The Other Side of the Coin
But this is not just a story about using AI. A large part of my actual job is AI.
In my day-to-day role, I design agent systems, AI assistants, real-time and batch processing solutions. Every day I look for places where AI can be deployed — not to put “AI-powered” on a slide, but to generate real, measurable value.
I talk to teams. I look at workflows. I ask:
“Where are you spending time on things a machine could do?”
And then I build the bridge.
The SKILLs I mentioned earlier? They’re not just for me. They’re for entire teams — so that every engineer can delegate the repetitive parts and focus on the parts that actually require human judgment.
Why This Works For Me (And Might Not For Everyone)
Here’s a thing I want to be honest about: this way of working isn’t automatic. It draws on skills I didn’t know I was building for years.
I studied Management and Production Engineering. If that sounds like an odd combination — it was. My days at university looked like this: mechanics and machine construction in the morning, marketing after lunch, welding in the afternoon, human resource management before dinner — with economics, physics, and law scattered in between. Every day was a context-switching marathon. It was chaos. It was also the best training I ever got. I learned to be open, curious, comfortable jumping between wildly different domains — and most importantly, I learned to do it without losing the thread. That muscle memory is exactly what lets me run five workstreams at once today.
After university, I spent four years as a credit risk analyst. That taught me something entirely different: how to look at things critically. How to spot what could go wrong before it does. How to assess risk, mitigate it, and find solutions when the obvious path doesn’t work.
Risk analysis and context-switching. Not the most glamorous skillset. But it’s the foundation of everything I described above — the ability to jump between terminals without losing the thread, and the instinct to check whether the AI’s output actually makes sense before it hits production.
I bring this up because I don’t want this post to sound like “just use AI and you’ll 5x your output.” That’s not honest. The way I work leverages a very specific combination of experiences, and I know that what feels natural to me might not feel natural to someone else. The tools are available to everyone. The ability to use them this way — that’s individual.
The Questions I Don’t Have Answers To
Here’s the part that most “AI productivity” posts skip.
I see the risks.
Work is faster. The world is faster. But where’s the limit? When I’m running five parallel workstreams and switching between a video call, four terminals, and a document analysis tool — am I still delivering quality?
For now, yes. Years of engineering work have trained me to context-switch fast. I know how to keep multiple threads in my head without dropping any. But what about the long run?
Will this pace cause burnout? Will I lose the ability to focus deeply on one thing? Will the expectations from colleagues and leadership just keep growing?
“Well, you delivered five things last Tuesday, so surely you can do seven this Tuesday.”
Where’s the line where I say: “No. This is too much”?
Today, I don’t know the answers. But I’m experienced enough to know that asking the questions matters more than pretending they don’t exist.
AI makes my work easier. Faster. Testing ideas and shipping things I never had time for has never been this simple. But this isn’t pure excitement — I approach it with caution. I see the risks. I see the downsides.
And I absolutely see the need for human-in-the-loop.
At the end of the day, AI is a tool. A tool that can do a lot of good — and a lot of damage. If I hit my finger with a hammer, I don’t blame the hammer. I blame the hand that held it. When AI makes a mistake in production, it won’t be the AI’s fault. It will be the fault of whoever was supposed to be watching.
AI is like a child. I raise it every day. Every week I let it do a little more. But I always keep an eye on what it’s doing.
Observation: This is the unsexy part of the AI story. The part that doesn’t make it into LinkedIn posts. It’s not just about doing more. It’s about knowing when “more” becomes “too much.”
5:00 PM — The Handoff
Before I leave the office, I do one last thing.
I assign tasks to Copilot’s cloud agent. Things I want done by morning — a refactor that needs testing, a documentation update, an experiment I want to see results from. The night shift starts.
Copilot doesn’t go home. Copilot doesn’t sleep. By the time I’m on the bus tomorrow, there will be pull requests waiting.
5:15 PM — The Bus, Again
On the way home, I open Gemini one more time. Summarize the day. What got done, what didn’t, what needs attention tomorrow. I jot down notes for the morning.
The day starts on a bus with AI and ends on a bus with AI. Thirty minutes each way — and everything in between happens at a desk.
But even the desk is different now. Two monitors for the machines. One small screen for the human.
The Punchline
I haven’t written a line of code since the beginning of the year.
I ship to production almost every day.
My role hasn’t been replaced. It’s been transformed. I don’t write code — I direct the things that do. I don’t execute — I decide what gets executed and make sure it’s done right.
The role of an engineer is changing. Fast. Engineers are becoming less like programmers and more like… parents.
Think about it. You teach AI how your company works. You teach it your team’s standards. You supervise its output. You correct it when it’s wrong. Every week, you let it do a little more.
You don’t program AI agents. You raise them.
But that’s a story for another post.
Before that post comes, let me leave you with something else.
There’s a reason I do all of this. And it’s not to fill my day with more work.
It’s to empty it.
All the delegation, the parallel workstreams, the AI handling execution — it’s not about squeezing more output from the same hours. It’s about getting the hours back. Hours for a bike ride through the forest. Hours for being offline, unplugged, not optimizing anything.
In this race — and make no mistake, it is a race — we have to remember what AI is actually for. Not to make us work more. To make us work less. To give us time to be human. To have hobbies, to breathe, to do things that have absolutely nothing to do with terminals and pull requests.
For me, AI done right means more time for yourself, not less.
That’s the real punchline.