I’ve been using AI coding assistants heavily since 2023 — first Copilot, then Claude, then a combination. At this point, not having them feels like losing a limb. But the way I use them now is different from how I started, and the difference is mostly about understanding what these tools are good at and building habits that work with their strengths.

What Changed First: The Boring Parts

The first thing AI coding assistants genuinely changed was the friction of writing code I knew how to write but didn’t want to type. Boilerplate, tests for obvious cases, documentation, configuration, one-off scripts. Tasks that weren’t intellectually interesting but took real time.

A test file that would have taken 45 minutes to write carefully now takes 10 minutes — I generate a first pass, review it, adjust the cases that aren’t quite right, and move on. The cognitive load is lower because I’m in review mode rather than generation mode.

This sounds like a modest gain. Compounded across a workday and an engineering team, it’s significant. The kinds of tasks that used to require dedicated focus blocks (“I’ll write the tests tomorrow when I have a clear morning”) now get done inline.

What Changed Second: Exploring Unfamiliar Code

When I’m working in a codebase I don’t know — a new team, a new library, investigating a dependency — AI assistants have changed the ramp-up process substantially.

Asking “how does X work in this codebase?” or “what are the relevant functions for handling Y?” used to mean reading docs, running grep, following call stacks. It still sometimes means that. But a first-pass answer from an assistant — even an imperfect one — orients me faster than starting from scratch.

The critical habit: verify everything in unfamiliar territory. The assistant doesn’t know your specific codebase version or your exact configuration. It gives you a starting point, not a complete answer.

What Hasn’t Changed: The Hard Parts

The parts of engineering work that AI tools don’t improve much, in my experience:

System design decisions. “Should this be a synchronous call or an event?” “Where does this responsibility belong?” “What’s the right tradeoff between consistency and availability here?” These require context about the system, the team, the business — context that the assistant doesn’t have and that’s hard to provide concisely. The answers I get are generic and often not useful.

Debugging non-obvious production issues. Complex bugs in production — race conditions, subtle performance regressions, emergent failures under specific load patterns — require a kind of hypothesis-driven investigation that doesn’t translate well to the back-and-forth chat interface. The assistant can suggest likely causes, but the discipline of systematically ruling out hypotheses doesn’t get easier with AI.

Code review that matters. AI code review tools catch obvious issues. They don’t catch “this abstraction doesn’t compose well with the three other services that will need to use it” or “this approach will be extremely painful to debug when it fails in production.” The judgment calls that make code review valuable are still judgment calls.

Architecture over time. Maintaining conceptual integrity across a large codebase, over many contributors, over time — that’s still entirely human work.

The Habits That Matter

After a year of working this way, these are the habits that make AI tools work well:

Be specific about context. “Fix this function” produces generic output. “Fix this function — it’s used in a high-throughput Kafka consumer loop and the current implementation allocates on every call” produces something useful. The assistant needs the same context a human would need to help effectively.
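To make that concrete, here's a hypothetical sketch of the kind of change the second prompt enables (the function and pattern are invented for illustration — the point is that "allocates on every call" tells the assistant what to fix):

```python
import re

# Plausible first pass: compiles the regex on every call.
# Harmless in a one-off script, wasteful in a hot consumer loop.
def extract_user_id_naive(message):
    match = re.search(r"user_id=(\d+)", message)
    return match.group(1) if match else None

# With the context "high-throughput loop, allocates on every call",
# the obvious fix is to hoist the compile out of the call path.
_USER_ID_PATTERN = re.compile(r"user_id=(\d+)")

def extract_user_id(message):
    match = _USER_ID_PATTERN.search(message)
    return match.group(1) if match else None
```

Without the performance context, nothing about the naive version looks wrong — which is exactly why "fix this function" alone produces generic output.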

Generate, then review, then use. Never blindly paste AI-generated code. The review step is where you catch the plausible-but-wrong output — the function that almost does the right thing but has an off-by-one, or uses a deprecated API, or doesn’t handle the error case that matters in your context.
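Here's a hypothetical example of the plausible-but-wrong category — a generated helper that batches a list, looks correct at a glance, and silently drops the final partial batch because of an off-by-one in the range bound (both functions are invented for illustration):

```python
# Plausible-but-wrong: the range stops one step early,
# so any trailing partial batch is silently dropped.
def batch_naive(items, size):
    return [items[i:i + size] for i in range(0, len(items) - size, size)]

# Correct version: iterate over the full length.
def batch(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]
```

`batch_naive([1, 2, 3, 4, 5], 2)` returns `[[1, 2], [3, 4]]` and loses the `[5]` — the kind of bug that a quick glance misses and a review step catches.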

Use it for the first draft, not the only draft. Generated code is a starting point. It usually needs adjustment — not always major adjustment, but enough that you need to read and understand it before deploying it.

Know when to stop asking. If after two or three exchanges you’re not getting useful output, stop and think from first principles. The conversational loop can consume time that would be better spent reading documentation or running experiments.

Maintain your own understanding. The risk of leaning heavily on AI tools is that you stop building mental models of the code you ship. If you can’t explain what the generated code is doing, you can’t debug it, you can’t maintain it, and you can’t make good decisions about changing it. The assistant is a tool; the engineering judgment is yours.

The Workflow That Works For Me

New feature:
  1. Design (human) — what’s the interface? what are the invariants?
  2. Generate scaffold (AI) — skeleton implementation, happy path
  3. Review and adjust (human) — correctness, edge cases, style
  4. Generate tests (AI) — obvious cases
  5. Write non-obvious tests (human) — the cases that actually matter
  6. Review everything together (human) — does this hold up?
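To illustrate the split between steps 4 and 5, here's a hypothetical sketch (the function and its context are invented): a generated test covers the obvious case; the human-written tests cover the inputs that actually show up in your data.

```python
def normalize_email(raw):
    """Lowercase and trim an email address for deduplication."""
    return raw.strip().lower()

# Step 4 — the obvious case an assistant will generate:
assert normalize_email("Alice@Example.com") == "alice@example.com"

# Step 5 — the cases that matter in this (hypothetical) context,
# because real signup data contains stray whitespace and newlines:
assert normalize_email("  BOB@EXAMPLE.COM \n") == "bob@example.com"
assert normalize_email("") == ""
```

The generated tests aren't wrong — they're just the tests anyone would write. The value the human adds is knowing which inputs the system will actually see.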

Debugging:
  1. Form hypothesis (human)
  2. Ask for related code / patterns (AI) — sometimes useful for orienting
  3. Test hypothesis (human)
  4. Repeat

Code review:
  1. AI tool for first pass — catches syntax/obvious issues
  2. Human review — catches everything that matters

The pattern: AI tools accelerate the generative phases, humans own the decision phases. The mistake is using AI tools to avoid the decision phases — it produces code that works in the demo and fails in production.

The Longer-Term Question

The tools are improving faster than any other category of software I’ve used. What doesn’t work well today may work well in 12 months. The habits that matter — being specific, reviewing everything, maintaining your own understanding — are likely to remain valuable even as the tools improve. The underlying work is still reasoning about systems, and that reasoning is still the engineer’s job.

What I’m more uncertain about: what happens to the skill-building that used to happen through doing the tedious parts. The tedious parts of software engineering — writing boilerplate, reading through unfamiliar code, debugging obvious issues — are also where junior engineers build the pattern recognition that makes them good senior engineers. If AI tools remove most of that friction, do they also remove the learning?

I don’t have a confident answer. The tools are too new and moving too fast for anyone to have a confident answer. What I’d say provisionally: be intentional about which parts of the work you delegate and which you keep. The parts you delegate don’t teach you anything.