The standard startup narrative is that small teams move fast because they cut process. No PRD approval chains, no design committee sign-off, no six-week delivery timelines. Just engineers and a product idea, shipping.

That narrative is true as far as it goes, but incomplete in important ways. The startup where I worked from 2019 to 2021 was fast for reasons that went beyond “we skipped the bureaucracy.” Understanding those reasons changed how I think about engineering productivity in any context.

What “Fast” Actually Meant

At the startup, we shipped production changes multiple times per day. We went from idea to user-facing feature in days, not months. An engineer could open their laptop in the morning and have code running in production by afternoon.

This wasn’t magic. It had specific causes.

The Codebase Was Small

The single biggest productivity multiplier was that the entire codebase fit in one engineer’s working memory. Not every file — but the architecture, the major abstractions, where things lived, what changed what.

When the codebase is small enough to hold in your head:

  • You can make cross-cutting changes without archaeology
  • You know what will break before you run the tests
  • You can refactor confidently without weeks of risk assessment
  • New engineers become productive faster because there’s less to learn

This sounds obvious, but it’s easy to underestimate how much large codebases slow you down through cognitive overhead alone: not process overhead, not technical-debt overhead, just the time spent understanding the system before you can change it.

Maintaining this took discipline: constantly asking “can we solve this problem without adding more code?” Most features can be implemented in multiple ways with different code footprints. The habit of choosing the smaller implementation (all else equal) compounds.

Decisions Were Made By the Person Doing the Work

When I needed to choose between two implementation approaches, I chose. When I needed to integrate with a third-party API, I evaluated the options and integrated. When I needed to change the data model, I changed it.

Not arbitrarily — there were code reviews, technical discussions, occasional escalations. But the default was decision authority at the point of execution, not at the top of a decision hierarchy.

The cost of this model is inconsistency. Different engineers make different choices, and the codebase accumulates some variety in approach. This is a real cost. The benefit is that decisions get made in minutes rather than days, and the engineer making the decision has the most context.

At scale, you need more consistency (contracts between teams, API stability, shared standards). At small scale, the overhead of centralised decision-making often exceeds the cost of the inconsistency it prevents.

We Invested in Developer Tooling

The startup spent engineering time on internal tooling that I initially found surprising for a company of that size: fast local development setup, one-command environment provisioning, automated deployment pipelines, good local test coverage.

The ROI on this is non-obvious because it’s diffuse and ongoing rather than a single feature shipped. But the arithmetic is clear: if a slow feedback loop adds 10 minutes per iteration and an engineer iterates 20 times per day, that’s 200 minutes (3+ hours) of friction per engineer per day. At 8 engineers, that’s 24+ hours per day company-wide. Investing 40 hours to cut the feedback loop to 2 minutes pays back in two weeks.
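The arithmetic above can be written down directly. The numbers come from the text; the helper function itself is just illustrative:

```python
# Back-of-envelope model of feedback-loop friction, using the figures
# from the paragraph above (10-minute loop, 20 iterations/day, 8 engineers).

def daily_friction_minutes(loop_minutes: int, iterations: int) -> int:
    """Minutes one engineer spends waiting on the feedback loop per day."""
    return loop_minutes * iterations

slow = daily_friction_minutes(10, 20)   # 200 min/day per engineer
fast = daily_friction_minutes(2, 20)    # 40 min/day per engineer
team_hours = slow * 8 / 60              # roughly 26.7 engineer-hours/day across 8 engineers

print(slow, fast, round(team_hours, 1))  # → 200 40 26.7
```

The striking part is the team-level number: a 40-hour tooling investment is small against more than a full engineer-day of waiting recovered every day.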

The specific investments that mattered most:

  Tool/Practice                      Time saved per engineer per day
  Fast test suite (< 30s)            30–60 min
  Local docker-compose env           20–40 min
  Automated deployment               30–60 min
  Good error messages in failures    15–30 min
  Total                              1.5–3 hours

“Fast test suite” deserves emphasis. Slow tests are the single most reliable way to destroy developer flow. When tests take 10 minutes, engineers batch changes and context-switch while waiting. When tests take 30 seconds, they run them continuously and catch problems immediately. The difference in code quality is significant, not just the time.
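One common way to keep the inner loop under 30 seconds is to split fast and slow tests so the default run excludes the slow ones. A minimal sketch assuming pytest (the text doesn’t say which test runner the startup used, and the test names here are invented):

```python
# Sketch: register a "slow" marker so `pytest -m "not slow"` runs only
# the fast suite. Hypothetical tests, not from the original codebase.
import time
import pytest

def pytest_configure(config):
    # Registering the marker avoids "unknown marker" warnings.
    config.addinivalue_line("markers", "slow: long-running test, excluded from the inner loop")

def test_parse_fast():
    # Fast unit test: cheap enough to run on every save.
    assert int("42") == 42

@pytest.mark.slow
def test_full_pipeline():
    # Slow integration test: runs in CI, not in the edit-test loop.
    time.sleep(5)
    assert True
```

Running `pytest -m "not slow"` locally keeps the continuous loop fast, while CI still runs everything.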

We Didn’t Over-Engineer

The temptation in any technical environment is to solve problems with more sophistication than required. In a startup, this is particularly dangerous because you’re solving problems against a backdrop of uncertainty — you don’t know if the feature will ship, if users will use it, or if the product will pivot.

The heuristic we used (explicitly, though informally): build for 10× current load, not 100×. If we had 1,000 users, build for 10,000. Don’t build for 1,000,000. The architecture for 10,000 users is dramatically simpler than for 1,000,000, and if we reach 1,000,000 users we’ll have the resources and information to redesign correctly.

This principle was violated regularly in ways that were instructive:

  • A caching layer added for performance before profiling showed a bottleneck. The cache added complexity, had bugs, and when we profiled it turned out the bottleneck was a missing database index. Removing the cache and adding the index: faster and simpler.

  • An event-driven architecture for a workflow that had 10 users per day. The asynchronous complexity was unjustified; a synchronous implementation would have been equally reliable and half the code.
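The caching anecdote generalizes: before adding a cache, read the query plan. A small sqlite3 sketch (the table and column names are hypothetical) showing how an index changes the plan from a full scan to an index search:

```python
import sqlite3

# Hypothetical schema; the point is checking the query plan before
# reaching for a cache.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")

def plan(sql: str) -> str:
    """Return SQLite's query plan for `sql` as a single string."""
    return " ".join(row[3] for row in db.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE user_id = 42"

before_plan = plan(query)  # full table scan, e.g. "SCAN orders"
db.execute("CREATE INDEX idx_orders_user ON orders (user_id)")
after_plan = plan(query)   # e.g. "SEARCH orders USING INDEX idx_orders_user (user_id=?)"

print(before_plan)
print(after_plan)
```

The one-line index removes the scan entirely, which is exactly the kind of fix a profiling pass surfaces and a premature cache hides.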

The discipline of not over-engineering is harder than it sounds. Engineers like to build sophisticated things. The conversation you have with yourself (“but what if we need to scale?”) is seductive. The answer that worked for us: “we’ll deal with that if and when it happens, with better information than we have now.”

What Slowed Us Down

For completeness, the things that reliably eroded velocity:

Accumulated technical debt in specific modules. Not the codebase overall — specific files or components that had been changed frequently and carelessly. These became expensive to modify. Periodic refactoring sessions (not rewrites) kept these under control.

Unclear ownership. When a bug crossed the boundary between services or components that different engineers owned, it slowed down proportionally to the coordination required. Explicit ownership — “this service is yours” — eliminated most of this.

Production incidents. The hidden cost of cutting corners is the time spent debugging production issues that stem from those corners. An hour of careful implementation might prevent four hours of investigation. The calculus isn’t always obvious at implementation time.

Hiring without enough care. Adding an engineer who needed substantial mentorship or who made decisions that required rework cost more than it saved for several months. At small team sizes, everyone’s contribution to velocity (positive or negative) is large and immediately visible.


The pattern I took away: velocity is mostly a function of how much friction exists between an idea and running code. The friction comes from codebase complexity, decision overhead, tooling slowness, and rework from insufficient care. Most of it can be addressed without process — by investing in the right things and resisting the urge to add complexity before it’s justified.