The first question on every new Go service with a database: standard library database/sql? sqlx? pgx directly? An ORM like GORM? sqlc for code generation?
The honest answer is that the right choice depends on the access pattern, the team’s SQL proficiency, and how much you value type safety at the cost of verbosity. Here’s what each option looks like in practice.
The database/sql Baseline
Go’s standard database/sql package provides a database-agnostic interface with connection pooling, prepared statements, and transactions. For PostgreSQL, you add a driver:
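A minimal sketch of the database/sql style, assuming a hypothetical `users` table with `id`, `name`, and `email` columns, a local connection string, and the pgx stdlib driver:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver with database/sql
)

func main() {
	// DSN is an assumption; substitute your own connection string.
	db, err := sql.Open("pgx", "postgres://localhost:5432/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Every column is scanned into an explicit variable.
	var (
		id    int64
		name  string
		email string
	)
	row := db.QueryRowContext(context.Background(),
		`SELECT id, name, email FROM users WHERE email = $1`, "ada@example.com")
	if err := row.Scan(&id, &name, &email); err != nil {
		log.Fatal(err)
	}
	fmt.Println(id, name, email)
}
```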
Verbose, but explicit. The Scan into individual variables is tedious for wide rows but makes the mapping completely clear.
sqlx: Less Boilerplate, Same Control
sqlx extends database/sql with struct scanning, named parameters, and IN clause handling:
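A sketch of the same access pattern in sqlx, under the same assumed `users` table:

```go
package main

import (
	"context"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // pgx driver via database/sql
	"github.com/jmoiron/sqlx"
)

// Columns map to fields via db tags.
type User struct {
	ID    int64  `db:"id"`
	Name  string `db:"name"`
	Email string `db:"email"`
}

func main() {
	ctx := context.Background()
	db, err := sqlx.Connect("pgx", "postgres://localhost:5432/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Get: one row scanned into a struct.
	var u User
	if err := db.GetContext(ctx, &u,
		`SELECT id, name, email FROM users WHERE id = $1`, 1); err != nil {
		log.Fatal(err)
	}

	// Select: many rows scanned into a slice.
	var users []User
	if err := db.SelectContext(ctx, &users,
		`SELECT id, name, email FROM users ORDER BY id`); err != nil {
		log.Fatal(err)
	}

	// Named parameters map from struct fields.
	_, err = db.NamedExecContext(ctx,
		`INSERT INTO users (name, email) VALUES (:name, :email)`,
		User{Name: "Ada", Email: "ada@example.com"})
	if err != nil {
		log.Fatal(err)
	}
}
```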
Select and Get scan results into structs by matching column names to db tags. NamedExecContext maps struct fields to named parameters. This eliminates most of the Scan boilerplate while keeping full SQL control.
sqlx is the choice I default to for services where:
- The team is comfortable writing SQL
- The query patterns are varied (different predicates, aggregations, joins)
- You want predictable query plans
pgx: PostgreSQL-Specific, More Efficient
pgx is a PostgreSQL driver that bypasses database/sql entirely. It uses PostgreSQL’s extended protocol, which avoids some overhead:
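A sketch of direct pgx usage with a connection pool, again assuming the hypothetical `users` table:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	pool, err := pgxpool.New(ctx, "postgres://localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	var name string
	if err := pool.QueryRow(ctx,
		`SELECT name FROM users WHERE id = $1`, 1).Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Println(name)

	// COPY protocol for bulk inserts: orders of magnitude faster
	// than row-at-a-time INSERTs.
	rows := [][]any{
		{"Ada", "ada@example.com"},
		{"Grace", "grace@example.com"},
	}
	if _, err := pool.CopyFrom(ctx,
		pgx.Identifier{"users"},
		[]string{"name", "email"},
		pgx.CopyFromRows(rows)); err != nil {
		log.Fatal(err)
	}
}
```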
pgx advantages:
- Faster for high-volume queries (avoids database/sql interface overhead)
- PostgreSQL-specific types (pgtype.Numeric, pgtype.Timestamptz) with correct handling
- COPY protocol for bulk inserts (100-1000× faster than individual inserts)
- Better error types (you can get the PostgreSQL error code directly)
The cost: you're tied to PostgreSQL. If you ever need to test against SQLite or switch databases, you have to fall back to the pgx stdlib adapter, which routes through database/sql and gives up most of pgx's native advantages.
For services that are clearly always going to use PostgreSQL and need maximum performance, pgx directly is the right choice. For most services, sqlx with the pgx driver (via database/sql) gives 90% of the benefit with less lock-in.
sqlc: Type-Safe SQL via Code Generation
sqlc takes a different approach: you write SQL queries in .sql files, and sqlc generates type-safe Go code for them:
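A hypothetical queries file in sqlc's annotated style, assuming the same `users` table (the `-- name:` comments tell sqlc what to generate and whether the query returns one row or many):

```sql
-- queries/users.sql

-- name: GetUser :one
SELECT id, name, email FROM users
WHERE id = $1;

-- name: ListUsers :many
SELECT id, name, email FROM users
ORDER BY id;

-- name: CreateUser :one
INSERT INTO users (name, email)
VALUES ($1, $2)
RETURNING id, name, email;
```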
sqlc generate produces:
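The shape of the output, sketched from sqlc's typical generated code (names and exact types depend on your schema and sqlc.yaml; method bodies elided):

```go
// Illustrative sketch of sqlc-generated code, not verbatim output.

type User struct {
	ID    int64
	Name  string
	Email string
}

type CreateUserParams struct {
	Name  string
	Email string
}

func (q *Queries) GetUser(ctx context.Context, id int64) (User, error)
func (q *Queries) ListUsers(ctx context.Context) ([]User, error)
func (q *Queries) CreateUser(ctx context.Context, arg CreateUserParams) (User, error)
```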
sqlc advantages:
- Type safety: the generated code has the exact types from the database schema
- Schema-aware: sqlc reads your migration files and knows the exact column types
- No runtime reflection: queries are static strings, not built dynamically
- Change detection: if a migration changes a column type, sqlc generate fails with a type error
The tradeoff: queries are defined ahead of time. Dynamic queries (user-specified filters, arbitrary sort orders) don’t fit cleanly. You need a code generation step in your build.
For services with well-defined, stable query patterns — CRUD operations, reporting queries — sqlc provides the highest type safety for the least runtime overhead. For services with dynamic query requirements, it’s limiting.
ORMs: When the Advice to Avoid Them Is Wrong
The conventional Go wisdom is “don’t use ORMs.” This is mostly right, for the right reasons:
- ORMs abstract away SQL, often producing inefficient queries for non-trivial access patterns
- The abstraction layer adds complexity without always providing value
- Debugging “what SQL did this generate?” is harder than reading the SQL you wrote
But the advice is sometimes wrong. ORMs are appropriate when:
The query patterns are simple and standard. For a service that does straightforward creates, reads, updates, and deletes, GORM generates correct, efficient SQL. The verbosity of sqlx is hard to justify for simple CRUD when db.Create(&user) does the same thing with less code.
The schema changes frequently. Auto-migration features (gorm.AutoMigrate) reduce the friction of schema changes during rapid development. In a production environment with careful migrations, this feature is dangerous; in early development it’s convenient.
The team doesn’t know SQL well. A junior team that doesn’t write SQL fluently will produce less buggy code with a good ORM than with database/sql where they’re manually constructing queries.
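For the simple-CRUD case above, a GORM sketch (model and connection string are assumptions):

```go
package main

import (
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// Hypothetical model; GORM derives the table name and columns from it.
type User struct {
	ID    uint
	Name  string
	Email string `gorm:"uniqueIndex"`
}

func main() {
	db, err := gorm.Open(postgres.Open("postgres://localhost:5432/app"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Convenient in early development; dangerous against a production schema.
	if err := db.AutoMigrate(&User{}); err != nil {
		log.Fatal(err)
	}

	// Standard CRUD with no hand-written SQL.
	db.Create(&User{Name: "Ada", Email: "ada@example.com"})

	var u User
	db.First(&u, "email = ?", "ada@example.com")
	db.Model(&u).Update("Name", "Ada L.")
	db.Delete(&u)
}
```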
The Decision Matrix
Use case                               Recommendation
────────────────────────────────────────────────────────────────────
Simple CRUD, team knows SQL            sqlx
Complex queries, max performance       pgx directly
Stable queries, type safety priority   sqlc
Rapid development, simple schema       GORM (accept the tradeoffs)
High-volume analytics/aggregations     pgx + COPY + raw SQL
Dynamic queries (user-specified)       sqlx with query builders (squirrel)
The most common mistake I see is applying the “don’t use ORMs” rule reflexively to every project, including ones where a simple ORM would save weeks of CRUD boilerplate. The second most common mistake is using GORM for a complex financial system where query performance and explicit SQL control genuinely matter.
Match the tool to the actual requirements. “What do other Go engineers use?” is a worse heuristic than “what does this specific service need?”