The first question on every new Go service with a database: standard library database/sql? sqlx? pgx directly? An ORM like GORM? sqlc for code generation?

The honest answer is that the right choice depends on the access pattern, the team’s SQL proficiency, and how much you value type safety at the cost of verbosity. Here’s what each option looks like in practice.

The database/sql Baseline

Go’s standard database/sql package provides a database-agnostic interface with connection pooling, prepared statements, and transactions. For PostgreSQL, you add a driver:

import (
    "database/sql"
    _ "github.com/lib/pq"  // or "github.com/jackc/pgx/v5/stdlib"
)

db, err := sql.Open("postgres", dsn)

// Query:
rows, err := db.QueryContext(ctx,
    "SELECT id, symbol, quantity FROM positions WHERE account_id = $1",
    accountID)
if err != nil {
    return nil, err
}
defer rows.Close()

var positions []Position
for rows.Next() {
    var id, symbol string
    var quantity int64
    if err := rows.Scan(&id, &symbol, &quantity); err != nil {
        return nil, err
    }
    positions = append(positions, Position{ID: id, Symbol: symbol, Quantity: quantity})
}
return positions, rows.Err()

Verbose, but explicit. The Scan into individual variables is tedious for wide rows but makes the mapping completely clear.

sqlx: Less Boilerplate, Same Control

sqlx extends database/sql with struct scanning, named parameters, and IN clause handling:

import (
    "github.com/jmoiron/sqlx"
    _ "github.com/jackc/pgx/v5/stdlib"  // registers the "pgx" driver name
)

db, err := sqlx.Connect("pgx", dsn)

type Position struct {
    ID       string  `db:"id"`
    Symbol   string  `db:"symbol"`
    Quantity int64   `db:"quantity"`
}

// Scan directly into a slice of structs:
var positions []Position
err = db.SelectContext(ctx, &positions,
    "SELECT id, symbol, quantity FROM positions WHERE account_id = $1",
    accountID)

// Single row:
var pos Position
err = db.GetContext(ctx, &pos,
    "SELECT id, symbol, quantity FROM positions WHERE id = $1",
    positionID)

// Named parameters:
_, err = db.NamedExecContext(ctx,
    "INSERT INTO positions (id, symbol, quantity) VALUES (:id, :symbol, :quantity)",
    pos)

SelectContext and GetContext scan results into structs by matching column names to db tags. NamedExecContext maps struct fields to named parameters. This eliminates most of the Scan boilerplate while keeping full SQL control.

sqlx is the choice I default to for services where:

  • The team is comfortable writing SQL
  • The query patterns are varied (different predicates, aggregations, joins)
  • You want predictable query plans

pgx: PostgreSQL-Specific, More Efficient

pgx is a PostgreSQL driver that bypasses database/sql entirely. It uses PostgreSQL’s extended protocol, which avoids some overhead:

import (
    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgxpool"
)

pool, err := pgxpool.New(ctx, dsn)

// Typed scanning with pgx's scan API:
rows, err := pool.Query(ctx,
    "SELECT id, symbol, quantity FROM positions WHERE account_id = $1",
    accountID)
if err != nil {
    return nil, err
}

// CollectRows drains and closes rows, mapping columns to struct fields by name.
positions, err := pgx.CollectRows(rows, pgx.RowToStructByName[Position])

pgx advantages:

  • Faster for high-volume queries (avoids database/sql interface overhead)
  • PostgreSQL-specific types (pgtype.Numeric, pgtype.Timestamptz) with correct handling
  • COPY protocol for bulk inserts (100-1000× faster than individual inserts)
  • Better error types (you can get the PostgreSQL error code directly)

The cost: tied to PostgreSQL. If you ever need to test with SQLite or switch databases, you need the pgx/stdlib compatibility shim.

For services that are clearly always going to use PostgreSQL and need maximum performance, pgx directly is the right choice. For most services, sqlx with the pgx driver (via database/sql) gives 90% of the benefit with less lock-in.

sqlc: Type-Safe SQL via Code Generation

sqlc takes a different approach: you write SQL queries in .sql files, and sqlc generates type-safe Go code for them:

-- name: GetPosition :one
SELECT id, symbol, quantity
FROM positions
WHERE id = $1;

-- name: ListPositionsByAccount :many
SELECT id, symbol, quantity
FROM positions
WHERE account_id = $1
ORDER BY symbol;

sqlc generate produces:

// Generated — do not edit
func (q *Queries) GetPosition(ctx context.Context, id string) (Position, error) {
    row := q.db.QueryRowContext(ctx, getPosition, id)
    var i Position
    err := row.Scan(&i.ID, &i.Symbol, &i.Quantity)
    return i, err
}

func (q *Queries) ListPositionsByAccount(ctx context.Context, accountID string) ([]Position, error) {
    rows, err := q.db.QueryContext(ctx, listPositionsByAccount, accountID)
    // ... scan loop
}

sqlc advantages:

  • Type safety: the generated code has the exact types from the database schema
  • Schema-aware: sqlc reads your migration files and knows the exact column types
  • No runtime reflection: queries are static strings, not built dynamically
  • Change detection: if a migration changes a column type, sqlc generate fails with a type error

The tradeoff: queries are defined ahead of time. Dynamic queries (user-specified filters, arbitrary sort orders) don’t fit cleanly. You need a code generation step in your build.

For services with well-defined, stable query patterns — CRUD operations, reporting queries — sqlc provides the highest type safety for the least runtime overhead. For services with dynamic query requirements, it’s limiting.
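For reference, a minimal sqlc configuration looks roughly like this. The paths and package name are illustrative assumptions about a typical layout, not prescribed by sqlc:

```yaml
version: "2"
sql:
  - engine: "postgresql"
    queries: "queries/"      # the .sql files with -- name: annotations
    schema: "migrations/"    # sqlc reads these to learn column types
    gen:
      go:
        package: "db"
        out: "internal/db"
```

Running sqlc generate against this writes the Queries type and typed methods into internal/db.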

ORMs: When the Advice to Avoid Them Is Wrong

The conventional Go wisdom is “don’t use ORMs.” This is mostly right, for the right reasons:

  • ORMs abstract away SQL, producing inefficient queries for non-trivial access patterns
  • The abstraction layer adds complexity without always providing value
  • Debugging “what SQL did this generate?” is harder than reading the SQL you wrote

But the advice is sometimes wrong. ORMs are appropriate when:

The query patterns are simple and standard. For a service doing straightforward creates, reads, updates, and deletes, GORM generates correct, efficient SQL. The verbosity of sqlx for simple CRUD is unjustified when db.Create(&user) does the same thing with less code.

The schema changes frequently. Auto-migration features (gorm.AutoMigrate) reduce the friction of schema changes during rapid development. In a production environment with careful migrations, this feature is dangerous; in early development it’s convenient.

The team doesn’t know SQL well. A junior team that doesn’t write SQL fluently will produce less buggy code with a good ORM than with database/sql where they’re manually constructing queries.

The Decision Matrix

Use case                              Recommendation
────────────────────────────────────────────────────────────────────
Simple CRUD, team knows SQL           sqlx
Complex queries, max performance      pgx directly
Stable queries, type safety priority  sqlc
Rapid development, simple schema      GORM (accept the tradeoffs)
High-volume analytics/aggregations    pgx + COPY + raw SQL
Dynamic queries (user-specified)      sqlx with query builders (squirrel)

The most common mistake I see is applying the “don’t use ORMs” rule reflexively to every project, including ones where a simple ORM would save weeks of CRUD boilerplate. The second most common mistake is using GORM for a complex financial system where query performance and explicit SQL control genuinely matter.

Match the tool to the actual requirements. “What do other Go engineers use?” is a worse heuristic than “what does this specific service need?”