Go has a famous concurrency proverb: “Do not communicate by sharing memory; instead, share memory by communicating.” This is good advice. It’s also misapplied regularly. Channels are not universally better than mutexes — they’re a different tool for a different set of problems.

After years of Go in production, here’s when I reach for each.

What Each Primitive Is For

Channels are designed for transferring ownership of data between goroutines. When one goroutine creates a value and another goroutine needs to consume it, a channel is the natural boundary.

Mutexes are designed for protecting shared state that multiple goroutines read and write concurrently.

The mental model:

Channel: goroutine A produces → sends → goroutine B receives → owns
         (single owner at any time, transfers between goroutines)

Mutex:   goroutines A, B, C all share the same data
         (one is in the critical section at a time, all can access)

When to Use Channels

Pipeline stages — data flows from one goroutine to the next:

func pipeline(in <-chan RawTrade) <-chan EnrichedTrade {
    out := make(chan EnrichedTrade, 64)
    go func() {
        defer close(out)
        for raw := range in {
            out <- enrich(raw) // enrich transfers ownership of result
        }
    }()
    return out
}

Clean, composable, backpressure-aware (the send blocks when out is full). The intermediate state belongs to exactly one goroutine at any moment — no locking needed.
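
To see the stage run end to end, here is a self-contained sketch. RawTrade, EnrichedTrade, and the enrich function are stand-ins invented for this example:

```go
package main

import "fmt"

// Stand-in types for the snippet above; real ones would carry more fields.
type RawTrade struct{ Sym string }
type EnrichedTrade struct{ Sym, Venue string }

// enrich is a placeholder for whatever per-trade work the stage does.
func enrich(r RawTrade) EnrichedTrade { return EnrichedTrade{Sym: r.Sym, Venue: "X"} }

func pipeline(in <-chan RawTrade) <-chan EnrichedTrade {
	out := make(chan EnrichedTrade, 64)
	go func() {
		defer close(out)
		for raw := range in {
			out <- enrich(raw)
		}
	}()
	return out
}

func main() {
	in := make(chan RawTrade)
	out := pipeline(in) // stages compose: a second stage could wrap out the same way
	go func() {
		in <- RawTrade{Sym: "ABC"}
		close(in) // closing the input drains and closes the whole chain
	}()
	for t := range out {
		fmt.Println(t.Sym, t.Venue) // ABC X
	}
}
```

Because each stage closes its output when its input closes, shutdown propagates through the chain for free.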

Fan-out / fan-in — distribute work and collect results:

func fanOut(jobs []Job, workers int) []Result {
    jobCh    := make(chan Job, len(jobs))
    resultCh := make(chan Result, len(jobs))

    for i := 0; i < workers; i++ {
        go func() {
            for job := range jobCh {
                resultCh <- process(job)
            }
        }()
    }

    for _, j := range jobs { jobCh <- j }
    close(jobCh)

    results := make([]Result, 0, len(jobs))
    for range jobs {
        results = append(results, <-resultCh)
    }
    return results
}

Done signals and cancellation — signal goroutines to stop:

done := make(chan struct{})
go func() {
    for {
        select {
        case <-done:
            return
        case event := <-events:
            handle(event)
        }
    }
}()
// ...
close(done) // signal all goroutines watching done

(In practice, use context.Context for this — it’s the standard pattern.)

When to Use a Mutex

Protecting a cache or map — multiple goroutines read/write a shared data structure:

type Cache struct {
    mu   sync.RWMutex
    data map[string]Entry
}

func (c *Cache) Get(key string) (Entry, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    v, ok := c.data[key]
    return v, ok
}

func (c *Cache) Set(key string, e Entry) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.data[key] = e
}

Modelling this with channels would require a dedicated goroutine to serialise all access:

// Don't do this for a simple cache — it's more complex for no benefit
type cacheRequest struct {
    key    string
    result chan Entry
}
// ... goroutine that serves requests from a channel

The channel version adds a goroutine, an allocation per request, and more code to achieve the same result. The mutex is the right tool here.

Simple counters and statistics — a mutex (or sync/atomic) is far simpler than a dedicated goroutine:

type Metrics struct {
    mu       sync.Mutex
    requests int64
    errors   int64
}

func (m *Metrics) Inc(field *int64) {
    m.mu.Lock()
    *field++
    m.mu.Unlock()
}
// Or: atomic.AddInt64(&m.requests, 1) — no mutex needed
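
For Go 1.19 and later, the atomic types make the lock-free variant even cleaner. A sketch, with field names mirroring the Metrics struct above:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// AtomicMetrics is a lock-free variant of the Metrics struct above,
// using the typed atomics added in Go 1.19.
type AtomicMetrics struct {
	requests atomic.Int64
	errors   atomic.Int64
}

func (m *AtomicMetrics) IncRequests() { m.requests.Add(1) }
func (m *AtomicMetrics) IncErrors()   { m.errors.Add(1) }

func main() {
	var m AtomicMetrics // zero value is ready to use
	m.IncRequests()
	m.IncRequests()
	m.IncErrors()
	fmt.Println(m.requests.Load(), m.errors.Load()) // 2 1
}
```

The typed atomics also prevent the classic mistake of mixing atomic and plain access to the same field.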

Structs with invariants — if you have a struct where multiple fields must be consistent with each other:

type Position struct {
    mu        sync.Mutex
    netQty    int64
    avgPrice  float64
    lastTrade time.Time
}

func (p *Position) Apply(trade Trade) {
    p.mu.Lock()
    defer p.mu.Unlock()
    // Update all three fields atomically — invariant: avgPrice consistent with netQty
    newQty := p.netQty + trade.Qty
    if newQty == 0 {
        p.avgPrice = 0 // position is flat; avoid dividing by zero below
    } else {
        p.avgPrice = (p.avgPrice*float64(p.netQty) + trade.Price*float64(trade.Qty)) / float64(newQty)
    }
    p.netQty = newQty
    p.lastTrade = trade.Time
}

A channel-based approach would require sending an entire Position struct on each update — more allocation, more copying, same result.

The Decision Heuristic

Are goroutines producing values that other goroutines consume?
  → Channel (transfers ownership)

Are multiple goroutines accessing shared state concurrently?
  → Mutex (protects shared access)

Are you signalling between goroutines (done, start, stop)?
  → Channel (or context.Context for cancellation)

Is the shared state a simple counter or flag?
  → sync/atomic (no mutex needed at all)

Do you have a stateful object with invariants?
  → Mutex (easier to reason about local invariants than message protocols)

Performance Comparison

Channels are not free:

Operation                                             Approximate cost
sync.Mutex lock/unlock (uncontested)                  15–25 ns
sync/atomic.AddInt64                                  5–10 ns
Buffered channel send (space available)               50–100 ns
Unbuffered channel send + receive (goroutine switch)  200–500 ns
Buffered channel send (full, must wait)               goroutine park + reschedule

For a hot path that’s called hundreds of thousands of times per second, a mutex or atomic is often 5–10x cheaper than a channel. That doesn’t mean you should avoid channels; it means you shouldn’t use them where a mutex is simpler and sufficient.
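
The numbers above are ballpark figures and vary by hardware and Go version. You can measure on your own machine with testing.Benchmark, which runs a benchmark function outside the test framework:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"testing"
)

// benchMutex times an uncontended Lock/increment/Unlock.
func benchMutex() testing.BenchmarkResult {
	var mu sync.Mutex
	var n int64
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			mu.Lock()
			n++
			mu.Unlock()
		}
	})
}

// benchAtomic times a single atomic add.
func benchAtomic() testing.BenchmarkResult {
	var n int64
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			atomic.AddInt64(&n, 1)
		}
	})
}

// benchChannel times a buffered-channel receive plus send round trip.
func benchChannel() testing.BenchmarkResult {
	ch := make(chan int64, 1)
	ch <- 0
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			v := <-ch
			ch <- v + 1
		}
	})
}

func main() {
	fmt.Println("mutex:  ", benchMutex())
	fmt.Println("atomic: ", benchAtomic())
	fmt.Println("channel:", benchChannel())
}
```

These are single-goroutine, uncontended measurements; contention changes the picture for all three, so benchmark your actual access pattern before optimising.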

The Anti-Pattern to Avoid

The most common misapplication I see: using a channel-based actor to protect all access to a struct, under the assumption that “channels are the Go way”:

// Anti-pattern: channel as mutex substitute
type Actor struct {
    ops chan func()
}

func (a *Actor) Do(op func()) {
    a.ops <- op
}

func (a *Actor) run() {
    for op := range a.ops {
        op() // serialised operations
    }
}

This works. It’s also more complex, slower, and harder to understand than a sync.Mutex for the same use case. The actor pattern is genuinely useful when the operation involves goroutine lifecycle, I/O, or significant asynchrony. As a mutex substitute for in-memory state, it’s overengineering.

Use channels for what they’re designed for: communication between goroutines. Use mutexes for what they’re designed for: protecting shared state. The goal is clarity and correctness, not channel maximalism.