The trade blotter is the dashboard every trader stares at during the day: live positions, recent fills, P&L, risk utilisation. It’s the interface between the trading system and the humans who run it.

The blotter has an interesting engineering property: it’s not in the critical path (orders execute without waiting for the blotter to update), but it has to be consistently correct and fast-updating, because traders make decisions based on what it shows. A blotter that shows stale positions leads to over-trading; one that misses fills leads to missed hedges.

The Consistency Problem

The blotter aggregates data from multiple sources:

Price feed   → current mid-price (for P&L calculation)
Order service → recent executions (fills, rejections)
Risk service  → current positions and limits
Blotter DB   → persistent record of historical trades

Each source updates at different rates. Price updates arrive thousands of times per second. Order fills arrive tens to hundreds of times per day. Risk positions update after each fill. The blotter DB is authoritative but lags by 50–200ms.

The naive implementation: poll each source on a timer, render the current state, send to the UI. The problem: between polls, state changes. The blotter shows a position that’s already been updated. Worse, it might show a fill without the corresponding position change if the sources update out of order.
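The naive loop can be sketched as follows; the three source interfaces and `pollOnce` are illustrative assumptions, not the real service APIs:

```java
import java.util.List;

// Sketch of the rejected polling approach. The three reads happen at different
// instants, so one render can pair a fresh fill with a stale position.
// All interfaces here are illustrative, not the real service APIs.
public class PollingBlotter {
    interface PriceFeed    { double mid(String symbol); }
    interface OrderService { List<String> recentFills(); }
    interface RiskService  { double position(String symbol); }

    private final PriceFeed prices;
    private final OrderService orders;
    private final RiskService risk;

    PollingBlotter(PriceFeed p, OrderService o, RiskService r) {
        prices = p; orders = o; risk = r;
    }

    // Called on a timer (e.g. every 250ms). Nothing ties the three reads to
    // one moment: the risk service may not yet reflect the last fill shown.
    String pollOnce(String symbol) {
        List<String> fills = orders.recentFills();
        double pos = risk.position(symbol);
        double mid = prices.mid(symbol);
        return "fills=" + fills + " pos=" + pos + " mid=" + mid;
    }
}
```

With stubbed sources simulating the update window, `pollOnce` happily renders a fill alongside a position that predates it: exactly the anomaly described above.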

What “Consistent” Means for a Blotter

For the blotter, we defined consistency as:

  • After a fill, the position shown must include that fill
  • P&L must be calculated using the position after including all fills shown
  • A fill should never appear in the blotter before the corresponding position change

This is a read-your-writes + causal consistency requirement applied to a view layer. The blotter must show a state that could have existed — no temporal anomalies where the position is stale relative to the fills shown.

The Event-Driven Approach

Rather than polling each source independently, the blotter subscribes to a unified event stream:

import java.time.Instant;
import java.util.Map;

public enum EventType {
    FILL_RECEIVED,
    POSITION_UPDATED,   // always follows FILL_RECEIVED
    PRICE_UPDATED,
    RISK_LIMIT_CHANGED
}

public record BlotterEvent(
    EventType type,
    String correlationId,  // links FILL to its POSITION_UPDATED
    Instant timestamp,
    Map<String, Object> payload
) {}

The order service publishes FILL_RECEIVED(correlationId=X) then POSITION_UPDATED(correlationId=X) as an atomic pair to the event stream. The blotter subscribes and applies updates in event order.
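The producing side can be sketched as follows. This is hypothetical: the real order service's publish API may differ, and the event types are restated from above so the sketch compiles on its own. The two events share one correlation ID and are published in order:

```java
import java.time.Instant;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;

// Producer-side sketch (hypothetical; the real order service's API may differ).
// Both events carry the same correlation ID and are published back to back.
public class FillPublisher {
    enum EventType { FILL_RECEIVED, POSITION_UPDATED, PRICE_UPDATED, RISK_LIMIT_CHANGED }
    record BlotterEvent(EventType type, String correlationId,
                        Instant timestamp, Map<String, Object> payload) {}

    private final Consumer<BlotterEvent> bus;  // assumed ordered, single-writer

    FillPublisher(Consumer<BlotterEvent> bus) { this.bus = bus; }

    void onFill(String symbol, double qty, double price, double newPosition) {
        String id = UUID.randomUUID().toString();  // links the pair
        bus.accept(new BlotterEvent(EventType.FILL_RECEIVED, id, Instant.now(),
                Map.of("symbol", symbol, "qty", qty, "price", price)));
        bus.accept(new BlotterEvent(EventType.POSITION_UPDATED, id, Instant.now(),
                Map.of("symbol", symbol, "position", newPosition)));
    }
}
```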

Causal consistency is enforced by the correlation ID: the blotter only shows a fill after it has received the matching POSITION_UPDATED. Fills without a matching position update are buffered:

private final Map<String, FillEvent> pendingFills = new LinkedHashMap<>();

void onEvent(BlotterEvent event) {
    switch (event.type()) {
        case FILL_RECEIVED -> pendingFills.put(event.correlationId(), asFill(event));
        case POSITION_UPDATED -> {
            FillEvent fill = pendingFills.remove(event.correlationId());
            if (fill != null) {
                applyFill(fill);
                applyPosition(asPosition(event));
                notifyUI();
            }
            // fill == null means the FILL_RECEIVED was never seen: a gap in
            // the event stream worth logging, not a reordering.
        }
        case PRICE_UPDATED -> {
            updatePrices(asPrice(event));
            recalculatePnl();
            notifyUI();
        }
        case RISK_LIMIT_CHANGED -> {
            applyRiskLimit(event);
            notifyUI();
        }
    }
}

The buffer holds each fill during the short window between FILL_RECEIVED and its POSITION_UPDATED. In practice the pair arrives within microseconds, and because the consumer's event loop processes events in FIFO order, a fill can never become visible before its corresponding position change.
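A standalone harness makes the guarantee easy to check. The apply methods are stubbed out to a visibility log, which is an illustration rather than the real UI path, and the event types are restated so the sketch compiles on its own:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal harness for the buffering logic: a fill is recorded as "visible"
// only once its matching POSITION_UPDATED has arrived.
public class FillBufferDemo {
    enum EventType { FILL_RECEIVED, POSITION_UPDATED, PRICE_UPDATED, RISK_LIMIT_CHANGED }
    record BlotterEvent(EventType type, String correlationId,
                        Instant timestamp, Map<String, Object> payload) {}

    private final Map<String, BlotterEvent> pendingFills = new LinkedHashMap<>();
    final List<String> visible = new ArrayList<>();  // stand-in for the real UI

    void onEvent(BlotterEvent event) {
        switch (event.type()) {
            case FILL_RECEIVED -> pendingFills.put(event.correlationId(), event);
            case POSITION_UPDATED -> {
                BlotterEvent fill = pendingFills.remove(event.correlationId());
                if (fill != null) {
                    visible.add("fill:" + fill.correlationId());
                    visible.add("pos:" + event.correlationId());
                }
            }
            default -> { }  // prices and risk limits are irrelevant here
        }
    }
}
```

Feeding a FILL_RECEIVED leaves the log empty; only the matching POSITION_UPDATED makes both entries appear, in causal order.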

Throttled UI Updates

Price updates arrive thousands of times per second. Sending each one to the UI would saturate the WebSocket connection and overwhelm the browser’s rendering.

The solution: collect updates in a dirty-state map and flush on a timer:

private final ConcurrentHashMap<String, Double> dirtyPrices = new ConcurrentHashMap<>();
private final ScheduledExecutorService flusher =
    Executors.newSingleThreadScheduledExecutor();

// Called from event thread (high frequency):
void onPriceUpdate(String symbol, double mid) {
    dirtyPrices.put(symbol, mid);  // just marks as dirty, no UI push
}

// Called on timer (50ms = 20 fps):
void flush() {
    if (dirtyPrices.isEmpty()) return;

    Map<String, Double> snapshot = new HashMap<>(dirtyPrices);
    dirtyPrices.clear();

    // Recalculate P&L using snapshot prices, push to UI:
    BlotterSnapshot update = buildSnapshot(snapshot);
    wsSession.sendText(json.writeValueAsString(update));
}

The dirtyPrices map naturally coalesces rapid updates — if EUR/USD updates 50 times in 50ms, the UI receives one update reflecting the latest price. The trader sees a smooth 20fps update rate without the UI thrashing.
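The coalescing is worth seeing in isolation; a minimal sketch with hypothetical prices:

```java
import java.util.concurrent.ConcurrentHashMap;

// 50 rapid updates to one symbol collapse into a single map entry holding the
// latest mid; the flush timer then sends exactly one update for that symbol.
public class CoalesceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Double> dirtyPrices = new ConcurrentHashMap<>();
        for (int i = 1; i <= 50; i++) {
            dirtyPrices.put("EUR/USD", 1.0800 + i * 0.0001);  // last write wins
        }
        System.out.println(dirtyPrices.size());           // prints: 1
        System.out.println(dirtyPrices.get("EUR/USD"));   // the latest mid only
    }
}
```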

Handling Reconnection and State Sync

WebSocket connections drop. Mobile traders roam between networks. After reconnection, the blotter must send a full consistent snapshot before resuming incremental updates — otherwise the client keeps rendering its pre-disconnect state and silently misses every fill and position change that happened while it was offline.

void onClientConnect(WebSocketSession session) {
    // Build the snapshot and register for buffering atomically with the
    // event processor, which holds the same lock while applying events:
    BlotterSnapshot snapshot;
    synchronized (stateLock) {
        snapshot = buildFullSnapshot(positions, fills, prices);
        pendingSessions.add(session);  // events from here on are buffered for us
    }

    session.sendText(json.writeValueAsString(SnapshotMessage.of(snapshot)));

    synchronized (stateLock) {
        // Flush buffered events, then switch to normal streaming:
        pendingSessionEvents.getOrDefault(session, List.of())
            .forEach(e -> session.sendText(json.writeValueAsString(e)));
        pendingSessionEvents.remove(session);  // don't leak the buffer
        pendingSessions.remove(session);
        activeSessions.add(session);
    }
}

The snapshot + buffered-events pattern ensures no events are lost during the snapshot build. Events that arrive while the snapshot is being built are buffered and sent immediately after.
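The event-processor side, which the snippet above leaves implicit, could route events like this. The field names mirror the snippet, but the `Session` interface and the per-session buffering scheme are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of event routing during snapshot builds: sessions still waiting on
// their snapshot get events buffered; active sessions get them immediately.
public class EventRouter {
    interface Session { void sendText(String text); }

    private final Object stateLock = new Object();
    private final Set<Session> pendingSessions = new HashSet<>();
    private final Set<Session> activeSessions = new HashSet<>();
    private final Map<Session, List<String>> pendingSessionEvents = new HashMap<>();

    void markPending(Session s) {
        synchronized (stateLock) { pendingSessions.add(s); }
    }

    /** Called by the event processor after applying each event to blotter state. */
    void broadcast(String eventJson) {
        synchronized (stateLock) {
            for (Session s : pendingSessions)
                pendingSessionEvents.computeIfAbsent(s, k -> new ArrayList<>()).add(eventJson);
            for (Session s : activeSessions)
                s.sendText(eventJson);
        }
    }

    /** After the snapshot is sent: flush the buffer, switch to live streaming. */
    void activate(Session s) {
        synchronized (stateLock) {
            List<String> buffered = pendingSessionEvents.remove(s);
            if (buffered != null) buffered.forEach(s::sendText);
            pendingSessions.remove(s);
            activeSessions.add(s);
        }
    }
}
```

Because both paths run under `stateLock`, an event is either buffered for a pending session or streamed to an active one; it can never fall in the gap between the two.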

The Load Test That Found the Bug

Before deploying, we ran a load test simulating peak market conditions: 5,000 price updates/second, 10 fills/second, 3 connected blotter clients.

The test found a race: under high load, the flush() timer occasionally fired while an event was being applied, resulting in a snapshot that included a price update but not the corresponding P&L recalculation. The UI showed updated prices with stale P&L for one frame.

Fix: the flush timer acquires the same lock used by the event processor:

void flush() {
    BlotterSnapshot update;
    synchronized (stateLock) {
        if (dirtyPrices.isEmpty()) return;
        Map<String, Double> snapshot = new HashMap<>(dirtyPrices);
        dirtyPrices.clear();
        recalculatePnl(snapshot);         // update P&L while holding the lock
        update = buildSnapshot(snapshot); // prices and P&L from the same instant
    }
    wsSession.sendText(json.writeValueAsString(update));  // send outside the lock
}

Holding the lock during P&L recalculation ensures prices and P&L are always in sync in the snapshot. The lock is held for microseconds (hash map iteration over ~50 currency pairs), so contention with the event thread is negligible.


A blotter isn’t complex systems engineering — no microsecond latency requirements, no exotic data structures. But it illustrates a general principle: UI components that aggregate from multiple data sources need explicit consistency reasoning. “Poll everything on a timer” works until you have to explain to a trader why their position is showing the wrong P&L.