An order book is a data structure that maintains the set of outstanding buy and sell orders at each price level. For an FX market maker, maintaining a locally consistent view of the book at each LP (liquidity provider) is the foundation everything else sits on. At 100,000 book updates per second across 50 currency pairs, the implementation choices matter.

What an Order Book Contains

EUR/USD Order Book

BIDS (buy orders, descending price)    OFFERS (sell orders, ascending price)
─────────────────────────────────────  ──────────────────────────────────────
Price      | Qty        | Orders       Price      | Qty        | Orders
───────────┼────────────┼──────────   ───────────┼────────────┼──────────
 1.08450   | 5,000,000  |  3           1.08452   | 3,000,000  |  2
 1.08448   | 8,000,000  |  5           1.08455   | 7,000,000  |  4
 1.08445   | 2,000,000  |  2           1.08460   | 4,000,000  |  3
 1.08440   | 12,000,000 |  7           1.08465   | 9,000,000  |  5
 ...                                   ...
                                     ↑
                                 Spread = best offer - best bid
                                 = 1.08452 - 1.08450 = 0.2 pips

Operations needed:

  • Add a new order at a price level (create the level if it doesn’t exist)
  • Delete an order (remove the level if it becomes empty)
  • Modify an order quantity
  • Get best bid / best offer (top of book)
  • Iterate levels in price order (for market data consumers)

Updates arrive as FIX Market Data Incremental Refresh messages (MsgType X) — one operation at a time. The book must be correct after each update.
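A single update can be modeled as a small value type. The tag mapping (279 MDUpdateAction, 269 MDEntryType, 270 MDEntryPx, 271 MDEntrySize, 346 NumberOfOrders) is standard FIX, but the record and its names below are illustrative, not from the production code:

```java
// One incremental book update as carried by a MsgType=X message.
// Tag numbers in comments are the standard FIX fields; the record
// itself is a sketch, not the production wire format.
record BookUpdate(
        int action,    // 279 MDUpdateAction: 0=New, 1=Change, 2=Delete
        char side,     // 269 MDEntryType:   '0'=Bid, '1'=Offer
        long priceInt, // 270 MDEntryPx, pre-encoded as price x 100,000
        long qty,      // 271 MDEntrySize
        int orders) {} // 346 NumberOfOrders
```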

Data Structure Options

TreeMap

TreeMap<BigDecimal, Level> is the natural Java choice. Prices as keys, level data as values; a reversed comparator for bids and natural ordering for offers give price-sorted iteration for free.

TreeMap<BigDecimal, Level> bids   = new TreeMap<>(Comparator.reverseOrder());
TreeMap<BigDecimal, Level> offers = new TreeMap<>();

BigDecimal bestBid   = bids.firstKey();   // O(log n): walks to the leftmost node
BigDecimal bestOffer = offers.firstKey(); // TreeMap does not cache first/last keys
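Applying the book operations against this structure looks roughly like the following; `TreeMapSide` and the minimal `Level` holder are illustrative names, not the production classes:

```java
import java.math.BigDecimal;
import java.util.Comparator;
import java.util.TreeMap;

// One side of a TreeMap-backed book: add creates the level on demand,
// delete removes it when it empties, best() returns the top of book.
final class TreeMapSide {
    static final class Level { long qty; int orders; }

    final TreeMap<BigDecimal, Level> levels;

    TreeMapSide(boolean isBid) {
        // Bids iterate best-first via a reversed comparator; offers use natural order.
        levels = new TreeMap<>(isBid ? Comparator.reverseOrder()
                                     : Comparator.naturalOrder());
    }

    void add(BigDecimal price, long qty, int orders) {
        Level l = levels.computeIfAbsent(price, p -> new Level()); // creates the level if absent
        l.qty = qty;
        l.orders = orders;
    }

    void delete(BigDecimal price) {
        levels.remove(price); // the level disappears when it empties
    }

    BigDecimal best() {
        return levels.isEmpty() ? null : levels.firstKey(); // leftmost-node walk, O(log n)
    }
}
```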

Performance profile:

Operation       | TreeMap complexity | Actual cost
────────────────┼────────────────────┼─────────────────────────────────────
Add level       | O(log n)           | ~200ns (tree rebalance, allocation)
Delete level    | O(log n)           | ~180ns (tree rebalance)
Update quantity | O(log n) + O(1)    | ~120ns (lookup + field write)
Best bid/offer  | O(log n)           | ~20ns (leftmost-node walk)
Iterate top N   | O(N)               | ~50ns/level

The problems:

  • Allocation: every new price level allocates a TreeMap.Entry node on the heap. At 50k updates/second with frequent level creation/deletion, this is meaningful GC pressure.
  • Cache unfriendliness: tree nodes are scattered across the heap. Traversing even 5 levels for a depth-5 snapshot requires pointer-chasing through non-adjacent memory.
  • BigDecimal for prices adds overhead — comparison and hashing are more expensive than primitive comparison.

Price-as-Integer Encoding

Most FX pairs quote to at most 5 decimal places (the 5th decimal is a tenth of a pip). Storing the price as a long integer (price × 100,000) eliminates BigDecimal overhead entirely.

// 1.08450 → 108450L  (price × 100,000, exact integer arithmetic)
long priceAsLong = Math.round(rawPrice * 100_000);
TreeMap<Long, Level> bids = new TreeMap<>(Comparator.reverseOrder());

Long key comparisons are ~5x faster than BigDecimal. No risk of floating-point representation errors. The book stays consistent because we’re in integer space.
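A minimal codec sketch for this encoding — the `PriceCodec` name and `SCALE` constant are assumptions for illustration, not from the original code:

```java
// Round-tripping prices through the long encoding.
// A scale of 100,000 covers 5-decimal quotes; Math.round absorbs the
// double representation error (1.08450 is not exactly representable in binary).
final class PriceCodec {
    static final long SCALE = 100_000L;

    static long encode(double rawPrice) {
        return Math.round(rawPrice * SCALE);
    }

    static double decode(long priceInt) {
        return priceInt / (double) SCALE; // for display only; keep book arithmetic in longs
    }
}
```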

Array-Based Book (Fixed Price Range)

For a known price range, an array indexed by price-as-integer eliminates the tree entirely:

// EUR/USD typically stays within ~500 pips of the current price
// 500 pips = 5,000 ticks at 0.1-pip precision; size the array generously
static final int LEVELS = 100_000; // +/- 50,000 ticks from mid
static final long MID_PRICE = 108450L; // updated periodically

Level[] bids   = new Level[LEVELS];
Level[] offers = new Level[LEVELS];

// Index = price offset from mid
int index(long price) { return (int)(price - MID_PRICE) + LEVELS/2; }

bids[index(108450L)] = new Level(5_000_000, 3);  // O(1) insert

Performance profile:

Operation        | Array complexity         | Actual cost
─────────────────┼──────────────────────────┼────────────────────────────────
Add/update level | O(1)                     | ~15ns
Delete level     | O(1)                     | ~10ns
Best bid/offer   | O(1) with cached pointer | ~5ns
Iterate top N    | O(N)                     | ~10ns/level (sequential memory)

The sequential memory access pattern means the CPU’s prefetcher does useful work. The top 5 bid levels are adjacent in memory; iterating them fits in 2–3 cache lines.

The constraints: the array must be large enough to cover the expected price range, and the mid-price anchor must be updated when the market moves significantly (not hard — a 20-pip move triggers re-anchoring). Pre-allocate the Level objects to avoid per-update allocation.
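Re-anchoring can be a single shift of the array. A sketch, assuming a hypothetical `AnchoredSide` holder; levels that fall off the edge are dropped, which is acceptable because they are far from the market:

```java
// Re-anchoring the fixed-range array when the mid-price drifts.
// Live levels are shifted so the new mid maps back to the array centre.
final class AnchoredSide {
    final Object[] levels;  // Level slots; Object keeps the sketch standalone
    long midPrice;          // anchor, in integer price units (price x 100,000)

    AnchoredSide(int size, long midPrice) {
        this.levels = new Object[size];
        this.midPrice = midPrice;
    }

    int index(long price) { return (int)(price - midPrice) + levels.length / 2; }

    void reanchor(long newMid) {
        int shift = (int)(newMid - midPrice); // ticks to move; positive = market moved up
        if (shift == 0) return;
        Object[] tmp = new Object[levels.length];
        for (int i = 0; i < levels.length; i++) {
            int src = i + shift;              // old index that maps to new index i
            if (src >= 0 && src < levels.length) tmp[i] = levels[src];
        }
        System.arraycopy(tmp, 0, levels, 0, levels.length);
        midPrice = newMid;
    }
}
```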

The Update Processing Loop

A realistic update loop for incremental FIX market data:

enum UpdateAction { NEW, CHANGE, DELETE }

void applyUpdate(UpdateAction action, char side, long priceInt, long qty, int orders) {
    Level[] book = (side == '0') ? bids : offers; // FIX MDEntryType: 0=bid, 1=offer
    int idx = index(priceInt);

    switch (action) {
        case NEW:
            book[idx] = levelPool.acquire(); // from pool — no allocation
            book[idx].set(qty, orders);
            updateBest(side);
            break;
        case CHANGE:
            if (book[idx] != null) {
                book[idx].set(qty, orders);
            }
            break;
        case DELETE:
            if (book[idx] != null) {
                levelPool.release(book[idx]);
                book[idx] = null;
                updateBest(side);
            }
            break;
    }
}

levelPool is a pre-allocated object pool. Level objects are recycled rather than GC’d. At 50k updates/second this keeps allocation rate near zero on the hot path.
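A pool like this can be as simple as a pre-filled deque. The sketch below assumes single-threaded access (one book per feed thread); `LevelPool` is illustrative, not the production pool:

```java
import java.util.ArrayDeque;

// Pre-allocated pool of Level objects: acquire/release recycle instances
// instead of allocating on the hot path.
final class LevelPool {
    static final class Level {
        long qty; int orders;
        void set(long qty, int orders) { this.qty = qty; this.orders = orders; }
    }

    private final ArrayDeque<Level> free = new ArrayDeque<>();

    LevelPool(int capacity) {
        for (int i = 0; i < capacity; i++) free.push(new Level()); // allocate up front
    }

    Level acquire() {
        Level l = free.poll();
        return (l != null) ? l : new Level(); // fall back to allocation if drained
    }

    void release(Level l) {
        l.set(0, 0); // scrub before recycling
        free.push(l);
    }
}
```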

updateBest only runs on NEW and DELETE actions — the best price only changes if a new top-of-book level appears or the current one disappears. On CHANGE (quantity update at existing level), the best price doesn’t change.
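One way `updateBest` can work on the bid side is to cache the best index and rescan only when the top of book is disturbed. A sketch, under the assumption (consistent with the index function above) that a higher array index means a higher price:

```java
// Cached-best tracking for the bid side of an array-based book.
// onNew promotes the best index if a better bid appears; onDelete scans
// downward only when the current best level itself was removed.
final class BestTracker {
    final Object[] bids; // non-null slot = live level; higher index = higher price
    int bestIdx = -1;    // -1 = empty side

    BestTracker(Object[] bids) { this.bids = bids; }

    void onNew(int idx) {
        if (idx > bestIdx) bestIdx = idx; // a better (higher) bid appeared
    }

    void onDelete(int idx) {
        if (idx != bestIdx) return;       // deleting below best: top unchanged
        while (bestIdx >= 0 && bids[bestIdx] == null) bestIdx--; // next live level
    }
}
```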

Snapshot vs Incremental State

Two book subscription models exist in FIX:

  • Snapshot (MsgType W): full book state, all levels. Use to initialise or recover.
  • Incremental (MsgType X): delta updates. Use for ongoing maintenance.

A common failure mode: applying an incremental update to a stale snapshot. If you missed some updates while reconnecting, your book is wrong. The correct recovery sequence:

1. Subscribe to incremental updates → buffer them (don't apply yet)
2. Request snapshot → apply to fresh empty book
3. Apply buffered incrementals that arrived after the snapshot's sequence
4. Switch to live incremental processing

Miss any of these steps and you’ll have a book that shows stale levels that no longer exist, or omits levels that do. Trading from a stale book is how off-market fills happen.
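The four steps can be sketched with sequence numbers doing the gap detection. `Recovery` and its fields are illustrative; the assumption is that each incremental carries the sequence number it applies to, and the snapshot carries the last sequence it incorporates:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Snapshot + incremental recovery: buffer deltas while recovering,
// apply the snapshot, replay only post-snapshot deltas, then go live.
final class Recovery {
    record Update(long seq, String payload) {}

    final Queue<Update> buffer = new ArrayDeque<>();
    long lastApplied = -1;
    boolean recovering = true;

    void onIncremental(Update u) {
        if (recovering) { buffer.add(u); return; } // step 1: buffer, don't apply
        apply(u);
    }

    void onSnapshot(long snapshotSeq) {
        lastApplied = snapshotSeq;                 // step 2: fresh book at this seq
        Update u;
        while ((u = buffer.poll()) != null) {      // step 3: replay post-snapshot deltas
            if (u.seq() > snapshotSeq) apply(u);   // older ones are already in the snapshot
        }
        recovering = false;                        // step 4: switch to live processing
    }

    private void apply(Update u) {
        if (u.seq() != lastApplied + 1) throw new IllegalStateException("gap at " + u.seq());
        lastApplied = u.seq();
        // ... mutate the book here ...
    }
}
```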

What We Actually Ran

In production: the array-based book for the 8 high-volume pairs (EUR/USD, GBP/USD, USD/JPY, USD/CHF and the other majors), TreeMap-with-Long-keys for the 40+ less active pairs where update rates were low enough that allocation overhead didn’t matter.

The hybrid approach kept the code surface manageable — the TreeMap implementation was the reliable fallback and handled the long tail of instruments. The array implementation handled the instruments where performance mattered. Both exposed the same interface; the pricing engine didn’t know which it was talking to.