Every trade needs to be journaled. You need a durable, ordered record of every order, fill, and state change — for risk, reconciliation, and regulatory purposes. The naive solution is a database write on the hot path. That’s a roundtrip to an external process, a network call, and often a disk fsync — hundreds of microseconds to milliseconds of latency per event.
Chronicle Queue gave us persistent journaling at sub-microsecond overhead. Here’s how.
Memory-Mapped Files: The Key Idea
A memory-mapped file is a region of virtual memory backed by a file on disk. Writes to that region land in the OS page cache and are flushed to disk asynchronously; reads are served straight from the cache. From the application’s perspective, it’s just memory — no write() syscall, no explicit buffering.
The OS handles durability asynchronously. You can force a sync with msync(), but for an append-only journal (where you can replay from the beginning on recovery), you often don’t need to — the OS will flush eventually, and the data is in the page cache immediately.
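A minimal JDK-only sketch of the idea, using NIO’s `FileChannel.map()` (the file name is hypothetical; `MappedByteBuffer.force()` is the JVM’s analogue of `msync()`):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapJournal {
    static long writeAndRead() throws IOException {
        Path file = Files.createTempFile("journal", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map 4 KiB of the file into virtual memory (grows the file to fit).
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putLong(0, 42L);  // a plain memory store -- no write() syscall here
            buf.force();          // optional: flush dirty pages, the analogue of msync()
            return buf.getLong(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeAndRead());
    }
}
```

Note that the `putLong()` is visible to readers of the same mapping immediately, whether or not `force()` is ever called — the durability decision is separate from the write itself.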
Chronicle Queue is built on this: an append-only log backed by memory-mapped files, with a binary wire format that reads/writes directly into the mapped memory. No intermediate buffers. No object allocation for the write path. No GC pressure.
Wire Format: No Serialisation
Standard Java serialisation is GC-hostile — it allocates output streams, byte arrays, object graphs. For a 200-byte trade event at 50,000 events/second, that’s meaningful heap pressure.
Chronicle’s wire format writes directly into the mapped memory using Unsafe.putLong(), putInt(), etc. No intermediate allocation. The write of a trade event is ~50 nanoseconds. Not microseconds — nanoseconds.
Reading is the same: the mapped memory is read directly into fields. No deserialisation in the traditional sense.
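The shape of this is easy to see with plain NIO. The field layout below is illustrative — it is not Chronicle’s actual wire format — but the mechanic is the same: fixed offsets, direct stores into mapped memory, fields read straight back out with no deserialisation step.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative fixed-offset record layout (NOT Chronicle's wire format):
//   bytes  0-7   orderId    (long)
//   bytes  8-15  priceTicks (long)
//   bytes 16-19  quantity   (int)
public class TradeRecord {
    static final int SIZE = 20;

    static void write(MappedByteBuffer buf, int slot,
                      long orderId, long priceTicks, int qty) {
        int base = slot * SIZE;
        buf.putLong(base, orderId);        // direct store into mapped memory
        buf.putLong(base + 8, priceTicks); // no intermediate buffer
        buf.putInt(base + 16, qty);        // no object allocation
    }

    static long readOrderId(MappedByteBuffer buf, int slot) {
        return buf.getLong(slot * SIZE);   // field read straight out of the mapping
    }

    static long roundTrip() throws IOException {
        Path file = Files.createTempFile("trades", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            write(buf, 0, 1001L, 123_450L, 500);
            return readOrderId(buf, 0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());
    }
}
```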
The Persistence Guarantee
This is the subtle part. Writes go to the page cache immediately, but “immediately in the page cache” is not the same as “on disk.” If the process crashes, the page cache survives (it’s in kernel memory). If the machine crashes, you lose whatever the OS hadn’t flushed.
For a trade journal, the relevant question is: what’s your recovery scenario? If you can tolerate replaying from another source (matching engine, upstream FIX session), then page cache durability is sufficient. If you need strict durability, you need msync() or to flush explicitly — and then you’re paying disk latency.
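If you do need stronger durability than the page cache, one middle ground is to amortise the flush: force dirty pages every N appends rather than on every event. This is a sketch of the policy, not Chronicle’s API — the interval and layout are hypothetical.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

// Sketch: amortise the msync() cost by forcing dirty pages every N appends.
// N = 1 is strict durability at disk latency per event; larger N trades a
// bounded replay window on machine crash for a lower per-event cost.
public class BatchedSync {
    static final int SYNC_EVERY = 64;   // hypothetical flush interval
    final MappedByteBuffer buf;
    private int offset;
    private int sinceSync;

    BatchedSync(MappedByteBuffer buf) { this.buf = buf; }

    void append(long event) {
        buf.putLong(offset, event);     // page-cache write: fast
        offset += Long.BYTES;
        if (++sinceSync == SYNC_EVERY) {
            buf.force();                // disk flush: slow, paid once per batch
            sinceSync = 0;
        }
    }

    static long demo() throws IOException {
        var file = Files.createTempFile("batched", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            BatchedSync j = new BatchedSync(
                ch.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20));
            for (long i = 0; i < 1000; i++) j.append(i);
            return j.buf.getLong(999 * Long.BYTES);  // last event written
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```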
We used Chronicle for intra-day journaling and reconciled against the exchange’s trade reports at end-of-day. The latency budget for EOD reconciliation is measured in minutes. The latency budget for intra-day journaling is measured in microseconds. Chronicle served the latter; the database served the former.
Chronicle Map: Low-Latency Key-Value
Chronicle Map is a different beast — a persistent hash map backed by a memory-mapped file. Useful for reference data (instrument definitions, counterparty details) that you want to load fast on startup and access without GC overhead at runtime.
For our use case: instrument reference data. ~50,000 instruments with various parameters. With Chronicle Map, startup was sub-second (just mapping the file, not loading into heap). Lookups were ~100ns. No heap allocation, no GC involvement.
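To make the startup and lookup pattern concrete, here is a deliberately simplified stand-in: fixed-size records in a mapped file, addressed directly by instrument id. A real Chronicle Map is a full off-heap hash map with a richer API; this only shows the property that matters — “startup” is just mapping the file, and a lookup is a plain memory read with no heap allocation. The record layout and field names are invented for illustration.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

// Simplified stand-in for an off-heap reference-data store. Record layout
// (illustrative): bytes 0-7 tickSizeNanos (long), bytes 8-15 lotSize (long).
public class InstrumentStore {
    static final int RECORD = 16;
    private final MappedByteBuffer buf;

    InstrumentStore(MappedByteBuffer buf) { this.buf = buf; }

    void put(int instrumentId, long tickSizeNanos, long lotSize) {
        int base = instrumentId * RECORD;
        buf.putLong(base, tickSizeNanos);
        buf.putLong(base + 8, lotSize);
    }

    long lotSize(int instrumentId) {
        // A plain memory read: no heap allocation, no GC involvement.
        return buf.getLong(instrumentId * RECORD + 8);
    }

    static long demo() throws IOException {
        var file = Files.createTempFile("instruments", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // "Startup" is mapping the file, not loading 50,000 objects onto the heap.
            InstrumentStore store = new InstrumentStore(
                ch.map(FileChannel.MapMode.READ_WRITE, 0, 50_000L * RECORD));
            store.put(7, 1_000_000L, 100L);
            return store.lotSize(7);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```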
The trade-off: Chronicle Map’s off-heap memory isn’t managed by the GC, which means it can leak if you’re not careful. ChronicleMap.close() must be called — and it should be part of your shutdown hook, not something you hope the finaliser handles.
What This Pattern Doesn’t Replace
Chronicle is excellent for:
- Append-only event journals
- Read-heavy reference data with infrequent updates
- IPC (inter-process communication on the same machine via a shared memory-mapped file)
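The IPC case rests on a property worth seeing directly: two independent mappings of the same file share the same physical pages, so a store through one mapping is visible through the other. This sketch keeps both mappings in one process for brevity — in practice the “producer” and “consumer” would be separate processes mapping the same path.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Two mappings of one file share physical pages: the basis of same-machine
// IPC via a shared memory-mapped file.
public class SharedMapping {
    static long demo() throws IOException {
        Path file = Files.createTempFile("ipc", ".dat");
        try (FileChannel producerCh = FileChannel.open(file,
                 StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileChannel consumerCh = FileChannel.open(file,
                 StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer producer =
                producerCh.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer consumer =
                consumerCh.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            producer.putLong(0, 77L);    // "producer" writes through its mapping
            return consumer.getLong(0);  // "consumer" sees it through a separate mapping
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```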
It’s not a general-purpose database. Complex queries, transactions, arbitrary updates — use a proper database. Chronicle is a tool for the specific problem of persistent storage on the latency-critical path.