Building a Browser-Based Time-Series Database: High-Performance Video Storage with OPFS
Quick Start
- Source Code: Star the project on GitHub (search for screen-recorder-studio/screen-recorder)
- Live Demo: Install Screen Recorder Studio from the Chrome Web Store (search by name)
Abstract

Recording gigabytes of high‑definition video directly in the browser quickly exposes the limits of traditional storage approaches such as in‑memory `Blob` accumulation or `IndexedDB`. This article presents the storage engine behind Screen Recorder Studio: a high‑throughput, crash‑resilient local system built on the Origin Private File System (OPFS) and `SyncAccessHandle`, inspired by log‑structured file systems and time‑series databases.
1. Hitting the Memory Wall
Most browser‑based recording implementations eventually converge on the same pattern: collect encoded chunks, push them into an array, and assemble a Blob at the end. This works for short clips, but collapses under sustained recording workloads.
Two issues surface quickly:
- Out‑of‑memory crashes: a one‑hour 4K/60 FPS recording easily produces 4–8 GB of encoded data. Modern browsers enforce strict per‑tab memory limits, and once the heap approaches that ceiling, the result is an abrupt tab crash.
- Asynchronous I/O overhead: offloading data to `IndexedDB` avoids heap growth, but introduces a different bottleneck. At 60 Hz write frequency, the cost of promise scheduling, transaction lifecycles, and event‑loop contention begins to interfere with the encoder itself, leading to dropped frames and jitter.
Long‑running recording requires a different primitive: something that writes directly to disk, bypasses the main thread, and persists data incrementally rather than at teardown.
Design Goals
From a systems perspective, persistent video recording has four competing requirements:
- Sustained throughput at 60 FPS or higher
- Seekability for later editing and scrubbing
- Crash tolerance in the face of browser or OS failure
- Exportability into standard containers such as MP4 or WebM
2. OPFS and Synchronous I/O
Chrome introduced FileSystemSyncAccessHandle as part of the Origin Private File System (OPFS). Unlike traditional web file APIs, it enables blocking, synchronous file operations inside a Dedicated Worker.
This constraint is intentional: blocking I/O is forbidden on the main thread, but acceptable—and often desirable—in an isolated worker.
Why Synchronous I/O Matters
Compared to asynchronous writers, SyncAccessHandle behaves much closer to a systems‑level file descriptor:
- No event‑loop scheduling: writes execute immediately on the worker thread
- Explicit buffer control: data is written from `TypedArray` or `DataView` without intermediate copying
- Predictable latency: critical for real‑time pipelines
For Screen Recorder Studio, this maps cleanly to the architecture: all disk I/O lives in a dedicated opfs‑writer‑worker, completely decoupled from UI rendering and encoding.
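To make the shape of that worker concrete, here is a minimal sketch of how a writer worker might open a synchronous handle and append encoded chunks. The message protocol, file names, and variable names are assumptions for illustration, not the shipped implementation:

```js
// opfs-writer-worker.js — minimal sketch, not the production worker
let accessHandle;
let currentOffset = 0;

self.onmessage = async (e) => {
  if (e.data.type === 'open') {
    const root = await navigator.storage.getDirectory();
    const dir = await root.getDirectoryHandle(e.data.sessionId, { create: true });
    const file = await dir.getFileHandle('data.bin', { create: true });
    // Synchronous handles are only available inside a dedicated worker
    accessHandle = await file.createSyncAccessHandle();
  } else if (e.data.type === 'chunk') {
    // Blocking write: no promise, no event-loop round trip
    currentOffset += accessHandle.write(e.data.buffer, { at: currentOffset });
  } else if (e.data.type === 'close') {
    accessHandle.flush();
    accessHandle.close();
  }
};
```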
3. A Log‑Structured Storage Model
Writing a valid MP4 container incrementally is fragile. The format requires global metadata updates and finalization steps that do not tolerate interruption.
Instead, the recording pipeline treats video data as an append‑only log, deferring containerization until export time.
Each recording session maps to a directory containing three files.
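Laid out on disk, a session directory might look like this (the sessions/ prefix and naming are illustrative):

```text
sessions/<session-id>/
├── data.bin      # append-only payload log (Section 3.1)
├── index.jsonl   # one index entry per chunk (Section 3.2)
└── meta.json     # codec parameters and completed flag (Section 3.3)
```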
3.1 data.bin — Payload Log
data.bin functions as a write‑ahead log.
- Append‑only: no seeks, no overwrites
- Content: raw encoded video chunks (VP9 / H.264)
- Benefit: maximal write throughput and minimal failure surface
3.2 index.jsonl — Sparse Time Index
The index records where each frame lives inside data.bin:
{"offset":0,"size":45023,"timestamp":0,"type":"key"}
{"offset":45023,"size":1204,"timestamp":33333,"type":"delta"}
- `offset` / `size`: byte range inside `data.bin`
- `timestamp`: microsecond precision for A/V alignment
- `type`: keyframe vs delta frame, required for seeking
The index is deliberately human‑readable during early iterations, favoring debuggability and recovery over optimal density.
3.3 meta.json — Session Metadata
A small control file storing codec parameters, resolution, frame rate, and a completed flag used during recovery.
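As an example of what such a control file could contain (field names here are illustrative; only the `completed` flag is load‑bearing for recovery):

```json
{
  "codec": "vp09.00.10.08",
  "width": 2560,
  "height": 1440,
  "frameRate": 60,
  "completed": false
}
```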
4. Write Path Implementation
The writer worker follows a simple append protocol with asymmetric durability guarantees.
```js
// Write binary payload immediately
accessHandle.write(u8Buffer, { at: currentOffset });

// Buffer index entries in memory
pendingIndexLines.push(metaLine);

// Periodic index flush
if (chunksWritten % 100 === 0) {
  flushIndexToFile();
}
```
- Video data is written synchronously and immediately
- Index data is buffered and flushed in batches to reduce small‑write overhead
If the process terminates unexpectedly, losing a few seconds of index entries is acceptable; losing raw video data is not.
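A sketch of what that batched flush could look like, assuming the worker keeps a second synchronous handle (`indexHandle`) open on index.jsonl and tracks its own append offset; these names are assumptions, not the shipped code:

```js
const encoder = new TextEncoder();

// Illustrative batched flush; indexHandle, indexOffset, and
// pendingIndexLines live in the worker's module scope.
function flushIndexToFile() {
  if (pendingIndexLines.length === 0) return;
  const bytes = encoder.encode(pendingIndexLines.join('\n') + '\n');
  indexOffset += indexHandle.write(bytes, { at: indexOffset });
  indexHandle.flush(); // make the batch durable before discarding it
  pendingIndexLines.length = 0;
}
```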
5. Crash Recovery Semantics
This storage model is designed around failure.
- The payload log is append‑only and survives crashes without finalization
- The index can be truncated to the last valid entry
- Export is deferred until recording completes
On startup, the application inspects meta.json. If completed is false, it treats the session as interrupted and recovers the usable prefix defined by the index. In practice, this allows partial recordings to be exported with minimal data loss.
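A recovery pass along these lines might look like the following sketch (helper names and the exact metadata shape are assumptions):

```js
// Recover the usable prefix of an interrupted session.
async function recoverSession(dir) {
  const metaFile = await (await dir.getFileHandle('meta.json')).getFile();
  const meta = JSON.parse(await metaFile.text());
  if (meta.completed) return meta; // clean shutdown, nothing to repair

  const idxFile = await (await dir.getFileHandle('index.jsonl')).getFile();
  const lines = (await idxFile.text()).split('\n');
  const valid = [];
  for (const line of lines) {
    try { valid.push(JSON.parse(line)); } catch { break; } // stop at a torn write
  }
  // Every byte referenced by `valid` already exists in data.bin, because
  // payload writes always land before their index entries are flushed.
  return { ...meta, recoveredEntries: valid };
}
```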
6. Read Path Optimization
Editing workloads emphasize seek latency, not sustained throughput.
Rather than issuing one read per frame, the reader worker batches requests:
- Compute the byte span covering a time window
- Perform a single `slice()` read
- Split the buffer in memory using the index
```js
const slice = file.slice(startOffset, endOffset);
const buffer = await slice.arrayBuffer();
```
This reduces I/O calls by orders of magnitude and enables smooth timeline scrubbing.
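Given the index entries covering the window, the in‑memory split can be as simple as creating zero‑copy views over the batched buffer. A sketch, with illustrative names:

```js
// Split one batched read back into frames; `entries` are the index rows
// covering [startOffset, endOffset), `buffer` the ArrayBuffer read above.
function splitFrames(buffer, entries, startOffset) {
  return entries.map((e) => ({
    timestamp: e.timestamp,
    type: e.type,
    data: new Uint8Array(buffer, e.offset - startOffset, e.size),
  }));
}
```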
7. Storage Management
Each session lives in its own directory, enabling atomic deletion and straightforward quota accounting. Exported media is written outside OPFS, allowing users to reclaim space by deleting project data independently of final outputs.
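Because each session is a single directory, deletion is one recursive call. A sketch, assuming session directories sit at the OPFS root under their session id:

```js
const root = await navigator.storage.getDirectory();
await root.removeEntry(sessionId, { recursive: true }); // drops the whole session
```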
8. Limitations and Future Work
- `data.bin` is not directly playable and requires a muxing step
- Text‑based indexing does not scale indefinitely
- Reader performance could improve by adopting synchronous handles
Conclusion
By treating video recording as a log‑structured data problem rather than a file‑format problem, OPFS enables a class of browser applications that were previously impractical. This architecture prioritizes throughput, resilience, and recoverability—properties more commonly associated with databases than front‑end code.
In practice, OPFS and WebCodecs together blur the line between browser applications and native media pipelines.