Why It Scales
The mechanical chain that turns a one-instance program into a distributed one
"It just scales" is a slogan. This page is the proof. JustScale turns a program written for one process into a program that runs correctly across many — not by hiding coordination in a framework runtime, but by refusing to compile code that would be wrong under coordination. The type system is the enforcement layer. The adapters are the execution layer. Domain code is written once and is correct in both.
What follows are the four mechanical rules that make this work, followed by the actual end-to-end test that proves it on two real Node processes sharing a Postgres database.
Rule 1 — Mutations require Locked<T>
Every mutating method on the repository demands proof of a lock in its signature. The type signature is the contract:
abstract class Repository<T> {
// Reads: no lock required
abstract get(ref: Ref<T>): Promise<Persistent<T> | null>
abstract findOne(where: Partial<T>): Promise<Persistent<T> | null>
// Writes: Locked<T> required. There is no overload.
abstract update(entity: Locked<T>, patch: UpdateData<T>): Promise<Persistent<T>>
abstract save(entity: Transient<T> | Locked<T>): Promise<Persistent<T>>
abstract delete(entity: Locked<T>): Promise<void>
}

The only way to obtain a Locked<T> is to call repo.lock(ref). TypeScript will not let you construct one by hand, cast to one, or receive one from anywhere outside the lock API. This is a closed contract: if your code compiles, every write passed through a lock.
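The closed contract rests on a standard TypeScript technique: branding with a non-exported unique symbol. The sketch below is illustrative, not JustScale's source; lockRow and the User shape are assumptions made for the example. Because the symbol never leaves the module that mints the brand, outside code cannot fabricate a Locked<T> of its own.

```typescript
// Minimal sketch of the closed-brand technique, NOT JustScale's source:
// a unique symbol that only this module can attach. Since the symbol is
// not exported, outside code has no way to produce the brand itself.
declare const locked: unique symbol
type Locked<T> = T & { readonly [locked]: true }

interface User { id: string; name: string }

// Stand-in for the lock API: the single place a Locked<T> is minted.
function lockRow(row: User): Locked<User> {
  return row as Locked<User>
}

// A mutation in the style of Rule 1: the signature demands the brand.
function update(entity: Locked<User>, patch: Partial<User>): User {
  return { ...entity, ...patch }
}

const row: User = { id: 'u1', name: 'Ada' }
// update(row, { name: 'Grace' }) // does not compile: 'row' lacks the brand
const held = lockRow(row)         // brand obtained from the lock API only
const next = update(held, { name: 'Grace' })
console.log(next.name) // Grace
```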
Rule 2 — repo.lock() is atomic with the read
Acquiring a lock is not a two-step "fetch, then lock" dance. On Postgres it is a single statement — SELECT ... FOR UPDATE — that returns the locked row's current contents:
// packages/adapters/postgres/src/repository/pg-repository.ts
async lock(entity: Ref<T>): Promise<Locked<T> | null> {
const id = extractId(entity)
// Row-level lock + fresh read in ONE statement.
// No other session can modify this row until we release.
const result = await sql`
SELECT * FROM ${sql(this.tableName)} WHERE id = ${id} FOR UPDATE
`
if (result.length === 0) return null
// The Locked<T> is built from the row we just read under the lock.
// Whatever the caller passed in is irrelevant to the returned state.
return brandLocked(this.rowToEntity(result[0]))
}

Tip
Locked<T> in your program contains data that is authoritative as of the moment the lock was acquired. No other process can have modified it since — no other session can hold the lock concurrently.

Combined with Rule 1, the conclusion is strong: in a JustScale app, every write happens on data that was re-read atomically with the lock that protects it. There is no optimistic-concurrency version column, no retry loop, no CAS — because there is no race to resolve. The moment a writer has the lock, the writer has the truth.
The in-memory provider implements the same contract with a mutex plus a closure-local map read; the pg provider uses advisory locks plus FOR UPDATE. Domain code cannot tell the difference.
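The in-memory side of that contract can be pictured as a promise-chain mutex over a closure-local map. This is an illustrative sketch under assumed shapes (makeMemoryStore, a flat Row record), not the shipped adapter; the point it demonstrates is that the read happens only after the mutex is acquired, mirroring the atomic read of SELECT ... FOR UPDATE.

```typescript
// Illustrative sketch, not the shipped adapter: the same lock() contract
// backed by a per-row promise-chain mutex and a closure-local map.
type Row = { id: string; n: number }

function makeMemoryStore() {
  const rows = new Map<string, Row>()
  const tails = new Map<string, Promise<void>>() // per-row mutex tails

  // Acquire the row's mutex, THEN read: the data handed to fn is
  // authoritative as of acquisition, as with SELECT ... FOR UPDATE.
  async function lock<R>(id: string, fn: (row: Row | null) => Promise<R>): Promise<R> {
    const prev = tails.get(id) ?? Promise.resolve()
    let release!: () => void
    tails.set(id, new Promise<void>(r => (release = r)))
    await prev // earlier holders finish before we read
    try {
      return await fn(rows.get(id) ?? null) // fresh read under the lock
    } finally {
      release()
    }
  }

  return { rows, lock }
}

// Ten concurrent read-modify-write increments; the mutex serializes
// them, so no increment is lost.
const store = makeMemoryStore()
store.rows.set('c1', { id: 'c1', n: 0 })

await Promise.all(
  Array.from({ length: 10 }, () =>
    store.lock('c1', async row => {
      const n = row!.n
      await new Promise(r => setTimeout(r, 1)) // widen the race window
      store.rows.set('c1', { id: 'c1', n: n + 1 })
    }),
  ),
)
console.log(store.rows.get('c1')!.n) // 10
```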
Rule 3 — Locked<T> cannot cross process boundaries
A lock is an async-context fact on a specific node. Sending one across the network would be a lie. The framework enforces this at the serializer:
// The encoder refuses Locked<T> in any signal payload or
// cross-process value. A lock guarantee is local; shipping
// it would produce a brand that no remote session can honor.
if (isLocked(value)) {
throw new ProcessableEncodeError(
'Locked<T> cannot be serialized across processes. ' +
'Convert to Reference<T> via Model.ref(locked) first.',
)
}

This turns a subtle correctness trap into a compile-adjacent error. You will learn at the edge of the wire — not in production, three weeks later, after a silent divergence — that you tried to smuggle a local guarantee across a boundary. The fix is always the same: Model.ref(locked) unwraps to a Reference<T>, which is wire-safe; the receiver re-acquires the lock on their own process if they need to mutate.
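The guard and the fix can be sketched in a few lines. Everything here (the LOCKED symbol, toRef, encode) is a hypothetical stand-in for the framework's serializer internals, shown only to make the runtime shape of the rule concrete:

```typescript
// Hypothetical stand-ins for the serializer internals: a runtime brand
// check plus a ref() helper that strips a lock down to wire-safe identity.
const LOCKED = Symbol('locked')
type Ref = { readonly id: string }
type LockedRow = { readonly id: string; readonly [LOCKED]: true; [k: string]: unknown }

const isLocked = (v: unknown): boolean =>
  typeof v === 'object' && v !== null && LOCKED in v

// Wire-safe identity: keep the id, drop the local guarantee.
const toRef = (locked: LockedRow): Ref => ({ id: locked.id })

function encode(value: unknown): string {
  if (isLocked(value)) {
    throw new Error(
      'Locked<T> cannot be serialized across processes. ' +
        'Convert to Reference<T> via Model.ref(locked) first.',
    )
  }
  return JSON.stringify(value)
}

const held: LockedRow = { id: 'room-1', name: 'proc-room', [LOCKED]: true }

let threw = false
try { encode(held) } catch { threw = true }
console.log(threw)               // true: the lock never leaves this node
console.log(encode(toRef(held))) // {"id":"room-1"}: the reference travels
```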
Rule 4 — Signals carry typed identity, not free-form payloads
Cross-process coordination in JustScale travels on signals. A signal is not an arbitrary event — it is a path with typed parameters:
export class FulfillmentSignals extends defineSignals(signal => ({
shipped: signal('/shipment/:shipment/shipped')
.types({ shipment: Shipment }),
delivered: signal('/shipment/:shipment/delivered')
.types({ shipment: Shipment }),
})) {}

The path is the topic on the pg NOTIFY bus. The typed params are the routing key. defineSignals rejects duplicate path params at definition time, and emission throws if a path parameter is missing at runtime. Every signal that can be defined can be routed; every signal that is routed carries exactly the identity needed to deliver it.
Signals with Locked<T> path params are unwrapped to Reference<T> automatically before emission — the caller's lock stays on the sending node; receivers get a reference and re-acquire a fresh lock under Rule 2 if they need to mutate.
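Both checks are cheap to picture. The sketch below is not the framework's defineSignals implementation; defineSignal and emitTopic are invented names that demonstrate the two failure modes described above, duplicate params at definition time and missing params at emission time:

```typescript
// Illustrative sketch of path-param validation (invented names, not the
// framework's API): duplicates rejected when the signal is defined,
// missing values rejected when a concrete topic is built for emission.
function defineSignal(path: string) {
  const params = [...path.matchAll(/:([A-Za-z_]\w*)/g)].map(m => m[1])
  if (new Set(params).size !== params.length) {
    throw new Error(`duplicate path param in ${path}`)
  }
  return {
    path,
    // Build the concrete routing topic from typed identity.
    emitTopic(values: Record<string, string>): string {
      return path.replace(/:([A-Za-z_]\w*)/g, (_, name: string) => {
        const v = values[name]
        if (v === undefined) throw new Error(`missing path param :${name}`)
        return v
      })
    },
  }
}

const shipped = defineSignal('/shipment/:shipment/shipped')
console.log(shipped.emitTopic({ shipment: 's42' })) // /shipment/s42/shipped

let defThrew = false
try { defineSignal('/x/:a/:a') } catch { defThrew = true }
console.log(defThrew) // true: duplicate param caught at definition time
```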
The proof: two real processes, one database
The chat-app example ships a cross-process end-to-end test that exercises all four rules. It spawns two actual Node processes — not workers, not threads — each running the production just dev entrypoint bound to a different port but pointed at the same Postgres database:
// Two real child_process.spawn() instances on :6301 and :6302.
// JUSTSCALE_NO_SOCKET=1 disables the cluster unix socket, so A and B
// cannot coordinate via local IPC — the ONLY path between them is pg.
const proc = spawn(JUST_BIN, ['dev'], {
env: {
...process.env,
PORT: String(port),
DATABASE_URL: DB_URL, // shared pg database
SIGNAL_CHANNEL, // shared pg NOTIFY channel
JUSTSCALE_NO_SOCKET: '1', // no shortcut
},
})
// ...after boot:
it('cross-process chat: A <-> B via pg LISTEN/NOTIFY + advisory locks', async () => {
const alice = await register(PORT_A, 'alice@proc.test', ...)
const bob = await register(PORT_B, 'bob@proc.test', ...)
const room = await createRoom(PORT_A, alice.token, 'proc-room')
await joinRoom(PORT_B, bob.token, room.id)
// Alice opens a WebSocket on A. Bob opens a WebSocket on B.
const aliceWs = await openRoomWs(PORT_A, alice.token, room.id)
const bobWs = await openRoomWs(PORT_B, bob.token, room.id)
// Alice posts on A. Bob's WS on B receives the message.
aliceWs.socket.send(JSON.stringify({ type: 'post', text: 'hello from A' }))
const onB = await waitForMessage(bobWs, m =>
m.type === 'message' && m.data.text === 'hello from A', 10_000)
assert.ok(onB, "bob on :6302 never received alice's message from :6301")
})

What this proves, concretely:
- No shared memory. Two OS processes, separate heaps.
- No shortcuts. The cluster unix socket is disabled with JUSTSCALE_NO_SOCKET=1. Every coordination path between A and B must travel through Postgres.
- Domain code is unchanged. The chat controllers, services, and signals are the same code that runs in the single-process tests. Nothing in src/ knows about multi-process deployment.
- All four rules are exercised. Every message send locks the room under Rule 1 and Rule 2; the broadcast signal travels under Rules 3 and 4 via pg LISTEN/NOTIFY.
The test runs in roughly 800ms. It is the practical answer to "does this actually scale?".
What the framework does NOT force
Honesty matters. The four rules above close the write path. The remaining distributed-system footguns live on the read path and in user-held caches, and the framework cannot forbid them at the type level without making every program miserable to write:
- Reads are not locked. repo.findOne returns a Persistent<T> without coordinating with writers. That is the right default — locking every read would be a performance disaster — but it means an unlocked read may observe state that is being mutated by another process. If you branch on that read and then lock, Rule 2 guarantees your write is on fresh data, but the decision to write may have been made on a stale snapshot. For decisions that must be made under lock, lock first.
- Caches in service closures are allowed. A service can stash a Map<id, Persistent<T>> and return from it. Nothing in the type system objects. Convention (and the "no module-level state" rule that JustScale's DI pattern nudges you toward) catches this in review; the compiler doesn't. Caches can poison read paths; they cannot poison mutation paths, because the mutation path always re-enters lock().
- Raw adapter access bypasses everything. If you reach past the repository API and execute raw SQL against the pg client, you can do anything you like. That is a deliberate hostile act, not an accident the framework should prevent.
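The stale-decision caveat is easy to reproduce. The sketch below uses an assumed in-memory store and lock helper, not the repository API: two concurrent withdrawals that decide on an unlocked read both pass their balance check, while the lock-first version makes the decision on data read under the lock.

```typescript
// Assumed shapes, not the real repo API: a shared map standing in for
// the database and a promise-chain mutex standing in for repo.lock().
type Account = { id: string; balance: number }
const db = new Map<string, Account>([['a1', { id: 'a1', balance: 100 }]])

let tail: Promise<unknown> = Promise.resolve()
function withLock<R>(id: string, fn: (row: Account) => Promise<R>): Promise<R> {
  // The read happens inside the chained callback, i.e. after acquisition.
  const run = tail.then(() => fn(structuredClone(db.get(id)!)))
  tail = run.catch(() => undefined)
  return run
}

// Decide-then-lock: the balance check runs on an unlocked read.
// Rule 2 keeps each WRITE fresh, but both DECISIONS saw balance 100.
async function withdrawDecideFirst(amount: number): Promise<boolean> {
  const snapshot = db.get('a1')! // unlocked read
  if (snapshot.balance < amount) return false
  await withLock('a1', async row => {
    db.set('a1', { ...row, balance: row.balance - amount })
  })
  return true
}

// Lock-first: the check runs on data read under the lock.
async function withdrawLockFirst(amount: number): Promise<boolean> {
  return withLock('a1', async row => {
    if (row.balance < amount) return false
    db.set('a1', { ...row, balance: row.balance - amount })
    return true
  })
}

await Promise.all([withdrawDecideFirst(80), withdrawDecideFirst(80)])
const afterDecideFirst = db.get('a1')!.balance
console.log(afterDecideFirst) // -60: both stale decisions passed the check

db.set('a1', { id: 'a1', balance: 100 })
const results = await Promise.all([withdrawLockFirst(80), withdrawLockFirst(80)])
const afterLockFirst = db.get('a1')!.balance
console.log(afterLockFirst, results) // 20 [ true, false ]
```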
The framework's claim is narrow and strong: every write in a JustScale app that goes through the repository API is correct under any number of coordinated processes. Reads and caches remain the developer's judgment call — because for most reads, coordination would be pure overhead.
Summary
Rule 1. repo.update/save/delete require Locked<T> [type-checked]
Rule 2. repo.lock() is atomic SELECT ... FOR UPDATE [pg adapter]
Rule 3. Locked<T> cannot serialize across processes [encoder throws]
Rule 4. signals carry typed identity on the path [defineSignals]
Consequence: every write in a JustScale app runs against data
that was re-read under the lock that protects it, on any number
of processes, against any supported adapter. The domain code
does not change when you add nodes.

That is what "just scales" means here. Not a marketing claim — a provable property of the type system plus the adapter contracts, demonstrated end-to-end by a test that spawns two real processes.