Reactive Flows

Imagine a spreadsheet. Cell A1 contains 5. Cell B1 contains the formula = A1 * 2. You change A1 to 7---and B1 automatically becomes 14. No one told B1 to update. It just reacted.

This is reactive programming---declaring relationships between values, letting the system propagate changes automatically. Instead of manually updating dependent values, you declare the dependency and trust the runtime.

This declarative approach---describing what relationships exist rather than how to maintain them---is one of the most important mechanics of thought in programming. Being declarative means expressing intent without prescribing execution. It's a theme we've seen throughout this book, and reactive programming is perhaps its purest expression.

The Imperative Problem

In imperative code, derived values become stale:

python
    x = 10
    y = x * 2    # y = 20
    
    x = 15       # x changes...
    
    print(y)     # Still 20! Stale.

We assigned y the value of x * 2, not the relationship. When x changes, y doesn't know. We'd have to manually update:

python
    x = 15
    y = x * 2    # Manual update

With one dependency, this is manageable. With hundreds? A nightmare. We forget updates. We update in the wrong order. The system becomes a tangled web of manual synchronization.

The Reactive Solution

Reactive systems maintain relationships automatically:

    signal x = 10
    derived y = x * 2    // y REACTS to x
    
    print(y)             // 20
    
    x = 15               // x changes...
    
    print(y)             // 30! Automatically updated.

We declared that y depends on x. The runtime tracks this dependency. When x changes, the runtime knows to recompute y.

Signals and Derived Values

Modern reactive systems use signals (or reactive variables) as the primitive:

javascript
    // Create a signal (reactive source)
    count = signal(0)
    
    // Create a derived value (reactive computation)
    doubled = derived(() => count.value * 2)
    isEven = derived(() => count.value % 2 == 0)
    
    // Create an effect (reactive side-effect)
    effect(() => {
        print("Count is now: " + count.value)
    })
    
    count.value = 5   // Triggers: doubled=10, isEven=false, prints message

Three concepts:

Signal
A reactive value that can change over time. The source of truth.
Derived
A computed value that depends on signals (or other derived values). Recomputes when dependencies change.
Effect
A side-effect that runs when dependencies change. For I/O, DOM updates, etc.
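To make these three concepts concrete, here is a minimal sketch of automatic dependency tracking in plain Python. The names (Signal, derived, effect) mirror the pseudocode above but are illustrative, not a real library:

```python
# Minimal sketch of signals, derived values, and effects.
# Signal / derived / effect are illustrative names, not a real library.

_active = []  # stack of computations currently (re-)running

class Signal:
    def __init__(self, value):
        self._value = value
        self._subs = set()  # computations that read this signal

    @property
    def value(self):
        if _active:
            self._subs.add(_active[-1])  # auto-track the current reader
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for comp in list(self._subs):
            comp()  # push the change to dependents

def effect(fn):
    """Run fn now, and re-run it whenever a signal it read changes."""
    def comp():
        _active.append(comp)
        try:
            fn()
        finally:
            _active.pop()
    comp()
    return comp

def derived(fn):
    """A signal whose value is recomputed from other signals."""
    out = Signal(None)
    effect(lambda: setattr(out, "value", fn()))
    return out

count = Signal(0)
doubled = derived(lambda: count.value * 2)
is_even = derived(lambda: count.value % 2 == 0)

count.value = 5
print(doubled.value, is_even.value)  # 10 False
```

Reading `count.value` inside a computation registers the dependency; writing it pushes the change back out. That one trick is the core of most signal runtimes.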

The Dependency Graph

Under the hood, the runtime builds a dependency graph:

javascript
    // Declarations
    a = signal(1)
    b = signal(2)
    c = derived(() => a.value + b.value)
    d = derived(() => c.value * 2)
    e = derived(() => a.value - 1)

The runtime tracks these dependencies as a directed graph:

    a ──→ c ──→ d
    b ──→ c
    a ──→ e

When a changes, the runtime knows to update c, d, and e---but not to recompute values that don't depend on a. When b changes, only c and d need updating.

When a signal changes, the runtime:

  1. Marks all dependent nodes as "stale"
  2. Recomputes them in topological order (dependencies before dependents)
  3. Notifies effects

This is more efficient than recomputing everything---only affected nodes update.

Observables and Streams

Signals represent values that change. Observables represent streams of values over time:

    clicks = observeClicks(button)  // Stream of click events

    clicks
        .map((click) => click.position)
        .filter((pos) => pos.x > 100)
        .throttle(100)  // At most one per 100ms
        .subscribe((pos) => {
            showTooltip(pos)
        })

The difference:

  • Signal: One value at a time, replacing the previous
  • Observable: Many values over time, each an event

If a Promise is a container for a single future value, an Observable is a container for many future values. And just as Promises can be chained and composed, Observables can be transformed and combined. There's also a connection to Channels from the previous chapter: both represent sequences of values over time, but Observables emphasize transformation while Channels emphasize communication between processes.
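To see the container analogy in code, here is a tiny push-based Observable with map and filter---an illustrative sketch, not RxPy or any real library:

```python
# A tiny push-based Observable sketch (illustrative, not a real library).

class Observable:
    def __init__(self, subscribe):
        self._subscribe = subscribe  # function(on_next) that wires a source

    def subscribe(self, on_next):
        self._subscribe(on_next)

    def map(self, f):
        # Transform each value before passing it on.
        return Observable(lambda on_next:
            self.subscribe(lambda v: on_next(f(v))))

    def filter(self, pred):
        # Pass on only values that satisfy the predicate.
        return Observable(lambda on_next:
            self.subscribe(lambda v: on_next(v) if pred(v) else None))

class Subject(Observable):
    """A source you can push values into."""
    def __init__(self):
        self._observers = []
        super().__init__(lambda on_next: self._observers.append(on_next))

    def next(self, value):
        for obs in list(self._observers):
            obs(value)

clicks = Subject()
seen = []
(clicks
    .map(lambda click: click["position"])
    .filter(lambda pos: pos[0] > 100)
    .subscribe(seen.append))

clicks.next({"position": (150, 20)})
clicks.next({"position": (50, 10)})
print(seen)  # [(150, 20)]
```

Each operator returns a new Observable that wraps the subscription of the previous one---the pipeline is just nested callbacks, assembled declaratively.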

Operators

Observables support rich operators:

javascript
    // Combining streams
    merged = merge(stream1, stream2)

    // Pairing latest values
    combined = combineLatest(streamA, streamB)

    // Switching to new stream
    search_results = search_input
        .debounce(300)
        .switchMap((query) => fetchResults(query))

    // Accumulating over time
    total = clicks.scan(0, (sum, _) => sum + 1)

This rich set of operators makes Observables functors---structures that support mapping while preserving their shape. We can transform the values inside without changing the container's nature. This is the same pattern we saw with map on lists and .then on Promises.

Functional Reactive Programming

FRP combines functional programming with reactive primitives. Values are functions of time:

    // Position as a function of time
    position(t) = initial_position + velocity * t
    
    // Or dependent on another reactive value
    follower_position = leader_position.delay(500ms)
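The idea of values as functions of time can be expressed directly in ordinary Python---a sketch with made-up numbers, not a real FRP library:

```python
# Behaviors as functions of time (t in seconds). Values are illustrative.

def position(t, initial=0.0, velocity=2.0):
    """A continuous behavior: where something is at time t."""
    return initial + velocity * t

def delay(behavior, dt):
    """A transformed behavior: the same value, dt seconds later."""
    return lambda t: behavior(t - dt)

follower = delay(position, 0.5)

# At t = 1.0 the follower is where the leader was at t = 0.5:
print(position(1.0))  # 2.0
print(follower(1.0))  # 1.0
```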

Pure FRP distinguishes:

  • Behaviors: Continuous values over time (like position)
  • Events: Discrete occurrences (like clicks)

Reactive in Practice

Many modern UI frameworks are built on reactive principles:

React: Components re-render when state changes.

javascript
    function Counter() {
        const [count, setCount] = useState(0)
        
        return <div>
            <p>Count: {count}</p>
            <button onClick={() => setCount(count + 1)}>
                Increment
            </button>
        </div>
    }

Svelte: Reactivity is built into the language.

javascript
    <script>
        let count = 0
        $: doubled = count * 2  // Reactive declaration
    </script>
    
    <p>{count} doubled is {doubled}</p>
    <button on:click={() => count++}>Increment</button>

SolidJS: Fine-grained signals.

javascript
    function Counter() {
        const [count, setCount] = createSignal(0)
        
        return <div>
            <p>Count: {count()}</p>
            <button onClick={() => setCount(c => c + 1)}>
                Increment
            </button>
        </div>
    }

Reactive Variables and Concurrency

Some languages have reactive variables built-in, with interesting concurrency implications:

javascript
    // Hypothetical language with reactive variables
    
    async function fetchUser(id) {
        return await api.getUser(id)  // Pauses here
    }
    
    user = fetchUser(id)  // user holds a still-loading value
    
    // When accessing a signal that's still loading...
    name = user.name  // Automatically pauses until user is available!

The runtime handles:

  • Detecting access to pending values
  • Suspending the current computation
  • Resuming when the value becomes available
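Python's asyncio can approximate this behavior. In the sketch below, AsyncSignal is a made-up name: any reader of the value suspends until it arrives, roughly what the hypothetical runtime above would do implicitly:

```python
import asyncio

# Sketch: a value that may still be loading; readers suspend until it's set.
class AsyncSignal:
    def __init__(self):
        self._ready = asyncio.Event()
        self._value = None

    def set(self, value):
        self._value = value
        self._ready.set()          # resume everyone waiting

    async def get(self):
        await self._ready.wait()   # suspend until the value is available
        return self._value

async def main():
    user = AsyncSignal()

    async def reader():
        u = await user.get()       # pauses here, like `user.name` above
        return u["name"]

    task = asyncio.create_task(reader())
    await asyncio.sleep(0)         # let the reader start and block
    user.set({"name": "Ada"})      # value arrives; reader resumes
    return await task

print(asyncio.run(main()))  # Ada
```

The explicit `await user.get()` is the visible seam that a language with built-in reactive variables would hide.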

Push vs Pull

Two strategies for propagating changes:

Push: When a source changes, it immediately pushes updates to dependents.

javascript
    // Push: immediate propagation
    x.onChange((newValue) => {
        y = newValue * 2  // Computed immediately
    })

Pull: Dependents are marked stale; they recompute when accessed.

javascript
    // Pull: lazy computation
    x = 10
    y = lazy(() => x * 2)  // Not computed yet
    
    x = 15                  // y marked stale
    
    print(y.value)          // NOW y computes: 30

Most systems use a hybrid: push invalidation (mark as stale) with pull computation (recompute on access).
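A sketch of the hybrid in Python (Source and Lazy are illustrative names): writes push only a stale flag, and reads pull the recomputation:

```python
# Hybrid sketch: push invalidation, pull recomputation.

class Source:
    def __init__(self, value):
        self._value = value
        self._dependents = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for dep in self._dependents:
            dep.invalidate()        # push: mark stale, no recompute yet

class Lazy:
    def __init__(self, fn, sources):
        self.fn = fn
        self._stale = True
        self._cache = None
        for s in sources:
            s._dependents.append(self)

    def invalidate(self):
        self._stale = True

    @property
    def value(self):
        if self._stale:             # pull: recompute only when read
            self._cache = self.fn()
            self._stale = False
        return self._cache

x = Source(10)
y = Lazy(lambda: x.value * 2, [x])
print(y.value)   # 20 -- computed on first read
x.value = 15     # pushes only a stale mark
print(y.value)   # 30 -- pulled (recomputed) on access
```

If x changes ten times between reads, y recomputes once---the practical payoff of the hybrid.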

Glitches and Consistency

A subtle problem: what if updates propagate in the wrong order?

javascript
    a = signal(1)
    b = derived(() => a.value * 2)      // b = 2
    c = derived(() => a.value + b.value) // c = 3
    
    a.value = 2
    
    // If c updates before b:
    //   c = 2 + 2 = 4  (wrong! b is stale)
    // Then b updates:
    //   b = 4
    // c should be 2 + 4 = 6, but we saw 4 briefly

This momentary inconsistency is a glitch. Good reactive runtimes prevent glitches by:

  • Computing the dependency graph
  • Updating in topological order (dependencies before dependents)
  • Batching updates within a single "tick"
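The glitch above disappears if the runtime recomputes in dependency order. A sketch using Python's standard graphlib (the values/formulas tables are illustrative, mirroring the example above):

```python
from graphlib import TopologicalSorter

# One signal and two derived values, mirroring the example above.
values = {"a": 2}
formulas = {
    "b": (lambda v: v["a"] * 2,      ["a"]),       # b = a * 2
    "c": (lambda v: v["a"] + v["b"], ["a", "b"]),  # c = a + b
}

def propagate():
    # Map each node to its dependencies, then visit in topological order.
    deps = {name: set(srcs) for name, (_, srcs) in formulas.items()}
    for name in TopologicalSorter(deps).static_order():
        if name in formulas:           # plain signals like "a" stay as-is
            fn, _ = formulas[name]
            values[name] = fn(values)  # b is fresh before c reads it

propagate()
print(values)  # {'a': 2, 'b': 4, 'c': 6} -- never the glitched 4
```

Because b is guaranteed to run before c, no reader ever observes the inconsistent intermediate state.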

When to Go Reactive

Reactive programming shines when:

  • Many values depend on few sources (UIs, spreadsheets)
  • Changes are frequent and must propagate efficiently
  • You want to declare "what" not "when"
  • Event streams need complex transformations

It's less suited when:

  • Relationships are simple (just compute it directly)
  • You need precise control over timing
  • The dependency graph is highly dynamic
  • Debugging propagation is difficult in your environment

The Reactive Philosophy

Reactive programming is a shift in perspective:

    Imperative                            Reactive
    Values are snapshots                  Values are relationships
    You manage updates                    System manages updates
    Time is explicit (loops, callbacks)   Time is implicit (change propagation)
    "Do this, then that"                  "This depends on that"

Like functional programming, reactive programming emphasizes what over how. You declare relationships; the runtime handles the mechanics.

We've explored four approaches to concurrency: threads with shared state, asynchronous operations, message passing between isolated actors, and reactive propagation. Each has its place. The art is choosing the right model for your problem---or combining them thoughtfully.

The concurrent world is complex, but these patterns help us navigate it. Systems with many things happening at once are the norm, not the exception. Understanding concurrency is understanding modern software.

A Final Note: Concurrency Shapes Architecture

The concurrency model you choose doesn't just affect a few functions---it shapes your entire system architecture.

Reactive/FRP systems view the world as a network of data dependencies. Your entire application becomes a description of how events flow and transform. Data enters, propagates through derived values, and emerges as effects. This is elegant for UIs and data pipelines, but it creates friction with object-oriented designs where behavior is distributed across encapsulated entities.

Actor systems distribute event processing across isolated entities. Each actor is a self-contained unit with its own state and behavior. This aligns naturally with OOP's emphasis on encapsulation, but at the cost of the global view that FRP provides. You can't easily see how data flows through the whole system---it's distributed across many mailboxes.

Shared-state concurrency with locks keeps the familiar sequential model but adds coordination complexity. It's often the path of least resistance for small changes to existing systems, but it scales poorly.

These models don't just coexist peacefully---they represent fundamentally different ways of thinking about computation:

    Model          Emphasis                     Trade-off
    FRP            Data flow, global view       Less encapsulation
    Actors         Isolation, local reasoning   No global view
    Shared state   Familiar model               Coordination burden

Choose deliberately. The concurrency model is a foundational decision that echoes through every layer of your system.