From Spelling to Speaking
For decades, programming meant spelling. Every character mattered. A misplaced semicolon crashed the program. Syntax errors were the daily struggle of beginners and experts alike.
But something is shifting. Large language models understand intent. They can translate "make the knight move there" into precise code. The age of spelling may be ending. The age of speaking may be beginning.
The Old Contract
Traditional programming required humans to speak the machine's language:
Human intent: "Move the knight to f3"
Required code:

state = state.with({
  board: state.board.with({
    [g1]: empty,
    [f3]: state.board[g1]
  }),
  turn: opposite(state.turn)
})

The human had to translate their intent into exact syntax. Any mistake---a missing bracket, a misspelled name---meant failure.
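The contrast is easier to see in runnable form. Below is a minimal Python sketch of what the old contract demands: exact names, exact structure, an explicit copy for every change. The board representation and helper names are illustrative, not from any chess library.

```python
# Minimal sketch of the old contract: the human must write exact syntax.
# The board is a plain dict mapping square names to piece names;
# all names here are illustrative.

def move_piece(state, src, dst):
    """Return a new state with the piece moved from src to dst (no mutation)."""
    board = dict(state["board"])   # copy, so the old state is untouched
    board[dst] = board[src]
    board[src] = None
    turn = "black" if state["turn"] == "white" else "white"
    return {"board": board, "turn": turn}

state = {"board": {"g1": "knight", "f3": None}, "turn": "white"}
state = move_piece(state, "g1", "f3")
print(state["board"]["f3"])  # knight
print(state["turn"])         # black
```

One wrong key, one missing bracket, and none of this runs---which is exactly the point of the old contract.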
The New Contract
With language models, the contract flips:
Human: "Move the knight from g1 to f3"
LLM: // Translates to correct code automatically
state = apply_move(state, knight g1 → f3)

The human expresses intent in natural language. The machine does the translation. Errors in expression are tolerated---"mve the knigt to f3" still works.
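The tolerance for surface errors can be pictured with a toy sketch: a hypothetical helper that recovers the destination square from free-form text, regardless of how the surrounding words are spelled. A real LLM does far more, but the principle---recover intent, not exact syntax---is the same.

```python
import re

# Hypothetical sketch: pull a chess square out of free-form text.
# Misspelled surrounding words don't matter; only the intent-bearing
# token ("f3") does.

def extract_square(text):
    match = re.search(r"\b([a-h][1-8])\b", text.lower())
    return match.group(1) if match else None

print(extract_square("Move the knight to f3"))  # f3
print(extract_square("mve the knigt to f3"))    # f3
```

Both inputs yield the same square; the spelling of "knight" is irrelevant to the extracted intent.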
What This Changes
This shift has profound implications:
Lower barrier to entry: You don't need to learn syntax to start programming.
Focus on intent: Energy goes to what you want, not how to say it.
Rapid iteration: Describe, see result, refine. No compile-error loops.
Natural documentation: The description is the documentation.
What Remains the Same
But the foundations we've studied still matter:
- State and change: You still need to understand what state means
- Pure vs effects: The distinction matters for reasoning
- Types and validity: Constraints don't disappear
- Events and history: Time-based patterns remain
The LLM translates. But what it translates must still be coherent. "Make it work better" isn't useful input---you need to understand the domain well enough to express meaningful intent.
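The pure-versus-effects distinction from the list above, for example, stays concrete even when an LLM writes the code. A small Python sketch (the board representation is illustrative):

```python
# Illustrative sketch: the same move, written pure and written as an effect.

def apply_move(board, src, dst):
    """Pure: returns a new board; the input is untouched."""
    new_board = dict(board)
    new_board[dst] = new_board.pop(src)
    return new_board

def apply_move_in_place(board, src, dst):
    """Effectful: mutates the board passed in."""
    board[dst] = board.pop(src)

board = {"g1": "knight"}
after = apply_move(board, "g1", "f3")
print("g1" in board)   # True -- the original is unchanged
print(after["f3"])     # knight
```

Whichever version a model generates, you still need the concept to judge which one is right for your situation.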
Levels of Abstraction
Programming has always been about raising abstraction:
1950s: Machine code (binary)
1960s: Assembly (symbolic instructions)
1970s: High-level languages (C, Pascal)
1980s: Object-oriented languages (C++, Smalltalk)
1990s: Managed languages (Java, Python)
2000s: Frameworks and DSLs
2010s: Declarative and reactive patterns
2020s: Natural language interfaces (LLMs)

Each level hides complexity from the level below. LLMs continue this trajectory---they hide syntax, grammar, and low-level patterns.
Semiotics: Signs and Meaning
The philosopher Charles Sanders Peirce studied how signs carry meaning. Code is a sign system---symbols that represent operations, data, relationships.
LLMs are remarkable because they understand multiple sign systems:
- Natural language (English, Spanish, etc.)
- Programming languages (Python, JavaScript, etc.)
- Visual representations (diagrams, UI layouts)
- Mathematical notation
They can translate between them. "Implement this formula" becomes code. "This Figma design" becomes a React component. The barriers between representations are dissolving.
The New Literacy
What does it mean to be "literate" in programming now?
Old literacy: Knowing syntax, avoiding errors, writing correct code.
New literacy: Understanding concepts, expressing intent clearly, evaluating outputs.
You no longer need to memorize whether Python uses elif or else if. You do need to understand what conditional logic means and when to use it.
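A small sketch makes the point: the keyword is surface detail; the branching logic is the concept. The grading function below is purely illustrative.

```python
# The concept is conditional logic; `elif` is just how Python spells it
# (C-family languages spell the same idea `else if`).

def classify(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C"

print(classify(85))  # B
```

New literacy means knowing when a cascade of mutually exclusive branches is the right structure---not which keyword a given language uses for it.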
The concepts in this book---state, events, types, functions, abstraction---become more important, not less. They're the vocabulary of intent.
We've moved from spelling to speaking. But something even more profound is happening. The boundary of what counts as "the system" is expanding. In the final chapter, we explore the expanded mind.