I embed my real-world inspirations into an invented (constructed) world, and that way I remember them better. Books, ideas, events & experiences, people, etc...
At the same time, this invented alternate reality helps me model and think about ideal learning systems and tools for thought.
Yeah, that’s what I’ve been doing for years. Nerding out about memory systems and lifelong learning within/through an imaginary universe.
Just a play.
In the past few days though, it started to feel like preparation.
Because technology has leapt so dramatically in just the past weeks of February 2026 that the dilemma of whether we’re building AI tools to replace our thinking or to deepen it is becoming the most important question in tech, and soon in culture and society, too.
Suddenly, ancient memory systems and cognitive strategies become newly relevant.
So I'd like to dedicate this month's letter to the things that make us irreplaceable: the capacity to think, to notice and play, to connect ideas that no one else would connect.
Web 1.0 is coming back
Let me start with a trend: personal websites and newsletters replacing social media. People opting out of a system that feels broken and rebuilding something that feels human-scaled.
The tools are modern, but the philosophy is pure Web 1.0: you own your space, control your distribution and build direct relationships.
And personal websites become the source of truth: the place where people can verify that you are real, that your ideas have a history and a context.
Provenance becomes the premium
Origin gains value: where something comes from, how it was made, what preceded it, who stands behind it, and why.
That’s one beautiful reason personal websites are returning. They hold continuity of thought in one place:
- a reader can see context and trace it across time,
- authenticity functions as infrastructure.
The early internet coming back as resistance against the AI-soaked world. Cool.
But this trend has another practical function: a personal site or personal project creates space for training.
A person regularly formulates their own thoughts, holds their own frames, and builds their own knowledge.
That effort becomes a key ingredient of working with technology in the AI era.
And that’s where the topic of cognitive surrender opens up.
When a person adopts the system’s judgment
Cognitive surrender can be distinguished from cognitive offloading.
Offloading = a person strategically hands off a task to an external tool. A calculator for arithmetic. Navigation for routing. A phone note for remembering. Control and judgment remain on the person’s side.
Cognitive surrender goes deeper = the decision-maker stops constructing an answer. They adopt an answer generated by an external system. And together with the answer, they adopt the judgment.
At a cognitive level, what shifts is critical evaluation. The user relaxes cognitive control and takes the AI’s judgment as their own.
This mechanism is seductive because it looks like productivity.
The system simulates intelligence convincingly. The text or code is fluent, confident, immediate. The result feels finished. Verification gets postponed even when intuition signals something is off.
In the background, a shift unfolds: a trade-off of mental autonomy for algorithmic velocity. A smooth and strange decline of critical thought disguised as performance.
Becoming a battery begins 🤗
Or, as Cory Doctorow says, becoming a reverse centaur. Because it's nice to be a centaur, and it's horrible to be a reverse centaur, as he points out.
Btw, read Cory's text if you are interested in how to be an AI critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.
But I digress a bit, AI criticism is a topic for another day/letter.
A new HCI paradigm: interface includes practice
Working/living with AI increasingly pulls a user into the mode of reviews, coordination, and decisions that happen at boundaries.
We are no longer fully inside the work, more like supervising it, or trying to supervise. Spending more time reviewing outputs, deciding when and where to intervene. Execution becomes cheap and universal, power shifts to judgment.
I believe it’s more urgent than ever to expand what we mean by the human–AI interface to include the habits people bring to the machine.
= Practice as part of an emerging Human-Computer Interaction (HCI) paradigm.
This perspective shift matters because practice is where cognitive autonomy is either maintained or quietly surrendered. The drift happens through micro-routines:
- writing in your own words,
- creating your own condensations,
- coining your own concepts,
- deciding when to verify and when to skip verification,
- deciding when to accept an answer as a finished judgment.
As we start asking whether AI replaces thinking or deepens it, practice becomes part of the interface by necessity.
I kept looking for a concept, and a language, for describing what that interface could be, and then it clicked that I’d had one in my hands for a long time: cartotech.
A prototype of practice and a prototype of interface
Cartotech is a design fiction artefact.
It exists within the world of the Mirage Mir story as an ancient learning and memory system developed by dream hunters and used later by the Glass Bead Game students and players.
Technically, cartotech consists of a network of short notes, a private imaginary world, and a number of stories from/about this world.
Together they enable a superior system for capturing, storing and accessing information/knowledge.
Cartotech can be illustrated with the concept of view control in photo editing:
- in one view you work with the image as you see it,
- in another view you manipulate the graphs rather than the representational image.
Changing the view changes the type of work you can do.
With cartotech, the same knowledge can be grasped in multiple ways. And those multiple ways become ways to understand + communicate with the world:
1. Notes: the data layer of external memory
The foundation of cartotech is a network of short, autonomous, self-written notes.
Think of it as cards in a shoe box, each linked to at least one other card.
Check the Zettelkasten method or Andy Matuschak's Evergreen notes to find out more (or everything:)) about writing and working with notes.
Textual notes matter for a direct reason: when AI saves you from writing, it also saves you from the thoughts that writing generates = writing produces ideas.
Writing also produces comprehension checks, because your own words quickly reveal what’s truly clear and what’s merely adopted.
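The "cards in a shoe box, each linked to at least one other card" rule can be sketched as a tiny data structure. This is not part of cartotech itself, just a minimal illustration of the invariant; the class and method names are my own invention:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A short, autonomous, self-written note (one card)."""
    note_id: str
    text: str
    links: set = field(default_factory=set)  # ids of connected cards

class NoteNetwork:
    """A shoe box of cards: every card after the first must arrive linked."""
    def __init__(self):
        self.notes = {}

    def add(self, note_id, text, link_to=None):
        # Enforce the "each linked to at least one other card" rule:
        # once the box is non-empty, a new card needs a connection.
        if self.notes and link_to is None:
            raise ValueError("a new card must link to an existing one")
        if link_to is not None and link_to not in self.notes:
            raise KeyError(f"unknown note: {link_to}")
        note = Note(note_id, text)
        if link_to is not None:
            # Links are kept bidirectional so the network stays traversable.
            note.links.add(link_to)
            self.notes[link_to].links.add(note_id)
        self.notes[note_id] = note
        return note

    def neighbors(self, note_id):
        """Follow links one hop out: the cards you'd pull next."""
        return sorted(self.notes[note_id].links)
```

The point of the sketch is the constraint, not the code: an unlinked note is not allowed into the box, which is what keeps the network a network rather than a pile.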
2. Paracosm: entry points through characters and scenes
Above notes is the paracosm layer: characters, scenes, environments and other objects from your own imaginary world.
Building and living inside an imagined world, also called worldplay, works as a powerful cognitive strategy because it gives your mind a hands-on “learning lab”.
When you keep a world consistent over time, you practice deep focus and sustained attention. You repeatedly model a reality, test how things fit together, and gradually organize what you know through stories, patterns, and connections.
You also tend to produce concrete artifacts (notes, maps, sketches, timelines, documents) which makes thinking visible and easier to refine.
Along the way, worldplay trains the same abilities that show up in mature creative work: imagining alternatives, spotting patterns, empathizing through characters, and solving problems you set for yourself inside the system you’re building.
Whether it turns into professional output depends on opportunity, desire, and long-term practice, but the core benefit stays the same: worldplay keeps you rehearsing how to learn, discover, and create.
Paracosm as a memory layer or environment also creates bridges into knowledge, because years of cumulative thinking can be encoded into an imaginary scene or a character and kept in mind for quick retrieval.
It's a lot that a person is able to carry around in their mind this way: imaginary characters, their stories, and landscapes become cues to knowledge and information.
A mobile database with ultra low latency access, resting on the idea that the human mind is primed to remember images and physical locations better than anything else.
Maybe you've heard of the memory palace = the practice of memorizing something by turning it into images, which you then stow around a place you remember very well, like your childhood home.
To remember the passage, you walk around the palace, retrieving the images.
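The memory palace mechanic, stripped to its skeleton, is an ordered route of loci with one image stowed at each stop. A minimal sketch, with entirely made-up loci and images for illustration:

```python
# A memory palace: an ordered walk through a familiar place,
# one vivid image stowed at each locus. Loci and images below
# are hypothetical examples, not a real mnemonic set.
palace = [
    ("front door", "a dripping hourglass"),
    ("hallway mirror", "a burning notebook"),
    ("kitchen table", "a glass chessboard"),
]

def walk(palace):
    """Retrieve the stored images in order by mentally walking the route."""
    return [image for locus, image in palace]
```

What the list structure captures is that retrieval order comes for free: the spatial route fixes the sequence, so you never have to remember "what comes next", only "where am I standing".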
Emily Brontë and her siblings, Stanisław Lem, J. R. R. Tolkien, and Ursula K. Le Guin were among the famous worldplayers.
3. Story: a linear snapshot as a dashboard to external memory
The third and outermost layer is story.
It’s a way to turn a non-linear network and/or paracosm into a linear sequence that can be shared and discussed.
Story means binding objects and events in the course of time = story as a time-based art, a DJ set, a sequence, a chain, a route, a journey = it has segments, junctions, and waypoints, but you experience them in order, from start to finish.
You need time to take it in.
It functions as a dashboard to data stored in external memory, too.
The story layer also matters for practice: when a person regularly creates snapshots, they must clarify what they think. They must choose and hold a frame.
David Lynch used to say: "If you have 70 notecards, each representing a scene, you’ve got a movie."
Exactly the kind of work cognitive surrender weakens when it gets shifted to the system.
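The flattening the story layer performs, turning a non-linear network into one shareable sequence, can be sketched as a walk over a small link graph. The graph contents here are placeholder node names, and a depth-first walk is just one of many possible linearisations:

```python
def storyline(graph, start):
    """Flatten a non-linear network into one linear sequence
    via a depth-first walk: segments, junctions, and waypoints,
    experienced in order from start to finish."""
    order, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push neighbors in reverse so earlier-listed links are visited first.
        stack.extend(reversed(graph.get(node, [])))
    return order
```

Different walks over the same network yield different stories, which is the interesting part: the network holds the knowledge, the chosen route is the snapshot you share.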
Outro
In the Mirage Mir story, cartotech is an old technique by which dream hunters keep knowledge alive across the years, and a lifelong strategy for discovering, learning, and creating, because it cultivates the factors that build a network of useful skills and behaviors.
In February 2026, it starts reading less like lore and more like a blueprint.
The temptation of this moment is endless generation. You can ask for a story, a scene, a model, a whole universe and receive it instantly.
Yet scale isn’t the point. Without connection to your life, generated worlds lose charge. They don’t stick, because they don’t point back to anything you’re actually trying to hold.
There’s also a deeper risk: cognitive surrender, letting the system’s fluent outputs become your judgment, with minimal scrutiny, until the habit of thinking in your own words starts to thin out.
That’s why cartotech may be useful as a working concept in emerging HCI discourse: one that offers language for describing practice as part of the interface.
I’ve been thinking about cartotech as an ancient method inside a fictional world. At this point, I’m starting to treat it as a practical design problem = I don’t only want to describe it.
I want to build it 🌱
. . .
Dear reader, before you go
I’m collecting pointers related to the topics above. If you know good sources, please send links:
- theoretical work on open virtual worlds and worldbuilding, RPG game design,
- essays and research on storing knowledge in space, spatial computing, and spatial workspaces,
- writing on new HCI paradigms and emerging UX/design patterns.
Thanks a lot & take care 🖤
Peter