Missing a serious context of use
- draft
This comes up in LLM-productivity discussions, but also more generally. This is a draft; I'm actively working on this idea.
- AI output is spiky (brilliance and slop coexist closely), and this makes general talk about “usefulness” unreliable without context.
- Public discourse has hype-wave incentives: these select for narratable, one-sided stories and underreport evaluation and integration costs.
- A “serious context of use” (Matuschak's term) is a discriminator: it's where the work is pulled by a mission that isn't about the system itself, and where reality keeps you honest.
- Operationally, a context is “serious” when a tool is repeatedly exercised in service of an exogenous purpose, under whole-problem pressure, with reality-backed feedback, and with stakes such that failure becomes legible and costly.
- The key test isn’t “is this impressive/innovative/high-quality?” but “what concrete constraints force contact with reality?” (e.g., gates, budgets, adoption, maintenance, consequences).
- This feels personally relevant: I’ve been (literally) missing a serious context of use. The more I move it to the foreground, the more I notice how much it changes what “productivity” even means.
Matuschak originally writes about this in the context of note-taking.
I asked an online community to help pressure-test the claim by looking for public counterexamples: people who write a lot about note-taking and clearly have a serious context of use outside that discourse.
- Cal Newport was mentioned as a possible example.
A useful way to pressure-test seriousness across domains (writing, LLM coding, productivity systems) is to ask: if the tool vanished tomorrow, would the underlying mission still demand progress, and would failure still cost me something?