Workmaxxing
Confessions of an AI tokenmaxxer

Hi, and happy Tuesday.
If you want to try “workmaxxing” yourself, skip towards the end, but first…
I have something to confess.
Since the start of the year, I’ve been addicted to Claude Code. You may have experienced your own burst of enthusiasm with your own AI project.
But, for me, Claude Code is the type of addiction where you’d look up and suddenly it's 2 AM and your spouse has stopped asking when you're coming to bed.
I'd describe an idea, and minutes later five working screens would be in front of me. It felt like cheating: no specifications, no back-and-forth with UX, development, and QA - just instant gratification.
Then I would try to actually use what it built.
The magic disappeared fast.
The screens didn't talk to each other.
The half-formed idea I'd brought to the conversation came back as a half-formed app, only now with plenty of bugs baked in.
What looked like a finished product was, on inspection, just a very confident demo.
The last mile turned out to be 80% of the trip.
For a while I assumed I was doing it wrong.
Then I started talking to other people doing the same thing. Everyone had the same story. Same midnight enthusiasm, same morning hangover. Slowly, we realized it wasn’t the tool that was broken.
We’ve been missing the process.
And the process is less exciting than the fantasy we’ve been living in.
But the process works.
(1) You start by writing down what you actually want to build - in a document, not Claude Code.
(2) You ask the AI to “grill you” on the document until it has dragged out of you all the things you didn't realize you hadn't thought of.
(3) The AI then produces an implementation plan, down to outlines of filenames.
(4) Lastly, the AI builds, starting from the pre-work in the implementation plan.
While this sounds simple, there are two important nuances:
(a) At every step, a second AI model reviews everything that has been produced. I’ve found GPT5.5 particularly good for this.
(b) After the “grilling” step, the two AIs complete the rest of the work autonomously, without any human input.
GPT5.5 and Claude Code’s Opus effectively “fight it out” until they produce something that actually runs. It can take 40 minutes to several hours, but this “fighting” is what gets you the rest of the way through that last 80%.
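The build-and-review loop above can be sketched in a few lines. This is a toy illustration, not a real integration: `builder_generate` and `reviewer_critique` are stand-ins for calls to the builder and reviewer models, and the toy reviewer simply approves once every known issue has been addressed.

```python
# Toy sketch of the two-model "fight it out" loop: the builder produces a
# version, the reviewer critiques it, and the loop repeats until the
# reviewer has nothing left to flag. The two functions below are
# placeholders standing in for real model calls.

def builder_generate(plan, fixes_applied):
    # Stand-in for the builder model: a "version" is the plan plus
    # whatever reviewer feedback has been addressed so far.
    return {"plan": plan, "fixed": set(fixes_applied)}

def reviewer_critique(version, known_issues):
    # Stand-in for the reviewer model: report any issue not yet fixed.
    return [issue for issue in known_issues if issue not in version["fixed"]]

def build_until_approved(plan, known_issues, max_rounds=20):
    fixes, version = [], None
    for round_num in range(1, max_rounds + 1):
        version = builder_generate(plan, fixes)
        feedback = reviewer_critique(version, known_issues)
        if not feedback:            # reviewer signs off: done
            return version, round_num
        fixes.extend(feedback)      # next round addresses the critique
    return version, max_rounds      # budget exhausted

version, rounds = build_until_approved(
    "translation app",
    ["screens not wired together", "crash on empty input"],
)
print(rounds)  # 2
```

In practice each round is an expensive model call rather than a cheap function, which is exactly why the loop burns so many tokens.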
But, there’s just one problem.
Each round of “fighting” creates a new version, and it can take a further 20 versions to get to something close to what you’d hoped for.
And each version consumes a tremendous number of AI tokens. (Tokens are the units of input and output that AI models process, roughly four to five characters each.)
The first version may have taken 150,000 tokens. By the 20th, you’ve used 3,000,000.
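The back-of-the-envelope math, assuming a flat 150,000 tokens per version as in the example above:

```python
# Cumulative token cost of iterating, assuming a flat cost per version.
tokens_per_version = 150_000
versions = 20
total_tokens = tokens_per_version * versions
print(f"{total_tokens:,}")  # 3,000,000
```

Real runs are lumpier than this - later versions often carry more context and cost more per round - but the order of magnitude holds.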
This is, it turns out, what serious people are now calling tokenmaxxing (and what I’m calling “workmaxxing”).
Tokenmaxxing
For those working at the cutting edge with AI, the realization that AI models can be turned on themselves to produce higher quality work has led to a warped reality.
At Meta, there have been internal competitions for who can consume the most tokens.
Engineers at OpenAI, who operate without limits on AI usage, have started referring to themselves as "token billionaires."
Jensen Huang has handed every NVIDIA employee an annual "token budget" equal to roughly half of their salary.
The rationale for this warped reality?
It’s the idea that staff are now operating “software factories” that produce software autonomously, rather than writing lines of code by themselves.
In software development, the old flex was lines of code shipped. The new flex is the number of tokens used - i.e. a measure of the raw materials these factories consume and produce.
The strategic case for this was made most cleanly by Andrej Karpathy.
Karpathy’s argument is that the human's job is no longer to write the code, or to instruct the AI step by step. Instead, it's to define the arena (the context, the goal, the constraints), and to define how to evaluate whether the AI got there.
Once those two things are pinned down, you let the AI agents run in loops until it clears the evaluation bar. You stop being the worker. You become the designer of the arena within which the agents operate.
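The arena pattern reduces to a simple control loop: the human supplies the goal, the constraints, and an evaluation function; the agent iterates until it clears the bar. A hypothetical sketch, with the agent call stubbed out:

```python
# Hypothetical sketch of the "arena" pattern: the human defines the goal,
# constraints, and evaluation; the agent loops until it passes.
# `agent_attempt` and `evaluate` are placeholders, not real APIs.

def run_in_arena(goal, constraints, evaluate, agent_attempt, max_loops=50):
    attempt, history = None, []
    for _ in range(max_loops):
        attempt = agent_attempt(goal, constraints, history)
        passed, notes = evaluate(attempt)
        if passed:                 # the attempt clears the evaluation bar
            return attempt
        history.append(notes)      # feed the failure back into the next loop
    raise RuntimeError("evaluation bar not cleared within budget")

# Toy instantiation: the "agent" improves by one each loop; the
# evaluator passes once the attempt reaches 3.
result = run_in_arena(
    goal="demo",
    constraints=[],
    evaluate=lambda attempt: (attempt >= 3, "needs more work"),
    agent_attempt=lambda goal, constraints, history: len(history) + 1,
)
print(result)  # 3
```

The human's leverage is entirely in `evaluate`: a weak evaluation bar lets a confident demo through, which is the midnight-enthusiasm trap described earlier.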
Which, when I sit with it, is the same thing I wrote in my own book - Do More With Less - about the future of work: we are not doing the work anymore, we are designing the work.
Workmaxxing
Now you too can burn tokens like the token billionaires.
Today, you can download OpenAI’s desktop application - Codex - and use the /goal command. You type /goal followed by a durable objective (i.e. the evaluation criteria for “done”), and Codex will then keep working toward that objective instead of stopping after one normal turn.
This is an example of how a translation app was built using this approach:
At PreScouter, we’re applying this to decision diligence.
Right now, clients come to us with various critical decisions - ranging from scouting technologies for improving their manufacturing processes to determining the feasibility of selling their products into a new country. We put together teams of subject matter experts and analysts, who comb through data, conduct interviews and reach conclusions.
We can imagine a new model where these experts and analysts build “always on” agents for clients - agents that perform as much of the work themselves as possible, with input and assurance from our teams as necessary. The work of the experts and analysts becomes that of designing the arenas: the data to work against, the guidance, the metrics to measure against and such.
Clients can then give these agents higher-level objectives, which the agents use to proactively pull data, perform work and message them - much like a colleague does. Clients can then use these agents to operate their own work factories, doing far more than they thought possible - workmaxxing.
Two months ago, I put together a closed-door demo to share the concept. You can catch the 26-minute recording here:
Next week, we’re partnering with TNO - the Dutch research organization - to share a live update.
If you've been having your own sleepless nights about AI, or been watching your team have theirs, I invite you to join us.
It will give you some insight into the shape of things to come.
Best,

Dino