Your AI Team Will Drown You (If You Let It)

It took five days.

Five days from the moment I gave my AI team its first task to the moment I realised I could not hold it all in my head. Not because the work was bad – it was good. Infrastructure monitoring deployed, photo management running, blog live, site redesigned, backup systems rebuilt, six dashboards configured, three research pieces completed, network audit done, network reconfiguration planned. Seventeen distinct deliverables in five days, each one competent, each one something I had asked for.

And I could not tell you, on the morning of day six, what state half of them were in.

This is the piece I wish I had read before I started. In “Stop Deploying AI Tools,” I argued that you should build your own AI team – that personalisation is what makes AI useful, and you cannot get personalisation from someone else’s product. I still believe that. But there is a critical thing I left out: the system that makes your AI team powerful is the same system that will overwhelm you if you do not build the framework to absorb its output.

You would not build a house without foundations, however good your tools are. A knowledge framework is the foundation. Without it, everything you build sits on sand.

The speed we were warned about

In 1970, Alvin Toffler published Future Shock and described a world where the pace of change would outstrip humans’ ability to adapt. He was not writing about technology specifically – he was writing about the psychological cost of too much change in too short a time. “Future shock,” he wrote, “is the dizzying disorientation brought on by the premature arrival of the future.”[4]

Fifty-six years later, that description fits almost perfectly. Not because the world changed too fast – we adapted to most of it – but because AI teams compress the adaptation cycle into days. What used to take months of gradual adjustment now happens in a working week.

Information overload itself is not new. Bertram Gross coined the term in 1964. The concept goes back to Conrad Gessner in the sixteenth century, who worried about the “unmanageable” volume of books after the printing press. Every communication technology triggers the same cycle: the printing press, the telegraph, the telephone, email, social media. Each one increased the volume of information humans had to process. Each one spawned predictions of cognitive disaster. Each one was eventually absorbed – through new norms, new tools, new frameworks for managing the flow.

But AI teams introduce a qualitative difference. Previous technologies increased the volume of input – more emails from other people, more messages, more documents landing in your inbox. AI teams increase the volume of output that you yourself commissioned. You are drowning not in other people’s communication but in your own team’s production. That is psychologically different. It is harder to ignore or filter, because it is all, nominally, stuff you asked for.

A Fortune article from March 2026 captured the new paradox precisely: “We got 8 hours of work down to 2 hours – but now they give us 20 hours of work.”[5]

The output scales. The human does not. Toffler’s future shock is no longer about society. It is personal.

Your brain is smaller than you think

Most people have heard of George Miller’s “magical number seven” – the idea that human working memory holds about seven items. It is one of the most cited findings in psychology. It is also, by modern standards, too generous.

Nelson Cowan’s research, published across two decades and synthesised in his 2010 paper “The Magical Mystery Four,” presents strong evidence that the true capacity of working memory is approximately four chunks, not seven. Multiple independent studies have corroborated this. The number depends somewhat on what you are counting and how chunks are defined, but the direction is clear: we hold less in our heads than we think we do.[1]

Four chunks. On day five of my AI team’s existence, it had produced seventeen deliverables, each with its own context, dependencies, and open questions. That is more than four times the capacity of working memory. No amount of discipline or note-taking closes that gap through willpower alone.

I know this because I tried.

Before the AI team existed, I was trying to solve the same problem with my eInk notebook. I had a daily page – just today’s tasks, goals, and some ad-hoc notes – and then a meetings section that accumulated knowledge from conversations over time. It was a simple system: one surface for what is happening now, another for what has been learned. A prototype, really, for what this has since evolved into.

It broke down quickly. Not because the method was wrong, but because it was hard to maintain manually, and it was always the first thing to drop when time pressures got severe. The daily page would go blank for a week. The meetings section would fall behind. And once it fell behind, the effort to catch up made it easier to just stop. The insight from that experience is important: the framework has to maintain itself, or it will not survive contact with a busy week.

Three flavours of overwhelm

John Sweller’s Cognitive Load Theory, developed from 1988 and refined over three decades, explains why this matters. It identifies three types of cognitive load that compete for your limited working memory:[2]

Intrinsic load is the inherent complexity of the material. Setting up infrastructure monitoring with multiple data sources, alert rules, and dashboards is irreducibly complex. You can manage this through expertise and chunking, but you cannot make it simple.

Extraneous load is the load imposed by poor organisation. If the output of seventeen projects is dumped into a flat list of files with no index, no connections, and no summary – that is extraneous load. It makes the material harder to process than it needs to be.

Germane load is the productive work of understanding things – seeing how the monitoring connects to the backup strategy connects to the network reconfiguration. This is the cognitive work you want to be doing.

Here is the key: total cognitive load must not exceed working memory capacity. If intrinsic load is high (because the AI team produces complex, varied output) and extraneous load is also high (because nobody organises it), there is no capacity left for germane load. No capacity to understand what was built, how it connects, or what to do next.
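
One way to see the arithmetic – a toy model, where the additive framing and the numbers are my illustration rather than Sweller’s formalism:

```python
# Toy model of Sweller's three loads as a budget against fixed capacity.
# The additive assumption and the numbers are illustrative, not from the paper.
WORKING_MEMORY_CAPACITY = 4  # chunks, per Cowan

def germane_capacity(intrinsic: float, extraneous: float) -> float:
    """What is left for actual understanding once the other loads are paid."""
    return max(0.0, WORKING_MEMORY_CAPACITY - intrinsic - extraneous)

# Complex output (high intrinsic) arriving unorganised (high extraneous):
print(germane_capacity(intrinsic=3, extraneous=2))  # 0.0 - busy, not thinking
```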

You are busy. You are not thinking.

The “brain fry” data

In March 2026, BCG published a study of 1,488 US workers examining the cognitive effects of AI tool oversight. The findings were stark. Workers with high AI oversight loads experienced:[3]

  • 14% more mental effort at work
  • 12% more mental fatigue
  • 19% more information overload
  • 33% higher decision fatigue
  • 11% more minor errors and 39% more major errors
  • 34% active intention to quit (compared to 25% among unaffected workers)

The threshold was clear: productivity collapsed when workers used four or more AI tools simultaneously. Three or fewer was the sweet spot. Julie Bedard, the study’s author, summarised it: “Things were moving too fast, and they didn’t have the cognitive ability to process all the information.”

Now, I should be honest about what this study does and does not say. It surveyed workers using corporate AI tools – people who were, in many cases, assigned tools rather than choosing them, and who had limited control over how those tools integrated into their work. The headline finding was that only 14% of workers reported being significantly affected. For most people using a chatbot to draft emails or summarise documents, cognitive overload is not the problem.

But that is not what this piece is about. This is about what happens when you go deeper – when you build a system of multiple AI agents producing substantial, varied output at speed. When you cross that threshold from “AI as assistant” to “AI as team,” the BCG data becomes directly relevant. And the three-or-fewer-tools finding is not an argument against building a larger system. It is an argument for building the framework that keeps your effective cognitive load within manageable limits, even when fifteen agents are working behind the scenes.

The BCG study also reveals something important about oversight itself. If every piece of AI output requires your review before it can be trusted, you have not reduced cognitive load – you have added a layer of it. The more agents you run, the more oversight you need, and the faster you hit the wall. Any serious framework has to address this directly: you need to minimise the human oversight burden through multi-layer review and automation, so that what reaches you has already been checked, challenged, and refined. Without that, you are not managing AI. You are babysitting it.

The system thinks, not just the brain

Edwin Hutchins spent years studying navigation teams aboard US Navy ships in the early 1990s. What he found reshaped how cognitive scientists think about thinking. Plotting a course, he observed, was “not the work of any single mind, but of a system that included people, artifacts – charts, compasses, instruments – and shared procedures.” Cognition was not happening inside one head. It was distributed across the entire system.[6]

This is not a metaphor for what an AI team does. It is a literal description. The knowledge base holds what the human cannot remember. The task backlog sequences what the human cannot track. The daily digest compresses what the human cannot review. One agent surfaces connections the human would not have spotted. Another verifies claims the human cannot check at volume. The human provides direction, judgement, and values. The system as a whole thinks – no single component could do it alone.

Andy Clark and David Chalmers pushed this further in their 1998 paper “The Extended Mind.” They asked: where does the mind stop and the rest of the world begin? Their answer: when external tools are used fluently and reliably – when they are trusted, accessible, and integrated into cognitive processing – they become part of the cognitive system. The notebook of a person with memory loss is not used by their mind. It is part of their mind.[7]

A knowledge base that is reliably updated, searchable, and trusted becomes part of how you think. It is not a filing cabinet you occasionally visit. It is cognitive infrastructure – as essential to your thinking as the notes you took in university, except it never loses a page and it cross-references itself.

The scaffold, not the crutch

There is a reasonable objection here: does offloading all this cognitive work to a framework make you dependent? Does it atrophy the very capabilities it is meant to support?

A 2025 paper in Communications Psychology addressed this directly. The researchers drew a crucial distinction between AI as scaffolding – which empowers the human to develop capability – and AI as substitution – which replaces human capability and fosters dependency. Their conclusion: “Whether a technology scaffolds or substitutes depends less on its technical sophistication than on its design philosophy, integration context, and patterns of use.”[8]

This distinction matters. The knowledge framework I am describing is scaffolding. It augments human cognitive capacity without replacing human judgement, direction, or understanding. I still read the research, make the decisions, set the priorities, and evaluate the quality. The framework handles storage, retrieval, compression, and connection – the tasks that exceed human working memory. But the thinking remains mine.

The design has structural defences against passive consumption. The quality review loop means another agent challenges work before it reaches me, but I still make the call. The daily digest requires my engagement – it surfaces decisions that need making, not decisions already made. I am not a passenger. I am the person who decides where the system goes.

David Allen put it best, years before AI teams existed: “Your mind is for having ideas, not holding them.”[9] The knowledge framework holds the ideas. My mind has them.

What the framework is made of

In cognitive science terms, a knowledge framework’s primary job is to minimise extraneous load so that limited working memory can be spent on germane load – the meaningful work of understanding and directing. Here is what that looks like in practice.

A knowledge base. A persistent, searchable, growing store of what you know. Every research finding, every decision, every connection filed as a discrete entry. This is the extended mind in practice – trusted, fluent, always available. Without it, every session starts from zero. With it, the system accumulates intelligence over time. The principle borrowed from Luhmann’s Zettelkasten method is useful here: notes should be atomic (one idea per entry) and connected (linked to related entries with explicit meaning), because connections compound into insight more reliably than collections do. But the connections themselves have to carry context. As the Zettelkasten community puts it: “to collect connections without an explicit intention, captured meaning, or statement of relevance is not knowledge production, and as a habit, it is even counter-productive.”[10] A link without a reason is just clutter.
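
To make “atomic and searchable” concrete, here is a minimal sketch – the `Note` shape, the field names, and the naive search are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Note:
    id: str
    claim: str      # one idea per entry - atomic
    source: str     # where the idea came from
    tags: list[str]

def search(notes: list[Note], term: str) -> list[Note]:
    """Naive scan; a real store would index. The contract is what matters:
    anything filed is retrievable, so nothing has to live in your head."""
    t = term.lower()
    return [n for n in notes
            if t in n.claim.lower() or any(t in tag.lower() for tag in n.tags)]

kb = [
    Note("2026-03-04-alerting", "Alert on symptoms, not causes.",
         "monitoring research", ["monitoring", "alerts"]),
    Note("2026-03-05-restore", "A backup is untested until a restore is tested.",
         "backup rebuild notes", ["backups", "resilience"]),
]
print([n.id for n in search(kb, "backup")])  # ['2026-03-05-restore']
```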

Task tracking connected to goals. A prioritised backlog is not enough – it has to be connected to what you are trying to achieve. This is pure GTD with a Covey filter: externalise your commitments into a trusted system, but make sure every task connects to an objective, not just an urgent request. Stephen Covey’s distinction between urgent and important is critical here: without the connection to goals, your AI team will happily generate an endless stream of urgent-seeming tasks that keep you busy without moving you forward.[9b] At AI speed, this is not a productivity hack. It is how you stop the system from generating sophisticated busywork.
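
In data terms, “connected to goals” can be as simple as refusing tasks that serve no objective. A sketch, with `objective_id` and the example tasks being my invention rather than anything from GTD or Covey’s books:

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    objective_id: str | None  # the goal this serves; None if none identified
    urgent: bool

def accept(task: Task) -> bool:
    """The Covey filter: urgency alone does not earn a place in the backlog."""
    return task.objective_id is not None

proposed = [
    Task("Tweak dashboard colours", objective_id=None, urgent=True),
    Task("Test a full backup restore", objective_id="resilient-infrastructure",
         urgent=False),
]
backlog = [t for t in proposed if accept(t)]  # keeps only the restore test
```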

A daily digest. Automated compression of everything that happened. In cognitive load terms, this converts high-extraneous-load output – dozens of completed tasks, new findings, open questions, blocked items – into a low-extraneous-load summary that preserves germane load. The human reviews a handful of items in the morning briefing, not forty. This is not summarisation. It is cognitive load management by design.
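
The shape of the operation matters more than the implementation: many events in, a handful of decisions out. A sketch, with the event records invented for illustration:

```python
from collections import Counter

def digest(events: list[dict]) -> str:
    """Compress a day of agent activity into a short briefing: completed
    work is counted, not listed; only items needing a human decision
    survive compression intact."""
    counts = Counter(e["kind"] for e in events)
    lines = [f"{counts['completed']} completed, {counts['finding']} new findings, "
             f"{counts['blocked']} blocked."]
    lines += [f"DECIDE: {e['summary']}" for e in events
              if e["kind"] == "needs_decision"]
    return "\n".join(lines)

print(digest([
    {"kind": "completed", "summary": "Monitoring dashboards deployed"},
    {"kind": "finding", "summary": "Backup restore takes 40 minutes"},
    {"kind": "needs_decision", "summary": "Approve VLAN split before Friday?"},
]))
```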

Connections with context. Curated and inferred relationships between knowledge entries – but only when those relationships carry explicit meaning. A transcript from a meeting about network changes gets linked to a separate research piece on security best practices, and the link explains why they are related. The interconnections between knowledge are the real power, but those connections have to have context in themselves. A web of links without meaning is just a more sophisticated mess.
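
One way to enforce that, continuing the hypothetical shapes from the sketches above – the point is that the rule is structural, not a matter of discipline:

```python
from dataclasses import dataclass

@dataclass
class Connection:
    source_id: str
    target_id: str
    reason: str  # the explicit meaning the Zettelkasten advice demands

def connect(source_id: str, target_id: str, reason: str) -> Connection:
    """Refuse reason-less links: a link without a reason is just clutter."""
    if not reason.strip():
        raise ValueError("A connection must say why the entries are related.")
    return Connection(source_id, target_id, reason)

link = connect(
    "network-change-meeting",
    "security-best-practices-research",
    reason="The proposed VLAN split implements the segmentation this research recommends.",
)
```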

Quality review. Independent verification before output reaches the human. This is not just about catching errors – it directly addresses the BCG finding about oversight load. If you have to personally verify every piece of AI output, your cognitive load scales linearly with your team’s productivity. That defeats the purpose. A quality review layer means that what reaches you has already been challenged and refined. The human engages on high-value judgements – direction, priorities, decisions that require experience and values – not on checking whether a source was cited correctly.
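
As a pipeline stage, the contract is small. In this sketch, `reviewer` and `revise` stand in for whatever independent agents or checks you run; what matters is that nothing skips the gate:

```python
from typing import Callable

Review = tuple[bool, str]  # (approved, feedback)

def review_gate(draft: str,
                reviewer: Callable[[str], Review],
                revise: Callable[[str, str], str],
                max_rounds: int = 2) -> str | None:
    """Challenge and refine output before it reaches the human. Returns
    approved text, or None - unapproved work goes back to the producing
    agent, never into the human's queue."""
    for _ in range(max_rounds):
        approved, feedback = reviewer(draft)
        if approved:
            return draft
        draft = revise(draft, feedback)
    return None
```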

These five components map cleanly onto the cognitive science. The knowledge base is the extended mind. Task tracking is GTD. The daily digest is cognitive load compression. Connections are distributed cognition. Quality review is both decision fatigue mitigation and oversight load reduction. None of them are novel ideas. What is novel is that AI agents can maintain all of them at machine speed, removing the human maintenance bottleneck that killed every previous generation of knowledge management – and, in my case, killed the eInk notebook method within weeks.

That is what the framework makes possible in practice. The daily digest compresses everything into one briefing. The knowledge base is one searchable place. The task backlog is one prioritised list. Fifteen agents, three surfaces.

The paradox

Here is the feedback loop:

AI makes you more productive: you ship seventeen things in five days instead of one. More output means more things to remember, more decisions to make, more connections to track. The load exceeds working memory capacity – four items, not seventeen – and exceeded capacity means decision fatigue, errors, lost context, the feeling of drowning.

If you build the framework, this loop becomes virtuous. Extraneous load is minimised, germane load is preserved, the human can direct even more effectively, and each cycle adds to the knowledge base, making the next cycle easier. It compounds.

If you do not build the framework, the loop breaks. Cognitive overload, chaos, abandonment. The BCG study found exactly this pattern: workers using four or more AI tools without adequate structure saw productivity collapse. The framework is the difference between compounding capability and compounding chaos.

The very acceleration that makes AI teams valuable is what makes them unsustainable without a knowledge framework. The framework is not a nice-to-have. It is the structural condition for the system’s viability.

Where to start

The natural instinct is to build capabilities first and organise later. I know this because it was my instinct. It is wrong, or at least it is wrong at AI speed.

If you are building a personal AI team, the knowledge framework should be among the first three things you build:

  1. A knowledge base – because without it, every session starts from zero and nothing compounds.
  2. Task tracking connected to goals – because without it, priorities drift and work duplicates within days.
  3. A daily digest – because without it, the human falls behind the system within forty-eight hours.

Then add connections with context, quality review, and persistent memory. They make the framework more powerful. But those three are the minimum for sustainability.

Vannevar Bush imagined something like this in 1945. In “As We May Think,” he described the memex – a device for storing, linking, and retrieving personal knowledge through associative trails. He wanted to “transform an information explosion into a knowledge explosion.” Eighty years later, we have the tools to build what he imagined. The question is whether we will also build the framework that makes it usable.[11]

I have spent twenty-five years building things – for clients, for organisations, for myself. What I have learned in the last few months is that building is not enough. You have to build the thing that lets you keep building.

The knowledge framework is not the last thing you add. It is the first.


Sources

[1] Cowan, N. (2010). “The Magical Mystery Four: How Is Working Memory Capacity Limited, and Why?” Current Directions in Psychological Science, 19(1), 51-57. Available at PMC. See also Miller, G. A. (1956). “The Magical Number Seven, Plus or Minus Two.” Psychological Review, 63(2), 81-97.

[2] Sweller, J. (1988). “Cognitive Load During Problem Solving: Effects on Learning.” Cognitive Science, 12(2), 257-285. For a summary: The Decision Lab, Cognitive Load Theory.

[3] BCG, “When Using AI Leads to Brain Fry” (March 2026). Published in Harvard Business Review. Survey of 1,488 US workers. See also Fortune coverage.

[4] Toffler, A. (1970). Future Shock. Gross, B. (1964). The Managing of Organizations. For historical context: Wikipedia, Information Overload.

[5] Fortune, “The AI Productivity Paradox” (March 2026).

[6] Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press. See also Hutchins, “The Distributed Cognition Perspective on Human Interaction”.

[7] Clark, A. & Chalmers, D. (1998). “The Extended Mind.” Analysis, 58(1), 7-19. Available at PhilPapers.

[8] “Cognitive Offloading or Cognitive Overload? How AI Alters the Mental Architecture of Coping” (2025). Communications Psychology. Available at PMC.

[9] Allen, D. (2001). Getting Things Done: The Art of Stress-Free Productivity. See gettingthingsdone.com.

[9b] Covey, S. R., Merrill, A. R., & Merrill, R. R. (1994). First Things First. For the urgent/important matrix, see also Covey, S. R. (1989). The 7 Habits of Highly Effective People.

[10] Luhmann’s Zettelkasten method. See zettelkasten.de/introduction. The quoted passage on connections is from the same source.

[11] Bush, V. (1945). “As We May Think.” The Atlantic, July 1945. Available at W3C archive.