Developers using AI felt 24% faster

They were 19% slower

Perceived
+24%
Actual
−19%
43-point gap

METR 2025 — n=16 experienced developers, mature codebases

You spend less time writing and more time checking. The AI writes fast; finding bugs in plausible-but-wrong code is slow.

What makes collaboration work

Blaurock et al. analyzed 106 studies asking: what makes AI collaboration work? Three factors showed strong effects:

Transparency: strong effect
Process control: strongest effect
Engagement: negative effect

When the AI asked questions, frequent users performed worse

"Show your work. Let them steer. Don't interrupt with questions."

— Synthesized from Blaurock et al. (2024), Journal of Service Research

Cognitive extensions

Clark & Chalmers (1998), "The Extended Mind"

Otto uses a notebook to remember addresses
Inga uses biological memory

If we call Inga's process "remembering," we should call Otto's the same.

The notebook is part of Otto's mind.

The parity principle: if a process, done in the head, would count as cognition, then an external process that is functionally equivalent counts as cognitive extension.

This is why we call them cognitive extensions, not tools.

The question isn't "is AI helpful?" but "what kind of mind are you building?"

Three ways AI extends capability

Complementary

Human role: Learns, guides, improves
Outcome: Better with and without AI

Constitutive

Human role: Learns, guides new capability
Outcome: Does what was impossible alone

Substitutive

Human role: Passively consumes
Outcome: Skills atrophy

The distinction isn't what task you're doing.
It's how you're doing it.

Proof: Design determines outcome

Researchers gave students the same AI tutor. The only difference was how access to it was designed.

Unrestricted access
−17%
exam performance
Guided step-by-step
~0%
no significant harm

Same AI. Same students. Design made the difference.

Bastani et al. (2025), PNAS — randomized controlled trial, n=1,000+

Why these factors matter

Why do transparency and control matter? One study found a mechanism.

Students who treated AI as a learning tool, something to question and learn from, maintained critical thinking. Students who treated it as an answer machine did not. The difference: 35.7x more likely to retain those critical-thinking skills.

This comes from ACU Research Bank — not a top journal yet. But the 35.7x effect is huge, and it matches what validated studies predict. Treat as suggestive, not proven.

Your mindset may matter more than the model.

Orientation | Behavior | Outcome
Mastery | Views AI as scaffold, questions output | Protected
Performance | Views AI as oracle, accepts output | At risk

ACU Research Bank (2025) — effect requires replication

Design principles

These principles guide how we build cognitive extensions.

  1. Collaborative agency

     Both human and AI retain agency. You see the reasoning. You can redirect. You stay in control.

  2. Bidirectional learning

     The human grows, not just consumes. Each interaction leaves you more capable, not more dependent.

  3. Transparent abstractions

     Extensions are readable text, not black boxes. You can fork them, modify them, understand them.

  4. Compounding value

     Each solution makes the next one easier. What you learn today becomes a foundation for tomorrow.

Foundations compound

Complementary: capability compounds
Substitutive: atrophy compounds

Every interaction either builds capability or erodes it. Small differences compound.

These extensions are designed for the upper path.