The through-line across all five of these: AI is an organizational technology, not exclusively a productivity tool. It changes who talks to whom, what expertise is worth, where the boundaries of the firm sit, what teams are for, and what collaboration feels like. Whether it makes us more connected (INSEAD) or less dependent on each other (P&G) might come down to how we build and ground the tools. Shannon Mattern would remind us not to confuse the resulting legibility with understanding. The map is getting really, really good. It's still not the territory.
Company as Code

Daniel Rothmann proposes treating your entire organizational structure as version-controlled code: a company manifest showing who reports to whom, what policies apply where, and which roles carry which accountabilities, written in a domain-specific language that is diff-able, branchable, and queryable. Want to model what happens if you merge two departments? Spin up a branch, run the impact analysis, and merge when ready.
I wrote more about this last week, but the short version is: this idea has tons of precedent (GlassFrog, Team Topologies' TeamAPI, the whole DAO movement, Open Policy Agent) and almost zero adoption. That gap is the real question! It's still hard to get leaders to use Tableau; expressing reporting relationships in a programming language is a different kind of behavior change. (Is it also an ngmi thing?)
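Rothmann sketches the idea rather than a concrete syntax, so here's one hypothetical shape for such a manifest, in plain Python dataclasses instead of a purpose-built DSL (every role and policy name below is invented):

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    reports_to: str | None                                   # parent role, None at the top
    accountabilities: list[str] = field(default_factory=list)
    data_domains: list[str] = field(default_factory=list)    # e.g. "customer_data"

@dataclass
class Policy:
    name: str
    applies_to: list[str]                                     # role names the policy binds

# The "company manifest": a text file under version control,
# so any proposed reorg is just a branch and a diff.
manifest = {
    "roles": [
        Role("CEO", None, ["strategy"]),
        Role("Head of Support", "CEO", ["escalations"], ["customer_data"]),
        Role("Data Engineer", "CEO", ["pipelines"], ["customer_data", "billing"]),
    ],
    "policies": [
        Policy("PII handling", applies_to=["Head of Support", "Data Engineer"]),
    ],
}
```

Nothing here is fancy; the point is that a structure this dull to read is trivially easy to diff, branch, and query.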
If your org structure is machine-readable, an LLM can reason about it. "Show me every role that touches customer data." "What happens to escalations if we merge these two teams?" The combination of structured org data plus AI querying is powerful, and it connects directly to Transparency and Rule of Law. Most organizations run on a fog of half-remembered org announcements and outdated wiki pages. An all-seeing Logbook would fix that.
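Continuing the toy manifest from the sketch above, the queries in that paragraph become a few lines each; in practice the LLM's contribution is translating the natural-language question into this kind of lookup, not doing anything magical:

```python
def roles_touching(manifest: dict, domain: str) -> list[str]:
    """Names of roles whose declared data domains include `domain`."""
    return [r.name for r in manifest["roles"] if domain in r.data_domains]

def chain_of_command(manifest: dict, role_name: str) -> list[str]:
    """Walk reports_to links upward from a role to the top of the org."""
    by_name = {r.name: r for r in manifest["roles"]}
    chain = []
    while role_name is not None:
        chain.append(role_name)
        role_name = by_name[role_name].reports_to
    return chain

# "Show me every role that touches customer data"
print(roles_touching(manifest, "customer_data"))    # ['Head of Support', 'Data Engineer']
print(chain_of_command(manifest, "Data Engineer"))  # ['Data Engineer', 'CEO']
```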
But should we push this metaphor all the way? Because...
A City is Not a Computer

Shannon Mattern's book is a corrective to the exact impulse that makes "company as code" so seductive. She's writing about cities, but it's basically the same thing: technofuturists treat complex human systems as engineering problems and import assumptions about what counts as knowledge. Citizens become users, residents become rows in a database.
My favorite chapter is about libraries. Where dashboards watch, libraries serve. Where digital twins model behavior from above, libraries grow knowledge from below. Libraries are built around care even at the cost of efficiency: librarians who can interpret a vague question, community rooms where strangers meet, collections no algorithm would assemble. Her alternative to the computer metaphor is gardening-adjacent, about branching from what already exists, systems that layer and grow together imperfectly over time. She points toward maintenance and repair as values worth designing for.
What might that "care infrastructure" look like inside a company? Probably office hours with no agenda, Slack channels where "I don't know" is a normal answer, documentation maintained as a commons.
So yes, make organizations more legible and inspectable. And remember that the code will never capture the whole system, that the informal network always extends beyond the written org chart, and that some of the most valuable infrastructure in any organization is the stuff that looks like inefficiency from a dashboard.
OK so that's the philosophy. What's AI actually doing to the social fabric of work? Two new field experiments, two very different answers.
The Cybernetic Teammate

This is the latest from the Dell'Acqua/Mollick crew, and the headline finding is striking: individuals working with AI matched the performance of two-person teams working without it.
776 P&G professionals were randomly assigned to work on real product innovation challenges, either alone or in cross-functional pairs (one R&D, one Commercial), with or without GPT-4. Individuals with AI (+0.37 SD) performed at essentially the same level as human teams without AI (+0.24 SD). Teams with AI scored only slightly higher on average (+0.39 SD), but were far more likely to produce breakthrough outcomes (keep reading).
Even more interesting is what happened to expertise boundaries. Without AI, R&D people proposed technical solutions and Commercial people proposed commercial ones: functional silos doing what functional silos do. With AI, that distinction vanished. Both groups produced balanced, cross-functional solutions regardless of background. AI broke the silo without requiring the team.
The emotional findings are wild: people reported more positive emotions (excitement, energy, enthusiasm) working with AI than working alone, matching the emotional lift typically associated with having a human teammate. The researchers call this the "cybernetic teammate" effect, after Norbert Wiener. AI is starting to provide some of what we've always said only teamwork provides—performance, expertise sharing, and social engagement. All three pillars, from one interface.
This raises an uncomfortable org design question: if AI can substitute for the benefits of teamwork, what's the justification for teams? The paper's answer, which I think is right, is that teams + AI still produce the truly exceptional outcomes (they were THREE TIMES more likely to land in the top decile). At least for now, breakthrough performance still needs humans and AI together.
Now here's where it gets interesting, because a different experiment found something nearly opposite...
The Impact of Generative AI Adoption on Organizational Networks

Where the P&G experiment asks "can AI replace a teammate?", this randomized controlled trial goes after "what happens to the org chart when you give people AI?" and gets a surprisingly social answer: people talked to each other more, not less.
The study covered 316 employees across 42 teams in a European tech services firm. Half got a RAG-powered GenAI assistant grounded in the company's knowledge base. Half didn't. Three months later, collaboration ties had jumped (+7.77 degree centrality vs. +1.12 for control). Knowledge-sharing ties showed similar gains. The network visualizations are striking: treatment-group nodes go from scattered clusters to a dense, interconnected mesh.
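For readers who don't live in network analysis: degree centrality here is simply the count of a person's collaboration ties, and the reported change is the before/after difference per person. A minimal sketch of that measurement with networkx, using invented names and a toy pair of survey waves:

```python
import networkx as nx

# Toy collaboration networks at two survey waves; names and ties are made up.
before = nx.Graph([("Ana", "Ben"), ("Ana", "Caro")])
after = nx.Graph([("Ana", "Ben"), ("Ana", "Caro"), ("Ana", "Dev"),
                  ("Ana", "Ela"), ("Ben", "Dev"), ("Caro", "Ela")])

def degree_change(before: nx.Graph, after: nx.Graph) -> dict[str, int]:
    """Per-person change in tie count between the two survey waves."""
    return {n: after.degree(n) - before.degree(n) if n in before else after.degree(n)
            for n in after.nodes}

print(degree_change(before, after))
# {'Ana': 2, 'Ben': 1, 'Caro': 1, 'Dev': 2, 'Ela': 2}
```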
Specialists became knowledge magnets. Technical experts saw the biggest jump in being sought out for knowledge. The AI made deep expertise more valuable, not less: it helped people find and access the right expert faster. That's basically what Guilds are supposed to do, achieved through tooling rather than structure.

Generalists shipped more. Sales staff completed roughly 28% more projects. The AI handled enough coordination overhead that integrators could actually integrate (Expanded Available Power in action).
So how do you reconcile this with P&G? I think the difference is context grounding. P&G used generic GPT-4. INSEAD used a RAG system embedded in the firm's own knowledge base—CRM data, internal docs, meeting recordings. Generic AI might substitute for teammates; grounded AI might make teammates more valuable by lowering the cost of finding and consulting them. The tool you build determines the social structure you get.
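To make "grounded" concrete, here's a deliberately minimal retrieval-augmented sketch; the documents, authors, and the llm() stub are invented stand-ins, not any particular vendor's API. The detail that matters is the citation trail: a grounded answer names the colleague behind the source, which is one plausible mechanism for why the tool pushed people toward experts rather than away from them.

```python
# Minimal RAG shape: retrieve from the firm's own knowledge base, then generate.
KNOWLEDGE_BASE = [
    {"text": "Renewal pricing for enterprise accounts is owned by the pricing desk.",
     "author": "Priya"},
    {"text": "The CRM export runs nightly; schema changes go through data engineering.",
     "author": "Marco"},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Crude keyword-overlap retrieval standing in for embedding search."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda d: len(words & set(d["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def llm(prompt: str) -> str:
    """Stand-in for a model call; a real system would send `prompt` to an LLM."""
    return f"[model answer grounded in: {prompt[:60]}...]"

def answer(query: str) -> str:
    docs = retrieve(query)
    # The context carries both the internal facts and the colleagues who own them,
    # so a grounded answer doubles as a pointer to the right expert.
    context = "\n".join(f"{d['text']} (source: {d['author']})" for d in docs)
    return llm(f"Context:\n{context}\n\nQuestion: {query}")

print(answer("Who owns renewal pricing for enterprise accounts?"))
```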
The unresolved tension in the INSEAD paper: knowledge In-Degree and project output were negatively correlated. The people everyone consults aren't the ones shipping the most projects. If specialists become knowledge magnets, do they eventually drown?
When it Starts Feeling Like a Video Game

I built three functional things in a weekend with Claude as my teammate—a research survey, a team charter tool, a book promotion page—and it felt like playing a video game. The bad friction was gone. The space between "I want this to exist" and "it exists" collapsed to almost nothing.
This reminds me of Jane McGonigal's framework for why people play, or put in purely economic terms, why people do things for free: to accomplish satisfying work, to spend time with people you like, to get good at something, to be part of something bigger. AI hits three of four. The second one, "spend time with people you like," is where it gets weird. IMO/IME the best teammates have always been the ones whose easy-mode is your hard-mode, and Claude's easy-mode is expansive. But it does not give a shit about my easy-mode. It does not notice whether I'm good at having opinions about what should exist. That asymmetry is what makes it work and what makes it not quite collaboration.
This is basically Coase's Law playing out in real time. When the cost of building drops below the cost of buying, the buy market collapses. A lot of mid-tier SaaS exists not because the idea is hard, but because the building was hard. When the building takes a weekend, that value proposition evaporates.


