Key Takeaways
- AI writing tools cut email drafting time by 40%, but research shows AI-composed messages damage perceived manager credibility and authenticity, and the efficiency gain comes at a trust cost most organizations aren't measuring.
- Atlassian found 87% of staff waste an average of five hours per week clarifying confusing written communication, suggesting the real problem was never writing speed. It was the decision to write at all.
- A Visier survey found 83% of workers engage in at least one form of performative work behavior, with the top behaviors centered on embellishing responsiveness. AI tools are now automating the performance itself.
- Gloria Mark's research at UC Irvine shows average attention spans collapsed from 2.5 minutes in 2004 to 47 seconds by 2020, with email as the primary driver. Adding more polished, AI-generated email to this environment isn't a solution; it's an accelerant.
George Orwell’s 1984 Communication Loop
When machines write what humans won’t read, and humans pretend they did both
A manager has one thought Monday morning: the Q3 rollout needs to move up two weeks. One sentence. But one sentence doesn’t look like leadership. So he opens Copilot, feeds it the bullet point, and watches it bloom into four paragraphs of stakeholder alignment language. He skims it, changes one word, hits send to forty-three people.
Across the building, a program lead sees the email land in an inbox 87 messages deep. She highlights the body and asks Gemini to summarize. The summary: Q3 rollout moving up two weeks. Action needed by Friday. One sentence. The same sentence the manager started with, before AI inflated it into organizational theater neither human wanted.
Tom Fishburne captured this loop in a cartoon that has quietly become one of the most shared workplace observations of the past year. The left panel: “A.I. turns a single bullet point into a long email I can pretend I wrote.” The right panel: “A.I. makes a single bullet point out of a long email I can pretend I read.” It’s funny because it’s happening in every organization with a Microsoft 365 or Google Workspace license. It’s less funny when you realize what it reveals about how work actually functions (or doesn’t) inside most companies.
This isn’t a story about AI doing something wrong. The tools work exactly as designed. The problem is what “as designed” assumes about how organizations communicate, and why nobody is asking whether the communication itself should exist.
The inflation-deflation engine
The data on workplace communication volume has been alarming for years. McKinsey found that knowledge workers spend 28% of their workweek (roughly 11.2 hours, or about 580 hours annually) reading and answering email. That was before generative AI made producing email nearly frictionless. The average knowledge worker now receives between 100 and 150 emails daily, a number that has been climbing steadily since 2019.
AI writing assistants were supposed to help. And by one narrow metric, they did. Research published in early 2025 found that ChatGPT-assisted email drafting cut writing time by approximately 40%. Microsoft reported that Copilot users saved measurable time on composition tasks. Google’s Gemini for Workspace team claimed users saved an average of 105 minutes per week.
But here's the part that doesn't make the vendor keynote: if everyone can write emails 40% faster, the equilibrium response isn't fewer emails. It's more emails, written faster. The friction that once discouraged a manager from sending a four-paragraph update to forty-three people (the twenty minutes it would have taken to write it properly) was load-bearing. It was an informal filter. It forced a question that AI bypasses entirely: is this worth writing?
The same dynamic applies on the receiving end. AI summarization tools don’t reduce the volume of communication; they reduce the perceived cost of ignoring it. When an inbox of 150 emails can be processed in minutes through automated summaries, the organizational signal that “we send too many emails” gets muted. The symptom is treated. The disease progresses.
What we've built, without quite meaning to, is an inflation-deflation engine. AI inflates sparse thoughts into verbose messages on one end. AI deflates verbose messages into sparse summaries on the other. The information content at each terminus is roughly identical. The entire middle layer (the carefully formatted paragraphs, the stakeholder language, the professional tone) is generated by machines, transmitted between servers, and consumed by machines. Humans touch the edges. Machines process the middle. And everyone reports being more productive.
The authenticity tax
If the communication loop were merely wasteful, it would be a cost problem. But emerging research suggests it’s also a trust problem.
A study presented at the ACM Web Science Conference in 2025 examined how recipients perceive AI-composed emails versus human-written ones. The findings were instructive and uncomfortable. AI-generated messages were consistently classified as “polite” because large language models are trained to produce polite outputs. But this uniform politeness created an uncanny flatness. Human-written emails contained varied emotional expression: frustration, enthusiasm, dry humor, bluntness. AI-written emails sounded like they were all authored by the same agreeable middle manager.
More damaging, research from USC Marshall found that when employees detected AI involvement in manager communications (and they detected it more often than managers assumed), it eroded perceived authenticity and trustworthiness. The effect was dose-dependent. Managers who used AI lightly, for polishing or proofreading, were perceived as competent and efficient. Managers who relied heavily on AI to draft relationship-oriented communications, including team congratulations, motivational messages, and performance feedback, were perceived as lazy and uncaring. The message they sent, regardless of its content, was: you weren't worth my actual words.
This creates a particularly cruel irony for HR teams, who produce some of the most relationship-dependent communication in any organization. The all-hands recap. The benefits enrollment reminder. The RIF notification. The DEI update. These are messages where tone, specificity, and perceived human authorship carry real weight. When HR leaders use AI to draft these communications faster, they may be optimizing for the wrong variable entirely. Speed of composition was never the bottleneck for organizational trust.
The research aligns with something practitioners already sense intuitively: the most important workplace communications are important precisely because they require effort. A handwritten note means more than a printed card. A specific piece of feedback means more than a generated summary. When AI removes the effort, it doesn’t just remove the time cost. It removes the signal that effort communicates.
Performative work finds its perfect tool
The communication loop doesn’t exist in a vacuum. It fits neatly into a broader pattern that predates AI by decades: performative work.
A 2023 survey by Visier found that 83% of workers admitted to engaging in at least one performative work behavior in the prior twelve months. The most common behaviors weren’t about faking deliverables or inflating metrics. They were about communication. Embellishing responsiveness. Sending emails at strategic times to signal availability. Crafting unnecessarily detailed updates to demonstrate engagement. The performance of work, rather than the work itself, had become a core workplace competency.
AI tools are now automating this performance. A manager who once spent twenty minutes composing a strategic-sounding update to justify their involvement in a project can now produce the same update in ninety seconds. The performative output is identical. The human cost is lower. And this is exactly the problem: when performative communication becomes cheap to produce, organizations get flooded with it. The ratio of signal to theater shifts further toward theater.
HR technology leaders should recognize this pattern, because they’ve seen it before. It’s the same dynamic that produced dashboard-driven engagement theater. Organizations invested in pulse surveys not because they planned to act on the results, but because measuring engagement felt like addressing engagement. The dashboard became the deliverable. Similarly, AI-composed communications feel like communication. The email becomes evidence that leadership is engaged, aligned, and “looping in stakeholders.” Whether anyone actually reads it, whether it changes a single behavior or decision, is beside the point.
Gartner’s research supports this structural concern. Their 2024 findings showed only 23% of digital workers reported being completely satisfied with their work applications, down from 30% in 2022. More tools. More AI features. Less satisfaction. The problem isn’t the tools. It’s that the tools are being deployed to make broken communication patterns more efficient rather than to question whether those patterns should exist.
The attention economy’s last straw
Every AI-generated email that arrives in an inbox, no matter how well-summarized by another AI, still creates a cognitive event. It still triggers a notification. It still demands a micro-decision: read, skim, summarize, ignore, flag, delegate.
Gloria Mark, a professor of informatics at UC Irvine, has spent two decades studying how digital interruptions fragment attention. Her findings should terrify anyone building AI tools for workplace communication. In 2004, the average time a knowledge worker spent on a single task before switching was two and a half minutes. By 2020, that number had collapsed to 47 seconds. The primary driver of this fragmentation was email and messaging tools. Workers check email 74 to 77 times daily. And critically, Mark’s research found that people are as likely to self-interrupt, switching to email unprompted, as they are to be interrupted externally. The inbox has trained us to check it compulsively.
AI summarization tools address the symptom of this crisis by reducing the time spent processing each individual email. But they don't address the cause: too many messages demanding fragmented attention. A worker who receives 150 emails and AI-summarizes them all still experiences 150 context switches, 150 micro-decisions, 150 moments of broken focus. The summary makes each interruption cheaper. It doesn't make the interruptions stop.
Cal Newport has argued for years that the fundamental architecture of knowledge work, built around constant communication and rapid responsiveness, is hostile to the deep, focused work that actually produces value. AI communication tools don’t challenge this architecture. They reinforce it. They make it easier to participate in the communication stream, easier to produce and consume messages, easier to maintain the appearance of responsiveness that organizations reward. The deep work that Newport describes as a “superpower,” sustained uninterrupted focus on cognitively demanding problems, becomes harder, not easier, when communication is cheaper to produce and cheaper to process. The floor drops out on both sides.
The middle management compression
There’s a structural consequence to the communication loop that HR leaders need to grapple with, because it directly affects organizational design.
Gartner predicted that through 2026, 20% of organizations would use AI to flatten their structures, eliminating more than half of current middle management positions. The logic is straightforward: if AI can summarize upward and distribute downward, the human layer that once performed this translation function becomes redundant. Middle managers have historically served as information routers: synthesizing team updates for leadership, translating strategic direction for teams, contextualizing organizational decisions for their reports.
The communication loop threatens this function directly. When a senior leader can ask AI to summarize a project team's Slack channel, email threads, and status updates into a single brief, the middle manager's synthesis role evaporates. When AI can translate a strategic directive into team-specific action items with contextual framing, the middle manager's translation role evaporates. What remains are the parts of middle management that AI currently handles poorly: coaching, conflict resolution, political navigation, and the judgment calls that require understanding unstated organizational dynamics.
But here’s the risk that Gartner’s prediction underweights: middle managers don’t just route information. They filter it. They decide what’s worth escalating and what isn’t. They add context that only comes from being in the room. They protect their teams from organizational noise. When AI replaces the routing function, it also removes the filtering function. The result isn’t a leaner organization. It’s an organization where every signal reaches every level without human judgment about whether it should.
HR leaders who are planning organizational restructuring around AI capabilities need to think carefully about which middle management functions are truly redundant and which are load-bearing walls that happen to look like overhead. The communication loop makes the routing function look redundant. It tells you nothing about whether the filtering, contextualizing, and shielding functions can be safely removed.
The enterprise vendor playbook
It’s worth noting who benefits from the communication loop, because it isn’t the knowledge workers drowning in AI-generated email.
Every major enterprise software vendor, from Microsoft and Google to Salesforce and ServiceNow, is now selling AI features that simultaneously increase communication output and provide tools to manage the increased volume. Microsoft Copilot writes your emails and summarizes your inbox. Google Gemini drafts your documents and creates your meeting recaps. The product strategy, stripped to its logic, is: we’ll help you create more, and then we’ll help you deal with having created more. The meter runs in both directions.
This isn’t a conspiracy. It’s an incentive structure. Vendors are optimizing for adoption and engagement metrics within their platforms. More emails written in Outlook means more Copilot usage. More documents created in Workspace means more Gemini queries. The communication loop isn’t a bug in the enterprise AI strategy. It’s a feature of the business model.
HR technology leaders need to evaluate AI communication tools not by the efficiency gains they promise on individual tasks, but by their system-level effects on organizational communication volume, attention fragmentation, and trust. A tool that saves each manager fifteen minutes per day on email composition but collectively increases organizational email volume by 30% hasn’t created value. It’s redistributed cost from writers to readers, and then sold readers a tool to manage the cost it created.
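The sender-to-reader cost shift is easy to model. Here is a minimal back-of-envelope sketch; every figure in it is an illustrative assumption, not measured data:

```python
# Toy model of the sender/reader cost shift: writing time saved by managers
# versus reading time added to everyone else by the extra message volume.
# All numbers are hypothetical inputs for illustration.

def net_minutes_per_day(managers, writing_saved_min, staff, emails_per_day,
                        read_min_per_email, volume_increase):
    """Net organizational minutes gained (+) or lost (-) per day."""
    saved = managers * writing_saved_min
    extra_emails = emails_per_day * volume_increase        # additional messages per reader
    added = staff * extra_emails * read_min_per_email      # triage cost of that extra volume
    return saved - added

# 50 managers each save 15 min/day writing; 500 staff each receive 120
# emails/day; volume rises 30%; each extra email costs ~0.5 min to triage.
net = net_minutes_per_day(50, 15, 500, 120, 0.5, 0.30)
print(net)  # -8250.0: a large net loss, redistributed from writers to readers
```

Under these assumed numbers, a 15-minute daily saving per manager is swamped by the reading cost of the extra volume; the model only breaks even if volume stays flat.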
The critical question for any HR or IT team evaluating these tools is: does this reduce the total amount of communication in our organization, or does it reduce the friction of producing communication? These are opposite outcomes wearing the same marketing language.
What Actually Works
The organizations that break the communication loop won’t do it with better AI tools. They’ll do it by challenging the communication norms that the tools are built to serve.
Default to shorter formats. Some organizations have begun enforcing communication norms that eliminate the need for AI inflation entirely. Amazon’s famous six-page memo format works not because six pages is the right length, but because it forces structured thinking before communication. Other companies have experimented with maximum email lengths, required subject-line-only updates for routine decisions, and “no email” blocks for deep work. The point is that communication constraints force better decisions about what’s worth communicating.
Audit communication volume, not just tool adoption. Before deploying AI writing assistants, measure how many emails, messages, and documents your organization produces per employee per week. Deploy the tools. Measure again in ninety days. If volume went up, the tools aren’t solving a problem. They’re accelerating one. Few organizations track this metric, which is exactly why the communication loop persists unchallenged.
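The audit above reduces to one number tracked over time. A minimal sketch, assuming you can export aggregate message counts from your mail and chat platforms (the figures below are hypothetical):

```python
# Volume audit sketch: messages per employee per week, measured before
# deployment and again 90 days after. Input figures are hypothetical.

def messages_per_employee_week(total_messages, employees, weeks):
    return total_messages / employees / weeks

before = messages_per_employee_week(total_messages=240_000, employees=400, weeks=4)
after = messages_per_employee_week(total_messages=312_000, employees=400, weeks=4)

change = (after - before) / before
print(before, after, change)  # 150.0 195.0 0.3 -> volume rose 30%
```

If `change` is positive after deployment, the tools are accelerating the problem rather than solving it, which is the signal this audit exists to surface.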
Separate relationship communication from operational communication. The USC Marshall research makes a clear case: AI is appropriate for operational messages (status updates, scheduling, routine notifications) and destructive for relationship messages (recognition, feedback, difficult conversations). HR teams should develop explicit guidance on which communication types are AI-appropriate and which require human authorship. The line matters.
Protect the filtering function. Before eliminating middle management roles on the assumption that AI can handle information routing, identify which managers are primarily routers (AI can probably replace this) and which are primarily filters (AI probably cannot). The difference is whether the manager’s value lies in moving information or in deciding what information moves. These require fundamentally different skills, and only one of them is commoditized by AI.
Measure what gets read, not what gets sent. Most organizations track email and messaging volume from the sender's perspective: how many messages were sent, how quickly they were drafted, how professional they looked. Almost none track from the recipient's perspective: how many messages were actually read, how many drove a decision or action, how many were summarized by AI and never seen by human eyes. The communication loop is invisible from the sender's side. It's painfully obvious from the recipient's.
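Recipient-side measurement needs only a few fields per message. A minimal sketch of the metrics, assuming a hypothetical event log with human-open, AI-summary, and action flags (the log schema and sample records are invented for illustration):

```python
# Recipient-side metrics sketch: what fraction of sent messages were read
# by a human, machine-summarized only, or drove an action.
# The event-log schema and records are hypothetical.

events = [
    {"id": 1, "opened_by_human": True,  "ai_summarized": False, "drove_action": True},
    {"id": 2, "opened_by_human": False, "ai_summarized": True,  "drove_action": False},
    {"id": 3, "opened_by_human": False, "ai_summarized": True,  "drove_action": False},
    {"id": 4, "opened_by_human": True,  "ai_summarized": False, "drove_action": False},
]

sent = len(events)
human_read_rate = sum(e["opened_by_human"] for e in events) / sent
summary_only_rate = sum(e["ai_summarized"] and not e["opened_by_human"]
                        for e in events) / sent
action_rate = sum(e["drove_action"] for e in events) / sent

print(human_read_rate, summary_only_rate, action_rate)  # 0.5 0.5 0.25
```

A high `summary_only_rate` is the ghost in the middle of the loop made visible: messages produced and consumed without a human reading either end.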
The Real Question
The Tom Fishburne cartoon works because it captures a choice most organizations don’t realize they’re making. On one side: AI helps a person inflate a bullet point into an email that performs the act of thoughtful communication. On the other: AI helps a person deflate that email back into the bullet point it always was. The technology on both sides functions perfectly. The communication in the middle is a ghost.
The question facing HR and IT leaders isn’t whether to adopt AI communication tools. That ship sailed when Microsoft bundled Copilot into every enterprise license. The question is whether your organization has the discipline to use these tools to communicate less, more precisely, more intentionally, with higher signal and lower volume, or whether you’ll take the path of least resistance and let AI automate the theater of work that was already consuming a third of your employees’ weeks.
The evidence is clear on which path produces better outcomes. It’s also clear on which path most organizations will take.
The bullet point was always enough. The question is whether anyone is willing to just send it.