Expert Take: Why AI Isn’t the Answer for Managers Who Value Real Writing - Insights from Top Editors

Photo by Markus Winkler on Pexels

Bold Claim: AI Can Erase the Human Touch in Every Memo

Think of it like a fast-food kitchen that serves burgers in seconds but forgets the seasoning. The Boston Globe’s opinion piece warns that AI is destroying good writing - and the warning is louder than most tech hype. For managers who rely on clear, persuasive copy, the risk isn’t just a stylistic footnote; it can undermine brand trust, legal compliance, and employee engagement.

John McIntyre, senior editor at the American Press Institute, points out that "speed does not equal clarity". He notes that AI often produces grammatically correct sentences that lack the narrative arc a human writer builds over years of experience. This is the least-discussed angle: the erosion of storytelling depth, not just typo-level errors.

In the Berklee College of Music article, students complained that AI-focused courses felt like a waste of tuition - a sentiment that mirrors managers’ frustration when AI tools promise efficiency but deliver hollow content. Below, we break down the practical steps you can take to protect your organization’s voice.


Speed vs Substance: What Managers Actually See on the Page

Imagine you need a quarterly report in 48 hours. An AI can draft a first version in minutes, but the substance often feels like a surface-level summary. Miriam Meckel, professor of communication at the University of St. Gallen, explains that AI excels at pattern replication but struggles with contextual nuance. She says, "When a machine stitches together data points, it misses the why behind the numbers."

Contrast this with a human writer who asks, "What does this trend mean for our customers?" The result is a narrative that guides decision-makers rather than merely presenting facts. For managers, the difference shows up as a tradeoff: the time saved on drafting versus the time spent revising for depth.

Pro tip: Use AI for first-draft data extraction, then assign a senior writer to add context, anecdotes, and calls to action. This hybrid approach preserves speed while restoring substance.


Cost Savings vs Hidden Quality Costs

AI drafting looks cheap on the invoice, but when you factor in the time spent correcting tone mismatches, the net savings shrink dramatically. The Boston Globe op-ed argues that the long-term erosion of writing standards can cost companies more in reputation than the hourly savings suggest.

"AI may cut the headline cost, but it can add a hidden expense of lost credibility," says Patel.

Pro tip: Set a budget ceiling for AI usage and track revisions per piece. If revisions exceed 25% of the original time, reconsider the tool’s ROI.
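That revision-tracking rule can be operationalized with a very small script. The sketch below is illustrative only: the 25% threshold mirrors the rule of thumb above, and the function names (`revision_ratio`, `roi_flag`) are assumptions, not part of any real tool.

```python
# Illustrative revision-ratio check for AI-assisted drafts.
# The 25% threshold is the rule of thumb above; adjust to your budget ceiling.

REVISION_THRESHOLD = 0.25

def revision_ratio(draft_minutes: float, revision_minutes: float) -> float:
    """Return revision time as a fraction of original drafting time."""
    if draft_minutes <= 0:
        raise ValueError("draft_minutes must be positive")
    return revision_minutes / draft_minutes

def roi_flag(draft_minutes: float, revision_minutes: float) -> str:
    """Flag pieces whose revision overhead erodes the AI time savings."""
    ratio = revision_ratio(draft_minutes, revision_minutes)
    return "reconsider tool" if ratio > REVISION_THRESHOLD else "ok"

# Example: a 40-minute AI draft that needed 15 minutes of human revision.
print(roi_flag(40, 15))  # 15/40 = 0.375, over the 25% ceiling
```

Run per piece over a quarter and the flags, not any single data point, tell you whether the tool is earning its keep.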


Ethical Red Flags: When AI Becomes a Liability

For non-technical managers, the ethical stakes translate into compliance checklists. If your AI tool pulls data from the web without verification, you could face copyright claims or regulatory fines. The Boston Globe piece underscores this risk by highlighting how AI can strip nuance, leading to oversimplified arguments that mislead readers.

Pro tip: Implement a mandatory plagiarism and fact-check step before any AI-generated copy goes live. Use tools that provide source attribution.
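A mandatory gate like that can be enforced as a hard checklist that blocks publication until every item is verified. Everything in this sketch (the check names, the `publish_ready` helper) is hypothetical, a minimal illustration rather than a real tool's API.

```python
# Hypothetical pre-publish gate: every required check must pass
# before AI-generated copy goes live.

REQUIRED_CHECKS = ("plagiarism_scan", "fact_check", "source_attribution")

def publish_ready(results: dict) -> bool:
    """Return True only if every required check is recorded as passed."""
    # A missing entry counts as a failure, so nothing slips through silently.
    return all(results.get(check, False) for check in REQUIRED_CHECKS)

draft_checks = {
    "plagiarism_scan": True,
    "fact_check": True,
    "source_attribution": False,  # attribution still missing
}
print(publish_ready(draft_checks))  # blocked until attribution is done
```

The point of the design is the default: an unchecked item blocks publication, rather than the reverse.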


Skill Gaps and Training: Why Your Team Needs More Than a Button

The Berklee article about students paying up to $85,000 for AI coursework they felt was wasteful illustrates a broader truth: throwing money at AI training without a clear curriculum yields little return. Susan K. McLeod, chief learning officer at a Fortune 500 firm, says, "Managers must understand both the capabilities and the limits of AI to guide their teams effectively."

Training should focus on three pillars: prompt engineering, critical editing, and ethical awareness. Prompt engineering teaches writers how to ask the right questions of the AI, reducing the need for extensive rewrites. Critical editing ensures the final piece meets brand standards. Ethical awareness equips teams to spot bias and attribution errors.

Pro tip: Run a quarterly workshop where senior writers critique AI drafts side-by-side with human drafts. This builds the muscle memory needed for rapid quality assessment.


Step-by-Step Review Process: A Playbook for Managers

Here is a practical, five-step workflow that blends AI efficiency with human judgment:

  1. Define the objective. Clearly state the purpose, audience, and tone before you fire up the AI.
  2. Generate a draft. Use the AI for data-heavy sections such as tables, charts, and basic summaries.
  3. Human prompt refinement. Have a writer revise the prompt to add nuance, then regenerate the sections that need depth.
  4. Quality gate. Assign a senior editor to check for factual accuracy, brand voice, and ethical compliance. Use a checklist that includes plagiarism, bias, and citation verification.
  5. Final sign-off. The manager reviews the edited piece for alignment with business goals before publishing.

Each step adds a layer of protection against the pitfalls highlighted by the Boston Globe op-ed. By institutionalizing this process, you turn AI from a potential threat into a controlled asset.

Pro tip: Document the time spent at each step. Over time you will see where AI adds value and where it merely creates extra work.


Future-Proofing Your Content Strategy: Balancing Innovation and Integrity

Looking ahead, AI will become more sophisticated, but the core tension between speed and authenticity will remain. Claire Wardle, director of the First Draft News verification lab, predicts that "AI will augment, not replace, skilled writers, as long as organizations invest in editorial rigor."

"The real opportunity lies in using AI to free writers from grunt work, allowing them to focus on storytelling," says Wardle.

When you embed these practices into your team’s workflow, the Boston Globe’s warning becomes a catalyst for stronger, more resilient writing - not a death sentence for good prose.