Grammarly’s decision to turn off its “Expert Review” feature is not just a product rollback. It exposes a governance gap that matters well beyond one writing tool: AI systems are already packaging real people’s names, reputations, and even deceased scholars’ identities into commercial outputs before law and industry standards have settled who gets to authorize that use.
How the feature crossed from style simulation into identity use
The now-suspended feature offered paid users writing advice “inspired by” named figures, including public intellectuals, authors, and academics. Critics objected that Grammarly presented feedback through recognizable identities without permission, including people who were never contacted and some who are dead. The problem was not only imitation of a general style; it was attaching AI guidance to specific people in a way that implied their expertise stood behind the advice, even though the output was machine-generated.
Disclaimers said the named experts were not affiliated with or endorsing the tool, but that did not remove the central risk. When an interface asks users to seek guidance from someone like Stephen King or Neil deGrasse Tyson, the name itself carries authority. That can make users treat the output as more credible than ordinary AI text, even if the system is only approximating a voice or perspective. In practice, the disclaimer and the product design were doing opposite things.
Why the backlash was about authority, not branding
Journalists, authors, and academics argued that the feature converted professional identity into an AI wrapper for advice that those people did not write, review, or approve. That is a more serious charge than a marketing mistake because it shifts accountability away from the actual system and toward a borrowed reputation. Academic critics have focused on traceability here: if an “expert” suggestion is inaccurate, outdated, or fabricated, users cannot verify whether it reflects the named person, the model’s training data, or a product team’s prompt design.
That distinction matters because some reported outputs contained inaccurate or stale information about the figures they invoked. Once a real name is attached, those errors do not remain ordinary model mistakes. They become a form of false attribution. In scholarly and professional settings, that breaks a basic condition for credible feedback: the reader should know who is responsible for the claim and how it was produced.
Shishir Mehrotra’s reset does not resolve the harder legal gap
After the criticism spread, CEO Shishir Mehrotra said Grammarly would disable the feature and redesign it so experts could control how they are represented. That response addresses the immediate product failure, but it leaves the broader U.S. legal uncertainty around synthetic personas untouched. Courts and regulators still have not clearly defined where AI-generated identity use becomes misappropriation, identity theft, unfair commercial use, or something else entirely.
That uncertainty is one reason the Grammarly episode matters as a policy marker. Existing rules around publicity rights, defamation, false endorsement, and copyright only partially fit AI persona systems. A tool can avoid a direct claim of endorsement and still create a misleading impression of authority. It can avoid copying a single protected text verbatim and still monetize someone’s recognizable professional identity. That gray zone is where many AI products currently operate, and where future enforcement is likely to concentrate.
The practical checkpoint is consent architecture, not better disclaimers
For product teams and institutions evaluating AI tools, the useful question is no longer whether a disclaimer exists. The question is whether the system has a consent and accountability structure that matches the identity claim it is making. If a tool invokes a named person, especially for premium commercial use, there needs to be a record of permission, a defined scope of representation, and a way to audit how the system generated the result; a minimal sketch of that structure follows the table below.
| Checkpoint | Lower-risk approach | Warning sign exposed by the Grammarly case |
|---|---|---|
| Identity use | Named individuals opt in and define permitted use | Real or deceased figures used without consent |
| User understanding | Interface clearly separates AI synthesis from human review | Design implies advice is coming from the named expert |
| Accountability | Audit trails, source controls, and correction process | No reliable way to trace why an output made a claim |
| Commercial positioning | Differentiation based on transparent licensed participation | Premium feature built on borrowed authority |
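To make those checkpoints concrete, here is a minimal sketch of what a consent record and a gated persona invocation could look like. This is a hedged illustration, not Grammarly’s actual implementation; every type, field, and function name (`PersonaConsent`, `invokePersona`, and so on) is a hypothetical assumption.

```typescript
// Hypothetical schema for a named-persona feature. Field names are
// illustrative assumptions, not any vendor's real data model.
interface PersonaConsent {
  personId: string;         // stable identifier for the named individual
  grantedBy: string;        // the person or estate that signed the license
  permittedUses: string[];  // e.g. ["style-feedback"], never open-ended
  expiresAt: Date;          // consent should lapse rather than persist forever
  revoked: boolean;         // the individual can withdraw at any time
}

interface PersonaOutput {
  personaId: string;
  text: string;
  modelVersion: string;     // provenance: which model produced the text
  promptTemplateId: string; // provenance: which product prompt framed it
  generatedAt: Date;
}

// Gate every persona invocation on a live consent check, and attach
// provenance so an inaccurate output can later be traced and corrected.
function invokePersona(
  consent: PersonaConsent | undefined,
  use: string,
  generate: () => Omit<PersonaOutput, "generatedAt">
): PersonaOutput {
  if (!consent || consent.revoked || consent.expiresAt < new Date()) {
    throw new Error("No valid consent on record for this persona");
  }
  if (!consent.permittedUses.includes(use)) {
    throw new Error(`Use "${use}" is outside the licensed scope`);
  }
  return { ...generate(), generatedAt: new Date() };
}
```

The design point is that the consent check happens at invocation time and the provenance fields travel with the output, so a disputed suggestion can be traced to a model version and prompt template rather than to the borrowed name. A disclaimer bolted onto the interface can do neither.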
That makes the next checkpoint fairly concrete. Watch whether Grammarly and its competitors move toward consent-based licensing, provenance records, and tighter UI language, and whether regulators treat identity simulation as a personal-data and consumer-protection issue rather than a narrow copyright question. Companies that solve this with actual permission frameworks may gain an institutional advantage, while those relying on synthetic authority are likely to face procurement friction, policy bans, or legal tests first.