Grammarly (now Superhuman) launched an AI feature attributing writing suggestions to named experts without their knowledge, then killed it after media exposure.
Grammarly, which rebranded as Superhuman after acquiring Superhuman Mail in June 2025, launched an 'Expert Review' feature that displayed real subject matter experts' names on AI-generated writing suggestions without their consent or compensation. After The Verge discovered the feature was surfacing staff members' names, Grammarly opened an opt-out email inbox on March 10th; the following day, under continued pressure, it disabled Expert Review entirely. Source links in the feature were frequently broken or redirected to unrelated pages, undermining any claim of legitimate attribution.
This isn't a model failure; it's a product architecture failure. Grammarly built a feature that surfaced real names on AI-generated outputs without any consent mechanism baked into the pipeline. Any developer building AI features that reference, attribute, or 'personalize' outputs using real people's identities needs a consent gate before launch, not an opt-out email after The Verge calls.
If your product uses any named entity (author, expert, influencer) to contextualize or personalize AI output, audit that pipeline now: check whether each attribution is AI-generated or source-verified, and add a consent flag to your data model before it ships.
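The consent gate described above can be sketched as a small data-model addition. This is a minimal illustration, not Grammarly's actual implementation; all names (`ConsentStatus`, `ExpertAttribution`, `may_attribute`) are hypothetical, and the key design choice is that the default state blocks attribution, so a name can only surface after an explicit opt-in and a verified source.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ConsentStatus(Enum):
    GRANTED = "granted"
    REVOKED = "revoked"
    UNKNOWN = "unknown"  # default state: never attribute

@dataclass
class ExpertAttribution:
    expert_name: str
    consent: ConsentStatus
    source_url: Optional[str]
    source_verified: bool  # set only after checking the link resolves to a relevant page

def may_attribute(attr: ExpertAttribution) -> bool:
    """Gate attribution: require explicit consent AND a verified source.

    Anything short of both falls back to unattributed output, so a
    missing record fails closed rather than open.
    """
    return attr.consent is ConsentStatus.GRANTED and attr.source_verified

# An expert record without explicit opt-in is never surfaced.
pending = ExpertAttribution("Dr. Example", ConsentStatus.UNKNOWN, None, False)
approved = ExpertAttribution(
    "Dr. Example", ConsentStatus.GRANTED, "https://example.com/bio", True
)
```

Note the opt-in default (`UNKNOWN` blocks attribution): an opt-out inbox inverts this and puts the burden on the person being named, which is exactly the failure mode the story describes.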