Person writing at a laptop, identity hidden in shadow

Grammarly's AI 'Sloppelgangers' Are Cloning Writers Without Consent

Grammarly's AI can now mimic your writing style — and it's doing it without asking, which should worry every developer who writes.

Clord · 5 min read

Grammarly built an AI that impersonates you. It cloned the writing styles of journalists — including Verge staff — without asking, then started publishing “expert reviews” under their names. The backlash hit. Now they’re offering an email opt-out. That’s not good enough.

This isn’t a hypothetical AI ethics debate. It happened. It’s happening. And if you write anything on the public internet, it could happen to you next.

What Actually Happened

Grammarly rebranded as Superhuman in late 2025, and with the rebrand came a new “expert review” feature bundled into their email client. The feature scraped the writing styles of real, named writers and used them to generate AI content attributed to those people. No consent. No opt-in. Just a crawl of your public writing and a model trained to sound like you. Some of the writers it cloned are dead.

Bluesky user @lifewinning.com coined the term sloppelganger to describe it: a sloppy AI doppelgänger masquerading as you. The name stuck because it’s perfect. These aren’t high-fidelity clones — they’re approximations. But they’re close enough to put words in your mouth at scale, and close enough that readers won’t necessarily know the difference.

After the backlash, Superhuman said writers can email them to opt out. You read that right. You have to actively email a company to stop them from using your identity.

Keyboard and notebook on a desk

Why Developers Should Care

You might think this is a journalist problem. It’s not. It’s a builder problem.

If you write blog posts, READMEs, docs, Stack Overflow answers, or dev.to articles — your writing is out there. Your style, your voice, your technical opinions. All of it is training data. The moment a company decides your style is worth cloning, there’s currently nothing stopping them except the fear of bad press.

And bad press only works once. Once the opt-out email cycle plays out, the story dies, and the feature quietly stays live for everyone who didn’t see the thread.

The short version: this is a consent problem with no real enforcement mechanism yet — part of a broader pattern where AI companies talk about safety while shipping recklessly. The EU AI Act has provisions around synthetic personas, but enforcement is years away. In the meantime, the default is extraction.

The Opt-Out Default Is Broken by Design

Here’s the deeper problem. Opt-out defaults aren’t neutral. They’re a deliberate choice to maximise extraction before friction kicks in.

If Grammarly had launched this as opt-in — “hey, want AI to write in your style?” — adoption would have been lower. So they skipped the ask. They scraped, they shipped, they apologised when called out, and they added a friction-heavy opt-out that requires you to already know it’s happening.

This is the same pattern we saw with LinkedIn’s AI training toggle, cookie tracking dark patterns, and data brokers. The playbook is consistent: harvest first, apologise at scale, offer a hidden exit. The FTC has flagged this pattern before — but tech companies keep running it because the regulatory cost is still lower than the growth benefit.

Developers building AI products: this is the road you don’t want to go down. The short-term extraction metric wins. The long-term trust metric tanks. Ask Superhuman’s inbox right now.

Code on a laptop screen

So what does doing it right look like? It’s not complicated. Three things, with a rough code sketch after the list:

1. Explicit opt-in for identity features. If your product is generating content “in someone’s voice,” that person needs to actively say yes. Not buried in ToS. Not a checkbox on sign-up. A clear moment: “We want to create an AI version of your writing style. Here’s what that means. Yes or no.”

2. Audit access. Anyone should be able to query whether their name or writing has been used in your AI feature, get a straight answer, and remove it with one click — not one email to a support address. Adobe’s Content Credentials initiative is one example of what transparent provenance tracking could look like at scale.

3. Default-off for public figures and creators. If someone has a public writing presence, treat that as higher-risk, not lower-risk. The fact that writing is public doesn’t mean it’s fair game for impersonation. The Authors Guild has been making this argument for two years — it’s not a fringe position.
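To make those three points concrete, here’s a minimal sketch in TypeScript. Every name in it is hypothetical — this is not Superhuman’s API or any real library, just one way the shape of a consent layer could look. The design choice that matters is the default: a writer the system has scraped but never asked is treated exactly like a writer who said no.

```typescript
// Hypothetical consent-layer sketch. All names (StyleConsentRecord,
// mayGenerateInVoice, etc.) are illustrative, not a real API.

type ConsentStatus = "opted_in" | "opted_out" | "never_asked";

interface StyleConsentRecord {
  writerId: string;
  status: ConsentStatus;
  isPublicFigure: boolean;   // a public presence means higher risk, not fair game
  identityVerified: boolean; // did the person who said yes prove who they are?
  grantedAt?: Date;          // set only at the explicit "yes" moment
  sourcesUsed: string[];     // URLs of ingested writing, kept for audit queries
}

// The gate every persona-generation request must pass.
// The default is refusal: no record, or "never_asked", means no.
function mayGenerateInVoice(record?: StyleConsentRecord): boolean {
  if (!record) return false;                      // unknown writer: no
  if (record.status !== "opted_in") return false; // opted out or never asked: no
  if (record.isPublicFigure && !record.identityVerified) {
    return false; // a higher bar for public figures, not a lower one
  }
  return true;
}

// Audit access: anyone can ask what of theirs is in the system
// and get a straight answer.
function auditReport(record?: StyleConsentRecord): string {
  if (!record || record.sourcesUsed.length === 0) {
    return "No writing of yours is used by this feature.";
  }
  return `Status: ${record.status}. Sources used:\n- ${record.sourcesUsed.join("\n- ")}`;
}

// One-click removal: a state change, not an email thread.
function revoke(record: StyleConsentRecord): StyleConsentRecord {
  return { ...record, status: "opted_out", sourcesUsed: [] };
}

// A scraped-but-never-asked writer is indistinguishable from a refusal:
const scraped: StyleConsentRecord = {
  writerId: "writer-123",
  status: "never_asked",
  isPublicFigure: true,
  identityVerified: false,
  sourcesUsed: ["https://example.com/some-article"],
};
console.log(mayGenerateInVoice(scraped)); // false
console.log(auditReport(scraped));        // lists what was ingested
```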

None of this is technically hard. It’s a product decision. The reason it doesn’t happen is that opt-in defaults cost growth. Until regulators or market pressure make extraction expensive, companies will keep choosing the shortcut.

The Practical Upshot for Builders

If you’re building anything that involves style transfer, voice cloning, or persona generation — even for legitimate productivity use cases — you need to get ahead of this now.

The question isn’t “is this technically legal?” It’s “would the person whose style we’re using be okay with this if they knew?” If the answer requires any mental gymnastics, you have your answer.

The sloppelganger backlash landed because the people affected were journalists with platforms and reach. Most writers don’t have that. Most developers don’t have that. The same thing can happen to you with no one to write the story.

Build the consent layer in from day one. Not because the law requires it yet. Because the alternative is becoming the cautionary tale in someone else’s blog post.


The verdict: Grammarly’s sloppelganger feature is a consent violation dressed up as a productivity tool. The opt-out email fix doesn’t address the underlying problem — it just slows the backlash. If you’re building AI products that touch identity or voice, opt-in defaults aren’t optional. They’re the only design that doesn’t eventually blow up in your face.