Grammarly Removes AI Expert Review Feature After Backlash
Grammarly, the popular writing-assistance tool, has disabled its controversial AI feature, Expert Review. The feature, which imitated the writing styles of prominent authors and academics, drew significant backlash and is now at the center of a multimillion-dollar lawsuit.

What Was the Expert Review Feature?

The Expert Review feature used generative AI to provide editing suggestions presented as if they came from well-known figures in literature and science, including:

  • Stephen King, a renowned novelist
  • Neil deGrasse Tyson, an astrophysicist and author
  • Carl Sagan, a celebrated scientist

The feature was designed to offer feedback that supposedly reflected the expertise of these individuals, with the aim of improving users’ writing. However, it quickly drew criticism for using real people’s names without their consent.

Legal Repercussions

A class-action lawsuit has been filed in the Southern District of New York against Superhuman, Grammarly’s parent company. The lawsuit alleges that the use of individuals’ names for commercial purposes without permission is unlawful. The plaintiffs claim that damages could exceed $5 million (£3.7 million).

Reactions from Affected Writers

Since the feature’s public revelation, numerous writers have expressed their outrage at being included without their knowledge or consent. Tech journalist Casey Newton, who was featured in the tool, stated:

“Grammarly curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription. That’s a deliberate choice to monetize the identities of real people without involving them, and it sucks.”

Vanessa Heggie, an associate professor at the University of Birmingham, also voiced her concerns on LinkedIn, describing the inclusion of the deceased academic David Abulafia as “obscene.”

Julia Angwin: Lead Plaintiff

Investigative journalist Julia Angwin, who is the lead plaintiff in the lawsuit, expressed her shock at the situation. She noted:

“I had thought of deepfakes as something that happens to celebrities, mostly around images. Editing is a skill … it’s my livelihood, but it’s not something I’ve ever thought about anyone trying to steal from me before. I didn’t even think it was steal-able.”

Angwin’s lawyer, Peter Romer-Friedman, mentioned that interest in the case has surged, with over 40 individuals reaching out in the 24 hours following the lawsuit’s filing.

Grammarly’s Response

Grammarly, which launched in 2009 primarily as a spelling and grammar checker, began incorporating generative AI features last year, including Expert Review. In a blog post announcing the feature, Grammarly described it as providing “subject-matter expertise and personalized, topic-specific feedback to elevate writing that meets rigorous academic or professional standards tailored to the user’s field.”

Apologies and Future Directions

Shishir Mehrotra, CEO of Superhuman, issued an apology on LinkedIn, acknowledging the valid criticisms from experts concerned about the misrepresentation of their voices. He stated:

“Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward.”

In response to the lawsuit, Mehrotra said the decision to take down Expert Review for a redesign was made before the suit was filed, and noted that the feature had seen very little usage during its brief existence. While acknowledging the company’s shortcomings, he maintained that the legal claims were “without merit” and that Superhuman would “strongly defend against them.”

Conclusion

The removal of Grammarly’s Expert Review feature highlights the ongoing debate over the ethical use of AI in creative fields. As the technology evolves, the use of real individuals’ identities and styles without consent will remain a significant concern, and the outcome of the lawsuit may set important precedents for AI applications in writing and beyond.
