Grammarly Unveils ‘Expert’ AI Reviews Featuring Insights from Iconic Authors—Living and Deceased

Do you ever wish you could get guidance from your favorite authors or professors, even those no longer with us? A new tool claims to make that possible. Grammarly, recently rebranded as Superhuman, has introduced a feature that provides feedback on your writing by mimicking both living and deceased literary figures, without their consent.

Originally built as a grammar and spell-checker, Grammarly has evolved to incorporate advanced AI features. CEO Shishir Mehrotra announced the rebranding in October, framing it as growth into a more comprehensive suite of writing tools. Despite the rebrand, the core product still operates under the Grammarly name.

The expanded Grammarly platform now offers a range of AI features: chatbots that answer questions during drafting, a paraphraser for style changes, a “humanizer” to adjust voice, an AI grader that predicts a document’s score, and tools to smooth phrases that might sound AI-generated. One of the newest offerings, however, is particularly concerning: an “expert review” feature that claims to provide critiques inspired by renowned writers and scholars.

Small disclaimers clarify that the individuals referenced were not involved in the process and do not endorse the reviews. Users can request feedback from virtual representations of contemporary figures such as Stephen King as well as historical icons such as William Zinsser or Carl Sagan. This kind of content harvesting raises serious ethical and legal questions, since many of these figures never consented to having their works parsed by an algorithm.

Grammarly’s marketing says the “Expert Review” feature analyzes a piece of writing and surfaces advice inspired by notable figures matched to its subject matter. Critics counter that this reduces genuine scholarly input to algorithmic mimicry, stripping away the agency of living thinkers and the legacies of those who have died.

Such adoption of AI, particularly in the humanities, invites skepticism. Critics, including academics like Vanessa Heggie, have condemned the approach as exploitative, noting that it trades on the names and reputations of writers without their permission. Other scholars worry that the practice undermines the integrity of academic feedback, giving students a distorted picture of what scholarly engagement actually involves.

The effectiveness of these AI features is also in question: existing tools such as the plagiarism detector have reportedly missed obvious problems in user documents. As reliance on AI-generated content grows, educators face the added challenge of policing academic dishonesty among students.

The prospect of students turning to an app for feedback from hollow avatars of their heroes raises ethical dilemmas about the future of education. It may also foreshadow a shift away from traditional teaching and toward an increasingly automated academic environment.

In summary, while the ability to garner insights from the likes of King or Zinsser may seem appealing, the ethical implications and potential for misuse highlight a troubling trend in the evolution of writing assistance technology. The challenge remains: How do we balance technological advancement with the preservation of intellectual integrity?
