Are there generative AI tools I can use that are perhaps slightly more ethical than others?
The short answer is no. The ethics surrounding generative AI hinge on the way these models are developed, particularly the legitimacy of the data used for training them and their ongoing environmental impact. The development of generative AI tools demands an immense amount of data, and the methods used to gather this data often lack transparency and raise ethical questions.
Numerous creators—ranging from authors to social media users—have voiced concerns about their content being harvested without their consent to train AI systems. Advocates of AI often counter that obtaining consent from every creator would be prohibitively complex and would stifle innovation. Even in cases where companies have established licensing agreements with publishers, this "clean" data represents only a small fraction of the vast datasets that these AI models rely on.
While some developers are exploring ways to compensate creators for their work being used in training models, these initiatives are typically small-scale compared to the dominant players in the industry.
Additionally, the environmental toll of running generative AI systems is significant. These tools consume far more energy than non-generative alternatives, which means that using a chatbot as a research assistant can contribute more to the climate crisis than a simple web search would.
How do we make AI wiser and more ethical rather than smarter and more powerful?
This is a crucial question that many in the AI development community are grappling with. Companies like Anthropic are trying to embed core values into their AIs through novel strategies, such as the "constitutional" approach used with the Claude chatbot.
Part of the confusion here lies in the terminology we use to describe AI software. Recent models are touted for their "reasoning" capabilities, but these AIs do not possess thoughts or understanding as humans do; words like "reasoning" are loose shorthand for how the underlying algorithms process data. The ethical considerations surrounding AI outputs always circle back to the inputs—specifically, the intentions behind user interactions and the biases present in the training data.
Ultimately, fostering ethical development practices and responsible user engagement matters more than simply trying to make AI systems themselves wiser.