In 2018, thousands of Google employees successfully pressured the company to withdraw from a controversial artificial intelligence contract with the Pentagon, prompting the tech giant to pledge that it would not weaponize its AI. That moment of employee activism inspired a new generation of tech advocates in Silicon Valley. But the landscape has shifted dramatically over the past seven years: Google recently revised its AI ethics policies, walking back bans on some previously prohibited uses of AI, even as the industry races to launch ever more powerful tools.
A new report from the AI Now Institute highlights the consolidation of power in the AI sector among a handful of dominant companies and urges advocacy groups to tie AI issues to broader economic concerns, such as job stability and the changing nature of work itself. Many traditional career paths are already feeling the effects of AI integration, with disruptions spreading across industries from software development to education.
Activists see an opening to challenge the prevailing narrative that AI-driven job losses are inevitable, especially as political dynamics shift. With some Republicans positioning themselves as champions of the working class, there is potential for collaboration across party lines, even though certain factions within the party resist increased AI regulation.
The report cites cases in which worker organizations have halted AI deployments or won safeguards. One notable example is National Nurses United, a union that protested the use of AI in healthcare and conducted surveys indicating that the technology could undermine clinical judgment and endanger patient safety, prompting hospitals to adopt oversight measures for AI practices.
Sarah Myers West, AI Now's co-executive director, emphasizes that the current rush to embed AI everywhere risks handing tech companies unprecedented power, with stakes that go beyond profit. The implications extend to fundamental social and political transformation, underscoring the need for a new approach to understanding AI harms.
The report also voices concern about the limited effectiveness of regulatory efforts to date. Investigations into AI companies have multiplied, but few have translated into meaningful enforcement actions, and comprehensive legislation, such as a national digital privacy law, remains absent. Amba Kak, AI Now's other co-executive director, stresses the importance of building grassroots power around AI issues as they affect people's material, real-world lives, rather than letting them remain abstract concerns.
The shift in focus, according to the authors, is not about the merits of specific technologies like ChatGPT but about questioning the unaccountable power corporations hold in the AI space. The aim is to keep AI's societal impacts consistently at the forefront of public debate rather than letting them be sidelined by isolated evaluations of individual tools.