by Amy Ivanov in Technology

Rachael Sawyer, a technical writer, discovered that her new job at Google wasn't what she expected. Instead of writing, she rates and moderates AI-generated content, including violent and sexually explicit material. Her experience is common among the thousands of contractors who work for Google through firms such as GlobalLogic and Accenture, tasked with ensuring the safety and accuracy of Google's AI products, including Gemini. These workers, many of them highly educated, are paid far less than Google's engineers and face intense pressure to meet tight deadlines, leading to anxiety and burnout. They are given little information about what the AI will be used for, and the guidelines they work under change frequently.

The article highlights concerns about the quality of the AI, the potential for bias in the rating process, and the loosening of safety guidelines, which has allowed hate speech and other harmful content to slip through. Workers describe disillusionment with the lack of support, the ethical compromises they are asked to make, and the absence of transparency. The article concludes that AI is not magic: it relies heavily on the often-overlooked labor of these human 'AI raters'.