• Krista Pawloski, who works on Amazon Mechanical Turk, was once labeling tweets for racism and nearly let one through because she did not recognize the slang slur "mooncricket." The experience showed her how much human error can slip through the moderation chain, and she has since banned generative AI in her family.
  • Many other AI raters (workers who evaluate AI output) told the Guardian that they likewise avoid using AI and warn their relatives away from it. One Google rater was asked to evaluate medical answers despite having no medical training; she has banned her 10-year-old from using chatbots because children lack the critical-thinking skills to question the answers.
  • Google stated that ratings are just one of many aggregated signals and that mechanisms exist to protect quality; Amazon said Mechanical Turk lets workers choose their own tasks.
  • Media expert Alex Mahadevan commented that when the very workers behind AI do not trust it, the pressure to launch quickly is clearly outweighing safety; raters' feedback is easily ignored.
  • Brook Hansen, who has done this work since 2010, said raters often receive vague instructions, minimal training, and tight deadlines: signs that companies prioritize speed and profit over quality and ethics.
  • According to NewsGuard, the share of chatbot responses that refused to answer dropped sharply from 31% in August 2024 to 0% in August 2025, while the rate of repeating false information rose from 18% to 35%: the models have grown more confident but less accurate.
  • A Google rater shared that questions about Palestinian history were consistently refused while equivalent questions about Israel were answered in full. He reported the pattern, but nothing was done. It reinforces the "garbage in, garbage out" principle: incorrect or missing training data produces a biased model that is difficult to correct afterward.
  • Many workers advise avoiding AI-integrated phones, withholding personal data, and delaying software updates that add AI features.
  • AI labor researchers argue that the public is often "charmed" by AI because it never sees the teams collecting data, rating outputs, and filtering content, nor the systems' limitations; insiders, by contrast, see a fragile system that depends on humans and is riddled with compromises.
  • When Pawloski and Hansen presented at an education conference in Michigan, their account of AI's environmental cost, hidden labor, and data bias shocked many attendees, though some still defended AI as a promising technology.
  • Pawloski compared the AI industry to the textile industry: as long as consumers do not see the cheap labor and terrible working conditions behind a product, they rarely question it; only when they learn the truth do they begin to demand transparency and change.

📌 Many people who evaluate AI output become deeply skeptical of it after witnessing its errors, its biases, and the signs that companies prioritize speed over safety. The rate at which chatbots repeat false information rose to 35% by August 2025, pointing to a risk of widespread misinformation. The workers' warning to the public: AI is only as good as the data it is fed, and the invisible labor behind it is easily ignored. They urge people to question AI's data sources, ethics, and labor conditions in order to drive change.
