• Research shows that users tend to trust AI chatbots such as ChatGPT, Gemini, or Claude even when their answers are incorrect.
  • According to an earlier BBC study, AI chatbots give incorrect answers in up to 45% of cases.
  • In an experiment, more than 50% of participants chose to use ChatGPT even when not required.
  • The phenomenon of “cognitive surrender” occurs when users let AI make decisions for them.
  • In an experiment with 359 participants, people followed the AI's advice 92.7% of the time when it was correct.
  • Alarmingly, they also followed the AI's advice 79.8% of the time when it was incorrect.
  • Users not only follow AI's answers but also become more confident in those answers.
  • AI is causing humans to engage in “outsourcing thinking” — transferring the act of thinking to machines.
  • This leads to a decline in critical thinking skills and the ability to verify information.
  • As AI integrates deeper into life, the level of dependency will continue to increase.
  • The risk comes not only from smarter AI but from humans becoming more dependent.
  • A comparison with other technologies: just as air conditioning and vehicles reduce physical exertion, AI can reduce mental exertion.

📌 Conclusion: Research reveals a worrying reality: nearly 80% of people still trust and follow AI even when it is wrong, reflecting the phenomenon of “cognitive surrender.” With an error rate of up to 45%, AI is not yet absolutely reliable, but it is gradually replacing human thought processes. If this trend continues, humans may lose the capacity for critical thinking — a core skill for decision-making and survival in the AI era.


© 2026 Vietmetric