• A study of more than 1,300 Americans (ages 18–84) found that most readers do not suspect personal messages of being AI-generated, even if they use AI regularly themselves.
  • The experiment divided participants into four groups: unknown source, known human author, known AI author, or uncertain origin.
  • When informed that a message was written by AI, readers rated it more negatively, using terms like “lazy,” “insincere,” or “low effort.”
  • Conversely, the same content was rated as “sincere,” “thoughtful,” and “grateful” when thought to be written by a human.
  • Notably, when source information was absent, readers defaulted to believing the writer was human and gave similarly positive ratings.
  • Frequent AI users were no better at detecting AI-written messages; knowing that AI was used only slightly softened their negative ratings.
  • This “AI disclosure penalty” means that admitting AI use costs a message credibility, while undisclosed use goes undetected.
  • This creates an ethical paradox: honesty is penalized, while concealment provides an advantage.
  • This trend may cause recruiters to devalue cover letters and shift toward assessments via personal relationships or face-to-face meetings.

📌 The research shows that humans are nearly “blind” to AI-generated content: across more than 1,300 participants, readers defaulted to believing texts were human-written. Yet once AI authorship is revealed, ratings drop sharply, creating a paradox between honesty and benefit. This may change how society values written communication, eroding trust in text and increasing the role of face-to-face interaction in work and life.

VIET NAM CONSULTING AND MEASUREMENT JOINT STOCK COMPANY

© 2026 Vietmetric