- A study involving 27,491 people shows that written works labeled as AI-generated are consistently rated lower, even when the content is identical.
- The research comprised 16 separate studies covering poetry, short stories, and a range of styles, yet the result was consistent: AI-labeled work is "penalized."
- When readers know a text was written by AI, they give lower ratings for quality, creativity, and overall enjoyment.
- This effect is independent of the content, genre, or perspective (first person, third person, etc.).
- Even when readers evaluate texts on logical, objective criteria rather than emotional ones, AI-labeled work is still rated lower.
- “Humanizing” AI (giving it a name or describing it with emotions) or explaining its high capabilities does not reduce the bias.
- Readers do not report "mixed feelings"; they simply rate the work more negatively once its AI origin is known.
- Even when told that a human used AI only as a supporting tool, readers score the work as low as if it had been written entirely by AI.
- The main cause is a perceived “lack of authenticity” in AI-generated products.
- The effect is highly robust and difficult to shift, despite a range of attempted psychological interventions.
📌 Even if AI can write indistinguishably from a human, human psychology remains deeply attached to the value of "authenticity." Across more than 27,000 participants and 16 experiments, merely knowing that a text originated from AI produced a significant drop in ratings. This suggests the major barrier for AI is not technology alone but social perception: people still prize creativity with a human touch over machine output.
