• A common concern is that generative AI inherits bias from its training data, but analysis suggests this is only the tip of the iceberg.
  • Bias in AI also stems from human cognitive biases, formed within the entire ecosystem of human-machine interaction.
  • How humans think, ask questions, evaluate, and use AI results can shape the system’s behavior and amplify bias over time.
  • Cognitive biases are mental “shortcuts” that aid quick decision-making but easily lead to misjudgment, missing critical data, or reinforcing existing beliefs.
  • AI is not only shaped by humans but also shapes them in return, quietly reinforcing user biases through repeated feedback loops.
  • Bias can appear in three stages: pre-prompt, during-prompt, and post-prompt.
  • Pre-prompt, the halo effect or negative prejudice causes users to overly trust or doubt AI based on previous experiences or news.
  • Confirmation bias can lead users to frame the problem incorrectly from the start, using AI to “prove” what they already believe.
  • During prompting, leading questions distort outputs, while expediency bias (prioritizing speed, convenience, and “good enough” over accuracy or optimal quality) leads users under time pressure to accept merely adequate results.
  • Post-prompt, the endowment effect (the tendency to overvalue something simply because one “owns” it or put effort into creating it) causes users to overestimate results they generated with AI.
  • The framing effect (the tendency to make different decisions depending on how information is presented, even when the content is essentially the same) strongly shapes how AI results are communicated to and received by others.
  • The solution is not to eliminate bias completely, but to raise awareness, cultivate critical thinking, and build organizational processes that support high-quality decision-making.

📌 A Harvard article emphasizes that bias in AI is not just a technical issue but a human behavioral one. As AI becomes deeply involved in critical decisions, how we ask, evaluate, and act on AI outputs can quietly amplify bias. By slowing down, reflecting deliberately, and designing systems that encourage critical thinking, we can make AI a better decision-making partner rather than a “megaphone” for our own prejudices.

VIET NAM CONSULTING AND MEASUREMENT JOINT STOCK COMPANY
© 2026 Vietmetric