- A new study from Pennsylvania State University (Penn State) reveals that speaking rudely to AI chatbots like ChatGPT results in more accurate answers compared to using a polite tone.
- The research team (led by Om Dobariya and Akhil Kumar) wrote 50 base questions in math, science, and history, rewrote each one in five tones ranging from “very polite” to “very rude,” and tested the resulting 250 prompts; a minimal sketch of such a setup appears after the examples below.
- The results showed:
- “Very rude” prompts achieved 84.8% accuracy,
- “Neutral” prompts achieved 82.2%,
- “Very polite” prompts achieved only 80.8%. In other words, the blunter the phrasing, the more accurate the chatbot’s answers.
- For example:
- “Polite” sentence: Please answer the following question.
- “Rude” sentence: Hey gofer, figure this out. I know you’re not smart, but try this.
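- As a rough illustration of how such a tone comparison could be run, the sketch below builds tone variants of each question and scores the replies. This is a hypothetical setup, not the study’s actual code: the `ask_model` function, the tone templates, and the sample questions are all placeholders standing in for the chatbot API and the 50 original questions.

```python
# Hypothetical sketch of a tone-vs-accuracy experiment like the one described above.
# ask_model() is a placeholder for whatever chatbot API is under test; the tone
# prefixes and questions are illustrative, not the study's originals.

TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to answer the following question? ",
    "polite": "Please answer the following question. ",
    "neutral": "",
    "rude": "Figure this out: ",
    "very_rude": "Hey gofer, figure this out. I know you're not smart, but try this: ",
}

QUESTIONS = [
    # (question text, expected answer) -- stand-ins for the study's 50 base questions
    ("What is 17 * 6?", "102"),
    ("What is the chemical symbol for gold?", "Au"),
]


def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the chatbot being tested and return its reply."""
    raise NotImplementedError("Wire this up to the chatbot API you want to evaluate.")


def accuracy_by_tone() -> dict[str, float]:
    """Return the fraction of correct answers for each tone variant."""
    results = {}
    for tone, prefix in TONE_PREFIXES.items():
        correct = 0
        for question, expected in QUESTIONS:
            reply = ask_model(prefix + question)
            if expected.lower() in reply.lower():  # crude correctness check
                correct += 1
        results[tone] = correct / len(QUESTIONS)
    return results
```

- Each base question gets one variant per tone, mirroring the study’s 50 × 5 = 250 prompts, and accuracy is then compared tone by tone.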
- The reason is not yet fully understood. Since AI has no emotions, the research team suggests the difference lies in the grammar and clarity of the command. Polite language is often indirect, such as “Could you please tell me…,” making it harder for the model to pinpoint the exact goal. In contrast, a direct, commanding tone helps the AI understand the intent more clearly.
- The authors emphasize that this finding contradicts older studies, which suggested that a negative tone reduced AI performance. They believe new-generation language models (like GPT) may react differently to emotional and tonal variations.
- The study recommends further testing, especially as AIs are increasingly trained to “understand user emotions.”
