- New research shows that large language models like ChatGPT, Claude, Gemini, Grok, Mistral, and DeepSeek tend to give similar strategic advice, leaning toward modern management “buzzwords” rather than analyzing the specific context of a business. This phenomenon is called “trendslop” by researchers.
- The research team examined seven common strategic tensions: exploration vs. exploitation, centralization vs. decentralization, short-term vs. long-term, competition vs. cooperation, disruptive innovation vs. incremental improvement, differentiation vs. commoditization, and automation vs. human augmentation.
- In thousands of simulations, the models consistently landed on the same side of each tension: differentiation, human augmentation with AI, cooperation, long-term thinking, and decentralization. These choices held almost regardless of the business context.
- In more than 15,000 tests with GPT-5, “better prompts” almost never eliminated the bias. Option order was one strong influence: simply swapping the order of the choices shifted the results by about 19%.
- Providing detailed context (a tech startup, a bank, a hospital, a non-profit) reduced the bias by only about 11% and did not eliminate it.
- The cause lies in the training data: LLMs learn from the internet, where concepts like “innovation,” “collaboration,” or “differentiation” are viewed positively, while “commoditization” or “centralization” are often seen as outdated.
- Another risk is the “hybrid trap”: when allowed, AI often suggests combining two conflicting strategies, such as pursuing both differentiation and cost leadership, which traditional strategic theory suggests can leave a business “stuck in the middle.”
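As a rough illustration only (not the researchers' actual harness), the order-swap check described in the fourth bullet can be sketched as follows; `toy_model` is a hypothetical stand-in for a real LLM call:

```python
import random

def order_bias_rate(ask, option_a, option_b, n_trials=100):
    """Fraction of trials in which the model's choice flips when
    the same two options are presented in the opposite order."""
    flips = 0
    for _ in range(n_trials):
        pick_ab = ask(option_a, option_b)  # option_a listed first
        pick_ba = ask(option_b, option_a)  # option_b listed first
        if pick_ab != pick_ba:             # same question, different answer
            flips += 1
    return flips / n_trials

# Stand-in for a real LLM call: a position-biased chooser that picks
# whichever option is listed first 60% of the time.
def toy_model(first, second):
    return first if random.random() < 0.6 else second

random.seed(0)
rate = order_bias_rate(toy_model, "differentiation", "commoditization")
print(f"choice flipped in {rate:.0%} of trials")
```

A real harness would replace `toy_model` with an API call and aggregate over many business scenarios; a flip rate well above zero on identical questions is exactly the kind of order sensitivity the study reports.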
📌 Research indicates that LLMs are not the neutral strategic advisors many leaders take them to be. Across thousands of simulations, and more than 15,000 tests with GPT-5 alone, the models consistently recommended “trendy” strategies such as differentiation, cooperation, and long-term thinking, regardless of the business context. This bias is rooted in the internet data and modern management culture the models learn from. LLMs are therefore best used to generate ideas and analyze options, while final strategic decisions remain the responsibility of humans.

