- South Korea will officially implement the Artificial Intelligence Act starting January 21, 2026, becoming the first nation to establish legal safety requirements for high-performance AI, also known as “Frontier AI.”
- According to the Ministry of Science and ICT, the law aims to promote the growth of the domestic AI industry while establishing minimum safety barriers for emerging technological risks.
- The government emphasized that the law is not a symbolic achievement but a step grounded in what it calls a “fundamental global consensus” on AI safety.
- The law creates a foundation for national AI policy, establishing the Presidential Council on National AI Strategy and the AI Safety Institute to evaluate reliability and safety.
- A broad support package includes R&D, data infrastructure, human resource training, startup support, and international market expansion.
- Businesses will receive a grace period of at least one year, during which the government will conduct no investigations and impose no penalties, offering only consultation and training through the AI Act support desk.
- Regulation is limited to three areas: rules for high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.
- High-impact AI refers to systems that make fully automated decisions in key sectors such as energy, transport, and finance; currently, no domestic services fall under this category.
- Unlike the European Union, which classifies risk by how an AI system is applied, South Korea uses technical thresholds such as total training compute.
- Currently, no domestic or foreign AI models meet the regulatory threshold; enforcement is light-touch, with no criminal penalties.
- Violations are subject only to a maximum administrative fine of 30 million won (≈$20,300 USD), and only when corrective orders are not followed.
- For generative AI, potentially misleading content such as deepfakes must be clearly labeled, while other AI-generated content may instead carry invisible watermarks embedded in metadata; personal, non-commercial use is exempt.
Conclusion: South Korea will officially implement the Artificial Intelligence Act on January 21, 2026, becoming the first country to set legal safety requirements for high-performance AI—also known as Frontier AI—while prioritizing growth and compliance over punishment. The law establishes a national AI policy framework, creates the Presidential Council on National AI Strategy, and founds the AI Safety Institute to conduct reliability assessments. Unlike the EU’s application-based risk approach, South Korea relies on technical thresholds such as total training compute. With a grace period of at least one year and a maximum fine of 30 million won (≈$20,300 USD), the law provides a flexible foundation to protect society while nurturing the AI ecosystem during the current technological boom.
