May 23, 2024

EU Approves Landmark AI Rules Amid Global Safety Pledge 🇪🇺

EU member states have finalized landmark regulations on artificial intelligence (AI) that will govern powerful systems such as OpenAI’s ChatGPT. Following the European Parliament’s approval in March, the “AI Act” will enter into force once it is published in the EU’s Official Journal. The legislation aims to protect citizens from AI’s potential dangers while harnessing its benefits. It imposes stricter obligations on high-risk AI systems and bans uses such as predictive policing and inferences drawn from biometric data. Companies must comply by 2026, with rules for general-purpose AI models like ChatGPT taking effect 12 months after the law enters into force.

At the same time, major AI companies have pledged to develop AI safely at the AI Seoul Summit, a follow-up to the UK’s AI Safety Summit. Co-hosted by South Korea and the UK, the event gathers world leaders to address AI risks and promote the technology’s benefits. Sixteen leading AI companies, including Meta, OpenAI, Google, Amazon, and Microsoft, committed to ensuring the safety of their advanced AI models through accountable governance and public transparency. They also promised to publish safety frameworks outlining how they measure risk, reinforcing their commitment to both AI safety and innovation.
