
EU States Unanimously Approve Landmark Rules on Artificial Intelligence

In a significant move, EU states have given their final backing to groundbreaking rules on artificial intelligence that will regulate powerful systems like OpenAI’s ChatGPT.

Protecting Citizens While Harnessing Technology

The European Parliament approved the law in March, and it will enter into force after publication in the EU's official journal in the coming days. The EU emphasizes that the law aims to protect citizens from the dangers of AI while harnessing the technology's potential in Europe.

Urgency Amid Technological Advancements

First proposed in 2021, the rules gained urgency after the 2022 launch of ChatGPT, which showcased generative AI's ability to produce articulate, human-like text in seconds. Other generative AI tools, such as DALL-E and Midjourney, can create images in a range of styles from a simple prompt in everyday language.

AI Act: A Risk-Based Approach

Dubbed the “AI Act,” the law takes a risk-based approach: the higher the risk a system poses, the stricter the obligations companies must meet to safeguard citizens’ rights. It outright bans the use of AI for predictive policing and for systems that use biometric data to infer a person’s race, religion, or sexual orientation. Companies must comply with the regulations by 2026, while rules covering AI models such as ChatGPT take effect 12 months after the law’s official publication.

Pledge for Safe Development

At a mini summit on AI, leading companies pledged to develop the technology safely and committed to halting development of models whose extreme risks cannot be adequately mitigated. World leaders are expected to build on these agreements as they convene virtually to address AI’s potential risks and ways to foster innovation and its benefits.

AI Seoul Summit: Collaboration for Safety

The AI Seoul Summit serves as a follow-up to the high-profile AI Safety Summit at Bletchley Park in the UK, where participating countries agreed to collaborate in containing the potentially “catastrophic” risks posed by rapid AI advancements. The two-day meeting, co-hosted by South Korea and the UK, coincides with major tech companies like Meta, OpenAI, and Google unveiling their latest AI models.

Commitment to Safety and Transparency

Sixteen AI companies, including Amazon, Microsoft, France’s Mistral AI, China’s Zhipu.ai, and G42 of the UAE, made voluntary commitments to AI safety during the summit. These companies vowed to ensure the safety of their cutting-edge AI models through accountable governance and public transparency, pledging to publish safety frameworks detailing how they will assess risks associated with their models.