THE FINAL APPROVAL OF THE AI ACT BY THE COUNCIL OF THE EUROPEAN UNION AND THE NEXT STEPS

The Council of the European Union recently gave its final approval to the AI Act, a crucial milestone in the regulation of artificial intelligence (AI) within the European Union. The Regulation is a significant step towards a clear and harmonized legal framework for AI, one that protects the fundamental rights of European citizens while promoting technological innovation.

The AI Act aims to establish harmonized rules for the development, placing on the market, and use of AI. It classifies AI systems according to the risk they pose, dividing them into four main categories: unacceptable risk, high risk, limited risk, and minimal risk. Systems posing an unacceptable risk will be prohibited, while high-risk systems must comply with strict requirements on transparency, safety, and human oversight.

With the Council's approval, the AI Act now enters a crucial phase: implementation.

This phase involves a series of fundamental steps to ensure that the new rules are applied effectively and respected in practice.

  1. Adoption of Implementing Rules: Member States must adopt the implementing measures required by the AI Act and align their national legislation with the European regulatory framework. This process will require significant coordination to ensure harmonization across the Union.
  2. Creation of National Supervisory Authorities: Each Member State must establish or designate a competent authority to supervise AI systems. These authorities will be responsible for monitoring compliance with the rules, handling reports of violations, and imposing the prescribed penalties.
  3. Training and Awareness: It will be essential to launch training and awareness programs for businesses and professionals in the sector. These programs should provide the skills needed to comply with the AI Act's requirements and promote a culture of responsibility in the use of AI.
  4. Development of Technological Infrastructure: Member States and European institutions must invest in the technological infrastructure needed to support compliance with the AI Act, including platforms for the certification of AI systems, tools for risk assessment, and mechanisms for algorithmic transparency.
  5. International Collaboration: The EU must continue to work with other jurisdictions worldwide to promote shared international standards for AI. This is crucial to avoid the regulatory fragmentation that could hinder innovation and international trade.
  6. Periodic Review: Finally, the AI Act provides for a periodic review of its provisions to keep pace with rapid technological developments in the AI field. This review will allow the rules to be updated in light of new challenges and opportunities as they arise in the sector.

The final approval of the AI Act by the Council of the European Union is a decisive step towards a solid, forward-looking regulatory framework for artificial intelligence in the EU. The next steps will be crucial to ensure that the new rules are implemented effectively and that AI develops in a safe, transparent, and rights-respecting manner. With a joint commitment from institutions, businesses, and citizens, the European Union can become a global leader in the regulation of artificial intelligence, promoting an ethical and sustainable technological future.

DISCLAIMER: This article provides general information only and does not constitute legal advice of any kind from Macchi di Cellere Gangemi, which assumes no liability whatsoever for the content and correctness of the newsletter. The author or your contact in the firm will be happy to answer any questions you may have.