Artificial Intelligence Consulting

As CottGroup, we offer advanced artificial intelligence solutions to enhance your business efficiency and gain a competitive advantage. Our expert team develops and implements custom AI strategies that improve your customer experiences and optimize your operations. Additionally, we train large language models (LLMs) using your company's data to ensure your AI tools align perfectly with your business goals.

Machine Learning Project Consulting

Our machine learning project consulting supports you at every step, from ideation to deployment, delivering robust and effective models. We integrate these solutions into your workflows, facilitate seamless communication with suppliers, and foster innovation to achieve measurable business outcomes.

Data Governance Services

Our data governance services focus on maintaining data quality and security while ensuring compliance with regulations such as GDPR. By building a resilient data infrastructure, we support your sustainable growth and enable data-driven, informed decision-making.

The European Union Artificial Intelligence Act

05 July 2024

    The rapid development of artificial intelligence (“AI”) systems, the diversification of their focus areas, and the growing number of users have introduced potential harms related to privacy, ethics, security, and fundamental rights. In response to the need to mitigate these harms and prevent possible violations, the Council of the European Union approved the Artificial Intelligence Act (“AI Act”) on May 21, 2024. The Act, a framework law, had been under preparation since April 21, 2021 [1].

    Generally, the AI Act takes a risk-based approach and constructs a tiered structure in which the applicable rules are designed according to the potential harm a system poses. In summary, the AI Act aims to establish standards for AI systems to ensure their reliability within a framework of ethical rules. It categorizes AI systems based on the risk they pose to society and requires that these systems be highly transparent and subject to auditing. It is also worth emphasizing that the AI Act, as the first regulation seeking to systematically govern AI systems, aims to create standards both for AI systems and for their users.

    Risk-Based Approach of the AI Act

    The AI Act adopts a risk-based approach and classifies AI systems into four categories, aligning the applicable rules accordingly [2]. The categories are unacceptable risk, high risk, specific transparency risk/limited risk, and minimal risk. AI systems that pose a threat to ethics, privacy, security, or societal or individual rights fall under the unacceptable risk category and are prohibited [3].

    For instance, as stated in Article 5 of Chapter II, titled “Prohibited AI Practices”, the AI Act defines systems such as social scoring, emotion recognition, and AI that manipulates human behavior or exploits people’s vulnerabilities as posing unacceptable risks and therefore forbids their use.

    On the other hand, as detailed in Articles 6 and 7 of Chapter III, titled “High-Risk AI Systems”, AI systems used in critical infrastructure, education and vocational training, employment, essential private and public services, certain law enforcement systems, migration and border management, and justice and democratic processes are classified as high-risk. Although their use is not forbidden, stringent obligations are imposed, including prior conformity assessments, transparency, oversight, and monitoring, to protect users' fundamental rights and freedoms. These obligations aim to ensure that high-risk AI systems are reliable, transparent, and accountable.

    Additionally, the AI Act categorizes chatbots and content-producing models as specific transparency risk/limited risk AI systems, imposing lighter obligations. For these systems, it requires informing the other party that they are interacting with an AI system and, when output is produced, ensuring it is identifiable as AI-generated.

    From another perspective, AI systems like calculators or simple games, which do not pose a threat to users' fundamental rights or security, are classified as minimal risk and do not face additional requirements beyond general obligations.
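
    The four-tier classification described above can be sketched as a simple lookup. The tier names come from the Act, but the example systems and the mapping itself are illustrative only, not a legal determination:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright (Article 5)
    HIGH = "high"                   # allowed only under strict obligations
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # no requirements beyond general obligations

# Illustrative, non-exhaustive mapping of example systems to tiers,
# paraphrased from the categories discussed in this article.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "simple calculator app": RiskTier.MINIMAL,
}

def is_prohibited(system: str) -> bool:
    """Return True if the example system falls in the unacceptable tier."""
    return EXAMPLE_SYSTEMS.get(system) is RiskTier.UNACCEPTABLE

print(is_prohibited("social scoring"))   # → True
```

    In practice, classification depends on the system's concrete use case and context, not on its product category alone.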

    Transparency, Innovation and Developments

    As briefly mentioned above, the AI Act regulates crucial obligations concerning transparency, distinguishability, and security for chatbots and other specific transparency risk/limited risk AI systems that generate content, aiming to increase consumer awareness. In particular, recitals 133 and 134 of the AI Act state that if content is created by AI, it is obligatory to disclose that the content is AI-generated, and that AI systems should be designed to prevent the production of illegal content. The AI Act states that "In light of those impacts, the fast technological pace and the need for new methods and techniques to trace origin of information, it is appropriate to require providers of those systems to embed technical solutions that enable marking in a machine readable format and detection that the output has been generated or manipulated by an AI system and not a human." It also requires AI systems to comply with EU copyright law. In summary, these provisions aim to prevent misinformation and keep user and consumer awareness high regarding AI-generated content.
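
    As a rough illustration of the marking requirement, the sketch below attaches a machine-readable provenance flag to generated content. The AI Act does not prescribe a format; the JSON envelope and field names here are hypothetical, invented for this example (real deployments rely on techniques such as watermarking or content-provenance metadata standards):

```python
import json

def mark_ai_generated(content: str, generator: str) -> str:
    """Wrap generated content in a machine-readable provenance envelope.

    Hypothetical format for illustration only: the Act requires marking
    in a machine-readable form but does not mandate this structure.
    """
    return json.dumps({
        "content": content,
        "provenance": {
            "ai_generated": True,     # the disclosure itself
            "generator": generator,   # hypothetical field name
        },
    })

def is_marked_ai_generated(payload: str) -> bool:
    """Detect the marker produced by mark_ai_generated()."""
    try:
        return json.loads(payload)["provenance"]["ai_generated"] is True
    except (ValueError, KeyError, TypeError):
        return False  # not valid JSON, or no provenance marker present
```

    The key property the regulation asks for is that a program, not only a human reader, can detect that the output was AI-generated.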

    Perhaps one of the most important aspects of the AI Act is the requirement for AI systems to be subject to oversight. The AI Act envisages establishing an AI office within the European Commission to implement rules, ensure effectiveness, develop expertise and skills, and provide technical advice. Paragraphs 148 to 153 of the AI Act clearly explain the necessity of establishing an AI office to oversee and protect the interests of the AI ecosystem, detailing the structure of the management body and the roles and responsibilities of the committee.

    Moreover, the AI Act mandates that public service organizations providing high-risk AI systems must register in the EU database. This registration aims to assess the impact on fundamental rights and freedoms and to ensure transparency.

    Non-Compliance and Sanctions under the AI Act

    The AI Act imposes various sanctions for non-compliance, emphasizing the necessity of adherence. In the event of violations, penalties are calculated based on the company's global annual turnover from the previous financial year or a predetermined fixed amount. Chapter VII of the AI Act explicitly includes provisions for corrective measures, activity restrictions, and monetary fines. The maximum fine can reach 35 million euros or 7% of the company's annual global turnover, whichever is higher, depending on the severity of the violation.
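
    The "whichever is higher" rule for the top penalty tier reduces to a simple maximum. The sketch below assumes turnover is already expressed in euros:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the AI Act's top penalty tier:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    FIXED_CAP_EUR = 35_000_000
    turnover_cap_eur = 0.07 * global_annual_turnover_eur
    return max(FIXED_CAP_EUR, turnover_cap_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million, which
# exceeds the EUR 35 million fixed cap, so the turnover figure governs.
print(max_fine_eur(1_000_000_000))  # → 70000000.0
```

    For smaller companies the fixed amount dominates: below 500 million euros of turnover, 7% falls under 35 million euros, so the fixed cap applies.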

    Next Steps

    Following its signing by the European Parliament and publication in the Official Journal of the European Union, the AI Act will come into force 20 days after publication. Specific AI systems will be granted compliance time frames based on their risk groups. The AI Act, which will be enforced gradually over 6, 12, and 24 months from its entry into force, specifies that the provisions on banned AI systems will apply 6 months after that date and that the general-purpose AI rules will apply 12 months after it. In summary, all rules of the AI Act will become applicable within a maximum of 24 months of its entry into force [4].
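
    The timeline above can be checked with simple date arithmetic. The publication date used below is hypothetical, chosen only to illustrate the 20-day entry into force and the 6/12/24-month milestones:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, 28))

# Hypothetical publication date in the Official Journal, for illustration.
published = date(2024, 7, 12)
entry_into_force = published + timedelta(days=20)  # in force 20 days later

milestones = {
    "prohibitions apply": add_months(entry_into_force, 6),
    "general-purpose AI rules apply": add_months(entry_into_force, 12),
    "all provisions apply": add_months(entry_into_force, 24),
}
```

    Each deadline is counted from the entry-into-force date, not from the publication date itself.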

    Conclusion

    In conclusion, the AI Act, which combines a risk-based approach with transparency requirements, aims to ensure the organized and systematic use of AI systems while also encouraging the development of innovative ones. It brings comprehensive regulations so that AI technologies are used safely, transparently, and ethically, protecting individual rights and promoting the secure development of innovative AI applications. These regulations will undoubtedly contribute to the ethical and human-centered development of AI, allowing the opportunities technology provides to be fully utilized while building an AI ecosystem that protects societal and individual rights.

    References

    1- European Commission

    2- European Parliament

    3- European Parliament

    4- European Parliament

    Should you have any queries or need further details, please contact us.

  • Notification!

    The content in this article is for general information purposes only and belongs to CottGroup® member companies. This content does not constitute legal, financial, or technical advice and cannot be quoted without proper attribution.

    CottGroup® member companies do not guarantee that the information in the article is accurate, up-to-date, or complete and are not liable for any damages that may arise from errors, omissions, or misunderstandings that the information may contain.

    The information presented here is intended to provide a general overview. Each specific case may require different assessments, and this information may not be applicable to every situation. Therefore, before taking any action based on the information provided in the article, it is strongly recommended that you consult a competent professional in the relevant fields such as legal, financial, technical, and other areas of expertise. If you are a CottGroup® client, do not forget to contact your client representative regarding your specific situation. If you are not our client, please seek advice from an appropriate expert.

    To reach CottGroup® member companies, click here.
