Artificial Intelligence Consulting

At CottGroup, we offer advanced artificial intelligence solutions to enhance your business efficiency and gain a competitive advantage. Our expert team develops and implements custom AI strategies that improve your customer experiences and optimize your operations. Additionally, we train large language models (LLMs) using your company's data to ensure your AI tools align perfectly with your business goals.

Machine Learning Project Consulting

Our machine learning project consulting supports you at every step, from ideation to deployment, delivering robust and effective models. We integrate these solutions into your workflows, facilitate seamless communication with suppliers, and foster innovation to achieve measurable business outcomes.

Data Governance Services

Our data governance services focus on maintaining data quality and security while ensuring compliance with regulations such as GDPR. By building a resilient data infrastructure, we support your sustainable growth and enable data-driven, informed decision-making.

Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence

12 August 2025

    Artificial intelligence systems, with their rapid development in recent years, have not only brought about a technical transformation but have also created new areas of debate regarding fundamental rights and freedoms, especially the right to the protection of personal data. In particular, in scenarios such as automated decision-making, profiling, and bias generation, personal data processing activities have become systemic, making a regulatory framework in this area inevitable.

    The guide titled “Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence”, published by the Personal Data Protection Authority in April 2025, provides a comprehensive framework to address this need. The guide defines specific areas of responsibility for developers, manufacturers, service providers, and decision-makers, aiming to ensure that AI systems are developed and implemented in compliance with Law No. 6698 and the relevant legislation.

    In this article, in light of the guide in question, the principles, obligations, and practical recommendations regarding the protection of personal data in AI systems are revisited, offering a guiding perspective for all relevant actors.

    1. Fundamental Data Protection Principles and General Recommendations

    The guide emphasizes that in the processes of developing and implementing AI systems, all activities related to personal data processing should be carried out within a framework based on human rights, data security, and transparency. The main principles set out in the guide are as follows:

    1. Lawfulness and Fairness
      • Personal data processing activities must be carried out in compliance with Law No. 6698 and relevant secondary regulations.
      • The purpose, scope, and method of data processing must be clearly defined, and these activities must comply with the rules of fairness.
    2. Transparency and Accountability
      • All parties involved in data processing must provide clear and understandable information, leaving no doubt about how personal data is collected and processed and what consequences that processing leads to.
      • A data protection compliance program should be established from the beginning of the project, documented, and made available for sharing with competent authorities when necessary.
    3. Privacy by Design
      • All systems involving personal data processing should be structured according to data protection principles from the design stage and maintained throughout the system’s lifecycle.
      • Where possible, systems should be developed to achieve their objectives without processing personal data; otherwise, data should be processed in anonymized form.
    4. Proportionality and Data Minimization
      • Processed data must be limited to specific, explicit, and legitimate purposes; unnecessary data collection should be avoided.
      • The source, quality, accuracy, and currency of the data should be regularly checked. Procedures should be established to detect outdated or inaccurate data and manage destruction processes.
    5. Risk-Based Approach and Privacy Impact Assessment (PIA)
      • The potential impact of data processing activities on individuals and society should be analyzed using a risk-based approach.
      • For projects involving high risk, a Privacy Impact Assessment (PIA) should be conducted and its results integrated into decision-making processes. The PIA systematically analyzes the effects of data processing on individuals and helps mitigate data protection risks.
    6. Higher Sensitivity for Special Categories of Data
      • If special categories of personal data such as health, biometric, genetic, or similar data are processed, the scope of technical and administrative measures should be increased, and the risk level should be assessed more carefully.
      • Where necessary and according to the sensitivity of the data processed, special permission and audit processes should be defined.
    7. Allocation of Responsibility Among Parties
      • In any project, the role of stakeholders (developer, service provider, employer, etc.) as data controller or data processor should be clearly defined at the project’s outset.
      • This distinction is critical to correctly distributing responsibilities for processes such as fulfilling the obligation to inform, obtaining explicit consent, ensuring data security, and managing requests.
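
    The purpose-limitation and data-minimization principles above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the KVKK guide; the purposes and field names (`payroll`, `health_note`, etc.) are hypothetical.

```python
# Illustrative sketch: enforce purpose limitation and data minimization
# before a record enters a processing pipeline. All purposes and field
# names are hypothetical examples, not taken from the KVKK guide.

# Each declared purpose is mapped to the minimal set of fields it requires.
ALLOWED_FIELDS = {
    "payroll": {"employee_id", "gross_salary", "tax_bracket"},
    "shift_planning": {"employee_id", "role", "weekly_hours"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record containing only the fields required
    for the declared purpose; everything else is dropped."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No declared legal purpose: {purpose}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "employee_id": "E-102",
    "gross_salary": 58000,
    "tax_bracket": "B",
    "health_note": "asthma",  # special-category data: never needed for payroll
}
print(minimize(raw, "payroll"))
# The special-category health_note field never enters the payroll pipeline.
```

    The point of the sketch is that the minimal field set is declared per purpose up front, so any field not tied to a lawful purpose is excluded by default rather than by ad hoc review.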

    2. Recommendations for Developers, Manufacturers, and Service Providers

    Developers, manufacturers, and service providers involved in every stage, from development through distribution, deployment, and updating of AI systems, bear direct responsibility for the protection of personal data. The guide also includes several recommendations for these actors:

    1. Compliance Starting from the Design Stage
      • AI systems should be designed with a privacy-centered approach in line with the principle of “privacy by design.”
      • The algorithms used should consider not only technical efficiency but also ethical, social, and human rights-based impacts.
      • For example, in AI-based health and exercise recommendation systems, local processing should be preferred to prevent data from being sent to central servers, or anonymization methods should be incorporated during the design of the data architecture.
    2. Risks of Bias, Discrimination, and Context Loss
      • The risk of out-of-context use should be assessed to prevent algorithms from having unforeseen effects on individuals.
      • During development, patterns that could cause discrimination based on variables such as gender, ethnicity, or age should be identified and prevented in advance.
      • Collaboration with academic institutions and independent experts is recommended, especially in applications with limited transparency.
      • Because the risk of automated discrimination is especially high in profiling activities, particular care should be taken to ensure that discriminatory patterns are absent.
    3. Data Quality and the Principle of Minimal Data
      • The quality, source, currency, and accuracy of data used in AI systems should be regularly checked, and unnecessary or excessive use of data should be avoided.
      • Modeling should be chosen in accordance with the principle of “high performance with less data.” The accuracy rate, currency, and reuse scenarios of data should be regularly reviewed, and data should be destroyed when the reason for processing no longer exists.
    4. Individual Control and Right to Intervene
      • Users should be provided with clear information on:
        • The rationale for processing,
        • The methods used,
        • Potential outcomes.
      • The rights to object, request deletion or anonymization of data, and stop data processing should be integrated into the system design.
    5. Accountability and Lifecycle Monitoring
      • Mechanisms supporting the principle of accountability for personal data protection should be established from the design stage and maintained throughout the lifecycle of products and services.
      • Not only the performance of systems but also their data protection impact should be regularly assessed. For example, EU institutions have created an auditable and transparent platform called the AI Register for the AI systems they use.
    6. Alternative and Safe Design Options
      • Wherever possible, alternative solutions that interfere less with personal rights should be offered, and users’ right to choose should be guaranteed.
      • Application architectures that force processing should be avoided; preference-based configurations should be recommended. Users should be allowed the right not to trust system recommendations.
      • Developers are encouraged to create a design choice matrix that ensures maximum impact with minimal data.
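
    The guide's health-and-exercise example, keeping processing local and stripping direct identifiers before anything reaches a central server, can be illustrated as follows. This is a minimal sketch under assumed names (the fitness-app fields and the `DEVICE_SECRET` key are hypothetical), and key management is out of scope. Note that a keyed hash is pseudonymization, not full anonymization: whoever holds the key can restore the link.

```python
# Illustrative sketch for a hypothetical fitness app: direct identifiers are
# replaced with a keyed hash on the device, so only pseudonymized, minimized
# readings ever leave it. Key provisioning/rotation is deliberately omitted.
import hashlib
import hmac

DEVICE_SECRET = b"per-device-secret-key"  # hypothetical; provisioned securely in practice

def pseudonymize_user_id(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash. This is
    pseudonymization, not anonymization: the key holder can re-link it."""
    return hmac.new(DEVICE_SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_upload(reading: dict) -> dict:
    """Strip direct identifiers and excess fields before the record
    leaves the device; only the aggregate metric is uploaded."""
    return {
        "subject": pseudonymize_user_id(reading["user_id"]),
        "steps": reading["steps"],
    }

local_reading = {
    "user_id": "alice@example.com",
    "steps": 7412,
    "gps_trace": [(41.01, 28.97), (41.02, 28.98)],  # stays on the device
}
print(prepare_upload(local_reading))  # no email address, no GPS trace
```

    The design choice here is that minimization happens at the edge: the server-side system never has the chance to over-collect, which is what "privacy by design" asks of the data architecture.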

    3. Recommendations for Decision-Makers

    Managers, public authorities, and senior private sector representatives who decide to use or integrate AI systems within their organizations cannot leave responsibility for personal data protection solely to technical teams. According to the 2025 guide of the Personal Data Protection Authority (KVKK), the following principles apply to this group:

    1. Accountability and Application Matrices
      • The principle of accountability should be observed in every process involving personal data, and a trackable structure should be established to monitor this principle throughout the project.
      • Application matrices defining data processing risks for each sector and application should be created, allowing for comparative analysis across different hardware and software.
    2. Certification and Codes of Conduct
      • For processes involving personal data processing in AI systems, codes of conduct, ethical policies, and, where possible, certification mechanisms should be developed.
      • These rules should cover not only internal processes but also business partners.
    3. Ensuring Human Intervention
      • Human oversight should be built into decision-making processes. Where fully automated decisions have significant impacts on individuals, human review and, if necessary, the ability to intervene must be ensured.
      • Individuals’ freedom not to rely on the results produced by AI systems should be protected, and alternative channels should be offered.
    4. Cooperation with Regulatory Authorities
      • In cases where AI applications may significantly affect individuals’ rights, the relevant regulatory authorities should be consulted.
      • Cross-sector cooperation with competent institutions in areas such as consumer rights, anti-discrimination, and competition law should be encouraged.
    5. Social Impact, Education, and Participation
      • Applied research to evaluate the ethical, sociological, and psychological impacts of AI should be supported, and the role of individuals and groups in decision-making mechanisms should be transparently defined.
      • Digital literacy programs should be promoted to help employees and service users within the organization understand these technologies.
    6. Creating an Open and Secure Ecosystem
      • Digital infrastructures that support the secure, fair, lawful, and ethical sharing of data, preferably based on open-source principles, should be developed.
      • Such structures both facilitate internal audits and increase trust among stakeholders.
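
    The human-intervention principle above amounts to a routing rule: an automated result may be applied directly only when its impact is low; anything with a significant effect on the individual goes to a human reviewer. The sketch below is illustrative only; the confidence threshold and the notion of "significant impact" are hypothetical placeholders, not values from the guide.

```python
# Illustrative sketch of a human-in-the-loop gate for automated decisions.
# The 0.9 threshold and the significant_impact flag are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str               # e.g. "approve" / "reject"
    confidence: float          # model confidence in [0, 1]
    significant_impact: bool   # e.g. credit denial, job rejection

def route(decision: Decision) -> str:
    """Return "auto_apply" only for low-impact, high-confidence results;
    everything else is routed to mandatory human review."""
    if decision.significant_impact or decision.confidence < 0.9:
        return "human_review"
    return "auto_apply"

print(route(Decision("approve", 0.97, significant_impact=False)))  # auto_apply
print(route(Decision("reject", 0.97, significant_impact=True)))    # human_review
```

    Logging which branch each decision took would also support the accountability principle, since it makes the human-oversight step auditable after the fact.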

    4. Conclusion and Evaluation

    The fact that AI systems have become structures that directly affect individuals’ private lives and personal data has made it necessary to evaluate these systems not only technically but also legally, ethically, and in terms of governance. The guide published by the Personal Data Protection Authority in April 2025 offers an important and constructive starting point in this context. The clear definition of separate areas of responsibility for developers, manufacturers, decision-makers, and service providers is a first in the context of implementation in Türkiye.

    Although the guide does not introduce directly binding regulations on topics such as automated decision-making, explainability, and human intervention, considering that previous decisions and practices of the Board explicitly reference the General Data Protection Regulation (GDPR) and the guidelines of the European Data Protection Board (EDPB), it is clear that practices in these areas will also be instructive for the Turkish data protection regime.

    In particular:

    • The effects of profiling processes,
    • The risk of subjecting individuals to fully automated decision-making systems,
    • The integration of mechanisms such as Privacy/Data Protection Impact Assessments (PIA/DPIA) into legal and corporate decision-making processes,
    • And the principles explicitly regulated in the GDPR, such as explainability, accountability, and the right to human intervention, can be expected to take shape in KVKK practice through case law.

    For this reason, it is important for all public and private institutions using or integrating AI to adopt a data protection compliance approach that not only adheres to current Turkish legislation but also takes GDPR standards into account, in order to be prepared for future audit processes.

    In this context, these institutions should consider the following questions, in line with GDPR standards as well as Turkish legislation:

    • Which personal data do our AI systems process, for what purpose, and for how long?
    • Can we explain the logic behind the system’s decisions to the user?
    • To what extent does the user have control over the outcomes of this system?
    • What are the consequences of decisions made by the system for the individual, and is human intervention possible in these outcomes?
    • Can we explain the operating logic of the algorithm? Can we provide clear and understandable information to the relevant individual on this matter?
    • What is the risk of the system generating bias, causing discrimination, or deviating from fairness in terms of personal data?
    • Can we redesign this system with data minimization and alternative methods?

    The answers to these questions will form the basis not only for legal compliance but also for a trust-based digital transformation. You can access the relevant guide here.

    Protection of Personal Data in AI Systems – 8 Fundamental Principles in Light of the 2025 Guide

    Each principle is paired with what it means for practitioners:

    1. Compliance Starting from the Design Stage: Plan the system to align with data protection principles before any code is written.
    2. Transparency and Accountability: Clearly explain to users what you do, why, and how; document your processes thoroughly.
    3. Data Minimization: Do more with less data. Avoid collecting unnecessary information.
    4. Risk-Based Approach: Assess the risks of automated decisions, bias, and discrimination. Conduct Privacy Impact Assessments (PIAs) where necessary.
    5. User’s Right to Intervene: Allow users to object to data processing, request deletion, or challenge system outputs.
    6. Human Intervention and Alternatives: Design hybrid models that allow human oversight and intervention when needed.
    7. Allocation of Responsibility Among Stakeholders: Clearly distinguish between data controllers and data processors, and assign legal obligations accordingly.
    8. Training and Awareness: Educate both developers and managers on data privacy. Promote digital literacy.

    These principles should form the foundation for building a strong data protection culture at both technical and organizational levels.

    This article is based on the following official document:

    KVKK Publications No: 76 – Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence, April 2025 (in Turkish)

    Should you have any queries or need further details, please contact us here.

  • Notification!

    The content in this article is for general information purposes only and belongs to CottGroup® member companies. This content does not constitute legal, financial, or technical advice and cannot be quoted without proper attribution.

    CottGroup® member companies do not guarantee that the information in the article is accurate, up-to-date, or complete and are not liable for any damages that may arise from errors, omissions, or misunderstandings that the information may contain.

    The information presented here is intended to provide a general overview. Each specific case may require different assessments, and this information may not be applicable to every situation. Therefore, before taking any action based on the information provided in the article, it is strongly recommended that you consult a competent professional in the relevant fields such as legal, financial, technical, and other areas of expertise. If you are a CottGroup® client, do not forget to contact your client representative regarding your specific situation. If you are not our client, please seek advice from an appropriate expert.

    To reach CottGroup® member companies, click here.

