
24 February 2026

Artificial Intelligence in Recruitment Processes and the Protection of Personal Data
CottBlog

Author Ecem Kumsal Başyurt, Category KVKK - GDPR, Work Life, Technology

Recruitment processes have become one of the areas most rapidly transformed by digitalization. Today, many organizations rely on artificial intelligence–enabled systems in candidate screening and evaluation stages. CV-screening algorithms, video interview analytics tools, and automated scoring mechanisms increasingly shape decisions such as shortlisting, interview invitations, and candidate rejection through data-driven models.

In this context, guidance and decisions issued by data protection authorities such as the European Data Protection Supervisor (EDPS), the UK Information Commissioner’s Office (ICO), and the Turkish Personal Data Protection Authority (KVKK) clarify the legal boundaries of automated decision-making systems in recruitment. The shared position of these authorities is that such systems can only be considered compatible with fundamental rights where they are operated with meaningful and effective human oversight, transparency, and risk-based governance mechanisms.

This article examines the legal nature of AI use in recruitment processes, the scope of human intervention, the risks of discrimination and algorithmic bias, and the role of Data Protection Impact Assessments, in light of the guidance published by the EDPS, ICO, and KVKK.

1. Automated Decision-Making and Its Legal Characterization

1.1. The ICO Perspective and the UK GDPR Framework

Under UK GDPR Article 22, special safeguards are provided against decisions concerning an individual that are based solely on automated processing and that produce legal effects or otherwise similarly significant effects. The ICO expressly notes that, in the recruitment context, outcomes such as a candidate being eliminated or not being shortlisted should be treated as falling within the scope of “legal or similarly significant effect.”

According to ICO guidance, for a decision not to qualify as “solely automated,” the process must include meaningful human involvement. Such involvement must:

  • entail a substantive assessment of the decision,
  • include the authority to change or override the algorithmic recommendation,
  • not be symbolic, perfunctory, or merely formalistic.

If these conditions are not met, an assertion that the process is “subject to human review” may not be accepted as legally valid.

1.2. The EDPS Approach: The Limits of “Human Oversight”

In TechDispatch #2/2025 – Human Oversight of Automated Decision-Making and in its earlier guidance (including materials published in 2018), the EDPS emphasizes that human oversight should not be treated as a merely procedural element.

According to the EDPS, the mere presence of a human in the workflow is insufficient for oversight to be effective. For human involvement to be genuinely meaningful, certain minimum conditions must be met. In this context, the EDPS identifies four foundational elements of effective human oversight:

  1. Genuine power to intervene: The human operator must not be limited to viewing or approving the system’s output. Where necessary, the operator must be able to halt the process, amend the decision, or disable the system.
  2. Access to information enabling evaluation of the decision: The operator must have access to sufficient information to understand the criteria and data on which the system output is based. Review without visibility into the decision’s rationale cannot constitute real oversight.
  3. Practical ability to decide independently: The operator must be able to override the system without fear of organizational pressure, performance consequences, or sanctions. Human intervention must be feasible in practice, not merely theoretical.
  4. An organizational approach oriented toward fairness and fundamental rights: Human oversight must be designed not as a compliance formality, but as a mechanism to reduce discrimination risk and protect fundamental rights. Without an institutional culture aligned with these aims, oversight becomes ineffective.

The EDPS further highlights the risk of automation bias: human operators may tend to accept algorithmic outputs as technically “more accurate” or “more neutral,” approving them without genuine scrutiny. In such cases, the decision may appear to be taken by a human, while in reality the algorithm becomes determinative. In the literature, this phenomenon is often described as “quasi-automation.”

This approach draws a clear distinction between the mere existence of human oversight and its actual effectiveness. For the EDPS, what matters is not that a human is “in the loop,” but that the human’s participation can meaningfully influence the outcome.

1.3. Turkish Law and the KVKK Perspective

The Turkish Personal Data Protection Law (Law No. 6698) does not contain an express provision equivalent to GDPR Article 22. Nevertheless, the general principles set out in Article 4, especially processing in accordance with law and good faith, purpose limitation, proportionality, and data minimization, apply directly to automated decision-making processes.

In its guidance titled Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence, the KVKK underlines that algorithmic systems should be designed and operated in line with principles of transparency, accountability, and human oversight.

In addition, Board decisions reflect a strict approach to the validity of consent and the principle of proportionality in employment-related contexts. In particular, the Board’s assessments that reliance on explicit consent may often be invalid because of the imbalance of power inherent in employment relationships indicate that a candidate’s consent in recruitment must likewise be approached with particular caution.

2. Discrimination and Algorithmic Bias

ICO and EDPS guidance explicitly recognizes that recruitment algorithms may reproduce historical biases embedded in training data.

For example:

  • historically male-dominated recruitment datasets may be encoded in the model as a “success profile”,
  • gaps in a CV may be automatically treated as “risk indicators”, and
  • accent or manner of speech may be misinterpreted as a performance signal.

Such outputs raise the risk of indirect discrimination. Under Turkish law, the equality principle in Article 10 of the Constitution and the duty of equal treatment under Article 5 of the Labor Law (Law No. 4857) should be interpreted to cover algorithmic assessment systems as well. Moreover, the Article 4 principle of processing in accordance with law and good faith may render discriminatory processing unlawful.

3. Transparency and Candidate Rights

The ICO states that, where fully automated decision-making is used, candidates must be provided with “meaningful information about the logic involved.”

Similarly, the EDPS stresses that individuals must receive explanations sufficient to understand the decision and to object effectively.

At a minimum, candidates should be informed of:

  • the use of an automated system,
  • the criteria on which the decision is based,
  • whether (and at what stage) human intervention exists,
  • the right to object and request reassessment.

Under Article 10 of the KVKK, the data controller must inform the data subject of the purposes, method, and legal basis of processing. Concealing the use of AI or failing to explain the automated nature of the decision may therefore constitute a breach of the duty to inform. In this regard, invoking trade secrets does not eliminate the obligation to provide core, meaningful explanations.

4. Data Protection Impact Assessment and a Risk-Based Approach

The ICO considers the use of automated decision-making in recruitment a high-risk processing activity and requires a Data Protection Impact Assessment (DPIA).

The EDPS likewise emphasizes that human oversight can be meaningful only where it is supported by systematic risk assessment.

Although Turkish law does not impose an explicit DPIA obligation, when Article 12 (the obligation to take technical and organizational measures) is read together with accountability and a risk-based approach, the conclusion follows that a prior impact assessment is necessary for high-risk AI applications.

4.1. How Should a Data Protection Impact Assessment Be Conducted?

Because AI and automated decision-making in recruitment can produce legal and factual consequences for candidates, a comprehensive impact assessment should be carried out before deployment. This assessment should not be reduced to a formal compliance document; it must operate as a practical pre-decision analysis designed to identify, measure, and manage risks.

First, an effective assessment requires clear mapping of data categories. It must be established which personal data is collected, from what sources, and for what specific purposes. Where biometric analysis, facial recognition, voice analysis, emotion recognition, or behavioral inference is used, the risk of processing special categories of data or of indirectly inferring such data must be examined explicitly. Even where health data are not collected directly, a system capable of inferring psychological condition or disability may create a heightened special-category risk.

Second, the legal basis must be concretely determined. It should be clarified whether processing relies on explicit consent, necessity for entering into a contract, or legitimate interests. In the employment context, whether consent is truly freely given requires scrutiny. Where legitimate interests are invoked, a balancing test must be conducted between the controller’s interests and the candidate’s fundamental rights and freedoms, and this test should be documented.

Third, necessity and proportionality must be examined. If the same objective can reasonably be achieved by less intrusive means, preference for high-risk automated systems may conflict with the proportionality principle. Accordingly, alternatives such as human-assisted pre-screening instead of automated elimination should be evaluated.

One of the most critical components of the assessment is discrimination and bias analysis. The model should be tested statistically to determine whether it produces different outcomes for different demographic groups. False-positive and false-negative rates should be measured, and any systematic disadvantage affecting groups should be identified. This analysis must be evidence-based and built into a recurring monitoring mechanism, rather than remaining purely theoretical.
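The kind of evidence-based testing described above can be made concrete with a small amount of code. The sketch below computes per-group selection rates and false-positive/false-negative rates, and flags groups whose selection rate falls below four-fifths of the best-performing group’s rate (a common rule of thumb for adverse impact, used here purely as an illustration). The record fields, group labels, and the 0.8 threshold are assumptions for the example, not a prescribed methodology.

```python
from collections import defaultdict

def group_metrics(records):
    """Compute per-group selection, false-positive and false-negative rates.

    Each record is a dict with (illustrative fields):
      group    -- demographic group label
      selected -- bool, did the model shortlist the candidate
      suitable -- bool, ground-truth label from a later human review
    """
    stats = defaultdict(lambda: {"n": 0, "sel": 0, "fp": 0, "fn": 0,
                                 "pos": 0, "neg": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["sel"] += r["selected"]
        if r["suitable"]:
            s["pos"] += 1
            s["fn"] += not r["selected"]   # suitable but rejected
        else:
            s["neg"] += 1
            s["fp"] += r["selected"]       # unsuitable but shortlisted
    return {
        g: {
            "selection_rate": s["sel"] / s["n"],
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else 0.0,
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else 0.0,
        }
        for g, s in stats.items()
    }

def adverse_impact(metrics, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths' rule of thumb)."""
    best = max(m["selection_rate"] for m in metrics.values())
    return {g: m["selection_rate"] / best < threshold
            for g, m in metrics.items()}
```

Running such a check on every model release, and logging the results, is one way to turn the bias analysis into the recurring monitoring mechanism the guidance calls for rather than a one-off exercise.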

Human intervention design is also integral to the assessment. It must be established at which stage the decision is handed to a human, whether the operator has authority to halt or override the output, and whether interventions are logged. Human oversight must not function as a procedural “sign-off”; it must be capable of influencing the outcome.
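One way to make the requirement that human oversight be “capable of influencing the outcome” auditable is to record every review as an explicit, logged decision, including overrides and their reasons. The sketch below is a minimal illustration of this idea; the class names, fields, and the rationale rule are assumptions for the example, not a reference design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewLogEntry:
    candidate_id: str
    system_recommendation: str   # e.g. "reject" or "shortlist"
    final_decision: str          # what the human reviewer actually decided
    reviewer: str
    rationale: str               # free-text reason, required on override
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def overridden(self) -> bool:
        return self.final_decision != self.system_recommendation

class ReviewLog:
    """Append-only log of human reviews of algorithmic recommendations."""

    def __init__(self):
        self._entries = []

    def record(self, entry: ReviewLogEntry):
        # Force the reviewer to document why the output was changed.
        if entry.overridden and not entry.rationale.strip():
            raise ValueError("An override must state a rationale.")
        self._entries.append(entry)

    def override_rate(self) -> float:
        """Share of reviews in which the human changed the outcome.
        A rate of exactly zero over a long period may itself signal
        automation bias (rubber-stamping)."""
        if not self._entries:
            return 0.0
        return sum(e.overridden for e in self._entries) / len(self._entries)
```

Monitoring the override rate over time gives the organization a simple indicator for detecting the quasi-automation risk discussed in Section 1.2.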

Data security and transfers should be assessed as well. If the system is cloud-based, the location of data storage and the legal basis for any cross-border transfer must be analyzed. The vendor’s role (processor, controller, or joint controller) must be determined, and roles and responsibilities must be clearly set out contractually.

Retention and deletion rules must also be defined. Indefinite storage of rejected candidates’ data may violate the proportionality principle. Retention periods should be specified, and automated deletion mechanisms should be implemented.
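Retention rules of this kind are straightforward to enforce in software once the periods are fixed. The sketch below purges candidate records whose status-specific retention period has lapsed; the statuses and periods shown are illustrative placeholders, and in practice they must come from the organization’s documented retention schedule, not from code defaults.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods per candidate status (assumed values).
RETENTION = {
    "rejected": timedelta(days=180),
    "withdrawn": timedelta(days=90),
}

def purge_expired(candidates, now=None):
    """Split candidate records into (kept, deleted) lists based on
    status-specific retention periods.

    Each record needs a 'status' field and a timezone-aware 'closed_at'
    datetime marking when the recruitment process ended for the candidate.
    Statuses without a configured period are kept (e.g. hired employees,
    whose data falls under a different retention regime).
    """
    now = now or datetime.now(timezone.utc)
    kept, deleted = [], []
    for c in candidates:
        limit = RETENTION.get(c["status"])
        if limit is not None and now - c["closed_at"] > limit:
            deleted.append(c)   # in a real system: erase, then log the erasure
        else:
            kept.append(c)
    return kept, deleted
```

Running such a job on a schedule, together with an erasure log, is one way to evidence compliance with the proportionality principle for rejected candidates’ data.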

Finally, candidates’ ability to object and request reassessment must be operationalized. The objection period, application method, and review procedure should be clear, accessible, and effective in practice.

At the conclusion of the impact assessment, the organization must objectively determine whether identified risks can be mitigated. If discrimination risk cannot be reduced through reasonable technical and organizational measures, if meaningful human intervention cannot be credibly designed, or if transparency cannot be ensured, refraining from deploying the system may be the legally safer course.

Accordingly, a DPIA should not be treated as a tool to automatically legitimize a system. It is a governance instrument that must also enable a decision not to use the system where fundamental rights risks cannot be managed. In AI-based recruitment, a robust impact assessment is one of the clearest indicators of legal accountability and respect for fundamental rights.

5. Conclusion

Taken together, the ICO, EDPS, and KVKK approaches converge on the following core principles regarding AI use in recruitment:

  • Decisions producing legal or similarly significant effects cannot be left entirely to automated systems.
  • Human involvement must be real, independent, and effective.
  • Discrimination and bias risks must be tested regularly.
  • Candidates must be informed transparently and must have a right to object.
  • High-risk applications require a prior impact assessment.

Ultimately, the use of AI in recruitment is not merely a matter of technological preference; it is a legal responsibility that must be evaluated through the lens of fundamental rights protection. Human oversight should not be a formalistic mechanism that legitimizes deficiencies in system design; it must be structured as a core component of rights-based governance.

References

  1. ICO (Information Commissioner’s Office), Guidance on AI and Data Protection, available at: https://ico.org.uk
  2. ICO, Automated Decision-Making and Profiling Guidance, UK GDPR Article 22 guidance documents.
  3. UK GDPR, Article 22 – Automated individual decision-making, including profiling.
  4. EDPS, TechDispatch #2/2025 – Human Oversight of Automated Decision-Making, 2025.
  5. EDPS, Guidelines on Automated Individual Decision-Making and Profiling, 2018.
  6. KVKK, Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence.
  7. Law No. 6698 on the Protection of Personal Data (KVKK).
  8. Constitution of the Republic of Türkiye, Article 10.
  9. Labour Law No. 4857, Article 5.
  10. EDPB, Guidelines on Automated Decision-Making and Profiling under Regulation 2016/679, WP251 rev.01.

Notification!

The content in this article is for general information purposes only and belongs to CottGroup® member companies. This content does not constitute legal, financial, or technical advice and cannot be quoted without proper attribution.

CottGroup® member companies do not guarantee that the information in the article is accurate, up-to-date, or complete and are not liable for any damages that may arise from errors, omissions, or misunderstandings that the information may contain.

The information presented here is intended to provide a general overview. Each specific case may require different assessments, and this information may not be applicable to every situation. Therefore, before taking any action based on the information provided in the article, it is strongly recommended that you consult a competent professional in the relevant fields such as legal, financial, technical, and other areas of expertise. If you are a CottGroup® client, do not forget to contact your client representative regarding your specific situation. If you are not our client, please seek advice from an appropriate expert.

