04 October 2021

Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence

Written by Onur Saygın, Posted in KVKK - GDPR

A guideline titled "Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence" was published by the Turkish Personal Data Protection Authority on September 15, 2021. The guideline includes recommendations for the development and application of artificial intelligence technologies with a view to protecting personal data within the scope of the Turkish Personal Data Protection Law, which entered into force on April 7, 2016.

Artificial intelligence (AI) technologies have led to significant transformations in almost every aspect of social life. AI technologies, which are based on collecting and analyzing large amounts of data, have transformed the economic field in particular, and through datafication all kinds of information have become a primary factor in generating economic value. This trend has made personal data a target and has enabled value to be generated from personal data across a wide range of activities. Moreover, AI technologies, especially those in which personal data are used, increasingly affect other aspects of daily life.

Risk to Personal Data under Artificial Intelligence Technologies

Because personal data is processed by artificial intelligence technologies, problems have already been experienced, and further problems are anticipated. Assessments of current and future risks indicate that issues centered on personal data, particularly discrimination, violation of personal data privacy, transparency, and accountability, should be considered.

Ethical problems that arise from biased decisions produced by data analysis involving natural persons are examined under the heading of "discrimination." In this respect, the decision of the Italian Data Protection Authority concerning Glovo is an example of how artificial intelligence technologies can lead to discrimination against individuals.

The widespread use of the internet and social media, and the conduct of daily life through technological environments, also bring risks regarding the protection of personal data. Where privacy violations occur, personal data becomes subject to automated decision-making processes without the data subject's consent. As we have noted in our monthly newsletters, the large-scale seizure of personal data by third parties in recent data breaches reflects the intention to feed such data into artificial intelligence technologies in order to generate economic value; the activities carried out through Cambridge Analytica in the 2016 presidential elections in the United States of America are an important example of the risks in this regard.

Another main risk area concerns transparency and accountability. The complex nature of artificial intelligence technologies makes it difficult to examine whether decisions made by artificial intelligence are lawful, and problems may arise in informing data subjects.
As examined in detail in our article "Algorithmic Transparency within the Scope of GDPR", when data controllers fulfill the obligation to inform in connection with personal data processed through complex algorithms, the operating logic of those algorithms should be explained to the data subjects.

Related Legislation and Publications

With these risks realized or foreseen, efforts to create a legal infrastructure have gained momentum. Various countries have included regulations on the processing of personal data through artificial intelligence technologies in their laws, established commissions, and carried out guiding studies. Article 22 of the European General Data Protection Regulation (GDPR), adopted on April 14, 2016, grants data subjects the right not to be subject to automated decision-making, including profiling, that produces significant effects concerning them; the same article imposes on data controllers the obligation to take additional measures to protect data subject to automated decision-making. Furthermore, under this article, explicit consent must be obtained from the data subject where personal data will be subject to automated decision-making, except where such processing is necessary for the establishment and/or performance of a contract or is based on the law of the European Union or of a member state. In addition, Recitals 71 and 72, which relate to this article, state that persons about whom a decision is made on the basis of an algorithm have the right to demand an explanation from the data controller. Article 11 of the Turkish Personal Data Protection Law, which regulates the rights of data subjects, grants the right to object to the occurrence of a result to the detriment of the data subject through the analysis of processed personal data exclusively by automated systems. The Chinese Personal Information Protection Law, which enters into force on November 1, 2021, likewise reserves for data subjects the right to restrict and object to the processing of personal data subject to automated decision-making.
In addition to legal regulations, organizations such as the European Commission, the Council of Europe, and the OECD have published guiding texts such as the "Ethical Draft Rules for Trustworthy Artificial Intelligence", the "Guidelines on Artificial Intelligence and Personal Data Protection", the "White Paper on Artificial Intelligence", and the "Recommendation of the Council on Artificial Intelligence", which are also referred to in the guideline "Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence" published by the Turkish Personal Data Protection Authority.

Reviewing the Turkish Personal Data Protection Authority's Guide on the Subject

In the light of these regulations, the guideline published by the Personal Data Protection Authority takes an approach similar to the studies mentioned above. In the development and application of artificial intelligence technologies, the fundamental rights and freedoms of data subjects should be respected, and the processing of personal data should be based on the principles of lawfulness, fairness, proportionality, accountability, transparency, keeping personal data accurate and up to date, and processing for specific purposes and limited to those purposes. The guideline states that data security and social and ethical values should be considered in processing personal data. In this context, for activities in which a high risk is foreseen, it prescribes that a privacy impact assessment should be applied and that the decision on legal compliance should be based on this assessment, while emphasizing that all processes, including design, should be carried out in compliance with data protection legislation. In addition, it explains that where the same result can be achieved without processing personal data in the development and application of artificial intelligence technologies, anonymization should be preferred.

In the guideline published by the Turkish Personal Data Protection Authority, the recommendations tailored to the different parties involved in artificial intelligence technologies are as follows.

Recommendations for Developers, Producers, Service Providers

  • Taking the national and international regulations and documents into account during the design processes
  • Evaluating possible negative risks on fundamental rights and freedoms and carrying out preventive actions
  • Carrying out activities to prevent negative effects such as discrimination that may occur
  • Adopting a data minimization policy
  • Evaluating the negative effects of decontextualized algorithm models
  • Conducting processes together with academic institutions, as well as obtaining the opinion of impartial experts or organizations in areas where transparency and stakeholder participation are difficult
  • Supporting the active participation of data subjects in risk assessment, as well as protecting all their rights arising from national and international legislation regarding data processing activities, especially the rights to object and to request deletion, suspension, destruction, and anonymization
  • Fulfilling the obligation to inform individuals who interact with the application and designing an effective consent mechanism

Recommendations for Decision-Makers

  • Observing, in particular, the principle of accountability at all stages
  • Adopting risk assessment procedures for personal data protection and creating an implementation matrix based on the sector/application/hardware/software
  • Taking appropriate measures such as codes of conduct and certification mechanisms
  • Allocating sufficient resources to monitor whether artificial intelligence models are used in a different context or for a different purpose
  • Establishing the role of human intervention in decision-making processes and protecting the freedom of individuals not to rely on the results of suggestions made by artificial intelligence applications
  • Applying to supervisory authorities when threats to fundamental rights and freedoms occur
  • Promoting cooperation between supervisory authorities and other authorized bodies on data privacy, consumer protection, promotion of competition, and anti-discrimination
  • Supporting application research based on measuring the human rights, ethical, sociological, and psychological effects of artificial intelligence applications
  • Ensuring that individuals, groups, and stakeholders are actively involved in the debate focused on the role of artificial intelligence and big data systems in shaping the social dynamics and decision-making processes that affect them
  • Encouraging appropriate open-source-based mechanisms to create a digital ecosystem that supports safe, fair, legal, and ethical data sharing
  • Investing in digital literacy and educational resources to raise data subjects' awareness and understanding of artificial intelligence applications and their impacts
  • Encouraging training on data privacy to raise application developers' awareness of personal data protection

Should you have any queries or need further details, please contact your customer representative.

Notification!

The contents of this article are provided for informative purposes only. The article is confidential and the property of CottGroup® and all of its affiliated legal entities. Quoting any of its contents without crediting the source is strictly prohibited. Despite all the care and attention put into the preparation of this article, CottGroup® and its member companies cannot be held liable for the application or interpretation of the information provided. It is strictly advised to consult a professional before acting on the above-mentioned subject.

Please consult your client representative if you are a customer of CottGroup®, or consult a relevant party or an expert before taking any action with regard to the above content.
