
Transparency, automated decision-making processes and personal profiling



Automated decision-making and profiling techniques provide tremendous opportunities to companies and organizations; however, they can also be harmful to individuals, because current laws and their interpretations provide data subjects with sufficient control neither over assessments made by automated decision-making processes nor over how the resulting profiles are used. Initially, we briefly discuss how recent technological innovations led to big data analytics, which, through machine learning algorithms, can extract the behaviours, preferences and feelings of individuals. This automatically generated knowledge can both form the basis for effective business decisions and result in discriminatory and biased perceptions of individuals’ lives. We next observe how this situation leads to a lack of transparency in automated decision-making and profiling, and discuss the applicable legal framework. The concept of personal data is crucial in this section, as the Article 29 Working Party and the European Court of Justice disagree over whether artificial intelligence (AI)-generated profiles and assessments qualify as personal data. Depending on whether they do, individuals have the right to be informed about (Articles 13–14 GDPR), or the right of access to (Article 15 GDPR), inferred data. The reality is that data protection law does not protect data subjects from the assessments that companies make through big data and machine learning algorithms: users lose control over their personal data and, owing to trade secrets and intellectual property rights, have no mechanism to protect themselves from this profiling. Finally, we discuss four possible solutions to the lack of transparency in automated inferences.
We explore the impact of a variety of approaches, ranging from the use of open source algorithms to collecting only anonymous data, and we show how these approaches, to varying degrees, protect individuals and let them control their personal data. Based on that, we conclude by outlining the requirements for a desirable governance model of our critical digital infrastructures.

Keywords: GDPR; data ethics; digital infrastructure; machine learning; open source; transparency

Document Type: Research Article

Publication date: June 1, 2019

Journal of Data Protection & Privacy publishes in-depth, peer-reviewed articles, case studies and applied research on all aspects of data protection, information security and privacy issues across the European Union and other jurisdictions, in the wake of the EU General Data Protection Regulation (GDPR), the biggest change in data protection and privacy in two decades.