Big brother is «gendering» you. Anti-discrimination law put to the test by artificial intelligence: what protection for the digital body?
Recent advances in artificial intelligence are revolutionizing how easily and readily organizations can collect data and make «data-driven» decisions across institutional contexts. Companies and institutions can now link a great variety of data sources, sometimes innocuous on their own but not in the aggregate, to inform an increasingly broad range of decisions tied to activities such as credit reporting, advertising, hiring, and adjudication. In this article, I analyse how data-driven decisions can discriminate, explaining how even unprejudiced algorithms and decision-makers can generate biased decisions, and I assess the effectiveness of anti-discrimination law categories in the face of the discriminatory outcomes of automated decisions. Although many risks of big data are well known, other problems can arise from the refusal to acknowledge or collect certain data. In fact, under the assumption of AI neutrality, we tend to ignore or hide, rather than prevent, discrimination, because decisions can be biased even in the absence of data on socially disadvantaged groups. This leads us to ask the following questions: What legal remedies can unmask the discrimination embedded in an algorithmic decision? How can we protect our privacy and fundamental rights?
Full Text: PDF (Italian)
This work is licensed under a Creative Commons Attribution 3.0 License.