In a technology context dominated by data-intensive AI systems, the consequences of data processing are no longer confined to the well-known privacy and data protection issues but extend to adverse effects on a broader array of fundamental rights. Moreover, the tension between the extensive use of these systems, on the one hand, and the growing demand for ethically and socially responsible data use, on the other, reveals the lack of a framework that can fully address the societal issues raised by AI. Against this background, neither traditional data protection impact assessment models nor the broader social or ethical impact assessment procedures appear to provide an adequate answer to the challenges of our algorithmic society. In contrast, a human rights-centred assessment may better meet the demand for a more comprehensive evaluation, one that covers not only data protection but also the effects of data use on other fundamental rights and freedoms. Given the changes that technology and datafication have brought to society, the Human Rights Impact Assessment, when applied to the field of AI, must therefore be enriched to consider ethical and societal issues, evolving into a more holistic Human Rights, Ethical and Social Impact Assessment (HRESIA), whose rationale and key elements are outlined in this chapter.