As artificial intelligence reshapes the global economy—with predictions that nearly 40% of employment is exposed to AI and that it could add some $7 trillion to global GDP—a critical gap has emerged in how we assess its risks. While AI harm taxonomies and incident databases have proliferated, most fail to explicitly connect AI harms to violations of fundamental human rights. This new report from King & Wood Mallesons and ICAAD addresses that gap by proposing a systematic framework that maps ten fundamental human rights against specific AI harms, giving both public and private actors a practical tool for identifying when AI systems may infringe on basic freedoms.

The framework reveals how AI’s distinctive characteristics—opacity, reliance on vast datasets, susceptibility to bias, and autonomous decision-making—create novel risks across the human rights spectrum. From discriminatory algorithms in employment and healthcare that violate equality rights, to facial recognition systems that breach privacy protections, to deepfakes that threaten freedom of expression and democratic participation, the report documents how AI harms manifest in real-world incidents. Notably, it finds that many businesses still consider human rights inapplicable to their AI deployments, despite mounting evidence of adverse impacts. The report’s mapping table offers an actionable starting point for organizations to assess whether their AI systems could result in human rights breaches—though it emphasizes that this must be paired with deeper analysis of specific contexts, system characteristics, and the nuances of human rights law.