ILO reviews global ethics guidelines for Governing AI in the World of Work


A new ILO study examines how 245 global AI ethics frameworks address issues related to work and labour rights, and the extent to which AI governance is linked to decent work and fundamental principles and rights.

Artificial Intelligence (AI) is increasingly embedded in our professional and social lives, with generative AI tools, large language models (LLMs) and algorithmic decision-making systems transforming organizations, workers’ tasks and decision-making processes.

Yet as technology advances, comprehensive governance frameworks designed to ensure ethical and socially responsible use of AI remain fragmented. AI governance tends to rely on soft law mechanisms, such as voluntary guidelines, national strategies or international initiatives.

Corporations, governments, and professional bodies have adopted their own principles and codes of conduct, while non-profit organizations have issued declarations under the banners of “ethical” and “responsible AI”. This raises critical questions about reconciling AI innovation and AI adoption with decent work, fairness and fundamental rights, and whether current global AI ethics frameworks adequately provide safeguards for workers and enterprises. 

A New Global Review of AI Ethics Guidelines

Interest in ethical AI—as a set of normative principles guiding its development, deployment and governance—has grown significantly. Developing AI ethics guidelines is a multidisciplinary effort, bringing together perspectives from economics, law, ethics, computer science and other fields. However, the involvement of diverse actors—each with different norms, values and expectations about AI and its ethical implications—adds complexity. These varying perspectives shape, and sometimes conflict with, efforts to establish a shared understanding of ethical AI, particularly in the context of work and organizations.

A recent study provides one of the most comprehensive analyses of AI ethics frameworks to date. Using Natural Language Processing (NLP) and LLMs, the authors systematically examined 245 AI ethics documents issued worldwide by government agencies, private sector actors, academic institutions and civil society organizations. The study explores the extent to which AI ethics guidelines address issues relevant to the world of work — including international labour standards, particularly fundamental principles and rights at work (FPRW). Its findings highlight both a degree of convergence on certain ethical principles and notable gaps in connecting AI ethics to labour and employment regulation.


The analysis shows a steep increase in the number of AI ethics documents issued since 2017. The surge coincides with the acceleration of deep learning and the first major public debates on algorithmic bias, privacy and transparency. Most guidelines originate in advanced economies, notably the United States and the United Kingdom, with a growing number emerging from international organizations and regional bodies. About 40 per cent of the guidelines have been issued by private sector actors, underlining the central role that corporations now play in shaping the global AI ethics landscape.

The early period between 2017 and 2019 marks the ‘foundational phase’ of AI ethics. During this time, numerous technology firms, academic groups and governments published their first ethical charters. Since then, the pace has slowed slightly but the geographical reach has expanded, with increasing contributions from Asia, Latin America and Africa. The data indicate a shift from abstract declarations of intent towards more operational guidance, although the degree of enforceability remains limited.

Dominant Themes and Principles

Through topic modelling and textual clustering, the same study identifies seven main thematic clusters across the corpus of documents: Ethics of Autonomous Systems, AI Policy, Data Protection and Data Privacy as Human Rights, AI in the Financial Sector, Responsible AI, AI Ethics in Health Care and Governing Artificial Intelligence. Strikingly, the world of work does not appear as a distinct cluster, suggesting that labour-related concerns have yet to occupy a central place in global AI ethics discourse.
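The study’s actual pipeline relies on NLP topic modelling over the full corpus, which is beyond a short illustration. As a rough intuition for how documents end up grouped into thematic clusters, the toy sketch below assigns a document to whichever cluster’s keyword set it overlaps most — the keyword sets and sample text are invented for illustration and are not taken from the study:

```python
# Toy sketch: assign a document to the thematic cluster whose keyword
# set overlaps most with the document's vocabulary. A crude stand-in
# for the topic modelling and clustering used in the study.
import re

# Hypothetical keyword sets for three of the seven clusters named in the study.
CLUSTERS = {
    "Data Protection and Data Privacy": {"privacy", "data", "consent", "protection"},
    "Responsible AI": {"accountability", "transparency", "fairness", "responsible"},
    "AI Ethics in Health Care": {"patient", "clinical", "health", "diagnosis"},
}

def tokenize(text: str) -> set[str]:
    """Lowercase word set for a document."""
    return set(re.findall(r"[a-z]+", text.lower()))

def assign_cluster(text: str) -> str:
    """Pick the cluster with the largest keyword overlap."""
    words = tokenize(text)
    return max(CLUSTERS, key=lambda c: len(CLUSTERS[c] & words))

doc = "Guidelines on patient data and clinical decision support systems."
print(assign_cluster(doc))
```

Real topic modelling learns the word groupings from the corpus itself rather than from hand-written keyword lists, but the underlying idea — documents cluster by shared thematic vocabulary — is the same.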

Across the corpus, five ethical principles recur most frequently: beneficence, non-maleficence, justice, autonomy and explicability. Non-maleficence, the avoidance of harm and damage, appears in roughly 80 per cent of the guidelines, followed by beneficence, the promotion of well-being, and explicability. Justice, linked to fairness and non-discrimination, is mentioned in about 60 per cent of documents, while autonomy, the principle most closely associated with human agency, appears in roughly one-third. These patterns mirror the ethical architecture first outlined by Floridi & Cowls (2019), showing a broad international alignment around shared moral reference points.
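Frequency results of this kind (e.g. a principle appearing in some share of the guidelines) can be reproduced in spirit with a simple keyword scan. The sketch below is an illustration only, not the study’s method; the mini-corpus and keyword lists are invented:

```python
# Count what share of a (toy) corpus mentions each ethical principle,
# using substring keyword matching as a crude proxy for NLP analysis.
PRINCIPLE_KEYWORDS = {
    "non-maleficence": ["harm", "safety", "non-maleficence"],
    "beneficence": ["well-being", "benefit", "beneficence"],
    "justice": ["fairness", "non-discrimination", "justice"],
    "autonomy": ["human agency", "autonomy", "self-determination"],
    "explicability": ["explainab", "transparen", "explicab"],
}

def principle_shares(corpus: list[str]) -> dict[str, float]:
    """Fraction of documents mentioning at least one keyword per principle."""
    shares = {}
    for principle, keywords in PRINCIPLE_KEYWORDS.items():
        hits = sum(any(kw in doc.lower() for kw in keywords) for doc in corpus)
        shares[principle] = hits / len(corpus)
    return shares

corpus = [
    "AI systems must avoid harm and ensure fairness for all users.",
    "Transparency and human agency are central to trustworthy AI.",
    "Deployments should benefit society and prevent harm.",
    "Decisions must be explainable and free of discrimination.",
]
print(principle_shares(corpus))
```

Keyword matching over- and under-counts (e.g. it misses paraphrases), which is one reason the study combines NLP techniques with LLM-assisted analysis rather than simple string search.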

Relevance of the World of Work and the Missing Link to Labour Standards

The analysis reveals that while many AI ethics guidelines engage with issues directly relevant to the world of work, few make explicit reference to the International Labour Organization or to its normative instruments. References to employment, labour or workplace contexts occur relatively frequently across the corpus, indicating that the implications of AI for work are widely recognized. However, explicit mentions of the ILO and international labour standards, including the fundamental principles and rights at work (FPRW), are rare. In most cases, the connection remains implicit, reflected in discussions of broader ethical principles such as justice, fairness or human dignity rather than in direct normative alignment with labour standards.

The ILO’s normative framework — rooted in the Declaration of Philadelphia and the Fundamental Principles and Rights at Work — provides a well-established ethical and legal foundation for understanding rights in the workplace. Yet these principles are largely absent from AI ethics frameworks. This gap leaves issues such as algorithmic management, worker surveillance, recruitment and automation-related displacement without a clear ethical anchor. Closing it would require deliberate efforts to integrate labour rights into the global AI ethics debate: ethical governance cannot be separated from questions of justice and social dialogue at work, and linking AI ethics with international labour standards could provide a concrete normative reference for national policies and corporate accountability mechanisms.

Implications for Policy Makers and Regulators

The global debate on AI ethics reflects growing awareness that technological innovation must be guided by shared human values. Yet ethical convergence towards certain principles is only a first step. Without linkage to established legal and institutional frameworks — including international labour standards — these principles risk being too abstract to influence real-world outcomes. As AI continues to reshape the organization of work, the need for coherence between ethics, governance, and rights is likely to intensify in the coming years. Future initiatives could focus on developing practical tools for assessing AI systems in employment settings, embedding labour standards into AI risk classification schemes and supporting capacity-building for social partners. Samaan et al. (2025) provide an important empirical foundation for this endeavour, offering evidence that while the ethical vocabulary of AI is expanding, its connection to the world of work remains thin.

For policymakers, the findings underscore the need for a comprehensive approach to AI governance that extends beyond abstract ethical principles. Ethics frameworks are valuable as guidance, but without institutional mechanisms for implementation, monitoring, and enforcement, they risk remaining aspirational. The ILO and its constituents — governments, and employers’ and workers’ organizations — can play a pivotal role in bridging this gap by promoting rights-based approaches to AI governance.