Why the so-called AI Action Summit falls short


Ever since ChatGPT’s debut, artificial intelligence (AI) has been at the center of worldwide discussions on the promises and perils of new technologies. This has spawned a flurry of debates on the governance and regulation of large language models and “generative” AI, which have, among other things, resulted in the Biden administration’s executive order on AI and international guiding principles for the development of generative AI, and influenced Europe’s AI Act. As part of that global policy discussion, the UK government hosted the AI Safety Summit in 2023, which was followed in 2024 by the AI Seoul Summit, leading up to this year’s AI Action Summit hosted by France.

As heads of state and CEOs head to Paris for the AI Action Summit, the summit’s shortcomings are becoming glaringly obvious. The summit, hosted by the French government, has been described as a “pivotal moment in shaping the future of artificial intelligence governance.” However, a closer look at its agenda and the voices it will amplify tells a different story.

Focusing on AI’s potential economic contributions, and not differentiating between, for example, large language models and automated decision-making, the summit fails to take into account the many ways in which AI systems can be abused to undermine fundamental rights and push the planet’s already stretched ecological limits over the edge. Instead of centering nuanced perspectives on the capabilities of different AI systems and their associated risks, the summit’s agenda paints a one-sided and simplistic picture that does not reflect the global discussion on AI governance. For example, the summit’s main program does not include a single panel addressing issues related to discrimination or sustainability.

A summit captured by industry interests cannot claim to be a transformative venue

This imbalance is also mirrored in the summit’s speakers, among which industry representatives notably outnumber civil society leaders. While many civil society organizations are putting on side events to counterbalance the summit’s misdirected priorities, an exclusive summit captured by industry interests cannot claim to be a transformative venue for global policy discussions.

The summit’s significant shortcomings are especially problematic in light of the leadership role European countries are claiming when it comes to the governance of AI. The European Union’s AI Act, which recently entered into force, has been celebrated as the world’s first legal framework addressing the risks of AI. However, whether the AI Act will actually “promote the uptake of human centric and trustworthy artificial intelligence” remains to be seen.

It’s unclear whether the AI Act will provide a framework that incentivizes the rollout of user-centric AI tools or whether it will lock in specific technologies at the expense of users. We like that the new rules contain a lot of promising language on fundamental rights protection; however, exceptions for law enforcement and national security render some of the safeguards fragile. This is especially true when it comes to the use of AI systems in high-risk contexts such as migration, asylum, border control, and public safety, where the AI Act does little to protect against mass surveillance and profiling and predictive technologies. We are also concerned by the possibility that other governments will copy-paste the AI Act’s broad exceptions without having the strong constitutional and human rights protections that exist within the EU legal system. We will therefore keep a close eye on how the AI Act is enforced in practice.

The summit also lags in addressing the essential role human rights should play in providing a common baseline for AI deployment, especially in high-impact uses. Although human-rights-related concerns appear in a few sessions, the summit, as a purportedly global forum aimed at unleashing the potential of AI for the public good and in the public interest, misses the opportunity, at a minimum, to clearly articulate how such a goal connects with fulfilling international human rights guarantees and which steps this entails.

Countries must address the AI divide without replicating AI harms

Ramping up government use of AI systems is generally a key piece in national strategies for AI development worldwide. While countries must address the AI divide, doing so must not mean replicating AI harms. For example, we’ve elaborated on leveraging Inter-American human rights standards to tackle challenges and violations that emerge from public institutions’ use of algorithmic systems for rights-affecting determinations in Latin America.

In times of a global AI arms race, we do not need more hype for AI. Rather, there is a crucial need for evidence-based policy debates that address AI power centralization and consider the real-world harms associated with AI systems—while enabling diverse stakeholders to engage at eye level. The AI Action Summit will not be the place to have this conversation.


