Protecting Shared Societies in the Digital Space

Digital technologies have become fundamental instruments of resilience and inclusion – but also, increasingly, catalysts of exclusion, hate speech and incitement to violence. The prominence of these technologies in all areas of life, spurred further by COVID-19, makes it all the more important to bring a Shared Societies perspective to bear on two major components of the digital transformation: the growing use of artificial intelligence in the delivery of public services, and the amplification and distortion of public discourse through online platforms.

Studies show that the dynamics of online discourse, fuelled by preference-reinforcing algorithms, favour the spread of divisive and exclusionary rhetoric. For this reason, we work to promote innovative practices that political leaders can use to foster a more inclusive discourse and to fill social media with powerful messages calling for inclusion and social cohesion.

We are also addressing the impacts of the rapid deployment of Artificial Intelligence (AI) and Automated Decision Making (ADM) systems. Algorithms – the rules and formulas whereby AI/ADM systems process data – now shape crucial elements of every person’s life: access to credit, job recruitment, school admissions, exposure to online content and more. Others shape key elements of our communities, such as the deployment of emergency and social services. But algorithms are only as good as the rules that compose them and the datasets on which they are trained. If these are incomplete, erroneous or biased, algorithms will yield unfair outcomes, such as privileging one group over others. Algorithms are also often so complex that the organizations that use them lack the skills to identify, understand and fix their biases, and consequently lack accountability for the resulting discrimination.
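To make this concrete, the minimal sketch below is a purely hypothetical illustration (the group names, incomes and decisions are invented, and it does not represent any real system) of how a decision rule derived from biased historical records can privilege one group over another, even when applicants are otherwise identical.

```python
# Hypothetical illustration: a scoring rule "learned" from biased historical
# lending decisions reproduces that bias for new applicants.
# All groups, incomes and decisions below are invented for illustration only.

from collections import defaultdict

# Historical decisions: (group, income, approved). In this invented record,
# group B applicants were approved less often than group A at the same income.
history = [
    ("A", 30, True), ("A", 40, True), ("A", 50, True), ("A", 25, False),
    ("B", 30, False), ("B", 40, False), ("B", 50, True), ("B", 25, False),
]

# "Training": learn, per group, the lowest income that was ever approved.
# This mimics a model that treats past decisions as ground truth.
threshold = defaultdict(lambda: float("inf"))
for group, income, approved in history:
    if approved:
        threshold[group] = min(threshold[group], income)

def decide(group: str, income: float) -> bool:
    """Approve if the applicant clears the threshold learned for their group."""
    return income >= threshold[group]

# Two new applicants with identical incomes receive different outcomes,
# simply because the training data encoded unequal past treatment.
print(decide("A", 35))  # True  -> approved
print(decide("B", 35))  # False -> rejected
```

Even in this toy example, nothing in the decision rule itself looks discriminatory; the unfairness enters entirely through the data it was built on, which is why it can be hard for the organizations deploying such systems to identify and correct.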

Club de Madrid wants the use of AI in public service delivery to enable, not undermine, social justice. This requires regulators and civil society to understand how the use of AI can lead to discrimination and what can be done about it. To that end, Club de Madrid sets out to act as a bridge-builder between tech-for-good experts, anti-discrimination activists and political leaders, helping to spread awareness of AI-related discrimination and to promote a set of common international principles for AI governance, based on the imperative to protect Shared Societies.