We’ve all heard the stories or seen the movies where artificial intelligence (AI) systems become so advanced that they turn against their human creators. While such scenarios remain science fiction today, the increasing prevalence of AI and autonomous systems in our daily lives raises important ethical questions we can’t ignore.

Awed by the potential of AI and the promised benefits of optimized processes and big-data insights, we risk glossing over the dangers: threats to privacy, employment displacement, unintended consequences, and more. A case in point: Amazon’s disastrous AI recruiting tool, which the company scrapped in 2018 after it was found to discriminate against women by downgrading their resumes.

Recognizing the ethical dilemmas

Fortunately, thought leaders and governing bodies like UNESCO are sounding the alarm about these ethical dilemmas. They are calling on scientists, developers, and companies to examine the values underlying our relentless pursuit of artificial intelligence. Are we truly prioritizing human wellbeing and sustainability? Or is there a single-minded focus on productivity and GDP at all costs?

To address these ethical concerns, respect for human rights should be sacrosanct, with societal wellbeing as the primary measure of success. We must enshrine individuals’ rights over their data and identities, and require transparency and accountability for the reasoning behind automated decisions that impact lives.

A promising step towards governing the ethical development of AI

The groundbreaking AI Safety Summit, hosted by the UK at Bletchley Park on November 1-2, 2023, brought together leading AI researchers, ethicists, policymakers, and representatives from major tech companies to address how to keep advanced AI systems safe and beneficial as they become increasingly powerful and ubiquitous. In a positive outcome, ten governments, including the UK and US, together with the EU and major AI players like OpenAI and Google, agreed to cooperate on rigorous testing of advanced AI models before and after release.

In governing the ethical development of AI, we would be wise to draw on 2,500 years of moral philosophy, whose long-standing debates over autonomy and consciousness now extend to the ethics of intelligent machines; these concerns are bound to surface again and again. Author and historian Yuval Noah Harari raises fundamental questions about the development of AI, challenging us to consider whether full AI is achievable and whether it poses an existential threat to humanity.

Fundamentally, AI should enhance human capacity, not replace it. AI technologies have the potential to advance all 17 UN Sustainable Development Goals (SDGs), for example by aiding food production, healthcare, and public health management. And while they offer opportunities such as improved decision-making and resource management, over-reliance on them brings challenges: a widening digital divide (the gap between those with access to modern information and communication technology and those without) and unemployment risks that aggravate existing economic disparities.

As AI inevitably spreads across the globe, we must ensure its development prioritizes wellbeing, rights, and accountability at every step. By getting the ethical foundations right from the beginning, we can steer AI toward becoming a great social good that helps all people thrive.
