Human Values for Artificial Intelligence
The stakes are too high to leave the course of AI’s development to researchers, let alone tech CEOs. While heavy-handed regulation is not the answer, the current regulatory vacuum must be filled, and that process demands broad-based global engagement.
MADRID – This may be the year when artificial intelligence transforms daily life. So said Brad Smith, President and Vice Chairman of Microsoft, at a Vatican-organized event on AI last week. But Smith’s statement was less a prediction than a call to action: the event – attended by industry leaders and representatives of the three Abrahamic religions – sought to promote an ethical, human-centered approach to the development of AI.
There is no doubt that AI poses a daunting set of operational, ethical, and regulatory challenges. And addressing them will be far from straightforward. Although AI development dates back to the 1950s, the technology’s contours and likely impact remain hazy.
Of course, recent breakthroughs – from the almost chillingly human-like text produced by OpenAI’s ChatGPT to applications that may shave years off the drug-discovery process – shed light on some dimensions of AI’s immense potential. But it remains impossible to predict all the ways AI will reshape human lives and civilization.