Ensuring safe development of Artificial Intelligence (AI)

AI is developing at an extraordinary pace. Today, AI can diagnose diseases, drive cars, translate languages, and even create art. These abilities, which seemed like science fiction only a few years ago, are now part of our reality. As AI systems grow more sophisticated, they are likely to outperform humans at many tasks, if not all, in the near future. AI could thus become the most powerful tool we have ever created, and like all tools, it can be used for both good and ill.

While transformative AI could help make progress on some of the biggest global challenges, we still lack an understanding of how to ensure that the technology works in accordance with human goals and values. One risk is that we program an advanced AI to perform a task and that, in pursuing that goal, it does things we neither intended nor anticipated. Another risk is that malicious actors deliberately use AI to cause great harm. Although the risks of artificial intelligence are receiving growing attention, relatively few people today work directly on reducing them.

Vivianne is pursuing a dual degree in Philosophy, Politics, and Economics and in Engineering Physics, while also taking part in the Talos Fellowship, which aims to shape responsible AI policy at a global level and ensure that AI becomes a force for good.

Read more

Preventing an AI-related catastrophe – article by 80,000 Hours

AI Safety cause area profile – article by EA Sweden

Potential Risks from Advanced Artificial Intelligence – report by Open Philanthropy 

The AI Revolution: The Road to Superintelligence – article series on the blog Wait But Why

Books: Nick Bostrom's Superintelligence, Max Tegmark's Life 3.0, and Stuart Russell's Human Compatible
