Safe development of artificial intelligence (AI)
Highest priority
Many experts believe there is a significant chance that humanity will develop machines more intelligent than ourselves this century. AI could become the most powerful tool we have ever created, and like all tools it can be used for both good and ill. Today, most energy goes into the possibilities: driving rapid technological development. That is good, but we want to encourage more people to work on what is more overlooked: reducing the risk that AI is used to do harm.
Video: https://www.youtube.com/watch?v=MnT1xgZgkpk
Nick Bostrom is a philosopher and director of the Future of Humanity Institute at the University of Oxford. He is one of the most prominent thinkers on AI risk, and in his TED Talk he gives an introduction to the subject.
We still do not understand how to ensure that this technology works in accordance with human goals and values. One risk is that we program an advanced AI to perform a task and, in pursuing that goal, it does things we neither wanted nor anticipated. Another risk is that malicious actors deliberately use AI to cause great damage. Although the risks of artificial intelligence are receiving growing attention, relatively few people today work directly on reducing them.
Markus’ work facilitates AI research
Markus decided early on that he wanted to make the world a better place and had his sights set on Greenpeace and stopping climate change. But as he became convinced that other problems were more overlooked, he began to consider more options.
Today he works as a project manager at the Center for the Governance of AI. The center is part of the Future of Humanity Institute at the University of Oxford. As the name suggests, it does research on AI governance: the structures that determine how AI is developed and used (e.g. norms, incentives, institutions, ideas, and social factors). The better we understand how these structures work and how they can change, the better the conditions for positive AI development.
Markus studied philosophy at Cambridge, worked as a management consultant, and was secretary general of Effective Altruism Sweden. AI safety may sound like an area only for technical specialists, but Markus is a good example that there are often many different ways to contribute to important issues.
“My role is very broad. I do most of what is required for the organization to achieve its goals – managing the budget and finances, recruiting, managing our research and communicating with decision makers about what we come up with.”