Artificial Intelligence (AI) Safety
Explore the field of transformative AI and discover how you can contribute to mitigating the risks associated with the rapid development of AI.
INTRODUCTION TO AI SAFETY
Why it matters
The development of AI is extremely rapid. Today, AI can diagnose diseases, drive cars, translate languages, and even create art. These abilities, which seemed like science fiction just a few years ago, are now part of our reality. As AI systems become ever more sophisticated, they are likely to outperform humans in many tasks, if not all, in the near future. AI could thus become the most powerful tool we have ever created, and like all tools, it can be used for both good and evil.
While transformative AI could help make progress on some of the biggest global challenges, we still lack an understanding of how to ensure that this technology works in accordance with human goals and values. One risk is that we program advanced AI to perform a task and that, in fulfilling that goal, it does things we neither wanted nor anticipated. Another risk is that malicious actors deliberately use AI to do great damage. Although the risks of artificial intelligence are receiving growing attention, relatively few people today work directly on reducing them.
We are seemingly at a pivotal point in time where we can influence how the future of AI is developed and regulated. The objective of this page is to help you find out how you can contribute to a positive development.
“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
Max Tegmark, MIT
Photo: Erik Lundback (SR)
Career in AI Safety
Careers in AI Safety offer the opportunity to shape the future of technology, ensuring that advanced AI systems are developed ethically and responsibly to benefit society and minimize risks.
Technical Roles
AI Safety Researcher
Machine Learning Engineer
Information Security Expert
Policy & Governance
Policy Analyst
Policy Advisor
Politician
Lobbyist and Advocate
Supportive Roles
Executive Assistant
Operations or Project Manager
Communications or HR Specialist
Leading Organizations
Globally
Centre for the Governance of AI (Gov AI)
Building a global research community dedicated to helping humanity navigate the transition to a world with advanced AI.
Center for Security and Emerging Technologies (CSET)
Providing decision-makers with data-driven analysis on the security implications of emerging technologies.
Institute for AI Policy & Strategy (IAPS)
A think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI.
Model Evaluation and Threat Research (METR)
Building the science of accurately assessing risks, so that humanity is informed before developing transformative AI systems.
Anthropic
An AI company that says it puts safety at the frontier of its research and products.
Center for AI Safety
Reducing societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Sweden
Umeå University’s Research Group in Responsible AI
Studying the ethical and societal impact of AI by developing tools and methods for designing, monitoring and evolving trustworthy AI systems and applications.
Stockholm International Peace Research Institute (SIPRI)
Explores the impact artificial intelligence has on international peace and security.
RISE
RISE combines AI research with interdisciplinary research, including cybersecurity risks in AI.
AI Safety Gothenburg
Promotes awareness, education, and involvement in AI safety research and initiatives within the Gothenburg community.
From Vision to Transformation: Stories of Impact
Marcus Williams
Machine Learning Researcher
Marcus transitioned from researching ways to cure aging to focusing on AI alignment, convinced that Artificial Super Intelligence (ASI) could either lead to human extinction or solve the world’s biggest challenges. His story is inspiring for those considering a more unconventional and independent career path.
Vivianne Manlai
Student & Policy Research Fellow
Vivianne, a dual-degree student in Philosophy, Politics, and Economics at Stockholm University and Engineering Physics at KTH, has taken significant steps toward an impactful career in AI policy at an international level. Inspired by the EA community and equipped with mentorship and career coaching from EA Sweden, she has taken part in the Talos Fellowship, which aims to shape responsible AI policies and ensure AI becomes a force for good.
TAKE ACTION
Where to start?
Create a tailored career plan
Explore EA Sweden’s free career guide, which helps you create a robust plan for contributing to the AI Safety field.
Skill up with BlueDot Impact
Increase your skills through BlueDot Impact’s free 12-week AI Safety fellowships, with either a Technical or a Policy & Governance focus.
Apply for individual coaching
Get input on your career plan, or on a specific career decision, through individual coaching with EA Sweden. It’s free of charge.
Additional resources and learning
Job board
Find open positions
Use 80,000 Hours’ job board to find open AI Safety roles that fit your skills and interests.
Cause area profile
Learn more
Explore 80,000 Hours’ extensive analysis of a career in AI Safety.
Career coaching
Get specialized career coaching
SuccessIf offers individual career coaching to senior professionals in AI Safety.
Newsletter
Stay up to date
Receive the latest news, insights and expert analyses in AI Safety through The Centre for AI Safety’s newsletter.
AI Safety World
Explore the AI Safety landscape
Interact with aisafety.world’s visual overview of AI Safety organizations.
Community
Engage and connect
Connect with researchers and policy-makers working for safe artificial intelligence in Europe through The European Network for AI Safety.