Key Responsibilities
- Develop techniques for discovering threat models and generating risk pathway analyses that capture societal and sociotechnical dimensions
- Model multi-node risk transformation, amplification, and threshold effects propagating through social systems
- Contribute to the design of robust technical governance frameworks and assessment methodologies for catastrophic risks, including loss-of-control scenarios
- Provide strategic and tactical quality control for the team’s research, ensuring conceptual soundness and technical accuracy
- Take ownership of original research projects on comprehensive risk management for advanced AI systems, aligned with the team's objectives
- Collaborate across CARMA teams to integrate risk assessment paradigms with other workstreams
- Contribute to technical standards and best practices for the evaluation, risk measurement, and risk thresholding of AI systems
- Craft persuasive communications for key stakeholders on prospective AI risk management
Required Qualifications
- 5+ years of experience in AI safety, alignment, and/or governance. We are open to candidates at different levels of seniority who can demonstrate the required depth of expertise.
- Strong understanding of multiple risk modeling approaches (causal modeling, Bayesian networks, system dynamics, etc.)
- Experience with systemic and sociotechnical modeling of risk propagation
- Excellent analytical thinking, with the ability to identify subtle flaws in complex arguments
- Strong written and verbal communication skills for technical and non-technical audiences
- Publication record or equivalent demonstrated expertise in relevant areas
- Systems thinking approach with independent intellectual rigor
- Track record of constructive collaboration in fast-paced, intellectually demanding environments
- Comfort with uncertainty and rapidly evolving knowledge landscapes
Preferred Qualifications
- Background in complex systems theory, control theory, cybernetics, multi-scale modeling, or dynamical systems
- Work history at AI safety research organizations, technical AI labs, policy institutions, or adjacent risk domains
- Experience with quality assurance processes for technical research
- Ability to model threshold effects, nonlinear dynamics, and emergent properties in sociotechnical systems
- Understanding of international dynamics and power differentials in AI development
- Ability to balance consideration of both acute and aggregate AI risks
- Experience with causal, Bayesian, or semi-quantitative hypergraphs for risk analysis
- Demonstrated methodical yet creative approach to framework development
Salary: $140k-$220k per year
Apply: https://jobs.lever.co/futureof-life/a72cd411-9af3-458a-932b-16cca3ce07dd/apply
Job type: Full-time
Work mode: Remote
