Job Specifications
About The AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
About The Chem Bio Team
AISI’s Chem Bio (CB) team conducts research to assess evolving AI capabilities related to science R&D and CB misuse, and the effectiveness of technical safeguards that might mitigate risks arising from those capabilities.
The goal of our research is to inform critical decisions on security, opportunities, policy, and risk mitigation made by governments and AI developers.
We’re a close-knit, unusually interdisciplinary team—made up of machine learning researchers and engineers, software engineers, virologists and bacteriologists, behavioural research scientists, biosecurity experts, long-standing CB policy specialists and talented generalists—who work closely with other technical and policy teams across government. The team is currently led by Sophie Rose.
This role would also involve collaborating closely with AISI’s Safeguards team, who work to evaluate the protections on current frontier AI systems and research what measures could better secure them in the future. The Safeguards team is currently led by Xander Davies and advised by Geoffrey Irving and Yarin Gal.
Role Responsibilities
Lead ambitious research projects to understand the feasibility and effectiveness of potential technical safeguards for AI systems' CB capabilities.
Partner with frontier AI developers and AISI’s Safeguards team to rigorously assess and strengthen existing technical mitigations designed to reduce misuse of models’ CB capabilities (e.g. strengthening biological and chemical classifiers—see our recent collaborations with Anthropic and OpenAI).
Design, build and run evaluations that stress-test CB safeguards; analyse results and deliver clear, actionable findings.
Critically review developers’ CB capability assessments, safeguards safety cases, and related policies to raise the bar on safety.
Translate findings into practical guidance that informs developer practices and decisions.
Example Questions Your Work Might Tackle
How effective is pretraining-data filtering at reducing harmful CB capabilities while preserving benign performance? What scope of filtering works best, and how does this extend to open-weight models?
What would an effective differential or structured access regime look like for advanced CB-related AI system capabilities?
Requirements
We are looking for the following skills, experience and attitudes, but a successful candidate will not necessarily need to meet all these criteria. We can be flexible in shaping the role and salary to your background, expertise, and level of experience.
Broad knowledge of frontier AI development, safety and governance: training/fine-tuning pipelines, evaluations and safeguards, developers’ frontier safety frameworks, and technical mitigations for AI–CB risk.
Hands-on experience building or working deeply with general-purpose AI systems and their safety/safeguards stacks.
Experience writing production-level Python code that is scalable, robust and easy to maintain, ideally in a team.
Knowledge of scaffolding, prompting, fine-tuning and/or evaluating large language models.
Knowledge of math, statistics, and machine learning sufficient to read and critique AI research.
Demonstrated research taste and execution: originate high-leverage ideas, drive them independently, and ship impactful technical or governance products.
Bias to action and ownership; quickly learn unfamiliar domains and prioritise policy-relevant technical work over purely academic novelty.
High agency and adaptability; communicate clearly and collaborate effectively across disciplines while operating autonomously in a fast-paced, evolving environment.
Familiarity with relevant datasets, benchmarks, or evaluation methodologies for CB risks from AI.
Please note that this role requires Security Clearance (SC), for which at least 2 years of UK residency is needed, as well as a willingness to undergo Developed Vetting (DV) if required. More detail on clearance eligibility can be found on the UK Government website: National security vetting: clearance levels - GOV.UK.
Other Core Requirements
You should be able to spend at least 9 days per fortnight working with us.
You should be willing to work from our office in London (Whitehall) at least 3 days/week.
You should be UK-based.
What We Offer
Impact you couldn't have anywhere else.
Incredibly talented, mission-driven and supportive colleagues.
About the Company
We’re building a team of world-leading talent to advance our understanding of frontier AI and strengthen protections against the risks it poses – come and join us: https://www.aisi.gov.uk/.
The AISI is part of the UK Government's Department for Science, Innovation and Technology.