AI Security Institute

Security Strategy & Enablement Lead

On site

London, United Kingdom

Senior

Full Time

27-09-2025

Skills

Communication, Leadership, Research, Strategic Planning

Job Specifications

About The AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About The Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product.

We deliver secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What You Might Work On

Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
Help strengthen identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

Role Summary

Act as the connective tissue of the AISI security function. This role blends chief of staff energy with product thinking and delivery focus. You'll own the team's narrative, planning, communication, and rhythm, ensuring security is seen as valuable, accessible, and outcome-driven across AISI and beyond. You'll also connect security to AISI's frontier AI work, making model lifecycle risks, safeguards, and evidence legible to leadership and partners, and aligning security delivery with AI safety objectives.

Responsibilities

Lead internal strategic planning, OKRs, delivery coordination, and progress tracking
Own security comms: presentations, dashboards, monthly updates, and assurance packs
Develop reusable material for onboarding, stakeholder engagement, and external briefings
Coordinate cross-cutting initiatives, risks, and dependencies across the function
Represent the CISO in meetings and planning forums as needed
Build and maintain relationships across AISI (engineering, research, policy) and with DSIT security stakeholders
Translate technical work into stories and narratives aligned to AISI's mission
Shape an integrated security + AI risk narrative, covering model lifecycle and how safeguards map to AISI's mission
Define and track outcome-oriented metrics that include AI surfaces (e.g., eval/release-gate coverage, model/weights custody controls, GPU governance posture, third-party model/API usage patterns, key AI incident learnings)
Curate enablement materials for AI/ML teams: secure/vetted patterns for model and data handling, use of external model APIs, and roles/responsibilities across shared responsibility boundaries
Coordinate AI-governance touchpoints with DSIT and internal leads (e.g., readiness for NIST AI RMF/ISO 42001 where relevant), partnering with GRC to ensure consistent evidence and narratives
Maintain a clear stakeholder map across research, platform, product, and policy; run the operating rhythm that keeps security and delivery aligned

Profile Requirements

Background in strategy, product, cyber security, or technical programme leadership
Exceptional written and verbal communication; able to switch fluently between technical and executive audiences
Operates independently, prioritises well, and holds delivery to account
Curious about how teams work, not just what they deliver
Values structure, clarity, and momentum
Practical familiarity with AI/ML concepts sufficient to translate between security, research, and policy
Desirable: experience enabling research or ML organisations, and aligning security narratives with AI safety goals

Key Competencies

Planning and roadmap ownership
Int

About the Company

We’re building a team of world leading talent to advance our understanding of frontier AI and strengthen protections against the risks it poses – come and join us: https://www.aisi.gov.uk/. The AISI is part of the UK Government's Department for Science, Innovation and Technology.