About the AI Security Institute
The AI Security Institute is the world's largest team in a government dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential for AI to assist in the development of chemical and biological weapons, to carry out cyber-attacks, and to enable crimes such as fraud, as well as the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Control Team
Our team focuses on AI Control: ensuring that even if frontier systems are misaligned, they can still be used safely for high-stakes tasks. To achieve this, we are advancing conceptual research into control protocols and the corresponding safety cases. We will also conduct realistic empirical research on mock frontier AI development infrastructure to identify flaws in theoretical approaches and refine them accordingly.
Role Summary
As a research engineer, you'll work as part of a multi-disciplinary team of scientists, engineers and domain experts. Our team's first project, ControlArena, involves building a realistic suite of mock lab infrastructure and codebases. We will then use this to conduct empirical experiments, including training monitor models and running control evaluations.
A core part of this role is research collaboration with frontier AI labs and other prominent research organisations.
You'll receive coaching from your manager and mentorship from the principal research engineer on our team. We also have a strong learning & development culture, including Friday afternoons devoted to deep reading and various weekly paper reading groups.
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don't need to meet all of these criteria; if you're unsure, we encourage you to apply.
Salary & Benefits
We are hiring individuals across all levels of seniority and experience. Your dedicated talent partner will work with you as you move through our assessment process and explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Security
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance; where this applies, we will state it in the job advertisement.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities.
Diversity and Inclusion
The Civil Service is committed to attracting, retaining and investing in talent wherever it is found.