Research Scientist, Frontier Red Team (Autonomy)

Anthropic

London

On-site

GBP 125,000 - 150,000

30+ days ago

Job summary

Anthropic is seeking Research Scientists to shape the future of AI Safety through advanced autonomy evaluations. This role involves leading the end-to-end development of evaluations that assess the AI Safety Level of models, directly influencing training and deployment strategies. Ideal candidates will have a strong machine learning background, particularly in experimental research with LLMs and agents, along with excellent Python engineering skills. You will join a collaborative team where your insights will drive the development of cutting-edge AI technologies and safety protocols.

Qualifications

  • ML background with experience in leading experimental research on LLMs and agents.
  • Strong Python skills and ability to solve ambiguously scoped problems.

Responsibilities

  • Lead development of autonomy evaluations and research from risk modeling to implementation.
  • Provide technical leadership to build scalable infrastructure for large-scale experiments.

Skills

Machine Learning

Python Engineering

Experimental Research

Problem Solving

Collaboration

Job description

We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to develop and implement a gold standard of advanced autonomy evals to determine the AI Safety Level (ASL) of our models. This will have major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP).

We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you have thought particularly hard about how models might be agentic, and about the associated risks, and you have built an eval or experiment around it, we would like to meet you.

Please note:

  • We will be prioritizing candidates who can start ASAP and can be based in either our San Francisco or London office.
  • We’re still iterating on the structure of our team. This role may end up managing a few other individual contributors (ICs). If you are interested in people management, please say so in your application.

Responsibilities:
  • Lead the end-to-end development of autonomy evals and research. This starts with risk and capability modeling, and includes designing, implementing, and regularly running these evals.
  • Quickly iterate on experiments to evaluate autonomous capabilities and forecast future capabilities.
  • Provide technical leadership to Research Engineers to scope and build scalable, secure infrastructure for quickly running large-scale experiments.
  • Communicate the outcomes of the evaluations to relevant Anthropic teams, as well as policy stakeholders and research collaborators, where relevant.
  • Collaborate with other projects on the Frontier Red Team, Alignment, and beyond to improve infrastructure and design safety techniques for autonomous capabilities.

You may be a good fit if you:
  • Have an ML background and experience leading experimental research on LLMs/multimodal models and/or agents
  • Have strong Python-based engineering skills
  • Are driven to find solutions to ambiguously scoped problems
  • Design and run experiments and iterate quickly to solve machine learning problems
  • Thrive in a collaborative environment (we love pair programming)
  • Have experience training, working with, and prompting models