An innovative firm is seeking Research Scientists to shape the future of AI Safety through advanced autonomy evaluations. This role involves leading the end-to-end development of evaluations that assess the AI Safety Level of models, directly influencing training and deployment strategies. Ideal candidates will have a strong machine learning background, particularly in experimental research with LLMs and agents, along with excellent Python engineering skills. Join a collaborative team where your insights will drive the development of cutting-edge AI technologies and safety protocols, making a significant impact on the industry.
We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to build a gold standard of advanced autonomy evals that determine the AI Safety Level (ASL) of our models. This has major implications for how we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP).
We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you've thought particularly hard about how models might act agentically and the risks that entails, and you've built an eval or experiment around those ideas, we'd like to meet you.
Please note: