A research team is looking for research scientists, research engineers, and software developers eager to explore the complexities of AI and LLMs. The role offers the chance to run groundbreaking evaluations and contribute to AI safety protocols. With a focus on empirical research and a collaborative environment, you'll join a diverse team dedicated to advancing AI technology responsibly, with flexible hours, unlimited vacation, and a supportive workplace culture that values your unique contributions. If you have a knack for steering LLMs and a drive for results, this opportunity may be a strong fit.
Applications deadline: The final date for submissions is 12 January 2025. We review applications on a rolling basis and encourage early submissions. We will likely not reach out to candidates before 01 January 2025.
ABOUT APOLLO RESEARCH
The capabilities of current AI systems are evolving at a rapid pace. While these advancements offer tremendous opportunities, they also present significant risks, such as the potential for deliberate misuse or the deployment of sophisticated yet misaligned models. At Apollo Research, our primary concern lies with deceptive alignment, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight.
Our approach involves conducting fundamental research on interpretability and behavioral model evaluations, which we then use to audit real-world models. In our evaluations, we focus on LM agents, i.e. LLMs with agentic scaffolding similar to AIDE or SWE agent. We also study model organisms in controlled environments (see our security policies), e.g. to better understand capabilities related to scheming.
At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you’re interested in more details about what it’s like working at Apollo, you can find more information here.
ABOUT THE TEAM:
The current evals team consists of Mikita Balesni, Jérémy Scheurer, Alex Meinke, Rusheb Shah, Bronson Schoen, and Axel Højmark. Marius Hobbhahn manages and advises the evals team, though team members lead individual projects. You will mostly work with the evals team, but you will likely sometimes interact with the interpretability team, e.g. for white-box evaluations, and with the governance team to translate technical knowledge into concrete recommendations. You can find our full team here.
ABOUT THE ROLE:
We’re looking for research scientists, research engineers, and software engineers who are excited to work on these and similar projects. We intend to hire people with a broad range of experience and encourage applications even if you don’t yet have experience in any of our current team efforts. We welcome applicants of all ethnicities, genders, sexes, ages, abilities, religions, and sexual orientations, regardless of pregnancy or maternity, marital status, or gender reassignment.
EVALS TEAM WORK: The evals team focuses on several ongoing efforts.
We want to emphasize that people who feel they don’t fulfil all of the listed characteristics, but think they would nonetheless be a good fit for the position, are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds, and we are excited to give you opportunities to shine.
Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.
How to apply: Please complete the application form and attach your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.
About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no leetcode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect.
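For a flavor of what hands-on LM-agent eval work looks like, here is a minimal, framework-agnostic sketch of an agent loop with a single tool and a simple scorer. All names (`toy_model`, `calculator_tool`, `run_agent`) are illustrative stand-ins, not Inspect's API; a real project would use Inspect's Task/solver/scorer abstractions and a real model endpoint.

```python
# Illustrative sketch of an LM-agent evaluation loop. The model and tool
# are toys; a real eval would call an LLM API and use a framework like
# Inspect rather than these hand-rolled helpers.

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call: handles only one arithmetic task."""
    if prompt.strip().isdigit():
        # The 'model' sees a tool result and phrases a final answer.
        return f"The answer is {prompt.strip()}."
    if "12 * 13" in prompt:
        return "CALC(12 * 13)"  # the 'agent' decides to call a tool
    return "I don't know."

def calculator_tool(expression: str) -> str:
    """The single tool the agent may call (toy sandboxing only)."""
    return str(eval(expression, {"__builtins__": {}}))

def run_agent(task_prompt: str, max_steps: int = 3) -> str:
    """Agent loop: let the model act, execute tool calls, return the answer."""
    observation = task_prompt
    for _ in range(max_steps):
        action = toy_model(observation)
        if action.startswith("CALC(") and action.endswith(")"):
            observation = calculator_tool(action[5:-1])
        else:
            return action  # model produced a final answer
    return observation

def score(answer: str, target: str) -> bool:
    """Simple 'includes' scorer: does the target appear in the answer?"""
    return target in answer

answer = run_agent("What is 12 * 13?")
print(answer)  # -> The answer is 156.
```

The structure mirrors what eval frameworks formalize: a dataset of (input, target) samples, a solver that runs the agent, and a scorer that grades the transcript.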