An innovative company at the forefront of AI is seeking a talented engineer to enhance machine learning models. This role focuses on optimizing performance, reliability, and scalability of AI infrastructure, ensuring exceptional user experience. You will collaborate with a team of elite researchers and engineers, leveraging cutting-edge technology and significant autonomy in technical decisions. If you are passionate about AI and eager to tackle complex challenges, this opportunity to contribute to pioneering advancements in real-time generative models is perfect for you.
Client: Odyssey
Location: London, United Kingdom
Job Category: Other
EU work permit required: Yes
Job Reference: 4773fb22f86e
Job Views: 5
Posted: 30.03.2025
Expiry Date: 14.05.2025
Odyssey is pioneering world models, the next frontier of artificial intelligence. By learning from the real world, Odyssey is training a new kind of generative model, capable of generating cinematic, interactive worlds in real time. Odyssey's mission is to reinvent film, gaming, and beyond.
Odyssey was founded in late 2023 by Oliver Cameron (Cruise, Voyage) and Jeff Hawke (Wayve, Oxford AI PhD), two veterans of self-driving cars and AI. They've since recruited a world-class team of AI researchers from Cruise, Waymo, Wayve, Tesla, Microsoft, Meta, and NVIDIA; lead computer graphics researchers from EA, Ubisoft, and Valve; and technical artists behind Hollywood blockbusters like Dune, Godzilla, Avengers, and Jurassic World.
Odyssey has raised significant venture capital from GV, EQT Ventures, Air Street Capital, DCVC, Elad Gil, Garry Tan, Soleio, Jeff Dean, Kyle Vogt, Qasar Younis, Guillermo Rauch, Soumith Chintala, and researchers from OpenAI, DeepMind, Meta, and Midjourney. Ed Catmull, the founder of Pixar, serves on Odyssey's board.
We are seeking a talented engineer passionate about advancing AI models. We are building inference infrastructure to scale to hundreds of thousands of users within a year. Your focus will be ensuring our models deliver exceptional speed, reliability, and scalability while minimizing compute cost per user (TFLOPS).