About the job
At Scaleway, our AI Tribe is at the forefront of deploying cutting-edge AI technologies. Our Inference Squad delivers LLM-as-a-Service, providing both dedicated and shared GPU resources across a range of client applications. As a DevOps Engineer on this dynamic team, you will play a pivotal role in the seamless development, deployment, operation, and scaling of our AI products. You will collaborate with engineers and developers dedicated to our AI service offerings, focusing on the infrastructure that supports large language models and other ML models. Your expertise will drive the optimization and enhancement of our AI deployment pipelines and contribute directly to our mission of providing robust AI solutions.
This position is based in our offices in Paris or Lille (France).