Platform Engineer, MLOps

Writer

London

On-site

GBP 125,000 - 150,000

5 days ago

Job summary

An established industry player is seeking a proactive Platform Engineer specializing in MLOps to enhance AI/ML operations. In this dynamic role, you will collaborate with engineers and researchers to design and implement robust CI/CD pipelines, ensuring the reliability and efficiency of extensive training environments. Your expertise in tools like Docker and Kubernetes will be crucial for managing large-scale infrastructure and optimizing system performance. If you thrive in fast-paced environments and are driven by innovation, this opportunity offers a chance to make a significant impact in the AI/ML landscape.

Qualifications

  • 5+ years of experience building core infrastructure and operating orchestration systems.
  • Strong knowledge of cloud platforms and CI/CD pipelines.

Responsibilities

  • Design and deploy CI/CD pipelines for safe and reproducible AI/ML experiments.
  • Manage monitoring and logging systems for extensive training runs.

Skills

Model training

Hugging Face Transformers

PyTorch

vLLM

TensorRT

Infrastructure-as-code tools (Terraform)

Scripting languages (Python, Bash)

Cloud platforms (Google Cloud, AWS, Azure)

Git and GitHub workflows

Troubleshooting complex systems

Tools

Docker

Kubernetes

Prometheus

Grafana

Job description

About this role

As a Platform Engineer, MLOps, you will play a critical role in deploying and managing the infrastructure that underpins our AI/ML operations. You will collaborate with AI/ML engineers and researchers to develop robust CI/CD pipelines that support safe and reproducible experiments, and you will set up and maintain the monitoring, logging, and alerting systems that oversee extensive training runs and client-facing APIs. You will also ensure that training environments are available and efficiently managed across multiple clusters, enhancing our containerization and orchestration systems with tools such as Docker and Kubernetes.

This role demands a proactive approach to maintaining large Kubernetes clusters, optimizing system performance, and providing operational support for our suite of software solutions. If you are driven by challenges and motivated by the continuous pursuit of innovation, this role offers the opportunity to make a significant impact in a dynamic, fast-paced environment.

Your responsibilities:

  • Work closely with AI/ML engineers and researchers to design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.
  • Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.
  • Ensure training environments are consistently available and prepared across multiple clusters.
  • Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.
  • Operate and oversee large Kubernetes clusters with GPU workloads.
  • Improve reliability, quality, and time-to-market of our suite of software solutions.
  • Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
  • Provide primary operational support and engineering for multiple large-scale distributed software applications.

Is this you?

  • You have professional experience with:
    • Model training
    • Hugging Face Transformers
    • PyTorch
    • vLLM
    • TensorRT
    • Infrastructure-as-code tools like Terraform
    • Scripting languages such as Python or Bash
    • Cloud platforms such as Google Cloud, AWS, or Azure
    • Git and GitHub workflows
    • Tracing and monitoring
  • You are familiar with high-performance, large-scale ML systems.
  • You have a knack for troubleshooting complex systems and enjoy solving challenging problems.
  • You are proactive in identifying problems, performance bottlenecks, and areas for improvement.
  • You take pride in building and operating scalable, reliable, secure systems.
  • You are comfortable with ambiguity and rapid change.

Preferred skills and experience:

  • Familiar with monitoring tools such as Prometheus, Grafana, or similar.
  • 5+ years building core infrastructure.
  • Experience running inference clusters at scale.
  • Experience operating orchestration systems such as Kubernetes at scale.