Summary
At Kitopi, we leverage cutting-edge machine learning solutions to drive growth and operational efficiency, optimize the bottom line, and enhance customer experiences. Our data team works across a range of domains, including marketing, brands, customer experience, supply chain, and finance. By working closely with our product squads and digital transformation team, we harness the power of data to generate value.
As a Junior Machine Learning Engineer, you will play a key role in developing, deploying, and managing ML models that power Kitopi’s growth, operations, and customer experience. You will work closely with data analysts, engineers, product managers, software engineers, and business stakeholders to build scalable ML solutions and ensure seamless integration into production systems. Your role will also involve monitoring model performance, optimizing pipelines, and maintaining model accuracy over time. This is a great opportunity to grow within a fast-paced environment, working on real-world ML applications in the food tech space.
Responsibilities
In brief:
- Collaborate with data, product, and business teams to develop ML solutions for growth, optimization, and automation.
- Deploy, manage, and monitor ML models in production environments to ensure scalability and performance.
- Build and maintain ML pipelines for data preprocessing, model training, deployment, and monitoring.
- Optimize ML models for efficiency, accuracy, and low latency in real-time applications.
- Work with MLOps tools and cloud platforms to streamline ML workflows and automate deployment.
In detail:
- Work closely with the data team to develop and refine ML models, ensuring they address business needs and deliver high-impact solutions.
- Deploy ML models into production using cloud-based solutions (AWS/Azure), ensuring scalability, robustness, and reliability.
- Develop end-to-end ML pipelines, integrating data ingestion, feature engineering, model training, deployment, and monitoring.
- Implement model monitoring and retraining strategies to ensure long-term model performance and prevent degradation over time.
- Optimize ML models for latency, efficiency, and cost-effectiveness, leveraging distributed computing and GPU acceleration when needed.
- Work with MLOps frameworks to automate deployment, version control, and CI/CD pipelines for ML models.
- Collaborate with engineering teams to integrate ML solutions into existing platforms, APIs, or applications.
- Troubleshoot and debug production ML models, ensuring data integrity, model explainability, and compliance with best practices.
- Stay up to date with the latest trends and advancements in machine learning, deep learning, and MLOps to improve Kitopi’s ML capabilities.
Qualifications
- Alignment with Kitopi’s mission and principles — the successful candidate will share these values and work by them.
- Bachelor’s degree in Computer Science, Machine Learning, Artificial Intelligence, Data Science, or a related field.
- 2+ years of relevant experience in machine learning model development, deployment, and management.
- Proficiency in Python and ML libraries such as TensorFlow, PyTorch, Scikit-learn, and XGBoost.
- Experience with ML model deployment using Docker, Kubernetes, and cloud-based services (AWS SageMaker or Azure ML).
- Understanding of MLOps practices, including CI/CD for ML, model monitoring, model explainability, and automated retraining.
- Experience with SQL and NoSQL databases for handling large datasets.
- Strong problem-solving skills and the ability to optimize ML models for real-world applications.
- Excellent communication skills, with the ability to work collaboratively in cross-functional teams.
- A continuous learning mindset and passion for staying updated on the latest ML advancements and best practices.
- Familiarity with A/B testing and statistical analysis to evaluate model performance.
- Experience with Generative AI concepts and models is a plus.
- Successful candidates will bring technical expertise, problem-solving skills, creativity, and the ability to work effectively with colleagues from diverse backgrounds.
Technologies We Use
- Data Warehouse / Big Data: Snowflake, ADL (Azure Data Lake), and MS Fabric
- Reporting: Power BI
- Data Pipelines: Kafka, Python, SQL, Airflow, Airbyte, and dbt
- MLOps: Amazon SageMaker
- Platform: AWS (Amazon Web Services) and Azure
- Repository: GitLab
- Analysis: SQL, Python, and Excel
- Presentation: Power BI, MS Excel, Miro, and MS PowerPoint
- Documentation: Confluence
- Sprints and Backlog: Jira