Senior Applied Scientist, Rufus Features Science

Amazon

London

On-site

GBP 60,000 - 120,000

7 days ago

Job summary

An established industry player is on the lookout for a Senior Applied Scientist to join their innovative team in London. This role focuses on developing cutting-edge multimodal language technology that enhances the shopping experience through AI-driven solutions. You will be at the forefront of research and development, utilizing your expertise in machine learning, computer vision, and natural language processing to create impactful solutions. If you are passionate about shaping the future of shopping and have a strong background in AI, this opportunity offers a dynamic environment where your contributions will directly influence customer experiences and drive technological advancement.

Qualifications

  • PhD or Master's degree in a relevant field with machine learning expertise.
  • Experience programming in Java, C++, or Python, and familiarity with neural networks.

Responsibilities

  • Drive development of multimodal conversational systems using advanced AI technologies.
  • Collaborate with cross-functional teams to optimize AI-driven shopping experiences.

Skills

Machine Learning

Natural Language Processing (NLP)

Computer Vision

Deep Learning

Generative AI

Problem-Solving

Education

PhD

Master's degree

Tools

Java

Python

C++

TensorFlow

scikit-learn

Spark MLlib

Hadoop

Job description

Job ID: 2919545 | Amazon Development Centre (London) Limited

We are looking for a passionate, talented, and inventive Senior Applied Scientist with a strong machine learning background and relevant industry experience to help build industry-leading multimodal language technology powering Rufus, our AI-driven search and shopping assistant, which helps customers at every step of their shopping journey.

This role focuses on developing conversation-based, multimodal shopping experiences, utilizing multimodal large language models (MLLMs), generative AI, advanced machine learning (ML) and computer vision technologies.

Our mission in conversational shopping is to make it easy for customers to find and discover the best products to meet their needs by helping with their product research, providing comparisons and recommendations, answering textual and visual product questions, enabling shopping directly from images or videos, providing visual inspiration, and more. We do this by pushing the state of the art (SoTA) in Natural Language Processing (NLP), Generative AI, Multimodal Large Language Models (MLLMs), Natural Language Understanding (NLU), Machine Learning (ML), Retrieval-Augmented Generation (RAG), Computer Vision, Responsible AI, LLM Agents, Evaluation, and Model Adaptation.

Key job responsibilities
As a Senior Applied Scientist on our team, you will be responsible for the research, design, and development of new AI technologies that will shape the future of shopping experiences. You will play a critical role in driving new ideas and roadmaps, aligning with stakeholders and partner teams, and leading the development of multimodal conversational systems built on large language models, information retrieval, recommender systems, knowledge graphs, and computer vision. You will handle Amazon-scale use cases with significant impact on our customers' experiences, and collaborate with scientists, engineers, and product partners locally and abroad.

You will:

  1. Take product ideas for new features and turn them into tech solution designs and roadmaps, evaluating the feasibility and scalability of possible solutions.
  2. Lead the development of scalable language model centric solutions for shopping assistant systems based on a rich set of structured and unstructured contextual signals using deep learning, ML, computer vision and MLLM techniques, and considering memory, compute, latency and quality.
  3. Drive end-to-end MLLM projects that have a high degree of ambiguity, scale and complexity, developing the most critical or challenging parts of the systems yourself (hands on).
  4. Perform offline and A/B test experiments, optimize and deploy your models into production, working closely with software engineers.
  5. Establish automated processes for large-scale model development, model validation and serving.
  6. Communicate results and insights to both technical and non-technical audiences through presentations and written reports, and publish your work at internal and external conferences.

About the team
You will be part of the Rufus Features Science team based in London, working alongside over 100 engineers, designers and product managers, focused on shaping the future of AI-driven shopping experiences at Amazon. This team works on every aspect of the shopping experience, from understanding multimodal user queries to planning and generating answers that combine text, image, audio and video.
BASIC QUALIFICATIONS

- PhD or Master's degree
- Experience programming in Java, C++, Python or related language
- Experience with neural deep learning methods and machine learning
- Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
- Experience with large-scale distributed systems such as Hadoop, Spark, etc.

PREFERRED QUALIFICATIONS

- Experience with deep generative models such as GANs, VAEs, and normalizing flows, as well as CNNs and Bayesian networks
- Experience developing and implementing deep learning algorithms, particularly with respect to computer vision algorithms, e.g., image captioning, segmentation, video processing
- Experience leveraging and augmenting a large code base of computer vision or MLLM libraries to deliver new solutions.
- Experience deploying solutions to AWS or other cloud platforms.
- Excellent communication skills, solid work ethic, and a strong desire to write production-quality code.
- Publications at top-tier peer-reviewed conferences or journals.
