Data Engineer

Sporty

United Kingdom

On-site

GBP 40,000 - 80,000

Full time

25 days ago

Job summary

An established industry player is seeking a Data Engineer to enhance their data processing capabilities. In this role, you will design, build, and maintain robust data pipelines that support machine learning and data science initiatives. With millions of active users, your work will directly impact the efficiency and scalability of data operations. This dynamic company values innovation and offers a flexible work environment, empowering you to contribute to exciting projects that shape the future of sports betting technology. If you are passionate about data and eager to make a significant impact, this opportunity is perfect for you.

Qualifications

  • 3+ years of experience in data engineering or related fields.
  • Proficient in SQL and familiar with Python for scripting tasks.

Responsibilities

  • Design and maintain scalable ETL and data pipelines.
  • Ensure data quality and implement process improvements.

Skills

Data Engineering
SQL
Data Modelling
Python
Apache Airflow
Apache Spark
Data Quality Assurance

Education

Bachelor’s degree in Computer Science
Equivalent experience in a technical field

Tools

AWS S3
AWS Athena
AWS EC2
AWS RedShift
AWS EMR
AWS EKS
AWS RDS
AWS Lambda

Job description

We consistently rank as one of the most used, if not the most used, sports betting websites in the countries where we operate. With millions of weekly active users, we strive to be the best in the industry for our users.

As a Data Engineer at Sporty, you will play a critical role in ensuring the smooth processing and handling of data for our machine learning and data science initiatives. Your primary responsibilities will include designing, building, testing, optimising, and maintaining data pipelines and architectures for various aspects of our rapidly growing business.

Who We Are
Sporty Group is a consumer internet and technology business with an unrivalled sports media, gaming, social, and fintech platform that serves millions of daily active users across the globe, via technology and operations hubs in more than 10 countries across 3 continents.

The recipe for our success is simple: discover intelligent, energetic people who are passionate about our products and our users, then attract and retain them with a dynamic, flexible work life that empowers them to create value and rewards them generously based on their contribution. We have already built a capable, proven team of 300+ high achievers from diverse backgrounds, and we are looking for more talented individuals whose grit, innovation, creativity, and hard work will drive further growth and serve our users.

Responsibilities

  1. Design, develop and maintain scalable batch ETL and near-real-time data pipelines and architectures for various parts of our business, on fast and versatile data sources with millions of changes per day.
  2. Ensure all data provided is of the highest quality, accuracy, and consistency.
  3. Identify, design, and implement internal process improvements for optimising data delivery and re-designing infrastructure for greater scalability.
  4. Build out new API integrations to support continuing increases in data volume and complexity.
  5. Communicate with data scientists, MLOps engineers, product owners, and BI analysts to understand the business processes and system architecture behind specific product features.

Requirements
  1. Bachelor’s degree, or equivalent experience, in Computer Science, Engineering, Mathematics, or a related technical field.
  2. 3+ years of experience in data engineering, data platforms, BI or related domain.
  3. Experience in successfully implementing data-centric applications, such as data warehouses, operational data stores, and data integration projects.
  4. Experience with large-scale production relational and NoSQL databases.
  5. Experience with data modelling.
  6. General understanding of data architectures and event-driven architectures.
  7. Proficient in SQL.
  8. Familiarity with one scripting language, preferably Python.
  9. Experience with Apache Airflow & Apache Spark.
  10. Solid understanding of cloud data services: AWS services such as S3, Athena, EC2, RedShift, EMR (Elastic MapReduce), EKS, RDS (Relational Database Services) and Lambda.

Nice to have
  1. Understanding of ML Models.
  2. Understanding of containerisation and orchestration technologies like Docker/Kubernetes.
  3. Relevant knowledge or experience in the gaming industry.