Data Engineer

United Kingdom

Marlow

On-site

GBP 40,000 - 80,000

2 days ago

Job summary

An established industry player is seeking a skilled Data Engineer to design and optimize data pipelines and infrastructure. This role involves collaborating with data scientists and analysts to ensure clean, reliable data for insights. You will architect scalable storage solutions and automate workflows using AWS services. If you have a passion for data and experience with AWS technologies, this opportunity allows you to make a significant impact in a dynamic environment, driving data-driven decision-making and enhancing business intelligence capabilities.

Qualifications

  • Hands-on experience with AWS data services and ETL/ELT pipelines.
  • Proficiency in Python, SQL, or Java for data processing.

Responsibilities

  • Design, build, and optimize ETL/ELT pipelines using AWS services.
  • Integrate data from various sources, ensuring reliability for analytics.

Skills

AWS data services

ETL/ELT pipelines

Python

SQL

Data integration

Data security best practices

Communication skills

Education

Bachelor's degree in Computer Science or related field

Tools

AWS Glue

AWS Lambda

AWS S3

AWS Redshift

Apache Spark

Terraform

AWS CDK

Hadoop

Kafka

Job description

We are seeking a Data Engineer to join our team. This role will involve designing, building, and optimizing data pipelines and infrastructure that enable efficient storage, processing, and analysis of large datasets. You’ll work closely with data scientists, analysts, and other engineering teams to deliver clean, reliable data for business insights.

Key Responsibilities:

  1. Design, build, and optimize ETL/ELT pipelines using AWS services like Glue, Lambda, S3, and Redshift to support data processing needs.
  2. Architect scalable storage solutions using data lakes (S3) and data warehouses (Redshift, RDS) to ensure efficient querying and data availability.
  3. Integrate data from various internal and external sources, ensuring consistency, reliability, and availability for analytics.
  4. Continuously monitor and optimize data processing workflows for speed, reliability, and cost efficiency.
  5. Partner with data scientists, business analysts, and other teams to enable self-service analytics and support data-driven decision-making.
  6. Implement automation for data workflows, deployment, and monitoring, using tools like CloudFormation, Terraform, or AWS CDK.
  7. Ensure data security and compliance with regulatory standards, implementing proper access controls, encryption, and governance policies.
  8. Maintain clear documentation on data pipelines, workflows, and architecture to ensure smooth operations and knowledge sharing.
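
The pipeline work above centres on extract-transform-load logic. Below is a minimal, self-contained sketch of such a step in Python (the record fields and sources are invented for illustration; in practice the extract and load ends would talk to S3 and Redshift from inside an AWS Glue job or Lambda function rather than run locally):

```python
# Minimal sketch of an extract-transform-load step. Both ends are
# in-memory here so the transform logic itself is the focus; field
# names are illustrative, not from any real schema.

def extract():
    """Simulate pulling raw order records (e.g. CSV rows landed in S3)."""
    return [
        {"order_id": "1001", "amount": "19.99", "country": "gb"},
        {"order_id": "1002", "amount": "not-a-number", "country": "GB"},
        {"order_id": "1003", "amount": "5.00", "country": "us"},
    ]

def transform(rows):
    """Coerce types, normalise values, and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
        clean.append({
            "order_id": int(row["order_id"]),
            "amount": round(amount, 2),
            "country": row["country"].upper(),
        })
    return clean

def load(rows):
    """Simulate the load step; return the row count as a success metric."""
    return len(rows)

if __name__ == "__main__":
    loaded = load(transform(extract()))
    print(f"loaded {loaded} rows")
```

Note the design choice of dropping (in practice, quarantining) malformed rows rather than failing the whole batch, which is what keeps pipelines like these reliable at scale.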

Required Skills & Experience:

  1. Hands-on experience with AWS data services like S3, Redshift, Glue, RDS, Lambda, and DynamoDB.
  2. Strong background in building and optimizing ETL/ELT pipelines using AWS Glue, Apache Spark, or Python.
  3. Experience in designing and managing data lakes, data warehouses, and databases for efficient storage and querying.
  4. Ability to integrate diverse data sources, including APIs, databases, and flat files, ensuring consistency for analytical purposes.
  5. Experience automating data workflows, deployment, and monitoring using AWS CloudFormation, Terraform, or AWS CDK.
  6. Proficiency in Python, SQL, or Java for developing custom data solutions and processing large datasets.
  7. Familiarity with Hadoop, Spark, or Kafka for processing large-scale datasets.
  8. Knowledge of data security best practices, including encryption, IAM roles, and GDPR compliance.
  9. Strong communication and teamwork skills to work effectively with cross-functional teams, ensuring data solutions meet business needs.
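
The data-integration skill in point 4 above can be sketched in plain Python: combining an API response (JSON) with a flat file (CSV) into one consistent record set. The payload, file contents, and field names below are invented; a production job would pull these from real APIs and S3 objects:

```python
import csv
import io
import json

# Merge two illustrative sources on customer_id, normalising email
# casing so the combined records stay consistent for analytics.

api_payload = json.loads('[{"customer_id": 1, "email": "A@EXAMPLE.COM"},'
                         ' {"customer_id": 2, "email": "b@example.com"}]')

flat_file = io.StringIO("customer_id,plan\n1,pro\n3,free\n")

def integrate(api_rows, csv_file):
    """Merge both sources keyed on customer_id."""
    merged = {}
    for row in api_rows:
        merged[int(row["customer_id"])] = {"email": row["email"].lower()}
    for row in csv.DictReader(csv_file):
        merged.setdefault(int(row["customer_id"]), {})["plan"] = row["plan"]
    return merged

print(integrate(api_payload, flat_file))
```

Keying every source on the same identifier and normalising values at the boundary is what makes the merged output dependable for downstream analysts.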

Preferred Qualifications:

  1. AWS Certified Solutions Architect – Associate, AWS Certified Big Data – Specialty, or other relevant AWS certifications.
  2. Experience with AWS Kinesis, Kafka, or other real-time data streaming technologies.
  3. Familiarity with AWS Glue Data Catalog or Apache Atlas for data governance.
  4. Experience with preparing data for machine learning workflows, supporting data scientists with clean and structured data.
  5. Experience with Amazon EMR, Redshift Spectrum, or AWS Data Pipeline.
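
For the streaming technologies in point 2, the core consumer-side pattern is windowed aggregation. Here is a toy sketch, with an in-memory list standing in for a Kinesis or Kafka stream (the event shape and window size are invented for illustration):

```python
from collections import defaultdict

# Tumbling-window sums per sensor: fixed, non-overlapping windows of
# window_seconds, the kind of aggregation a stream consumer might emit.

events = [
    {"ts": 0, "sensor": "a", "value": 2.0},
    {"ts": 3, "sensor": "a", "value": 4.0},
    {"ts": 7, "sensor": "b", "value": 1.0},
    {"ts": 11, "sensor": "a", "value": 6.0},
]

def tumbling_window_sums(stream, window_seconds=5):
    """Sum values per (window index, sensor) over fixed windows."""
    sums = defaultdict(float)
    for event in stream:
        window = event["ts"] // window_seconds  # which window this event falls in
        sums[(window, event["sensor"])] += event["value"]
    return dict(sums)

print(tumbling_window_sums(events))
```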

If you are a talented Data Engineer with experience in building scalable data pipelines and managing cloud infrastructure, we want to hear from you!

Similar jobs

  • Lead Data Engineer: Reading, Remote, GBP 70,000 - 80,000 (posted yesterday)
  • Data Engineer (DV Clearance): London, Remote, GBP 75,000 - 90,000 (posted 6 days ago)
  • Lead Data Engineer: London, Remote, GBP 50,000 - 90,000 (posted 5 days ago)
  • Data Engineer: London, Remote, GBP 60,000 - 100,000 (posted 5 days ago)
  • Data Engineer: London, Remote, GBP 40,000 - 80,000 (posted 6 days ago)
  • Senior Data Engineer: London, Remote, GBP 65,000 - 85,000 (posted 2 days ago)
  • Data Engineer: Greater London, Remote, GBP 45,000 - 85,000 (posted 2 days ago)
  • Data Engineer: London, Remote, GBP 40,000 - 80,000 (posted 5 days ago)
  • Lead Data Engineer: Greater London, Remote, GBP 46,000 - 55,000 (posted 11 days ago)