Senior Data Engineer (Brickman)

ZipRecruiter

Malvern Hills

On-site

GBP 50,000 - 90,000

5 days ago

Job summary

An established industry player is seeking a Senior Data Engineer with over 7 years of experience in building robust data solutions. This role involves designing and optimizing ETL pipelines, developing resilient APIs, and implementing advanced anomaly detection systems. You will collaborate with cross-functional teams to translate business needs into scalable, data-driven solutions while leveraging cutting-edge technologies. Join a dynamic environment where your expertise will directly impact the quality and efficiency of data processing, ensuring high-performance data products that drive strategic decisions. If you are passionate about data engineering and innovation, this opportunity is perfect for you.

Qualifications

  • 7+ years of experience in data engineering with strong Python and AWS skills.
  • Expertise in designing APIs and building scalable data solutions.
  • Hands-on experience with anomaly detection and data quality frameworks.

Responsibilities

  • Design, build, and maintain complex ETL pipelines for large-scale data processing.
  • Implement anomaly detection systems and ensure data integrity.
  • Collaborate with stakeholders to deliver data-driven solutions.

Skills

  • Python
  • SQL
  • TypeScript
  • AWS (Amazon Web Services)
  • API Development
  • Anomaly Detection
  • GraphQL

Tools

  • Prometheus
  • Grafana
  • Swagger/OpenAPI
  • DynamoDB
  • S3
  • Athena
  • Glue ETL
  • Lambda
  • ECS
  • Redshift ML

Job description

Senior Data Engineer (7+ Years of Experience)

  • We are seeking a highly experienced Senior Data Engineer with 7+ years of expertise in designing, building, and optimizing robust data solutions. The ideal candidate must possess top-tier skills in Python, AWS services, API development, and TypeScript, and have significant hands-on experience with anomaly detection systems.
  • The candidate should have a proven ability to work at both strategic and tactical levels, from designing data architectures to hands-on implementation.

Required Technical Skills: Python, SQL, TypeScript, AWS, Swagger/OpenAPI, REST API, LLM/AI, GraphQL

Core Programming Skills:

  • Expert proficiency in Python, with experience in building data pipelines and back-end systems.
  • Solid experience with TypeScript for developing scalable applications.
  • Advanced knowledge of SQL for querying and optimizing large datasets.

AWS Cloud Services Expertise:

  • DynamoDB, S3, Athena, Glue ETL, Lambda, ECS, Glue Data Quality, EventBridge, Redshift ML, OpenSearch, and RDS.

API and Resilience Engineering:

  • Proven expertise in designing fault-tolerant APIs using Swagger/OpenAPI, GraphQL, and RESTful standards.
  • Strong understanding of distributed systems, load balancing, and failover strategies.

Monitoring and Orchestration:

  • Hands-on experience with Prometheus and Grafana for observability and monitoring.

Key Responsibilities:

Data Pipeline Development

  • Independently design, build, and maintain complex ETL pipelines, ensuring scalability and efficiency for large-scale data processing needs.
  • Manage pipeline complexity and orchestration, delivering high-performance data products accessible via APIs for business-critical applications.
  • Archive processed data products into data lakes (e.g., AWS S3) for analytics and machine learning use cases.

Anomaly Detection and Data Quality

  • Implement advanced anomaly detection systems and data validation techniques, ensuring data integrity and quality.
  • Leverage AI/ML methodologies, including Large Language Models (LLMs), to detect and address data inconsistencies.
  • Develop and automate robust data quality and validation frameworks.

Cloud and API Engineering

  • Architect and manage resilient APIs using modern patterns, including microservices, RESTful design, and GraphQL.
  • Configure API gateways, circuit breakers, and fault-tolerant mechanisms for distributed systems.
  • Define and implement horizontal and vertical scaling strategies for API-driven data products.

Monitoring and Observability

  • Implement comprehensive monitoring and observability solutions using Prometheus and Grafana to optimize system reliability.
  • Establish proactive alerting systems and ensure real-time system health visibility.

Cross-functional Collaboration and Innovation

  • Collaborate with stakeholders to understand business needs and translate them into scalable, data-driven solutions.
  • Continuously research and integrate emerging technologies to enhance data engineering practices.
