Senior Data Engineer | London, UK | Remote

Hermeneutic Investments

London | Remote | GBP 60,000 - 100,000 | Full time | Posted 6 days ago

Job summary

Join a rapidly growing hedge fund as a Senior Data Engineer, where you will architect and implement robust data infrastructures that support trading and research operations. This innovative firm is focused on building scalable and cost-efficient systems to manage vast amounts of market data and internal sources. You will collaborate with cross-functional teams to ensure data quality and drive the adoption of cutting-edge technologies. If you are passionate about data engineering and thrive in a dynamic environment, this is the perfect opportunity for you to make a significant impact and grow your career in a high-performing team.

Qualifications

  • Strong problem-solving and analytical skills are essential.
  • Experience with data quality checks and SQL optimization is a must.
  • Familiarity with cloud platforms and data services is advantageous.

Responsibilities

  • Design and optimize scalable data pipelines for large datasets.
  • Ensure data quality and real-time monitoring using various tools.
  • Collaborate with teams to meet their data needs and enhance frameworks.

Skills

Problem-solving
Analytical thinking
Communication skills
SQL proficiency
Python
Java/Scala
ETL pipeline development
Data security

Education

Degree in Computer Science
Degree in Engineering
Degree in Mathematics

Tools

Apache Airflow
Snowflake
Redshift
BigQuery
Apache Spark
Kafka
Flink
AWS
GCP
Azure

Job description

Senior Data Engineer

Hermeneutic Investments London, United Kingdom

We are looking for a Senior Data Engineer to help us architect, implement and operate the complete data infrastructure pipeline for our Research and Trading operations. This role will be crucial in building a scalable, reliable, and cost-efficient system for handling vast amounts of market trading data, real-time news feeds and a variety of internal and external data sources. The ideal candidate will be a hands-on professional who understands the entire data lifecycle and can drive innovation while collaborating across research and engineering teams to meet their needs.

Responsibilities

  • Design, build, and optimize scalable pipelines for ingesting, transforming, and integrating large-volume datasets (market data, news feeds, and various unstructured data sources); a minimal pipeline sketch follows this list.
  • Ensure data quality, consistency, and real-time monitoring using tools like dbt and third-party libraries that facilitate data validation.
  • Develop processes to normalize and organize our data warehouse for use across different departments.
  • Apply advanced data management practices to ensure the scalability, availability, and efficiency of data storage.
  • Ensure the infrastructure supports trading and research needs while maintaining data integrity, security, and performance at scale.
  • Collaborate with research and analytics teams to understand their data needs and build frameworks that empower data exploration, analysis, and model development. Create tools for overlaying data from multiple sources.
  • Ensure that data storage, processing, and management are done in a cost-effective manner, optimizing both hardware and software resources. Implement solutions that balance high performance with cost control.
  • Stay ahead of the curve by continuously evaluating and adopting the most suitable technologies for the organization’s data engineering needs. Ensure that the company’s systems align with the latest best practices in data management.
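
For illustration only, the daily ingestion work described above might be sketched as a minimal Apache Airflow DAG like the one below. All paths, table names, and task bodies are hypothetical placeholders, not a description of our actual stack.

    # Minimal sketch of a daily market-data ingestion DAG (Apache Airflow 2.4+).
    # All paths and table names are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(ds, **_):
        """Pull one day of raw market data; returns a placeholder file path."""
        return f"/data/raw/market_{ds}.parquet"


    def validate(ti, **_):
        """Basic quality gate: fail the run if extract produced no output."""
        path = ti.xcom_pull(task_ids="extract")
        if not path:
            raise ValueError("extract produced no output")


    def load(ti, **_):
        """Load the validated file into the warehouse (stubbed out here)."""
        path = ti.xcom_pull(task_ids="extract")
        print(f"loading {path} into warehouse.market_ticks")


    with DAG(
        dag_id="market_data_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # use schedule_interval on Airflow < 2.4
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_validate = PythonOperator(task_id="validate", python_callable=validate)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_validate >> t_load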

Requirements

Must Have

  • Strong problem-solving and analytical thinking.
  • Clear communication skills for cross-functional collaboration.
  • Proficiency in building robust data quality checks for ingested data.
  • Experience identifying anomalies in ingested data (an illustrative check follows this list).
  • Strong proficiency in writing complex SQL (and similar) queries and optimizing their performance.
  • Proficiency in Python or Java/Scala.
  • Experience building and maintaining complex ETL pipelines with tools like Apache Airflow, dbt, or custom scripts.
  • Strong understanding of dimensional modeling, star/snowflake schemas, normalization/denormalization principles.
  • Proven experience with platforms like Snowflake, Redshift, BigQuery, Synapse.
  • Expert knowledge of Apache Spark, Kafka, Flink, or similar.
  • Strong understanding of data security and privacy standards.
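
As a hypothetical illustration of the quality checks and anomaly spotting called out above, a hand-rolled batch gate in Python with pandas might look like the following; the column names, thresholds, and data are invented for the sketch.

    # Sketch of row-level quality checks on an ingested price batch.
    # Column names and thresholds are hypothetical; in practice these
    # checks would live in dbt tests or a framework such as Great Expectations.
    import pandas as pd


    def check_market_data(df: pd.DataFrame) -> list[str]:
        """Return human-readable failures; an empty list means the batch passes."""
        failures = []

        # Completeness: key fields must not be null.
        for col in ("symbol", "ts", "price"):
            if df[col].isna().any():
                failures.append(f"null values in required column '{col}'")

        # Validity: prices must be strictly positive.
        if (df["price"] <= 0).any():
            failures.append("non-positive prices found")

        # Uniqueness: at most one row per (symbol, timestamp).
        if df.duplicated(subset=["symbol", "ts"]).any():
            failures.append("duplicate (symbol, ts) rows")

        # Crude anomaly gate: flag moves beyond 50% between consecutive ticks.
        returns = df.sort_values("ts").groupby("symbol")["price"].pct_change()
        if (returns.abs() > 0.5).any():
            failures.append("price move > 50% between consecutive ticks")

        return failures


    if __name__ == "__main__":
        batch = pd.DataFrame(
            {"symbol": ["BTC", "BTC"], "ts": [1, 2], "price": [100.0, 95.0]}
        )
        print(check_market_data(batch) or "all checks passed")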

Good to Have

  • A degree in Computer Science, Engineering, Mathematics, or a related field.
  • Familiarity with one of the major cloud platforms (AWS, GCP, Azure) and their data services (e.g., BigQuery, Redshift, S3, Dataflow), ideally demonstrated by certifications (e.g., Google Professional Data Engineer, AWS Big Data Specialty, or Snowflake’s SnowPro Data Engineer).
  • Experience with data quality frameworks (e.g., Great Expectations, Deequ, or others); a short example follows this list.
  • Experience with Git/GitHub or similar for code versioning.
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
  • Exposure to containerization/orchestration (Docker, Kubernetes).
  • Familiarity with data governance, data lineage, and catalog tools (e.g., Apache Atlas, Amundsen).
  • Hands-on with observability and monitoring tools for data pipelines (e.g., Monte Carlo, Datadog).
  • Knowledge of machine learning pipelines.
  • Prior experience in a trading or financial services environment.
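
To show how the hand-rolled checks sketched earlier would translate into one of these frameworks, here is a minimal, hypothetical example using Great Expectations' legacy pandas-dataset API; note that releases from 1.0 onwards structure this differently.

    # Minimal sketch using Great Expectations' legacy pandas-dataset API
    # (pre-1.0 releases; the 1.x API is organised differently).
    import great_expectations as ge
    import pandas as pd

    batch = pd.DataFrame(
        {"symbol": ["BTC", "ETH"], "ts": [1, 1], "price": [100.0, 50.0]}
    )
    df = ge.from_pandas(batch)

    # Declare expectations instead of hand-rolling the checks.
    df.expect_column_values_to_not_be_null("price")
    df.expect_column_values_to_be_between("price", min_value=0, strict_min=True)
    df.expect_compound_columns_to_be_unique(["symbol", "ts"])

    result = df.validate()
    print("all expectations passed" if result.success else result)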

Interview Process

  • Our partner and VP of Engineering will review your CV.
  • Our VP of Engineering will conduct the first round of interviews.
  • Our partner will conduct an additional round of interviews on technical and cultural fit.
  • Additional rounds may be conducted as necessary with other team members or our partners.

Throughout the process, you'll be assessed for cultural fit against our company values:

  • Drive - We believe the best team members are passionate about what they do, and that propels them to greater heights in their career.
  • Ownership - We aim to give ownership interest to as many people in the firm as possible, but in return, we expect everyone to act like owners.
  • Judgement - We look for team members who consistently look at the big picture and spend their time on the activities that most drive PnL.
  • Openness - We want a culture where we proactively share information with one another and challenge each other with constructive debate.
  • Competence - We value people with high intellectual horsepower who are experts in their domains and quick learners.

We are a rapidly growing hedge fund: 2 years old, managing a 9-figure AUM, and generating 200%+ annualized returns with a Sharpe ratio of 4.

Our team has grown to approximately 40 professionals across Trading & Research, Technology, and Operations.

As part of our growing team, you will play a pivotal role in designing and implementing robust data infrastructures that enable seamless research, analytical workflows, and effective trade ideation and execution. If you are an experienced data engineering leader with a passion for complex data systems, we want to hear from you!
