Job Description
Data Engineer
About the Company: Join a thriving FinTech firm, known as one of the fastest-growing international companies in its sector. Headquartered in London, the company has over 1,500 staff from more than 50 nationalities, working across 27 offices worldwide and serving over 45,000 clients daily.
About the Team: The company's strategic growth would not be possible without its dedicated Data team. We are seeking a Data Engineer to join our client's Data Engineering team, whose mission is to develop and maintain the Data Platform: a collaborative environment where Data Scientists, Data Engineers, Analytics Engineers, and Data Analysts can:
- Build ETLs and data pipelines to serve data on the platform.
- Provide clean, transformed data ready for analysis and use by BI tools.
- Develop department and project-specific data models to drive decision-making across the company.
- Automate end-to-end solutions so the team can focus on high-value analysis rather than running manual data extracts.
About the Technology and Data Stack:
- Google Cloud Platform as the main Cloud provider
- Apache Airflow and dbt Cloud for orchestration
- Docker for delivering software in containers
- Cloud Build for CI/CD
- dbt for data modelling and warehousing
- Looker and Looker Studio for Business Intelligence and dashboarding
- GitHub for code management
- Jira for project management
- Other third-party tools such as Hevo Data, Monte Carlo, and Synq
About the Role: As a Data Engineer, you will collaborate closely with the team to help model and maintain the Data Platform. We are looking for:
- At least 3 years of experience in data/analytics engineering, building, maintaining, and optimising data pipelines and ETL processes in big data environments.
- Proficiency in Python and SQL.
- Knowledge of software engineering practices in data (SDLC, RFC, etc.).
- Awareness of the latest developments and industry standards in the data field.
- Fluency in English.
Desirable:
- Experience with modern data stack tools.
- Knowledge of dimensional modelling / data warehousing concepts.
- Proficiency in Spanish.
Why This Offer Is for You:
- Be mentored by an outstanding team member, following a 30/60/90-day plan designed for you.
- Participate in data modelling reviews and discussions to validate the model's accuracy, completeness, and alignment with business objectives.
- Design, develop, deploy, and maintain ELT/ETL data pipelines from various data sources (transactional databases, REST APIs, file-based endpoints).
- Deliver data models using solid software engineering practices (e.g., version control, testing, CI/CD).
- Manage overall pipeline orchestration using Airflow (hosted in Cloud Composer) and execution using GCP-hosted services such as Container Registry, Artifact Registry, Cloud Run, Cloud Functions, and GKE (see the sketch after this list).
- Work on reducing technical debt by addressing outdated or inefficient code.
- Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality.
- Help implement data governance policies, including data quality standards, data access control, and data classification.
- Identify opportunities to optimise and refine existing processes.
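For a flavour of the day-to-day work described above, here is a minimal, hypothetical sketch of an Airflow DAG that extracts data from a REST API and then triggers a dbt run. The DAG name, API URL, and model selector are illustrative assumptions, not our client's actual code.

```python
# Hypothetical sketch of the Airflow + dbt orchestration pattern named in this
# posting (requires Airflow 2.4+ for the `schedule` parameter).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Pull orders from a (hypothetical) REST API into a staging area."""
    import requests

    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()
    # ... load resp.json() into a staging table (omitted) ...


with DAG(
    dag_id="orders_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_orders",
        python_callable=extract_orders,
    )

    # Transform the staged data with dbt; in practice this step might call
    # the dbt Cloud API or run inside a container on Cloud Run/GKE instead.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --select orders",
    )

    extract >> dbt_run  # extract first, then transform
```

In the role itself, pipelines like this would typically run on Cloud Composer with execution delegated to GCP services rather than a local BashOperator; the sketch only illustrates the extract-then-transform shape of the work.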
Intro Consulting Ltd are proud to represent this forward-thinking fintech client. We act as their trusted recruitment partner, connecting top talent with meaningful opportunities.