We are looking for an enthusiastic Data Engineer with 3-5 years of experience to join our team. You will play a key role in designing, developing, and maintaining data pipelines, reporting solutions, and scalable data architectures in a cloud environment.
Working closely with business stakeholders, Data Engineers, and the Product Owner, you will be responsible for migrating ETL processes and data warehouses to the cloud. This includes developing pipelines for reporting, data monetisation, and large-scale data processing solutions.
If you are passionate about data engineering, SQL, Python, and cloud-based data solutions, we'd love to hear from you!
Key Responsibilities
Develop, optimise, and maintain ETL processes and reporting data sources.
Design and implement data warehouses and data lakes on AWS, ensuring scalability and efficiency.
Work with stakeholders to translate business needs into technical solutions.
Write efficient SQL queries and ensure performance optimisation.
Collaborate with the team to ensure quality assurance, peer reviews, and best coding practices.
Automate data workflows to improve efficiency and reduce costs.
Conduct data cleansing and enhancement exercises.
Provide clear documentation for data pipelines and processes.
Skills, Knowledge and Expertise
Minimum 2 years of experience working with large-scale relational databases (500,000+ customer records).
Strong understanding of relational database concepts (joins, sub-queries, indexing, normalisation).
Proficiency in SQL (query writing, performance tuning, optimisation).
Experience with Microsoft SQL Server technologies (SSRS, SSIS, SSAS).