Senior Data Engineer

Deep Kernel Labs
Torredembarra
EUR 40.000 - 60.000
Job Description

As a Data Engineer at DKL, you will play a critical role in designing, building, and optimizing our data infrastructure. Working alongside cross-functional teams, you’ll develop reliable data pipelines and maintain the integrity of large datasets used for analysis and reporting, directly impacting data-driven decision-making across the company.

  • SALARY: 40.000 - 60.000 EUR
  • REMOTE: 100% (Spain-based candidates only)
  • SCHEDULE: Flexible
  • Growth opportunities
  • 500€ / year for educational purposes

You can work remotely from anywhere within Spain. DKL is a fully remote company with no designated headquarters. While we have team members across Spain and internationally, this position is only open to candidates who are legally authorized to work and reside in Spain.

You will work 40 hours per week, with the flexibility to arrange your schedule in a way that works best for you. The only requirement is to be available for daily meetings or client appointments. We understand that personal wellness is crucial for maintaining focus and achieving results.

What’s the Role?

As a Data Engineer at DKL, you will be responsible for developing, operating, and maintaining scalable data architectures that support analysis, reporting, and machine learning applications. Your role will involve managing ETL processes, building and operating data warehouses, and ensuring the high performance and reliability of data systems. You will collaborate closely with product owners, data scientists, and analysts to translate business requirements into effective technical solutions while maintaining data quality and accessibility. As one of the primary contributors to DKL's data infrastructure, you will ensure our data solutions are efficient, accurate, and aligned with client goals.

Responsibilities

Your responsibilities will encompass a wide range of tasks, including but not limited to:

  • Designing, building, and optimizing data pipelines that handle large volumes of data from a variety of sources and at varying frequencies, including real-time data.
  • Developing and maintaining the data warehouse architecture, weighing organizational requirements to determine the optimal design while ensuring scalability and performance.
  • Implementing ETL / ELT processes to extract, transform, and load data for reporting and analytics (a brief illustrative sketch follows this list).
  • Collaborating with data scientists and analysts to support machine learning workflows and advanced analytics.
  • Monitoring and troubleshooting data systems to ensure high availability and reliability.
  • Ensuring data quality and compliance with company data governance standards.
  • Documenting data processes and infrastructure for internal use and continuous improvement.
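To give a concrete flavor of this kind of pipeline work, here is a minimal sketch of a daily batch job expressed as an Airflow DAG (Airflow is the orchestrator we prefer, as noted in the requirements below). Everything in it is a hypothetical placeholder rather than part of DKL's actual stack: the DAG id, task names, and the extract/load logic are invented for illustration, and the snippet assumes Airflow 2.4 or later.

```python
# Minimal illustrative sketch only -- not DKL's actual codebase.
# Assumes Airflow 2.4+; all names and data are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step: in a real pipeline this would pull raw
    # records from a source system (API, database, event stream, files).
    return [{"order_id": 1, "amount_eur": 42.0}]


def load_orders(**context):
    # Placeholder load step: reads the extracted rows from XCom and would
    # normally write them into the warehouse (Snowflake, BigQuery, etc.).
    rows = context["ti"].xcom_pull(task_ids="extract_orders")
    print(f"Loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="orders_daily",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # run once per day
    catchup=False,                    # skip backfilling past runs
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)

    extract >> load                   # load runs only after extract succeeds
```

In practice, the transform step would typically live in the warehouse itself (for example as SQL models run by a tool such as DBT), with the DAG orchestrating extraction, loading, and downstream quality checks.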

How will you work?

You’ll be part of DKL’s data team, working remotely and collaborating with data scientists, analysts, and software engineers to support DKL’s data-driven goals. Daily check-ins and regular project meetings are held online, ensuring open communication and alignment.

Who will you work with?

You will work closely with the data team, alongside data scientists and analysts, to build, optimize, and maintain DKL's data infrastructure. You will report directly to the Head of Data and the CTO, who will guide you on data strategy and infrastructure development. Together, you’ll ensure that our data-driven insights align with business objectives and remain accessible across the organization. You’ll also coordinate with the Project Manager on project timelines and deliverables, and with engineering leads from the Backend, DevOps, and Frontend teams to ensure smooth data integration and effective use of data across all projects.

What Makes You a Fit?

Requirements

  • Bachelor’s degree in Computer Science or a related field.
  • Strong Python programming and Software Engineering skills.
  • Strong SQL and analytical skills.
  • Proficiency with at least one of the leading cloud platforms (AWS, GCP, or Azure) and data warehousing tools (Snowflake, Databricks, Redshift, or BigQuery).
  • Proficiency with a workflow orchestration tool, preferably Airflow.
  • Familiarity with data governance and security best practices.
  • Excellent problem-solving skills and the ability to work independently and collaborate with a larger team remotely.

Nice-to-Have

  • Experience with data streaming technologies, such as Kafka or Kinesis.
  • Experience with machine learning pipelines and MLOps.
  • Experience implementing a data mesh architecture.
  • Experience with functional data engineering.
  • Experience with Apache Spark.
  • Experience with a Data Quality framework such as Great Expectations.
  • Experience using DBT to orchestrate SQL transformations in a Data Warehouse.
  • Cloud or data engineering certifications.
  • Previous experience in a fast-paced, agile environment.

What will the First 6 Months be Like?

Your first six months will be structured to support your learning, integration, and progression as you settle into your role. This period aligns with our review checkpoints at 1, 3, and 6 months, ensuring you have a clear pathway to success during your probation period.

Month 1

Your first month will focus on onboarding and getting grounded in our data platforms, engineering practices, and team workflows. You’ll have access to comprehensive technical documentation and training resources, meet key stakeholders across data, analytics, and product teams, and start familiarizing yourself with our data architecture, pipelines, and development tools.

Months 2-3

By month two, you'll start taking on defined responsibilities within our data engineering projects, collaborating closely with your team to plan deliverables, estimate workloads, and coordinate progress across stakeholders.

Months 4-6

With solid experience under your belt, by month four, you'll be ready to lead your own data engineering projects more independently. During this stage, you'll take ownership of end-to-end delivery—designing, building, testing, and deploying scalable data solutions that support our business needs.

What’s the Selection Process?

We aim to make our selection process smooth, informative, and enjoyable, ensuring it’s a two-way street where we get to know each other.

1 / Initial Meet & Greet

A casual video call to introduce ourselves, discuss the role at a high level, and get to know each other’s backgrounds and motivations.

2 / Role-Focused Interview

A more focused discussion, diving into the role’s specifics and exploring key data engineering scenarios you might encounter with us.

3 / Meet the Team Leads

In this call, you’ll meet some of our key team leads. This conversation helps you understand the company culture, our team dynamics, and the kind of cross-functional work you’ll be doing.

4 / Decision & Offer

After the final discussion, we’ll circle back with a decision. If we’re a match, we’ll be excited to extend an offer and welcome you aboard!
