About MIGx
MIGx is a global consulting company with an exclusive focus on the healthcare and life science industries, addressing their demanding quality and regulatory requirements. We manage challenges and solve problems for our clients in the areas of compliance, business processes, and more.
MIGx's interdisciplinary teams from Switzerland, Spain, and Georgia deliver projects in the fields of M&A and integration, applications, data platforms, processes, IT management, digital transformation, managed services, and compliance.
About the Profile
We are looking for a data enthusiast who enjoys working with structured and unstructured data, transforming and organizing it to work in state-of-the-art data fabric and data mesh projects.
Project Description
In this role, you will work as a Data Engineer on complex projects involving multiple data sources and formats. You will be part of a larger team at MIGx responsible for Data Services and for building Data Products for our customers (mid-size to large enterprises). You will have the opportunity to grow in all things data and to help build the customer's overall Data Mesh architecture, initially focusing on a specific visualization project, with more to follow.
Responsibilities
- Develop ETL pipelines in Python and Azure Data Factory, as well as their DevOps CI/CD pipelines.
- Perform software engineering and systems integration via REST APIs and other standard interfaces.
- Collaborate with a team of professional engineers to develop data pipelines, automate processes, deploy and build infrastructure as code, and manage solutions designed in multicloud systems.
- Participate in agile ceremonies and weekly demos, and communicate your daily commitments.
- Configure and connect different data sources, especially SQL databases.
Requirements
- A degree in Computer Science (BSc and/or MSc desired).
- 3+ years of practical experience working in similar roles.
- Proficient with ETL products (Spark, Databricks, Snowflake, Azure Data Factory, etc.).
- Proficient with Azure Data Factory.
- Proficient with Databricks/Snowflake and PySpark.
- Proficient in developing DevOps/CI/CD pipelines.
- Proficient with Azure DevOps Classic/YAML Pipelines.
- Proficient with Azure cloud services: ARM templates, API management, App Service, VMs, AKS, Gateways.
- Advanced SQL knowledge and background in relational databases such as MS SQL Server, Oracle, MySQL, and PostgreSQL.
- Understanding of landing, staging area, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart).
- Data Modeling skills and knowledge of modeling tools.
- Advanced programming skills in Python.
- Ability to work in an agile development environment (SCRUM, Kanban).
- Understanding of CI/CD principles and best practices.
Nice to Have
- Proficient with .NET C#.
- Terraform.
- Bash/PowerShell.
- Data Vault Modeling.
- Familiarity with GxP regulations.
- Programming skills in other languages.
What We Offer
- Hybrid work model and flexible working schedule that suits night owls and early birds.
- 25 holiday days per year.
- Free English classes.
- Possibilities of career development and the opportunity to shape the company's future.
- An employee-centric culture shaped directly by employee feedback: your voice is heard, and your perspective is encouraged.
- Different training programs to support your personal and professional development.
- Work in a fast-growing, international company.
- Friendly atmosphere and supportive management team.
Data Engineer • Madrid, Spain