As part of our consulting team, you will work as a Data Engineer in an IT environment, contributing to projects run with Agile methods.
Your Responsibilities
You will be involved in data-driven projects focusing on collecting, processing, and transforming raw data into actionable insights and Flat Data KPIs for business and data analysts. The primary goal is to enhance data accessibility, ensuring that digital data users can leverage information for strategic decision-making.
This role is pipeline-centric and requires in-depth knowledge of distributed systems and computer science fundamentals.
Your key responsibilities will include:
- Designing and delivering key components of the Digital Data Platform (Spark environment, Scala language).
- Reviewing specifications, proposing technical solutions, and conducting feasibility studies.
- Acquiring datasets aligned with business needs.
- Developing algorithms to transform data into valuable insights.
- Designing, building, testing, and maintaining optimized data pipelines.
- Implementing new data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Improving data reliability, efficiency, and quality.
- Identifying, designing, and implementing internal process improvements (e.g., automating manual processes, optimizing data delivery, enhancing infrastructure scalability).
- Preparing data for predictive and prescriptive modeling.
- Collaborating with data and analytics experts to enhance the overall functionality of data systems.
- Developing software following Amadeus standards, including documentation.
- Conducting code reviews to maintain high-quality development practices.
- Performing unit, package, and performance tests to ensure software quality.
- Participating in the validation and acceptance phase to refine and finalize products.
- Producing software documentation and delivering it to relevant teams.
- Supporting end users in the production phase, troubleshooting issues, and responding to Problem Tracking Records (PTRs) and Change Requests (CRs) from Product Management.
Qualifications: Technical Skills
- Previous experience as a Data Engineer or in a similar role.
- Strong expertise in building and optimizing big data pipelines, architectures, and datasets.
- Hands-on experience with Scala (2 years), or strong experience in Java/C with good knowledge of Scala.
- Proficiency in big data tools such as Spark, Kafka, MapR, Hadoop.
- Solid understanding of ETL (Extract, Transform, Load) processes.
- Experience with cloud services, especially Microsoft Azure.
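To give a flavor of the pipeline work described above, here is a minimal, hypothetical ETL sketch in plain Scala. It uses standard collections as a stand-in for a Spark job, and the `Booking` record, route names, and fare KPI are purely illustrative, not part of any actual Amadeus platform.

```scala
// Illustrative ETL sketch: extract raw rows, transform them into a flat KPI.
// Plain Scala collections stand in for Spark datasets here.

case class Booking(id: String, route: String, fare: Double)

object EtlSketch {
  // Extract: parse raw CSV-like lines into typed records, dropping malformed rows.
  def extract(rawLines: Seq[String]): Seq[Booking] =
    rawLines.flatMap { line =>
      line.split(",") match {
        case Array(id, route, fare) => fare.toDoubleOption.map(Booking(id, route, _))
        case _                      => None
      }
    }

  // Transform: aggregate fares per route, producing a flat KPI for analysts.
  def totalFareByRoute(bookings: Seq[Booking]): Map[String, Double] =
    bookings.groupMapReduce(_.route)(_.fare)(_ + _)
}
```

In a real Spark pipeline the same extract/validate/aggregate shape would typically appear as `DataFrame` or `Dataset` transformations, with the malformed-row filtering serving as a simple data validation step.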
Soft Skills
- Agile mindset: comfortable working with Agile values and methodologies.
- Fast learner: ability to quickly adapt to new environments and evolving requirements.
- Analytical & problem-solving skills: able to identify challenges, implement quick fixes, and develop long-term solutions.
- Team spirit & communication: strong collaboration skills, knowledge-sharing, and clear communication with colleagues and stakeholders.
- Proactive professional, open-minded, and innovative.
Additional Information:
Why Join Us:
- Work on cutting-edge projects with leading international clients.
- Be part of a collaborative and innovative Agile team.
- Access career growth opportunities in an international environment.
- Enjoy a hybrid work model for work-life balance.
Ready to take on the challenge? Apply now and be part of our journey!
Key Skills:
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala.