Role Overview:
As a Senior Data Engineer, you will be responsible for acquiring, storing, governing, and processing large volumes of structured and unstructured data. You will contribute to designing optimal architecture components, implementing enterprise data foundations such as data lakes, and supporting data-driven decision-making. You will work closely with experts across Data Intelligence, Research, UX Design, Digital Technology, and Agile teams to deliver impactful solutions.
Key Responsibilities:
- Data Pipeline Development: Design, implement, and maintain scalable data pipelines to ingest, transform, and process data from multiple sources.
- Data Storage & Modeling: Build and manage data warehouses and data lakes, ensuring efficient storage, retrieval, and modeling aligned with business needs.
- Cloud Integration: Leverage cloud platforms (e.g., AWS, Azure, GCP) to design and deploy scalable data infrastructure, optimizing performance, cost-effectiveness, and reliability.
- Data Quality & Governance: Implement data validation processes, quality checks, and governance frameworks to maintain data accuracy, integrity, and security compliance.
- Optimization & Monitoring: Continuously monitor and enhance data pipelines and infrastructure for improved processing speed, reduced latency, and overall system efficiency.
- Collaboration & Stakeholder Support: Work with data scientists, analysts, and other stakeholders to understand data requirements and provide technical expertise.
- Data Visualization & Reporting: Support business users by implementing data presentation layers and visualizations using tools like Tableau and Power BI.
- Technology & Industry Trends: Stay updated with emerging technologies and industry trends, evaluating innovative solutions to enhance data capabilities.
- Infrastructure Optimization: Support the definition and continuous improvement of underlying data infrastructure.
Who You Are:
- Educational Background: Bachelor’s, Master’s, or Ph.D. in IT, Computer Science, Information Management, or a related field.
- Experience: Minimum of six years in data engineering or a related field.
- Technical Expertise:
  - Strong knowledge of big data technologies and distributed computing concepts.
  - Experience with big data frameworks (e.g., Hadoop, Spark) and distributions (e.g., Cloudera, Hortonworks, MapR).
  - Proficiency in batch and ETL processes for ingesting and processing data from multiple sources.
  - Hands-on experience with NoSQL databases (e.g., Cassandra, MongoDB, Neo4j, Elasticsearch).
  - Familiarity with query tools such as Hive, Spark SQL, and Impala.
  - Experience with Power BI for data visualization and reporting.
  - Interest or experience in real-time stream processing using Kafka, AWS Kinesis, Flume, and/or Spark Streaming.
  - Knowledge of, or willingness to learn, DevOps and DataOps principles (e.g., Infrastructure as Code, automation in data pipelines).
  - High-level understanding of data science concepts, including model building, training, and deployment.
HOW TO APPLY:
If you are a team player, meticulous and organized, and, more importantly, believe that YOU CAN MAKE A DIFFERENCE, we would like to hear from you.
Submit your application with a detailed copy of your updated resume in MS Word format to Izz Lokman (EA Personnel Reg. No. R24124828, Achieve Career Consultant Pte Ltd, EA Licence No. 05C3451) by clicking "Apply Now", or contact Izz at 89193200 for a confidential discussion.
Please indicate the following information in your resume:
- Current & Expected salary
- Reason(s) for leaving
- Notice Period / Availability to commence work
YOUR SUCCESS IS OUR ACHIEVEMENT!
Notice: We would like to inform you that only shortlisted candidates will be notified. All applications will be treated with the strictest confidence.