To develop and maintain complete data architectures across multiple application platforms. To design, build, operationalise, secure, and monitor data pipelines and data stores in line with applicable architecture solution designs, standards, policies, and governance requirements, making data accessible for evaluation, optimisation, and downstream use-case consumption. To execute data engineering duties according to applicable standards, frameworks, and roadmaps.
Qualifications:
Type of Qualification: First Degree
Field of Study: Business Commerce, Information Studies, or Information Technology
Experience Required:
Experience in building databases, data warehouses, reporting, and data integration solutions. Experience building and optimising big-data pipelines, architectures, and data sets. Experience in creating and integrating APIs. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Experience in database programming languages, including SQL and PL/SQL, and in data processing frameworks such as Spark, or equivalent data tooling. Experience with data pipeline and workflow management tools.
Understanding of data pipelining and performance optimisation, data principles, and how data fits within an organisation, including customer, product, and transactional information. Knowledge of integration patterns, styles, protocols, and systems theory.
Additional Information:
Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Vacancy: 1
Remote Work:
Employment Type: