ABOUT SAND
Sand Technologies is a global leader in digital transformation, empowering leading organisations and governments worldwide to achieve their digital aspirations.
We offer a comprehensive suite of services, including enterprise AI solutions, data science, software engineering, and IoT, delivered from our centres in the Americas, Europe, and Africa.
Our training programmes, in partnership with organisations like the Mastercard Foundation, Amazon Web Services, Holberton, and ALX, cultivate the next generation of agile digital leaders.
Through recent strategic acquisitions, Sand Technologies has further strengthened its capabilities in advanced analytics and intelligent software development, enhancing our ability to solve our clients' most pressing challenges across the telecom, utilities, healthcare, and insurance industries.
We believe in harnessing technology to deliver real impact and value, helping organisations bridge the gap between their current reality and their digital future.
ABOUT THE ROLE
Sand Technologies focuses on cutting-edge, cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, including data mesh, lakehouse, data vault, and data warehouse. Our data engineers build pipelines that support our data scientists and power our front-end applications, which means we do data-intensive work for both OLTP and OLAP use cases. Our environments are primarily cloud-native, spanning AWS, Azure, and GCP, but we also work on systems that run exclusively on self-hosted open-source services. We maintain a strong code-first, data-as-a-product mindset, in which testing, reliability, and performance are non-negotiable.
JOB SUMMARY
A Data Engineer's primary role is to design, build, and maintain scalable data pipelines and infrastructure that support data-intensive applications and analytics solutions. They collaborate closely with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies enables them to develop robust and reliable data solutions.
RESPONSIBILITIES
- Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python, and PySpark.
- Data Modelling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
- ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses.
- Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics.
- Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services for data storage, processing, and analytics.
- Data Quality and Governance: Implement data quality checks, validation processes, and data governance policies to ensure accuracy, consistency, and compliance with regulations.
- Monitoring, Optimization, and Troubleshooting: Monitor data pipeline and infrastructure performance, identify bottlenecks, and optimize for scalability, reliability, and cost-efficiency. Troubleshoot and resolve data-related issues.
- DevOps: Build and maintain basic CI/CD pipelines, commit code to version control, and deploy data solutions.
- Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions.
- Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions.
- Best Practices: Continuously learn and apply best practices in data engineering and cloud computing.
QUALIFICATIONS
- Proven experience as a Data Engineer, or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure.
- Proven experience working with big data and the tools used to process it.
- Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues.
- Solid understanding of data engineering principles and practices.
- Excellent communication and collaboration skills to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
- Ability to adapt to new technologies, tools, and methodologies in a dynamic and fast-paced environment.
- Ability to write clean, scalable, robust code in Python or similar programming languages. A background in software engineering is a plus.
DESIRABLE LANGUAGES/TOOLS
- Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
- Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
- Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
- Experience working with modern data architectures, such as the lakehouse.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
- Knowledge of data governance and best practices in data management.
- Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
- SQL (for database management and querying).
- Apache Spark (for distributed data processing).
- Apache Spark Streaming, Kafka, or similar (for real-time data streaming).
- Experience using data tools in at least one cloud service (AWS, Azure, or GCP), e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, or Dataproc.
Would you like to join us as we work hard, have fun and make history?