Compile large and complex datasets that meet functional/non-functional business requirements
Define, design and implement internal process improvements: automate manual processes, improve data delivery, and redesign infrastructure to increase scalability
Build the infrastructure required to optimally extract, transform, and load data from a variety of data sources
Build analytics tools that use data to provide actionable insights into operational efficiency and business KPIs
Work with stakeholders and implementation teams to assist with data-related technical issues and support data infrastructure needs
Identify ways to improve data reliability, efficiency and quality
Ensure that data is available to, and accessed only by, the appropriate parties
Ensure data entries adhere to data management practices
Requirements
Must have a Bachelor's degree in Statistics/Computer Science or equivalent
At least 5 years of working experience with data warehouses and big data technologies
Advanced SQL experience, including writing complex queries against relational databases, and familiarity with a variety of database systems
Experience building and optimizing big data pipelines and architectures
Strong analytical skills related to working with unstructured data sets
Experience building processes that support data transformation, data structures, metadata, and dependency management
Experience manipulating, processing, and extracting value from large, disconnected datasets
Knowledge of message queuing, stream processing, and scalable 'big data' data stores
Experience with big data tools: Kafka, Spark, Hadoop, etc.
Experience with relational SQL and NoSQL databases
Experience with object-oriented/object-functional scripting languages: Python, Java, Scala, etc.
Excellent verbal and written communication skills in Arabic and English