The Senior Data Engineer will be responsible for developing and deploying cloud-first ETL processes, data management, and data warehousing in the Automotive Digital Advertising industry. LotLinx is looking for a candidate who thrives in a fast-paced, collaborative environment and can use data to improve, optimize, and lead further development of our data aggregation processes.
This hybrid position is based in either our Winnipeg or Hamilton office.
Key Responsibilities
Design, build, and maintain robust, scalable, and efficient data pipelines to process large-scale datasets from multiple sources.
Develop and manage ETL/ELT workflows for data ingestion, transformation, and loading into data lakes and warehouses.
Architect and implement cloud-based solutions (AWS, GCP) to ensure data security, scalability, and high availability.
Work with stakeholders, including the Analytics, Product, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Partner with DevOps and Security teams to ensure compliance with data governance, privacy, and security standards.
Engineer solutions for large-scale data storage and management.
Proactively identify and resolve performance bottlenecks, scaling challenges, and technical issues.
Explore available technologies and design solutions to continuously improve our data quality, workflow reliability, and scalability, and report on performance and capabilities.
Act as an internal expert on each of our data sources and own overall data quality.
Qualifications
5+ years of experience in data engineering or a similar role, with a focus on designing and managing scalable data systems.
Proven expertise with cloud platforms, specifically AWS and/or GCP.
Proficiency in SQL, including advanced query optimization and data modeling techniques.
Strong programming skills in Python, Scala, or Java, with a focus on developing data processing applications.
Experience with data engineering tools such as Airflow, Dataflow, and dbt.
Experience with big data frameworks like Apache Spark, Hadoop, or Beam.
Hands-on experience with real-time data streaming platforms such as Apache Kafka, Pub/Sub, or Kinesis.
Knowledge of CI/CD pipelines, version control systems (e.g., Git), and containerization technologies (e.g., Docker, Kubernetes).
Experience managing data warehouses and lakes using modern platforms such as Snowflake, BigQuery, or Redshift.
Familiarity with data governance frameworks, security best practices, and compliance standards.
Demonstrated ability to solve complex technical challenges, think creatively, and innovate within cloud and data ecosystems.
The salary range for this position is $108,000 - $162,000.