Data Modeler

Lorien

London

Remote

GBP 150,000 - 200,000

Full time

21 days ago

Job summary

An innovative firm is seeking a skilled Data Modeler to enhance data modeling practices for an insurance client. This role involves developing and optimizing semantic data models using Azure Synapse Analytics and PySpark, ensuring data accuracy, scalability, and compliance. You will collaborate with Power BI developers to create robust data models that meet analytical needs. This exciting opportunity offers a chance to work remotely on a 6-month contract, contributing to impactful data solutions in a dynamic environment. If you are passionate about data and eager to make a difference, this role is for you.

Qualifications

  • Proven experience with Azure Synapse Analytics and PySpark notebooks.
  • Strong knowledge of data formats like Delta Tables and Parquet.

Responsibilities

  • Develop and optimize semantic data models using Azure Synapse Analytics.
  • Integrate data from various sources for analytical use.

Skills

Azure Synapse Analytics
PySpark
Data Lake Storage Gen2
Power BI
Data Factory
Azure SQL Databases
Git
CI/CD pipelines

Job description

Data Modeler

We are recruiting for a Data Modeler with Azure Synapse Analytics experience to join one of our insurance clients on a 6-month contract.

Inside IR35

Remote

Responsibilities:

  1. Develop, maintain, and optimize semantic data models using Azure Synapse Analytics.
  2. Design data models that integrate data from Azure Data Lake Storage Gen2, Azure SQL databases, and other sources for consumption in Power BI and Synapse SQL pools.
  3. Build and maintain dimensional and relational models based on business requirements.
  4. Ensure data model accuracy, scalability, and performance.
  5. Use PySpark within Azure Synapse notebooks to extract, transform, and load (ETL/ELT) data from raw formats (e.g., Delta, Parquet, CSV) stored in ADLS Gen2.
  6. Integrate data from Azure SQL databases and other sources into unified data models for analytical use.
  7. Implement data transformation pipelines and workflows in PySpark.
  8. Collaborate with Power BI developers to ensure the data models support analytical and reporting needs.
  9. Develop and enforce data governance and data modeling standards to ensure consistency and accuracy across all models.
  10. Ensure all data modeling practices adhere to security and compliance requirements, including role-based access control, encryption, and data privacy laws (e.g., GDPR).
  11. Apply row-level, column-level, and object-level (RLS/CLS/OLS) security within the data models as defined by business use and target audience.

Skills:

  1. Proven experience with Azure Synapse Analytics (specifically serverless SQL pools) and PySpark notebooks.
  2. Strong knowledge of data formats such as Delta Tables, Parquet, and CSV, and experience processing these formats in Azure Data Lake Storage Gen2.
  3. Solid experience with Azure SQL Databases, including schema design, querying, and data integration.
  4. Experience building semantic models that integrate with Power BI.
  5. Proficiency with Data Factory for data orchestration.
  6. Experience with Synapse Pipelines, Azure Monitor, and Azure Data Catalog is a plus.
  7. Familiarity with DevOps tools like Git, CI/CD pipelines, and Azure DevOps.

Carbon60, Lorien & SRG - The Impellam Group STEM Portfolio are acting as an Employment Business in relation to this vacancy.
