Overall 5-7 years of experience in data engineering and data transformation on the cloud.
3+ years of strong, hands-on experience in Azure data engineering and Databricks.
Expertise in developing and supporting lakehouse workloads at enterprise scale.
Experience in PySpark is required for developing and deploying workloads that run on Spark's distributed computing engine (see the illustrative sketch after this list).
Cloud deployment experience, preferably on Microsoft Azure.
Experience in implementing platform and application monitoring using cloud-native tools.
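For context on the PySpark requirement above, the following is a minimal sketch of the kind of lakehouse workload involved: reading raw data, applying a simple transformation, and writing a curated Delta table. The paths, column names, and table location are illustrative assumptions, not part of this posting.

```python
# Minimal PySpark sketch of a lakehouse-style workload (illustrative only).
# Paths, column names, and schema are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("lakehouse-workload-sketch")
    .getOrCreate()
)

# Read raw landing-zone data (hypothetical path).
raw_df = (
    spark.read
    .option("header", "true")
    .csv("/mnt/landing/sales/*.csv")
)

# Basic cleansing and enrichment step.
curated_df = (
    raw_df
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["order_id"])
)

# Write to a curated Delta location (Delta Lake libraries are
# available by default on Databricks clusters).
(
    curated_df.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/curated/sales")
)
```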