We are recruiting for a freelance Data Scientist role with a German energy customer based in Essen, Germany:
Duration: 1 year, with possible extensions
Location: Essen (2-3 days onsite per week)
Language: English required; German is a nice-to-have
Client: Multinational Utility Customer
Pay Rate: Negotiable
Mandatory Skills:
Data Science:
Extensive experience in time-series forecasting, predictive modelling, and deep learning.
Proficient in designing reusable and scalable machine learning systems.
Proficiency in implementing techniques such as ARIMA, LSTM, Prophet, linear regression, and random forests to deliver accurate forecasts and insights.
Strong command of machine learning libraries, including scikit-learn, XGBoost, Darts, TensorFlow, and PyTorch, along with data manipulation tools like Pandas and NumPy.
Proven expertise in designing and implementing ensemble techniques such as stacking, boosting, and bagging to improve model accuracy and robustness.
Proven track record of analysing and optimising the performance of operational machine learning models to ensure long-term efficiency and reliability.
Expertise in retraining and fine-tuning models based on evolving data trends and business requirements.
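By way of illustration, the stacking approach listed above can be sketched in a few lines of scikit-learn. The data, base learners, and final estimator here are synthetic assumptions for demonstration, not the client's actual stack:

```python
# Minimal stacking-ensemble sketch: base learners feed their out-of-fold
# predictions to a final estimator (all model choices here are illustrative).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("lr", LinearRegression()),
    ],
    final_estimator=LinearRegression(),
)
stack.fit(X_train, y_train)
score = stack.score(X_test, y_test)  # R^2 on the held-out split
```

The same `estimators` list pattern extends naturally to boosted or bagged base learners.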
MLOps Implementation:
Proficiency in leveraging Python-based MLOps frameworks for automating machine learning pipelines, including model deployment, monitoring, and periodic retraining.
Advanced experience in using the Azure Machine Learning Python SDK to design and implement parallel model training workflows, incorporating distributed computing, parallel job execution, and efficient handling of large-scale datasets in managed cloud environments.
Strong experience in PySpark for scalable data processing and analytics.
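The monitor-and-retrain loop implied by the MLOps bullets can be sketched as follows. The drift threshold, model, and synthetic data are assumptions for illustration; in practice this logic would sit inside a managed pipeline such as Azure Machine Learning:

```python
# Illustrative monitoring step: flag retraining when live prediction error
# drifts past an agreed threshold (threshold and data are assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def fit_model(X, y):
    return LinearRegression().fit(X, y)

def needs_retrain(model, X_new, y_new, threshold=0.5):
    """Return True when error on fresh data exceeds the threshold."""
    return mean_absolute_error(y_new, model.predict(X_new)) > threshold

# Train on one regime, then observe a shifted regime.
X_old = rng.normal(size=(200, 3))
y_old = X_old @ np.array([1.0, 2.0, 3.0])
model = fit_model(X_old, y_old)

X_new = rng.normal(size=(100, 3))
y_new = X_new @ np.array([2.0, 2.0, 3.0])  # simulated coefficient drift

retrain = needs_retrain(model, X_new, y_new)
if retrain:
    # Periodic retraining on the combined history, as the role describes.
    model = fit_model(np.vstack([X_old, X_new]), np.hstack([y_old, y_new]))
```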
Azure Expertise:
Azure Machine Learning: Managing parallel model training, deployment, and operationalization using the Python SDK.
Azure Databricks: Collaborating on data engineering and analytics tasks using PySpark/Python.
Azure Data Lake: Implementing scalable storage and processing solutions for large datasets.
Preferred Skills:
Experience in applying k-means clustering for data segmentation and pattern identification.
Skilled in creating granular bottom-up forecasting models for hierarchical insights.
Experience in designing, orchestrating, and managing pipelines for seamless data integration and processing using Azure Data Factory.
Knowledge of power trading concepts.
Experience in applying generative AI models, such as GPT or similar frameworks.
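The k-means segmentation mentioned under preferred skills might look like the following sketch; the two synthetic "customer" segments and the cluster count are illustrative assumptions:

```python
# k-means segmentation sketch on two well-separated synthetic segments
# (e.g. low- vs high-consumption profiles; data here is made up).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
low = rng.normal(loc=1.0, scale=0.2, size=(50, 4))
high = rng.normal(loc=5.0, scale=0.2, size=(50, 4))
X = np.vstack([low, high])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_  # segment assignment per row
```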
If you are interested, or know someone who might be, please reach out and we can arrange a time to speak.