(Senior) Data & Analytics Engineer

Be among the first applicants.
Gemma Analytics
Berlin
EUR 50,000–90,000
Yesterday
Job description

About Gemma Analytics

At Gemma, we help our clients activate data by using state-of-the-art technology. Our clients make better choices and are empowered to make use of their data on their own. We are service-focused, yet also build open-source tools to deliver a more effective and efficient service. You can read more about our data philosophy here.
Our clients range from Series A ventures to SMEs, with 30 to 13,000 employees each. We have an honest, pleasant, and fun work environment. Feel free to call our references for validation :)


About the job

Gemma Analytics is data-driven and helps clients to become more data-driven.

As our Senior Data & Analytics Engineer, you play a critical role in helping our clients unlock business value from their data. You’re not just technically strong — you’re a Data Magician who uncovers structure in chaos and turns raw data into meaningful, actionable insight. You dig into complex datasets, spot what others overlook, and guide clients toward pragmatic, high-impact solutions.

But your impact doesn’t stop at client work. As a senior team member, you act as a sparring partner and coach to your colleagues. You’re someone others turn to for advice on technical challenges, project structure, and best practices — and you’re excited to help them grow.

You have the opportunity to work on difficult problems while helping startups and SMEs to make well-informed decisions based on data.

Challenges:

  • As we are tooling-agnostic, you will work with multiple technologies and understand the ins and outs of what is currently possible in the data landscape.

  • Collaborate with domain experts and client stakeholders to solve data challenges across a variety of industries.

  • Support and mentor other team members through code reviews, pair programming, and knowledge sharing.

  • Lead internal sparring sessions and contribute to developing team-wide best practices and scalable project structures.


Who you are

We believe in a good mixture of experience and upside in our team, and we look for both in equal measure. For this role, we require more expertise and a proven trajectory.

Besides that, we are looking for the following:

  • 3–4 years of hands-on experience in data engineering or analytics engineering, with a strong focus on building and maintaining robust data pipelines and analytics-ready data models.

  • Proficient in SQL and experienced with relational databases, capable of translating complex business logic into clear, maintainable queries.

  • Hands-on experience using dbt (preferably dbt Cloud) in production environments, following best practices for modular, testable, and documented code.

  • Solid understanding of data modeling techniques (e.g., Kimball dimensional modeling, Data Vault, star/snowflake schema) and data warehousing principles.

  • Experience working with modern data stack tools, such as Snowflake, BigQuery, Airflow, Airbyte/Fivetran, Git, and CI/CD workflows.

  • Proficient in Python (or a similar scripting language) for use cases such as API integration, data loading, and automation.

  • Strong communication skills in English (written and spoken), with the ability to explain technical decisions and collaborate with both technical and non-technical stakeholders.

  • Comfortable working in client-facing projects, navigating ambiguity, and delivering high-quality results with minimal oversight.

  • Experience coaching or mentoring junior team members through code reviews, sparring, and knowledge sharing.

  • Bonus: Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker) to support end-to-end workflows or assist analysts.

  • Bonus: Fluency in German.
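To make the SQL and data-modeling requirements above concrete, here is a toy sketch (not part of the job description, and not Gemma's actual models) of deriving a Kimball-style star schema from raw order records, using Python's stdlib sqlite3 as a stand-in warehouse:

```python
# Illustrative only: a raw table reshaped into a dimension and a fact table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (
        order_id INTEGER, customer_name TEXT, country TEXT, amount_eur REAL
    );
    INSERT INTO raw_orders VALUES
        (1, 'Acme GmbH', 'DE', 120.0),
        (2, 'Acme GmbH', 'DE', 80.0),
        (3, 'Beta Ltd',  'UK', 200.0);

    -- Dimension: one row per customer.
    CREATE TABLE dim_customer AS
        SELECT DISTINCT customer_name, country FROM raw_orders;

    -- Fact: one row per order, keyed to the dimension.
    CREATE TABLE fct_orders AS
        SELECT o.order_id,
               d.rowid AS customer_key,
               o.amount_eur
        FROM raw_orders o
        JOIN dim_customer d USING (customer_name, country);
""")

# A business question answered off the star schema: revenue per country.
revenue = conn.execute("""
    SELECT d.country, SUM(f.amount_eur)
    FROM fct_orders f
    JOIN dim_customer d ON f.customer_key = d.rowid
    GROUP BY d.country ORDER BY d.country
""").fetchall()
print(revenue)  # [('DE', 200.0), ('UK', 200.0)]
```

In production this kind of logic would live in dbt models rather than an ad-hoc script, but the shape of the work is the same: clear, maintainable SQL that turns raw records into analytics-ready tables.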

Technologies you’ll use

Working with multiple clients, we are in touch with many technologies, which is truly exciting. We aim to use state-of-the-art technologies while being fully pragmatic (we do not crack a walnut with a sledgehammer). We follow an ELT philosophy and divide the tasks between Data Engineering and Analytics Engineering accordingly.

The following technologies constitute our preferred data tech stack:

  • Data Loading
    • For most clients, we use our own open-source Python library EWAH and Apache Airflow.
    • For simple requests, we work with Fivetran or Stitch.
  • Data Warehousing
    • For smaller data loads, we mostly use PostgreSQL databases.
    • For larger datasets, we work with Snowflake or BigQuery.
  • Data Transformation
    • We love to work with dbt (data build tool).
  • Data Visualization
    • For smaller businesses with < 100 FTE, we mostly recommend Metabase as a powerful open-source reporting tool.
    • For specialized needs and a centralized BI setup, we recommend Power BI or Tableau.
    • For a decentralized, self-service BI with more than 50 users, we recommend Looker.
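The ELT split above can be sketched in a few lines. This is an illustrative toy, not the actual setup (which uses EWAH/Airflow for loading and Snowflake, BigQuery, or PostgreSQL as the warehouse): the extract-and-load step lands source records unchanged, and all cleaning happens afterwards in SQL, where in practice a dbt model would live.

```python
# Hedged sketch of ELT: load raw, transform later in SQL.
import json
import sqlite3

# Hypothetical API payload; in reality this comes from a source system.
api_records = [
    {"id": 1, "status": "PAID", "amount": "19.90"},
    {"id": 2, "status": "open", "amount": "5.00"},
]

wh = sqlite3.connect(":memory:")  # stand-in for the warehouse

# EL: one raw JSON column, no business logic applied at load time.
wh.execute("CREATE TABLE raw_invoices (payload TEXT)")
wh.executemany(
    "INSERT INTO raw_invoices VALUES (?)",
    [(json.dumps(r),) for r in api_records],
)

# T: typing and cleaning happen downstream of the load, in SQL.
wh.execute("""
    CREATE TABLE stg_invoices AS
    SELECT json_extract(payload, '$.id')                   AS invoice_id,
           LOWER(json_extract(payload, '$.status'))        AS status,
           CAST(json_extract(payload, '$.amount') AS REAL) AS amount_eur
    FROM raw_invoices
""")
rows = wh.execute("SELECT * FROM stg_invoices ORDER BY invoice_id").fetchall()
print(rows)  # [(1, 'paid', 19.9), (2, 'open', 5.0)]
```

Keeping the load step dumb is the point of ELT: if the transformation logic changes, the raw data is still there to rebuild from.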

Gemma Perks

We are located in Berlin, close to Nordbahnhof. We are currently 20 colleagues and will grow to 22 by the end of the year. Other perks include:

  • We have an honest, inclusive work environment and want to nurture this environment.
  • We have frequent team events in Berlin: our cultural base.
  • We encourage workations and even go on workations as a company - it is up to you whether you come to the office between one and five days a week.
  • We don’t compromise on equipment - a powerful laptop, extra screens, and all the tools you need to be effective.
  • We will surround you with great people who love to solve (mostly data) riddles.
  • We believe in efficient working hours rather than long working hours - we focus on the output rather than the input.
  • We learn and share during meetups and lunch & learn sessions, and we are open to further initiatives.
  • We pay a market-friendly salary and we additionally distribute at least 20% of profits to our employees.
  • We are fast-growing, have technology at our core, yet we do not rely on a VC and operate profitably.
  • We have a great yearly offsite event that brings us all together for a full week, enjoying good food, having a good time, and of course, solving complex data-related tasks.

How you'll get here

  1. CV Screening

  2. Phone/Coffee/Tea Initial Conversation

  3. Hiring Test @home

  4. Interviews with 2-3 future colleagues

  5. Reference calls

  6. Offer + Hired
