We're growing quickly and super excited for you to join us!
About Us
Our team values straightforward communication, embracing feedback without taking it personally, and fostering a highly collaborative environment. We thrive on working together, lifting each other up, and getting things done with a sense of urgency. We're the kind of team that loves making bold choices, voicing strong opinions, and keeping a 100 mph pace. No endless meetings here: if it can be done today, we'll have it done yesterday.
As a Senior Data Engineer, you will be responsible for designing, building, and maintaining scalable data infrastructure and pipelines. You will collaborate with cross-functional teams to ensure the availability, reliability, and efficiency of data systems, enabling data-driven decision-making across the organization.
Key Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines to process and transform large datasets efficiently.
- Optimize data architecture and storage solutions to support analytics, machine learning, and business intelligence.
- Work with cloud platforms (AWS) to implement scalable data solutions.
- Ensure data quality, integrity, and security across all data pipelines.
- Collaborate with data scientists, analysts, and software engineers to support data-driven initiatives.
- Monitor and troubleshoot data workflows to ensure system performance and reliability.
- Create APIs to provide analytical information to our clients.
What You Bring
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong proficiency in SQL and database technologies (e.g., PostgreSQL, MySQL, Snowflake, BigQuery).
- Experience with data pipeline orchestration tools (e.g., Apache Airflow, Prefect, Dagster).
- Proficiency in programming languages such as Python and Scala.
- Hands-on experience with AWS cloud data services.
- Familiarity with big data processing frameworks like Apache Spark.
- Knowledge of data modeling, warehousing concepts, and distributed computing.
- Experience implementing CI/CD for data pipelines.
- Experience with real-time data processing and streaming architectures (RisingWave, Kafka, Flink).
- Experience with database performance tuning and query optimization.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Experience with ETL/ELT pipeline development and automation.
- Experience with cloud computing and infrastructure management on AWS (nice to have).
What is it like to work at Topsort?
- Direct and Speedy: We give candid feedback and push each other to set higher goals and produce more impact, always asking “how do we do this faster and better?”
- Embrace a Sports Team Mentality: We are helpful and collaborative internally. You are surrounded by people who are all here to help you get the job done and shine as a team.
- Silicon Valley to the World: We were founded during the pandemic by Stanford and Harvard alumni co-founders, and we offer remote-working options with coworking memberships and at least one in-person offsite gathering a year.
- Constant Improvement: The best way to grow is by doing. The Topsort team is made up of action-driven, intelligent, and curious individuals who constantly seek the improvements and reinventions that lead to better output and are never content with the status quo.
- Employee Stock Option Plan: Because we believe everyone who joins an early-stage, fast-growing startup should be incentivized as the company grows.
Topsort is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Sound like the right fit? Let's dive right in!