We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 6,500 CI&Ters around the world, we've built partnerships with more than 1,000 clients over our 30-year history. Artificial Intelligence is our reality.
We're growing fast and seeking a talented, motivated Databricks Senior Data Developer to join our team. You'll play a crucial role in developing and maintaining scalable data pipelines and infrastructure to drive data analytics and machine learning solutions for our clients.
Responsibilities:
- Design and Develop Data Pipelines: Create robust, scalable data pipelines using Databricks, Apache Spark, and SQL to transform and process large datasets efficiently.
- Data Architecture: Collaborate with data architects to design data models and architecture that support data analytics and machine learning applications.
- Performance Optimization: Monitor and optimize the performance of existing data pipelines and workflows to ensure high throughput and low latency.
- Collaboration with Stakeholders: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical specifications.
- Data Quality and Governance: Implement data quality checks and governance practices to ensure data consistency, accuracy, and compliance.
- Documentation and Best Practices: Maintain comprehensive documentation for data pipelines and processes, and contribute to the establishment of best practices within the team.
- Mentorship: Provide guidance and mentorship to junior team members, fostering a culture of learning and collaboration.
Requirements for this Challenge:
- Experience: Solid experience in data engineering or a related field, with a focus on data pipelines and ETL processes.
- Soft Skills: Excellent communication in English and Portuguese, with the ability to work effectively in a team-oriented environment and engage with clients.
- Technical Skills: Proficiency in Databricks, Apache Spark, SQL, and Python (or Scala).
- Cloud Technologies: Experience with cloud platforms such as Azure, AWS, or GCP, particularly related to data storage and processing solutions.
- Big Data Technologies: Experience with additional big data and orchestration technologies such as Hadoop, Kafka, or Airflow.
- Data Modeling: Strong understanding of data modeling concepts and experience with relational and NoSQL databases.
- Problem-Solving: Excellent analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
- Education: Bachelor’s degree in Computer Science, Information Technology, or a related field.
Nice to Have:
- Certifications: Databricks certification (e.g., Databricks Certified Developer) or other relevant certifications.
- Machine Learning: Familiarity with machine learning frameworks and concepts, and experience integrating data engineering with machine learning workflows.
- Data Visualization: Knowledge of data visualization tools (e.g., Tableau, Power BI) to help communicate insights effectively.
- Agile Methodologies: Experience working in Agile development environments and familiarity with tools such as JIRA or Confluence.
Our benefits:
- Health and dental insurance
- Meal and food allowance
- Childcare assistance
- Extended paternity leave
- Wellhub (Gympass)
- TotalPass
- Profit-sharing (PLR)
- Life insurance
- CI&T University
- Discount club
- Free online platform dedicated to physical, mental, and overall well-being
- Pregnancy and responsible parenting course
- Partnerships with online learning platforms
- Language learning platform
- And many more!
Collaboration is our superpower, diversity unites us, and excellence is our standard.
We value diverse identities and life experiences, fostering an inclusive and safe work environment. We encourage candidates from diverse and underrepresented groups to apply for our positions.