Offer criteria
Job title:
- Big Data Developer (M/F)
Minimum experience:
- 3 to 5 years
Sector:
- IT, Internet, Telecoms, IT consulting
Degrees:
- No degree required
Location:
- Cours-les-Bains (33)
Terms:
- Permanent contract (CDI)
- Full time
- Partial remote work
The company: Klanik
KLANIK is an IT Engineering consulting company that supports its clients in their digital and technological projects.
The KLANIK Group now brings together more than 750 talents, working across 16 offices in Europe, North America, Africa, and the Middle East. These are committed, unconventional, and passionate experts, involved in strategic projects thanks to their high level of expertise in Software, DevOps, Cloud, Agility, Cybersecurity, Big Data & AI.
Job description
We are looking for a Data Engineer to design, build, and maintain a scalable and reliable data platform supporting analytics and data science use cases. The role focuses on Databricks-based pipelines, data quality, and modern DevOps practices, working closely with BI, Analytics, and business stakeholders.
Key Responsibilities
Design and optimize ETL/ELT data pipelines on Databricks and AWS
Maintain and evolve the Medallion architecture
Build reusable frameworks for data ingestion, transformation, and validation
Develop data models, schemas, and datasets for analytics and visualization
Implement data quality checks and anomaly detection
Apply DevOps and CI/CD practices, strongly leveraging Terraform
Collaborate with cross-functional teams and support data infrastructure needs
Document pipelines, data models, and technical processes
Mentor team members and support Databricks platform administration
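To make the responsibilities above more concrete, here is a minimal sketch of the kind of quality-gated promotion from a bronze (raw) layer to a silver (validated) layer that a Medallion-style pipeline performs. This is plain, self-contained Python for illustration only; the field names, records, and quality rules are hypothetical, not taken from the role, and a real implementation would run on Databricks with PySpark.

```python
from dataclasses import dataclass

# Hypothetical bronze-layer records: raw events arriving as loosely typed dicts.
bronze = [
    {"order_id": "1001", "amount": "49.90", "country": "FR"},
    {"order_id": "1002", "amount": "-5.00", "country": "FR"},  # negative amount: anomaly
    {"order_id": "",     "amount": "12.50", "country": "DE"},  # missing key: quarantined
]

@dataclass
class SilverOrder:
    """Typed, validated record in the silver layer."""
    order_id: str
    amount: float
    country: str

def promote_to_silver(rows):
    """Apply simple data quality rules while promoting bronze rows to silver.

    Rows that fail parsing or validation are quarantined rather than dropped,
    so anomalies remain inspectable downstream.
    """
    silver, quarantine = [], []
    for row in rows:
        try:
            rec = SilverOrder(row["order_id"], float(row["amount"]), row["country"])
        except (KeyError, ValueError):
            quarantine.append(row)
            continue
        # Quality rules: non-empty business key, non-negative amount.
        if rec.order_id and rec.amount >= 0:
            silver.append(rec)
        else:
            quarantine.append(row)
    return silver, quarantine

silver, quarantine = promote_to_silver(bronze)
```

The same split-and-quarantine pattern generalizes: in a Spark pipeline each rule becomes a filter or expectation over a DataFrame, and the quarantine table feeds the anomaly-detection and monitoring side of the platform.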
Candidate profile
Required Qualifications
Bachelor's degree in Computer Science or related field (Master's preferred)
3+ years of experience as a Data Engineer
2+ years building Databricks pipelines
Strong SQL and Python / PySpark / SparkSQL skills
2+ years of dbt experience
1+ year of Terraform
Experience with Git-based CI/CD workflows
Strong knowledge of big data architectures and data modeling
Nice to Have
Experience with Snowflake, Redshift, or BigQuery
Databricks administration experience
API or streaming data ingestion experience
Skills
Strong analytical and problem-solving skills
Excellent communication and stakeholder collaboration
Ability to work in a fast-paced, data-driven environment
Salary and benefits
TO BE DETERMINED
Reference: 2468241

