Specialist Software Engineer - Data Engineer (Spark, Scala, PySpark)
Missions
- Work with big data technologies, including Hadoop, Hive, and Spark, to process and
analyze large datasets.
- Collaborate with cross-functional teams to gather requirements and implement solutions
that meet business needs.
- Participate in all phases of the software development lifecycle, including planning,
development, testing, and deployment.
- Implement CI/CD pipelines using tools like Jenkins to automate build, test, and
deployment processes.
- Adapt quickly to new technologies and methodologies, demonstrating a continuous
learning mindset.
- Troubleshoot and resolve issues in production and development environments.
- Hands-on experience with cloud platforms (e.g., Azure, AWS).
Profile
Job Description:
We are seeking a talented and motivated Software Engineer with expertise in big data
technologies to join our dynamic team. The ideal candidate has a strong background in
big data frameworks such as Hadoop, Hive, and Spark, and a passion for delivering high-quality
software solutions in an agile environment.
The role requires an engineer with 5+ years of experience in software development.
Primary skills:
- Hands-on experience with Spark, PySpark, Hadoop, HDFS, and Hive.
- CI/CD and DevOps practices: Jenkins, Maven, Git.
Secondary skills: