Key Responsibilities
Design scalable and reliable data pipeline solutions.
Manage day-to-day tasks in the queue according to priorities set in sprint planning meetings.
Work closely with different commercial teams to deliver personalized customer offers.
Ensure on-time, high-quality deliverables.
Plan releases and provide proper support for released packages.
Requirements
Bachelor’s degree in Computer Science, Information Systems, Software Engineering, or similar
Minimum of 3 years of experience as a Big Data Engineer
Strong programming skills in Python and PySpark
Experience with SQL and HBase
Knowledge of Kafka and Hadoop
Knowledge of DataStage or any other ETL tool
Benefits
- Hybrid working model
- Social and medical insurance
- Transportation
- Flexible and friendly working environment