We are looking for a Senior–Mid Big Data Engineer on behalf of our global business partner operating in the Telecommunications industry.
About the Role
We are looking for a Senior–Mid Big Data Engineer to join our delivery-focused Big Data team. This role is fully aligned with production data migration and Big Data service deployment activities.
Key Responsibilities
- Execute data migration projects, ensuring data accuracy, performance, and operational stability.
- Deploy, configure, and operate Big Data services and platforms in live production environments.
- Install, configure, and maintain Big Data clusters, including Hadoop ecosystem components (HDFS, Hive, Spark, Flink, MapReduce).
- Perform DWS (Data Warehouse Service) deployments, including setup, configuration, tuning, and post-deployment support.
- Design, implement, and maintain ETL/ELT pipelines for production Data Warehouses and Data Lakes.
- Optimize and troubleshoot batch processing workloads using Hive, Spark, and related technologies.
- Implement and support data ingestion pipelines, including real-time components where applicable.
- Provide on-site and remote delivery support to international customers during deployment and migration phases.
Requirements
Required Qualifications
- BSc or MSc degree in Computer Engineering, Computer Science, Software Engineering, or a related technical discipline.
- 4–8 years of professional experience in Big Data Engineering, with a strong focus on delivery, data migration, and production operations.
- Strong experience with Spark, Flink, Hadoop, and related Big Data technologies.
- Proficiency in Python or Java.
- Solid experience with SQL development (MySQL, PostgreSQL).
- Hands-on experience with batch processing using Hive and Spark.
- Strong understanding of Data Warehouse and Data Lake architectures.
- Experience working in Unix/Linux environments.
- Ability to work in production environments with high availability and performance requirements.
- Strong communication skills for working in international delivery teams.
- Fluency in written and spoken English is mandatory.
- Willingness and flexibility to travel internationally.
- Strong sense of ownership for end-to-end delivery and production stability.
Nice to Have
- Experience with real-time ingestion systems such as Kafka or RabbitMQ.
- Experience with monitoring, alerting, and performance tuning of Big Data platforms.
- Exposure to cloud-native Big Data services.
Seniority Expectations
- Solid understanding of ETL/ELT methodologies and production best practices.
- Proven ability to design, deploy, and operate complex data pipelines in live environments.
- Ability to independently own delivery activities, guide implementation decisions, and mentor junior engineers.