Seeking a PySpark Data Engineer to design robust data pipelines, optimise big data workflows, support analytics requirements, collaborate with cross-functional teams, and ensure scalable, reliable data solutions for enterprise-level applications.
- Build and optimise scalable PySpark data pipelines.
- Develop reliable ETL workflows that process large datasets.
- Collaborate with data scientists to implement solutions.
- Ensure data quality and governance across systems.
- Monitor performance and troubleshoot processing issues.
- Integrate data from diverse sources with precision.
- Maintain secure and compliant data platform operations.
- Document technical specifications and implementation details.
- Strong expertise in Python and PySpark development.
- Excellent SQL skills for large-scale data processing.
- Knowledge of data warehousing and architecture design.
- Experience with streaming technologies like Kafka.
- Familiarity with cloud platforms and big data tools.
- Ability to optimise Spark jobs for performance.
Note: Salary is disbursed in the local currency of the country of employment.
Date Posted: February 15, 2026
Offered Salary: 95,000 - 125,000 / year
Expiration Date: August 21, 2028
Qualification: Bachelor's Degree