We are seeking a highly skilled Data Engineer with strong expertise in PySpark and the Cloudera Data Platform (CDP). The ideal candidate will design, develop, and maintain scalable data pipelines while ensuring high data quality, performance, and availability across the organisation.
This role requires hands-on experience in big data ecosystems, cloud-native technologies, and advanced data processing frameworks. You will collaborate with cross-functional teams to build reliable and high-performance data solutions that drive business insights.
- Design, develop, and maintain scalable ETL/ELT pipelines using PySpark on CDP
- Ensure data integrity and reliability, and optimise pipeline performance
- Develop ingestion frameworks to collect data from relational databases, APIs, streaming sources, and file systems
- Load structured and unstructured data into Data Lake/Data Warehouse environments
- Process, cleanse, and transform large-scale datasets using PySpark
- Build reusable data processing components
- Tune Spark jobs and Cloudera components for optimal performance
- Optimise memory usage, data partitioning, and Spark execution plans
- Reduce ETL runtime and improve cluster efficiency
- Implement data validation checks and monitoring mechanisms
- Ensure end-to-end data quality and governance standards
- Automate workflows using tools such as Apache Oozie, Apache Airflow, or similar orchestration frameworks
- Maintain CI/CD integration for data pipelines
- Monitor pipeline health and troubleshoot failures
- Provide production support and continuous improvements
- 5+ years of experience in Data Engineering
- Strong hands-on experience in PySpark
- Experience working on Cloudera Data Platform (CDP)
- Strong knowledge of Hadoop ecosystem (HDFS, Hive, Impala, YARN)
- Proficiency in SQL and data modelling concepts
- Experience with workflow orchestration tools (Airflow, Oozie, etc.)
- Good understanding of data warehousing concepts
- Experience with performance tuning and optimisation of Spark jobs and queries
- Experience with cloud platforms (AWS, Azure, GCP)
- Knowledge of streaming tools (Kafka, Spark Streaming)
- Exposure to DevOps practices and CI/CD pipelines
- Banking/Financial Services domain experience