Qureos


PySpark + Databricks

India

Skills: Databricks + PySpark

Experience: 6 to 9 years

Location: Kolkata

Job description

We are looking for a highly skilled Data Engineer with expertise in PySpark and Databricks to design, build, and optimize scalable data pipelines that process massive datasets.

Key Responsibilities:

  • Build & Optimize Pipelines: Develop high-throughput ETL workflows using PySpark on Databricks.
  • Data Architecture & Engineering: Design distributed computing solutions, optimize Spark jobs, and build efficient data models.
  • Performance & Cost Optimization: Fine-tune Spark configurations, right-size Databricks clusters, and reduce compute and storage costs.
  • Collaboration: Work closely with Data Scientists, Analysts, and DevOps teams to ensure data reliability.
  • ETL & Data Warehousing: Implement scalable ETL processes for structured and unstructured data.
  • Monitoring & Automation: Implement logging, monitoring, and alerting mechanisms for data pipeline health and fault tolerance.

© 2025 Qureos. All rights reserved.