About the Role

We are looking for an experienced Data Engineer with strong expertise in building scalable data platforms and pipelines. The ideal candidate is self-driven, works well autonomously, and is comfortable in a dynamic environment built on modern data stack tools and frameworks.

Key Responsibilities

  • Design, develop, and optimize large-scale data pipelines using PySpark and Spark SQL.
  • Implement robust data models and transformations in Databricks with Unity Catalog integration.
  • Build and maintain data validation workflows using Python (Pandera Framework).
  • Develop and integrate REST APIs for data ingestion and processing.
  • Collaborate with cross-functional teams to ensure high-quality, secure, and reliable data delivery.
  • Contribute to automation and deployment processes using CI/CD practices.

Mandatory Skill Set

  • PySpark, Spark SQL
  • Databricks (Unity Catalog)
  • Python – Pandera Framework
  • REST API Frameworks
  • Autonomous work style and strong problem-solving ability

Nice-to-Have Skills

  • GCP Services: Firestore, Google Cloud Storage, Cloud Build, BigQuery
  • ETL Tools: Airbyte / Fivetran
  • Testing Framework: Nutter (unit testing)
  • Infrastructure as Code: Terraform
  • CI/CD Concepts
  • OAuth 2.0 Protocol

Preferred Candidate Attributes

  • Proven ability to work independently and deliver high-quality code.
  • Strong analytical and debugging skills.
  • Excellent communication and collaboration abilities.
  • Passion for data engineering, automation, and cloud technologies.

Job Types: Full-time, Part-time, Contractual / Temporary
Contract length: 6 months

Pay: From ₹4,000.00 per day

Work Location: Remote