Qureos


Key Responsibilities

  • Design, develop, and maintain robust, scalable ETL/ELT pipelines using Python
  • Process and transform large datasets (millions/billions of records) efficiently
  • Build batch and real-time data ingestion frameworks
  • Work with cloud-native data services on AWS
  • Develop data models, data lakes, and warehouse solutions
  • Optimize data performance, reliability, and cost
  • Ensure data quality, governance, and security standards
  • Collaborate with Data Scientists, Analysts, and Product teams
  • Troubleshoot and improve existing data workflows
  • Document architecture, pipelines, and best practices
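To illustrate the kind of work the responsibilities above describe, here is a minimal, self-contained sketch of an ETL pipeline in Python. The data, table name, and helper functions are hypothetical examples, not part of the role; a production pipeline would read from real sources and write to a warehouse or data lake rather than an in-memory SQLite database.

```python
import csv
import io
import sqlite3

# Hypothetical inline data standing in for a real source system.
RAW_CSV = """order_id,amount,country
1,120.50,IN
2,75.00,US
3,310.25,IN
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and keep only orders from India."""
    return [
        (int(r["order_id"]), float(r["amount"]))
        for r in rows
        if r["country"] == "IN"
    ]

def load(records: list[tuple]) -> sqlite3.Connection:
    """Load: write the transformed records into a target table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders_in (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders_in VALUES (?, ?)", records)
    return conn

if __name__ == "__main__":
    conn = load(transform(extract(RAW_CSV)))
    total = conn.execute("SELECT SUM(amount) FROM orders_in").fetchone()[0]
    print(total)  # 430.75
```

The same extract/transform/load shape scales up when the stdlib pieces are swapped for PySpark DataFrames and a cloud warehouse target.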


Required Skills

  • 4–6 years of experience in Data Engineering
  • Strong Python programming (Pandas, PySpark, NumPy, etc.)
  • Hands-on experience handling large/complex datasets
  • Experience building ETL/ELT pipelines
  • Strong SQL skills
  • Experience with:
      ◦ Apache Spark / PySpark
      ◦ Airflow / Prefect or similar orchestrators
      ◦ Data warehousing concepts
  • Understanding of data modeling and performance tuning
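The orchestrators named in the skills list (Airflow, Prefect) schedule pipeline tasks as a directed acyclic graph of dependencies. As a toy illustration of that idea only, and not any specific orchestrator's API, the ordering they compute can be sketched with the standard library; the task names here are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it
# depends on, the same shape an Airflow DAG expresses.
PIPELINE = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

def run_order(dag: dict[str, set[str]]) -> list[str]:
    """Return one valid execution order respecting all dependencies."""
    return list(TopologicalSorter(dag).static_order())

if __name__ == "__main__":
    print(run_order(PIPELINE))
```

Real orchestrators add scheduling, retries, and parallelism on top of this ordering, but the dependency-resolution core is the same.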

Job Type: Full-time

Pay: ₹500,000.00 - ₹1,500,000.00 per year

Work Location: Hybrid remote in Indiranagar, Bengaluru, Karnataka

© 2026 Qureos. All rights reserved.