Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Key Responsibilities:
Design, build, and manage scalable data pipelines (ETL/ELT) to ingest, transform, and load data from various sources into cloud data warehouses or data lakes.
Integrate data from diverse systems, ensuring data is accessible, consistent, and reliable for downstream consumption.
Implement data validation and cleansing procedures to ensure data integrity and work with security teams to protect data within the cloud environment.
Work closely with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with usable data.
Monitor and optimize data pipelines for scalability, efficiency, and cost-effectiveness.
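The extract-transform-load cycle described in these responsibilities can be sketched in plain Python. In production this logic would typically run on PySpark or AWS Glue against S3 sources; the record layout, field names, and in-memory "warehouse" below are illustrative assumptions, not part of the posting.

```python
def extract(rows):
    """Ingest raw records from a source system (here, a plain list)."""
    return list(rows)

def transform(rows):
    """Validate and cleanse: reject records missing an id, normalize names."""
    cleaned = []
    for row in rows:
        if row.get("id") is None:
            continue  # data-validation step: drop incomplete records
        cleaned.append({"id": row["id"],
                        "name": row.get("name", "").strip().lower()})
    return cleaned

def load(rows, warehouse):
    """Upsert cleansed records into the target store, keyed by id."""
    for row in rows:
        warehouse[row["id"]] = row
    return warehouse

raw = [{"id": 1, "name": " Alice "},
       {"id": None, "name": "bad row"},
       {"id": 2, "name": "Bob"}]
warehouse = load(transform(extract(raw)), {})
```

The same three-stage shape scales up when the list becomes a Spark DataFrame and the dict becomes a cloud data warehouse table.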
Required Skills & Qualifications:
Strong proficiency in SQL (6+ years of experience) and in programming languages such as Python, PySpark, and shell scripting (3+ years of experience).
6+ years of hands-on data and analytics development experience.
4+ years of hands-on experience with AWS cloud services such as S3, EMR, Glue, and Lambda.
Expertise in big data technologies like Apache Spark, Kafka, and Spark Streaming.
Experience with tools and processes for data extraction, transformation, and loading.
Working experience with workflow orchestration tools.
Knowledge of data/dimensional modeling, data warehousing concepts, and database management.
© 2025 Qureos. All rights reserved.