Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Outcomes:
Measures of Outcomes:
Outputs Expected:
Code:
Documentation:
Configure:
Test:
Domain Relevance:
Manage Project:
Manage Defects:
Estimate:
Manage Knowledge:
Release:
Design:
Interface with Customer:
Manage Team:
Certifications:
Skill Examples:
Knowledge Examples:
Additional Comments:
Job Title: Tech Lead – Data Engineering
Location: [Specify – e.g., Pune / Bengaluru / Hybrid]
Experience: 8–12 years
Employment Type: Full-time

Role Summary:
We are looking for an experienced Tech Lead – Data Engineering to design, lead, and deliver scalable data solutions in a modern cloud environment. The ideal candidate will have deep hands-on expertise in ETL/ELT development, data lake architecture, and data warehousing, along with a strong command of AWS data services, Python, and Spark/Databricks. The candidate will act as a technical lead and mentor, guiding a team of 3–7 engineers, ensuring delivery excellence, and aligning technical execution with architectural best practices and organizational data strategy.

Key Responsibilities:
• Lead the end-to-end design and delivery of modern data engineering solutions, ensuring performance, scalability, and reliability.
• Architect and develop ETL/ELT pipelines using tools such as AWS Glue, DBT, and Airflow, integrating multiple structured and semi-structured data sources.
• Design and maintain data lake and data warehouse environments on AWS (S3, Redshift, Athena, Glue).
• Build and optimize Spark/Databricks jobs for large-scale data transformation and processing.
• Define and enforce best practices in coding, version control, testing, CI/CD, and data quality management.
• Oversee infrastructure setup and automation using Terraform, Kubernetes, and Docker for data environments.
• Collaborate closely with data architects, analysts, and business stakeholders to translate business needs into robust data models and pipelines.
• Manage and mentor a team of 3–7 engineers, conducting technical reviews, workload planning, and skill development.
• Monitor, troubleshoot, and optimize data pipelines in production to ensure reliability and meet SLAs.
• Drive continuous improvement initiatives for pipeline automation, observability, and cost optimization.

Technical Skills and Tools:
Core Technical Expertise:
• Programming: Python (preferred), SQL, and scripting for data transformation and automation.
• ETL/ELT & Orchestration: AWS Glue, DBT, Airflow, Step Functions.
• Cloud Platforms: AWS (S3, Glue, Lambda, Redshift, Athena, EMR); exposure to Azure Data Services is a plus.
• Data Processing: Apache Spark, Databricks.
• Databases: PostgreSQL, Snowflake, MongoDB.
• CI/CD & DevOps: GitHub Actions, CircleCI, Jenkins, with automation via Terraform and Docker.
• Infrastructure Management: Kubernetes, Terraform, CloudFormation.
• Data Modeling & Warehousing: Dimensional modeling, partitioning, and schema design.
Good-to-Have:
• Exposure to streaming data platforms such as Kafka or Kinesis.
• Familiarity with data governance, metadata management, and data cataloging tools.
• Experience with cost optimization and performance tuning in cloud environments.
• Knowledge of DevOps for data engineering and infrastructure automation best practices.

Leadership & Soft Skills:
• Proven ability to lead and mentor engineering teams, fostering a collaborative and growth-oriented culture.
• Strong analytical and problem-solving mindset with a focus on delivery ownership.
• Effective communication and stakeholder management, bridging technical and business domains.
• Hands-on leadership with a proactive approach to technical challenges.
• Strong organizational skills with the ability to manage multiple priorities in an agile setup.

Education & Experience:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 8–12 years of experience in data engineering, with at least 2–3 years in a technical lead or architect role.
• Proven track record of delivering data platforms and pipelines using AWS and open-source technologies.

Preferred Qualifications:
• AWS Certified Data Analytics – Specialty, Solutions Architect – Associate, or equivalent.
• Experience leading data platform modernization or migration initiatives.
• Background in implementing CI/CD pipelines for data workflows and data infrastructure automation.
Key Skills: Python, ETL, SQL, AWS