Location: India
Role Proficiency:
This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one data-related domain, with a solid understanding of SCD concepts and data warehousing principles.
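As a rough illustration of the pipeline work this role involves, the PySpark sketch below ingests two source files, cleans one of them, joins the results, and writes the output. All paths, dataset names, and column names are hypothetical placeholders, not part of any actual project.

from pyspark.sql import SparkSession, functions as F

# Minimal illustrative pipeline: ingest, wrangle, join, and write.
# All paths and column names below are hypothetical placeholders.
spark = SparkSession.builder.appName("example_ingest_join").getOrCreate()

# Ingest two raw CSV sources.
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
customers = spark.read.option("header", True).csv("s3://example-bucket/raw/customers/")

# Wrangle/transform: de-duplicate, cast types, drop bad rows.
orders_clean = (
    orders.dropDuplicates(["order_id"])
          .withColumn("order_amount", F.col("order_amount").cast("double"))
          .filter(F.col("order_amount") > 0)
)

# Join: enrich orders with customer attributes.
enriched = orders_clean.join(customers, on="customer_id", how="left")

# Write curated output as Parquet, partitioned by date.
enriched.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)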
Additional Comments:
Job Title: Junior Data Engineer
Experience: 2–4 years
Employment Type: Full-time

Role Summary
We are looking for a Junior Data Engineer who is passionate about building scalable data pipelines and working with modern data technologies. The ideal candidate should have hands-on experience in ETL development, data integration, and cloud-based data solutions, preferably on AWS. You will work closely with senior data engineers, data architects, and analysts to develop and maintain data pipelines, ensuring data is accurate, reliable, and available for business and analytics needs. This is an excellent opportunity to grow into a data engineering expert in a modern cloud ecosystem.

Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data sources.
• Work with senior engineers to implement data lake and data warehouse solutions using AWS services such as S3, Glue, Redshift, and Lambda.
• Develop and optimize Spark / Databricks jobs for data transformation and processing.
• Write efficient Python or SQL scripts for data cleaning, transformation, and validation (an illustrative sketch appears at the end of this posting).
• Collaborate with data architects to implement data models and data quality frameworks.
• Participate in code reviews, testing, and deployment processes to maintain code quality and stability.
• Troubleshoot and resolve data-related issues in development and production environments.
• Document technical processes, data flows, and pipeline logic for maintainability and transparency.
• Learn and contribute to continuous improvements in automation, performance, and scalability of data systems.

Technical Skills and Tools
Mandatory:
• Good understanding of ETL concepts, data warehousing, and data integration techniques.
• Hands-on experience with Python for data manipulation and scripting.
• Working knowledge of SQL and experience writing optimized queries.
• Familiarity with AWS data services such as S3, Glue, Redshift, and Athena, or equivalent cloud tools (Azure Data Factory, GCP BigQuery).
• Basic exposure to Apache Spark or Databricks for distributed data processing.
• Version control and collaboration tools: Git / GitHub / Bitbucket.

Good-to-Have:
• Exposure to Airflow or any job orchestration tool.
• Knowledge of data modelling and dimensional design principles.
• Understanding of data quality checks, logging, and monitoring.
• Familiarity with DevOps / CI-CD pipelines.
• Awareness of streaming data concepts (Kafka, Kinesis).

Soft Skills & Attributes
• Strong analytical and problem-solving skills with attention to detail.
• Willingness to learn and adapt quickly in a fast-paced, data-driven environment.
• Good communication and collaboration skills to work effectively within cross-functional teams.
• A proactive attitude toward automation, documentation, and best practices.
• Curiosity to explore emerging tools and technologies in the data ecosystem.

Education & Experience
• Bachelor's degree in Computer Science, Information Technology, or a related discipline.
• 2–4 years of hands-on experience in data engineering, ETL development, or data integration.
• Prior experience working in a cloud data environment (AWS / Azure / GCP) preferred.
Preferred Qualifications
• AWS Certified Data Practitioner or Data Engineer Associate (or an equivalent certification on Azure/GCP).
• Exposure to modern data stack tools such as dbt, Snowflake, or Delta Lake.
• Experience contributing to end-to-end data pipeline projects in production environments.
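For a concrete sense of the data cleaning and validation work listed under Key Responsibilities, here is a minimal PySpark data-quality check; the dataset location and column names are again hypothetical and would vary by project.

from pyspark.sql import SparkSession, functions as F

# Minimal data-quality check on a curated dataset.
# The dataset path and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("example_quality_check").getOrCreate()

df = spark.read.parquet("s3://example-bucket/curated/orders/")

# Count obvious problems: null keys, duplicate keys, negative amounts.
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicate_keys = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
negative_amounts = df.filter(F.col("order_amount") < 0).count()

# Fail loudly so an orchestrator (e.g. Airflow) can mark the run as failed.
if null_keys or duplicate_keys or negative_amounts:
    raise ValueError(
        f"Data quality check failed: {null_keys} null keys, "
        f"{duplicate_keys} duplicate keys, {negative_amounts} negative amounts"
    )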
Key Skills: ETL, Data Warehousing, AWS