Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Required Experience & Technical Stack
We are looking for an Engineer II to join our growing team of data experts. This role is
responsible for developing, maintaining, and optimizing data pipelines, delivering data to
our cross-functional teams in a timely manner, and supporting reporting and analytics. The
Engineer will work closely with product partners, data analysts, report developers,
architects, data scientists, and other engineers on data initiatives, and will be involved in
designing our next generation of data platform architecture to support increasing data demands.
Primary Responsibilities
Design, build, and maintain scalable and reliable data pipelines to process large volumes of
structured, semi-structured, and unstructured data from diverse sources.
Develop ETL processes to ingest, transform, and load data into the data warehouse, ensuring
data quality, integrity, and consistency.
Collaborate with business stakeholders, report developers, and data scientists to understand
data requirements and translate them into technical solutions for various business purposes.
Optimize the performance and efficiency of data infrastructure and processes, including data
storage, processing, and querying.
Implement data governance policies and best practices to ensure the compliance, security, and
privacy of sensitive data.
Troubleshoot data issues, identify root causes, and implement solutions in a timely manner.
Demonstrate excellent communication and collaboration skills, working effectively in a
cross-functional team environment.
Basic Qualifications
We are looking for a candidate with 4+ years of experience in a Data Engineer role, a
bachelor's degree in technology, and experience with Agile/Scrum development processes
and methodologies.
Experience as a Data Engineer with a strong track record of designing and implementing data
solutions.
Experience in a programming language such as Python or Java, with experience building data
pipelines and workflows.
Experience with cloud data warehousing technologies, such as Snowflake and Redshift.
Experience with distributed computing frameworks such as Cloudera, Apache Hadoop and
Spark.
Experience with Cloud platforms such as AWS, Azure or Google Cloud Platform.
Experience with AWS cloud services, such as S3, EC2, EMR, Glue, CloudWatch, Athena,
Lambda.
Experience with containerization and orchestration technologies, such as Docker and
Kubernetes.
Experience with building CI/CD pipelines using tools such as GitLab and Bitbucket.
Experience with data pipeline orchestration tools, such as Airflow and Jenkins.
Knowledge of database concepts, data modelling, schemas, and query languages such as SQL
and Hive.
Knowledge of data visualization and reporting tools, such as MicroStrategy, Tableau and
Power BI.
Knowledge of data quality and monitoring techniques and tools, such as Great Expectations
or similar.
Knowledge of data governance processes, lineage, cataloging, and dictionaries using tools
such as DataHub or similar.
Knowledge of streaming data processing and real-time analytics technologies, such as Kafka.
Retail experience is a plus.
Candidate Should Be
A Bachelor's or Master's degree in computer science is required.
Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹1,617,623.53 per year
Work Location: In person
© 2025 Qureos. All rights reserved.