
Associate III - Data Engineering

Job Requirements

  • Experience: 3 - 5 Years
  • Openings: 1
  • Location: Bangalore


Role description

Role Proficiency:

This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. Must be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one data domain, with a solid understanding of SCD concepts and data warehousing principles.
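
As an illustration of the SCD concepts named above, a minimal sketch of a Type 2 slowly-changing-dimension update in PySpark follows. All table and column names (customer_dim, customer_updates, customer_id, address) are hypothetical placeholders, not part of this posting.

    # Minimal SCD Type 2 sketch in PySpark: expire changed rows and
    # append new current versions. All table/column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()
    dim = spark.table("customer_dim")          # has valid_from, valid_to, is_current
    updates = spark.table("customer_updates")  # incoming changed records

    # Current dimension rows whose tracked attribute differs in the new batch.
    changed = (dim.filter(F.col("is_current"))
                  .join(updates, "customer_id")
                  .filter(dim["address"] != updates["address"]))

    # Close out the old versions...
    closed = (changed.select(dim["*"])
                     .withColumn("valid_to", F.current_date())
                     .withColumn("is_current", F.lit(False)))

    # ...and open new current versions from the incoming records.
    opened = (updates.withColumn("valid_from", F.current_date())
                     .withColumn("valid_to", F.lit(None).cast("date"))
                     .withColumn("is_current", F.lit(True)))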

Outcomes:

  • Collaborate closely with data analysts, data scientists, and other stakeholders to ensure data accessibility, quality, and security across various data sources.
  • Design, develop, and maintain data pipelines that collect, process, and transform large volumes of data from various sources.
  • Implement ETL (Extract, Transform, Load) processes to facilitate efficient data movement and transformation.
  • Integrate data from multiple sources, including databases, APIs, cloud services, and third-party data providers.
  • Establish data quality checks and validation procedures to ensure data accuracy, completeness, and consistency (see the sketch after this list).
  • Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
  • Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
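
To make the data quality checks bullet concrete, here is a minimal sketch in PySpark; the table name (orders_staging) and key column (order_id) are hypothetical.

    # Minimal data quality check sketch: null-key and duplicate-key validation
    # on a pipeline output. Table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
    df = spark.table("orders_staging")

    total = df.count()
    null_keys = df.filter(F.col("order_id").isNull()).count()
    duplicates = total - df.dropDuplicates(["order_id"]).count()

    # Fail fast so bad data never reaches downstream consumers.
    assert null_keys == 0, f"{null_keys} rows have a null order_id"
    assert duplicates == 0, f"{duplicates} duplicate order_id values found"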

Measures of Outcomes:

  • Adherence to engineering processes and standards
  • Adherence to schedule / timelines
  • Adherence to SLAs where applicable
  • # of defects post-delivery
  • # of non-compliance issues
  • Reduction in recurrence of known defects
  • Quick turnaround of production bugs
  • Completion of applicable technical/domain certifications
  • Completion of all mandatory training requirements
  • Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
  • Average time to detect, respond to, and resolve pipeline failures or data issues

Outputs Expected:

Code Development:

  • Develop data processing code independently, ensuring it meets performance and scalability requirements.


Documentation:

  • Create documentation for personal work and review deliverable documents, including source-target mappings, test cases, and results.


Configuration:

  • Follow configuration processes diligently.


Testing:

  • Create and conduct unit tests for data pipelines and transformations to ensure data quality and correctness (a sketch follows below).
  • Validate the accuracy and performance of data processes.
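
As one way to satisfy the unit-testing output above, here is a minimal pytest-style sketch; the transformation under test (add_total) and its columns are hypothetical.

    # Minimal pytest-style unit test for a hypothetical PySpark transformation.
    from pyspark.sql import SparkSession, functions as F

    def add_total(df):
        # Hypothetical transformation: derive a line total from qty and unit_price.
        return df.withColumn("total", F.col("qty") * F.col("unit_price"))

    def test_add_total():
        spark = (SparkSession.builder.master("local[1]")
                 .appName("unit-test").getOrCreate())
        df = spark.createDataFrame([(2, 5.0)], ["qty", "unit_price"])
        row = add_total(df).collect()[0]
        assert row["total"] == 10.0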


Domain Relevance:

  • Develop features and components with a solid understanding of the business problems being addressed for the client.
  • Understand data schemas in relation to domain-specific contexts, such as EDI formats.


Defect Management:

  • Raise, fix, and retest defects in accordance with project standards.


Estimation:

  • Estimate time, effort, and resource dependencies for personal work.


Knowledge Management:

  • Consume and contribute to project-related documents, SharePoint libraries, and client universities.


Design Understanding:

  • Understand the design and low-level design (LLD) and link them to requirements and user stories.


Certifications:

  • Obtain relevant technology certifications to enhance skills and knowledge.

Skill Examples:

  • Proficiency in SQL, Python, or other programming languages used for data manipulation.
  • Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  • Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
  • Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  • Experience in performance tuning of data processes.
  • Proficiency in querying data warehouses.

Knowledge Examples:


  • Knowledge of various ETL services provided by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF/ADLS.
  • Understanding of data warehousing principles and practices.
  • Proficiency in SQL for analytics, including windowing functions (see the sketch after this list).
  • Familiarity with data schemas and models.
  • Understanding of domain-related data and its implications.
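
To illustrate the windowing-function bullet above, here is a minimal sketch run through Spark SQL; the orders table and its columns are hypothetical.

    # Minimal windowing-function sketch via Spark SQL: keep each customer's
    # most recent order. Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("window-sketch").getOrCreate()

    latest_orders = spark.sql("""
        SELECT * FROM (
            SELECT o.*,
                   ROW_NUMBER() OVER (PARTITION BY customer_id
                                      ORDER BY order_date DESC) AS rn
            FROM orders o
        ) t
        WHERE rn = 1
    """)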

Additional Comments:

Mandatory Skills: Informatica, Snowflake Cloud DB, Python, Spark, Scala, PySpark, AWS services
Skills to Evaluate: Informatica, Snowflake Cloud DB, Python, Spark, Scala, PySpark, AWS services
Experience: 4 to 6 Years
Location: Bengaluru

Job Description: Data Engineer

Responsibilities:

  • Engineer data pipelines and provide automated solutions for DE teams.
  • Work proactively to address project requirements, and articulate issues and challenges with enough lead time to address project delivery risk.
  • Deliver system monitoring and alerting, dashboards, charts/reports, and alerts.
  • Ensure code is fully scalable, maintainable, and performant.
  • Verify program logic by overseeing the preparation of test data and the testing and debugging of programs.
  • Work with Project Managers and TPMs to ensure the timely delivery of project deliverables that meet requirements.
  • Work in a global, multi-cultural environment; team members are in multiple geographies, resulting in time constraints due to time zone differences and a requirement for occasional travel.
  • Participate in the agile development process.
  • Develop new documentation, departmental technical procedures, and user guides.

Experience:

  • BS degree in Engineering or a Master's in Computer Science, or equivalent experience
  • 2+ years of experience in database development, programming, design, and analysis
  • 2+ years of experience in coding and scripting languages: Java, Python, JavaScript, bash, batch files, Korn shell
  • 1 year of experience with Spark, Scala, PySpark
  • 2+ years of experience with data and ETL programming (Databricks, Ab Initio)
  • 2+ years of experience in SQL and a variety of database technologies: Snowflake, Teradata, and Oracle
  • + years of experience with NoSQL databases (DynamoDB, Cassandra)
  • Good technical analysis and troubleshooting skills
  • Experience in designing and architecting medium to complex systems; well versed in design standards and frameworks
  • General knowledge of Linux/Unix administration and Windows administration
  • Knowledge of dimensional modelling
  • 1+ years of experience with streaming services: Flink, Kafka
  • Experience with automation, configuration management, orchestration, and enterprise schedulers
  • 1+ years of experience with AWS services (EKS, S3, EC2, Kinesis, DynamoDB, Glue, Deequ, etc.)

Skills Required (essential):

  • Demonstrable experience in developing large-scale data processing platforms
  • Extensive experience throughout the software development lifecycle
  • Interpersonal skills, including presentation, negotiation, and conflict resolution
  • Strong communication skills; it is vital that the successful candidate can explain complex technical issues in non-technical terms to business stakeholders
  • Ability to multi-task and prioritise within parallel development cycles and aggressive timelines
  • Knowledge and experience of SDLC methodologies, e.g. Agile, Waterfall
  • Strong troubleshooting and problem-solving ability
  • Skilled in business requirements analysis, with the ability to translate business information into technical specifications
  • Eager to learn

Skills Required (desirable):

  • Knowledge of data warehouse appliances, e.g. Snowflake, Databricks, Python, Scala, AWS

Education Qualification: BE/B.Tech

Skills

Information Technology, Snowflake Cloud DB, Python, Spark, Scala, PySpark, AWS


About UST

UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
