Lead II - Software Engineering

Job requirements

  • Experience: 7 - 9 years
  • Openings: 3
  • Location: Hyderabad


Role description

Job Summary: As a Product Engineer - Big Data, you will be responsible for
designing, building, and optimizing large-scale data processing pipelines
using the latest Big Data technologies. You will collaborate with
cross-functional teams, including data scientists, analysts, and product
managers, to ensure data is easily accessible, secure, and reliable. Your focus
will be on delivering high-quality, scalable solutions for data storage,
ingestion, and analysis, while also driving continuous improvements across the
data lifecycle.

Key Responsibilities:

  • ETL Pipeline Development & Optimization: Design and implement complex
    end-to-end ETL pipelines to handle large-scale data ingestion and
    processing.
  • Big Data Processing: Develop and optimize real-time and batch data
    processing systems using Apache Spark, Scala Spark, and Apache Kafka.
    Ensure data is processed in a fault-tolerant manner, with a strong focus on
    scalability and performance. Knowledge of Java and NoSQL is good to have.
  • Cloud Infrastructure Development: Build scalable cloud-based data
    infrastructure leveraging AWS tools. Ensure data pipelines are resilient
    and adaptable to changes in data volume and variety, with a focus on
    minimizing costs and maximizing efficiency.
  • Data Analysis & Insights: Work closely with business teams and data
    scientists to understand data needs and deliver high-quality datasets.
    Conduct in-depth analysis to derive insights from the data, identifying key
    trends, patterns, and anomalies that can drive business decisions. Present
    findings in a clear and actionable format.
  • Real-time & Batch Data Integration: Enable seamless integration of both
    real-time streaming and batch data from systems like AWS MSK. Ensure
    consistency in data ingestion and processing across different formats and
    sources, providing a unified view of the data ecosystem (a minimal sketch
    of this pattern appears after the Skills & Qualifications section).
  • CI/CD & Automation: Use Jenkins to establish and maintain continuous
    integration and delivery pipelines. Implement automated testing and
    deployment workflows, ensuring that new features and updates are seamlessly
    integrated into production environments without disruptions.
  • Data Security & Compliance: Collaborate with security teams to ensure that
    data pipelines comply with organizational and regulatory standards,
    including GDPR, HIPAA, or other relevant compliance frameworks. Implement
    data governance frameworks to ensure data integrity, security, and
    traceability throughout the data lifecycle.
  • Collaboration & Cross-Functional Work: Partner with other engineers, data
    scientists, product managers, and business stakeholders to understand data
    requirements and deliver scalable solutions. Collaborate in agile teams,
    participate in sprint planning, and contribute to architectural
    discussions.
  • Troubleshooting & Performance Tuning: Identify and resolve performance
    bottlenecks in data pipelines. Ensure optimal performance through proactive
    monitoring, tuning, and applying best practices for data ingestion and
    storage.

Skills & Qualifications

Must-Have Skills:

  1. AWS Expertise: Hands-on experience with core AWS services related to Big
     Data, including but not limited to EMR, Managed Apache Airflow, Glue, S3,
     DMS, MSK, and EC2. Deep understanding of cloud-native data architecture.
  2. Big Data Technologies: Proficiency in PySpark/Scala Spark and SQL for
     data transformations and analysis. Experience working with large-scale
     data processing frameworks such as Apache Spark and Kafka.
  3. Data Frameworks: Strong knowledge of Spark DataFrames and Datasets.
  4. Database Modeling & Data Warehousing: Expertise in designing and
     implementing scalable data models for OLAP and OLTP systems.
  5. ETL Pipeline Development: Proven experience in building robust, scalable
     ETL pipelines for processing both real-time and batch data across various
     platforms.
  6. Data Analysis & Insights: Ability to conduct complex data analysis to
     extract valuable business insights. Strong problem-solving skills with a
     data-driven approach to decision-making.
  7. CI/CD & Automation: Basic to intermediate knowledge of CI/CD pipelines
     using Jenkins or similar tools to automate deployment and monitoring of
     data pipelines.

Preferred Skills:

  • Familiarity with data governance frameworks and tools to ensure compliance
    and security.
  • Knowledge of monitoring tools such as AWS CloudWatch, Splunk, or Dynatrace
    to track the health and performance of data systems.
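As a minimal sketch of the real-time-plus-batch integration pattern described
above, the Scala Spark snippet below applies one shared transformation to both
a streaming Kafka feed (as an MSK topic would be consumed) and a batch S3
extract, writing through a checkpointed sink for fault tolerance. The broker
address, topic name, S3 paths, and event schema are all hypothetical
placeholders, not details of this role.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    object UnifiedIngest {
      // Illustrative event schema; a real pipeline would derive this
      // from an agreed contract or schema registry.
      val eventSchema: StructType = new StructType()
        .add("eventId", StringType)
        .add("userId", StringType)
        .add("amount", DoubleType)
        .add("ts", TimestampType)

      // One transformation shared by the streaming and batch paths, so
      // both produce a consistent, unified view of the data.
      def transform(events: DataFrame): DataFrame =
        events
          .filter(col("amount") > 0)
          .withColumn("eventDate", to_date(col("ts")))

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("unified-ingest-sketch")
          .getOrCreate()

        // Streaming path: JSON events from a Kafka/MSK topic.
        val streamed = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092") // placeholder
          .option("subscribe", "events")                      // placeholder
          .load()
          .select(from_json(col("value").cast("string"), eventSchema).as("e"))
          .select("e.*")

        // Batch path: the same event shape from an S3 landing zone.
        val batch = spark.read.schema(eventSchema)
          .json("s3://example-bucket/landing/events/")        // placeholder

        // Batch output: date-partitioned Parquet for downstream analysis.
        transform(batch).write.mode("overwrite")
          .partitionBy("eventDate")
          .parquet("s3://example-bucket/warehouse/events/")

        // Streaming output: the checkpoint location makes writes
        // restart-safe, so the job recovers after failures without
        // dropping committed batches.
        transform(streamed).writeStream
          .format("parquet")
          .option("path", "s3://example-bucket/warehouse/events_stream/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
          .start()
          .awaitTermination()
      }
    }

Keeping the transformation in a single function is what yields the unified
view the responsibilities call for: the streaming and batch outputs cannot
drift apart, and either path can be tuned or replayed independently.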

Skills

Big Data, Scala Spark, Apache Spark, ETL Pipeline Development


About UST

UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
