Data BI Engineer

At Zimmer Biomet, we believe in pushing the boundaries of innovation and driving our mission forward. We have been a global medical technology leader for nearly 100 years, and a patient’s mobility is enhanced by a Zimmer Biomet product or technology every 8 seconds. As a Zimmer Biomet team member, you will share in our commitment to providing mobility and renewed life to people around the world. To support our talented team, we offer development opportunities, robust employee resource groups (ERGs), a flexible working environment, location-specific competitive total rewards, wellness incentives, and a culture of recognition and performance awards. We are committed to creating an environment where every team member feels included, respected, empowered, and recognized.

What You Can Expect

The Data Engineer is responsible for designing, developing, and maintaining scalable data pipelines and models within Snowflake and cloud environments, with a strong focus on enabling analytics through Power BI datasets/semantic models and DAX-based metrics. This role combines strong SQL and Python development with hands-on data engineering in Agile teams. The engineer will partner with architects, analysts, and business teams to deliver secure, reliable, high-quality data solutions that power analytics, reporting, and operational workflows.


Work Location: Bangalore

Work Mode: Hybrid (3 days in office)

How You'll Create Impact

  • Design, build, and optimize SQL pipelines, transformations, and data models within Snowflake.
  • Design and maintain reporting-ready data models that support Power BI datasets/semantic models and enterprise KPIs.
  • Develop, standardize, and support DAX measures and KPI definitions in partnership with BI/Analytics stakeholders.
  • Support and troubleshoot Power BI dataset refresh, performance, and reconciliation of key metrics back to source-of-truth tables.
  • Develop Python-based data processing, automation, and integration workflows.
  • Ingest and process structured, semi-structured, and unstructured data from databases, APIs, files, SaaS applications, and cloud storage.
  • Implement ETL/ELT processes using SQL, Python, dbt, Snowflake features, or orchestration tools such as Airflow/Dagster (a minimal sketch follows this list).
  • Contribute to Agile ceremonies and deliverables, ensuring timely and high-quality sprint outcomes.
  • Implement CI/CD workflows and maintain version-controlled repositories (GitHub or equivalent).
  • Support data governance, data quality, and secure data handling practices.
  • Troubleshoot pipeline issues and proactively drive performance improvements.
  • Maintain documentation, monitoring, alerting, and operational playbooks for pipelines.
  • Collaborate with cross-functional teams (Data Engineering, Architecture, Data Science, Business) to translate requirements into technical solutions.
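
As a rough illustration of the ETL/ELT pattern referenced above, the Python sketch below pulls JSON from a REST endpoint, lands it in a Snowflake staging table using the snowflake-connector-python library, and pushes the aggregation down to warehouse SQL. Every endpoint, table, column, and credential name here is hypothetical; in the stack described in this posting, the transformation step would typically be owned by a dbt model and the schedule by Airflow or Dagster.

```python
"""Minimal ELT sketch (illustrative only, all names hypothetical)."""
import os

import requests
import snowflake.connector  # pip install snowflake-connector-python


def extract(url: str) -> list[dict]:
    # Pull semi-structured JSON records from a (hypothetical) source API.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["records"]


def load_and_transform(records: list[dict]) -> None:
    # Flatten the API payload into rows for the staging table.
    rows = [
        (r["order_id"], r["customer_id"], r["amount"], r["order_ts"])
        for r in records
    ]
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",  # hypothetical warehouse/database/schema
        database="RAW",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        # Land raw rows into staging (ELT: transforms happen in-warehouse).
        cur.executemany(
            "INSERT INTO STG_ORDERS (ORDER_ID, CUSTOMER_ID, AMOUNT, ORDER_TS) "
            "VALUES (%s, %s, %s, %s)",
            rows,
        )
        # Push the transformation down to Snowflake SQL; a dbt model would
        # normally own this reporting-ready table.
        cur.execute(
            """
            CREATE OR REPLACE TABLE ANALYTICS.REPORTING.FCT_DAILY_SALES AS
            SELECT DATE_TRUNC('day', ORDER_TS) AS ORDER_DATE,
                   CUSTOMER_ID,
                   SUM(AMOUNT)                 AS TOTAL_AMOUNT,
                   COUNT(*)                    AS ORDER_COUNT
            FROM RAW.STAGING.STG_ORDERS
            GROUP BY 1, 2
            """
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_and_transform(extract("https://api.example.com/v1/orders"))
```

A table built this way is the kind of reporting-ready model that a Power BI semantic model and its DAX measures would sit on top of.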

What Makes You Stand Out

  • Ability to work effectively within cross-functional and Agile teams.
  • Ability to define, validate, and explain BI metrics/KPIs and their implementation in DAX and Power BI models.
  • Strong analytical, problem-solving, and critical-thinking skills.
  • Clear written and verbal communication skills, including the ability to translate technical information for non-technical partners.
  • Strong organizational skills with attention to detail and documentation.
  • Ability to influence and collaborate across teams and geographies.
  • Demonstrated ownership, adaptability, and willingness to embrace new technologies.
  • Ability to understand and work across multiple technology stacks.

Your Background

  • 4+ years of experience designing and developing SQL-based pipelines and data models.
  • Proficiency with Power BI (Desktop/Service) and DAX (measures, KPI logic, performance fundamentals).
  • Strong SQL expertise, along with Snowflake and data modeling experience.
  • Hands-on experience delivering or supporting Power BI datasets/semantic models and implementing DAX measures for enterprise KPIs.
  • Hands-on experience with Python for ETL/ELT development and automation.
  • Experience with Snowflake or other cloud data warehouses.
  • Experience with orchestration platforms such as Airflow, Dagster, or Prefect.
  • Experience with ingestion tools (Fivetran, NiFi, Airbyte) is a plus.
  • Familiarity with APIs, JSON/CSV/XML/Parquet data formats, and cloud storage environments.
  • Understanding of CDC mechanisms, data governance, and security best practices.
  • Exposure to real-time or streaming ingestion (Kafka, EventHub) is beneficial.
  • Experience working in Agile environments and delivering iterative outcomes.
  • Ability to participate in and lead technical discussions.
  • Interest or exposure to AI/ML workflows is a plus.

Preferred:

  • Python for data pipelines and automation
  • dbt / SQL-based transformation frameworks
  • Airflow, Dagster, or similar orchestration tools
  • CI/CD tools (GitHub, GitLab, Azure DevOps)
  • ETL/ELT tools such as Fivetran, Airbyte, Apache NiFi
  • Cloud services (AWS, Azure, GCP) and cloud storage
  • Git-based version control

Physical Requirements

Travel Expectations

Up to 5%


EOE/M/F/Vet/Disability
