Data Engineer II

Job Summary
mPulse is seeking a highly motivated and detail-oriented Data Engineer II to join our Data Engineering team. In this role, you will design, develop, and maintain scalable data pipelines and platform capabilities that support our analytics, product, and AI/ML initiatives.

Our data platform processes high-volume, high-velocity healthcare data, enabling insights and data-driven solutions for a growing client base. You will work cross-functionally with Data Operations, Product, Analytics, and Data Science teams to ensure data is reliable, performant, and aligned with business needs.

The ideal candidate brings strong experience in SQL, Python, dbt, and Airflow, along with a solid foundation in data warehousing and a passion for solving complex data challenges in a healthcare environment.

Duties & Responsibilities
  • Design, develop, and maintain scalable data pipelines (ETL/ELT) to support ingestion, transformation, and delivery of high-volume healthcare data.
  • Write, optimize, and maintain complex SQL queries for data transformation, validation, and performance tuning.
  • Develop and manage workflow orchestration using Apache Airflow, including DAG creation, monitoring, and troubleshooting.
  • Enhance and scale data platform capabilities to support analytics, product features, and AI/ML use cases.
  • Build and maintain data quality frameworks, including automated data profiling, validation, and testing processes.
  • Monitor and optimize pipeline performance, reliability, and efficiency in production environments.
  • Analyze complex data issues, identify root causes, and implement scalable solutions, clearly communicating findings to both technical and non-technical stakeholders.
  • Collaborate with cross-functional teams (Data Operations, Product, Analytics, Data Science) to gather requirements and deliver high-quality data solutions.
  • Partner with clinical and analytics teams to operationalize data-driven insights and reporting solutions.
  • Contribute to documentation of data pipelines, data models, and engineering processes to support maintainability and knowledge sharing.
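As a purely illustrative sketch of the data-quality work described above (the posting does not specify mPulse's actual frameworks, schemas, or rules — all field names below are hypothetical), an automated validation check over a batch of records might look like:

```python
# Minimal sketch of an automated data-validation check.
# Field names ("claim_id", "member_id", "amount") are hypothetical
# illustrations, not an actual mPulse schema or framework.

def validate_rows(rows, required_fields, not_null_fields):
    """Profile a batch of records and collect (row_index, reason) violations."""
    failures = []
    for i, row in enumerate(rows):
        # Rule 1: every required field must be present.
        missing = [f for f in required_fields if f not in row]
        if missing:
            failures.append((i, f"missing fields: {missing}"))
        # Rule 2: designated fields, when present, must not be null.
        nulls = [f for f in not_null_fields if f in row and row[f] is None]
        if nulls:
            failures.append((i, f"null values in: {nulls}"))
    return failures

# Hypothetical claims-like batch: one clean row, one null, one missing field.
batch = [
    {"claim_id": "C1", "member_id": "M1", "amount": 120.0},
    {"claim_id": "C2", "member_id": None, "amount": 55.5},
    {"claim_id": "C3", "amount": 10.0},
]
issues = validate_rows(batch, ["claim_id", "member_id", "amount"], ["member_id"])
```

In a production pipeline, checks like these would typically be expressed in a dedicated framework (for example dbt tests) rather than hand-rolled, but the shape of the logic is the same.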
Skills & Experience
  • Strong proficiency in SQL, including complex querying, data transformation, and performance optimization.
  • Proficiency in Python for data processing, automation, and integration tasks.
  • Experience designing and building data pipelines (ETL/ELT) to support data ingestion, transformation, and delivery.
  • Hands-on experience with modern data warehousing platforms, such as Snowflake, PostgreSQL, Amazon Redshift, or Microsoft SQL Server.
  • Experience with workflow orchestration tools, particularly Apache Airflow (DAG development, debugging, and maintenance).
  • Experience using dbt (data build tool) to develop, test, and manage modular data transformation workflows.
  • Experience working with cloud platforms, particularly AWS (e.g., S3, RDS, Lambda, Glue, DMS).
  • Experience with version control systems and collaborative development workflows, such as GitHub or Bitbucket.
  • Familiarity with CI/CD practices and tools, such as Jenkins or GitHub Actions.
  • Experience supporting data quality initiatives, including data profiling, validation, or monitoring frameworks.
  • Familiarity with data modeling and data warehousing concepts, including dimensional modeling.
  • Exposure to analytics, reporting, or data visualization tools (e.g., Tableau, Looker) is a plus.
  • Experience working with healthcare data, including claims or clinical datasets, is a plus.
  • Familiarity with data science or machine learning workflows from a data engineering perspective is a plus.
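One common transformation pattern behind the SQL and warehousing skills listed above is deduplicating to the latest record per key with a window function. The sketch below runs it against an in-memory SQLite database so it is self-contained; the table and column names are hypothetical, and in practice this query would live in a warehouse model (for example a dbt model on Snowflake or Redshift):

```python
import sqlite3

# Sketch of a latest-record-per-key dedup, a staple of ELT transformations.
# Table and column names are hypothetical illustrations only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_members (member_id TEXT, status TEXT, updated_at TEXT);
    INSERT INTO raw_members VALUES
        ('M1', 'active',   '2024-01-01'),
        ('M1', 'inactive', '2024-06-01'),
        ('M2', 'active',   '2024-03-15');
""")

# ROW_NUMBER() partitions by key and orders newest-first, so rn = 1
# selects exactly one (the most recent) row per member_id.
latest = conn.execute("""
    SELECT member_id, status
    FROM (
        SELECT member_id, status,
               ROW_NUMBER() OVER (
                   PARTITION BY member_id ORDER BY updated_at DESC
               ) AS rn
        FROM raw_members
    )
    WHERE rn = 1
    ORDER BY member_id
""").fetchall()
```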
Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
  • 3+ years of professional experience in data engineering or a related field.
  • Strong analytical and problem-solving skills, with the ability to work with complex datasets and identify root causes.
  • Strong written and verbal communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders.
  • Ability to collaborate effectively in cross-functional environments, working with engineering, product, analytics, and operations teams.
  • Strong attention to detail and commitment to data accuracy, consistency, and quality.
  • Demonstrated ability to manage multiple priorities and deliver high-quality work in a fast-paced environment.
  • Demonstrates self-awareness, with the ability to recognize and communicate individual strengths and areas for growth.
  • Shows a strong willingness to learn, adapt, and support team members, contributing to a collaborative and positive team environment.
  • Effectively communicates progress, priorities, and status updates to both internal and external stakeholders in a clear and timely manner.
Why Join Us
You will have the opportunity to work on modern data platforms and tools, collaborate with cross-functional teams, and help build reliable data systems that drive meaningful insights and business value.

We’re as passionate about our people as we are about making our mark on healthcare. Fostering a fun and challenging environment centered on personal and professional growth has brought us to where we are today. We are constantly seeking out new ways to reinvest in our team members because, let’s face it, we all do our best work when we feel valued.

  • Please note: due to the requirements of this position, certain responses may automatically disqualify you from moving forward in the application process. Please review the minimum qualifications thoroughly before applying.
If you require any accommodations during the application, interview, or assessment process due to a disability or any other accessibility-related concern, please do not hesitate to reach out to our People Ops and Recruiting team at careers@mpulsemobile.com.

© 2026 Qureos. All rights reserved.