Essential Job Statements
Leads the design and implementation of scalable data pipelines for ingesting, transforming, and loading multi-format batch and streaming data from various sources into an enterprise Data Lakehouse.
Ensures data quality and integrity through data cleansing, validation, and transformation techniques.
Automates data processing workflows using scripting languages and orchestration tools.
Builds and deploys RESTful APIs and web services for data access and data requests.
Helps define and orchestrate the data management strategy and roadmap for the organization.
Collaborates with data architects, data engineers, and product managers to align data solutions with business strategy.
Stays current with industry trends, vendor product offerings, and evolving data technologies to ensure the organization leverages the best modern data engineering tools.
Leads the development and maintenance of technical design documents, coding best practices, and standards to ensure consistent, reliable data management processes are in place.
Responsible for maintaining legacy technologies and processes as new ones are adopted.
Mentors junior Data Engineers.
Patient Population
Not applicable to this position.
Employment Qualifications
Required Education:
Bachelor’s Degree in Business, IT, Math, Data Analytics, Engineering, or related discipline
Combination of education and experience may be considered in lieu of a degree.
Preferred Education:
Master’s degree in Business, IT, Math, Data Analytics, Engineering, or related discipline
Licensure/Certification Required:
Licensure/Certification Preferred:
Minimum Qualifications
Years and Type of Required Experience
8+ years of experience in Data Engineering, Data Management or Data Warehouse/Lakehouse design and development.
5+ years of experience in Python/PySpark for building and maintaining data pipelines.
3+ years of experience with cloud-based modern data platforms and services (e.g., Azure, GCP, or AWS).
Other Knowledge, Skills and Abilities Required:
Expert knowledge of database systems, data modeling techniques, and SQL proficiency.
Substantial experience creating and enhancing ETL and ELT processes.
Significant experience working with JSON/YAML/XML/Parquet and other open-source data interchange and storage formats.
Experience working with big data technologies such as Spark, Kafka, NoSQL databases, and Splunk.
Significant experience with agile development processes and concepts.
Strong knowledge of DevOps practices and CI/CD processes.
Knowledge of Microservices architecture, Data Product and Data Mesh concepts.
Other Knowledge, Skills and Abilities Preferred:
Displays intellectual curiosity and integrity.
Experience in healthcare environment.
Working Conditions
Physical Requirements
Physical Demands:
Work Position: Sitting
Additional Physical Requirements/ Hazards
Physical Requirements:
Hazards:
Mental/Sensory – Emotional
Mental / Sensory: Strong Recall, Reasoning, Problem Solving, Hearing, Speak Clearly, Write Legibly, Reading, Logical Thinking
Emotional: Fast-paced environment, Able to Handle Multiple Priorities, Frequent and Intense Customer Interactions, Able to Adapt to Frequent Change
Days
EEO Employer/Disabled/Protected Veteran/41 CFR 60-1.4.