Job Description:
· 3+ years of hands-on experience in data engineering with a focus on ETL workflows, data pipelines, and cloud computing.
· Strong experience with AWS services for data processing and storage (e.g., S3, Glue, Athena, Lambda, Redshift).
· Proficiency in programming languages such as Python, PySpark, and TypeScript/JavaScript.
· Deep understanding of microservices architecture and distributed systems.
· Familiarity with AI/ML tools and frameworks (e.g., TensorFlow, PyTorch) and their integration into data pipelines.
· Experience with big data technologies like Snowflake.
· Strong problem-solving and performance optimization skills.
· Exposure to modern DevOps practices, including CI/CD pipelines and container orchestration tools like Docker and Kubernetes.
· Experience working in agile environments delivering complex data engineering solutions.
· Proven expertise or certification in Palantir Foundry is highly preferred.
· Prior experience in the insurance domain is highly desirable.