This position is fully remote
This role is an Individual Contributor position
A day in the life as a Senior Data Engineer...
Lead the technical construction of complex data pipelines. You will be responsible for the engineering implementation of SCD (Slowly Changing Dimension) methodologies and data movement across retail domains as defined by the Data Modeling team
Manage and maintain enterprise-level orchestration outside of the Google Cloud Platform (e.g., Tidal Automation). You will ensure that data pipelines are synchronized with broader organizational workflows, managing critical interdependencies with other technical teams and legacy systems
Act as the primary engineering point of contact for external teams. You will build and optimize the pipelines required by Data Science for model training/deployment and by Data Insight for reporting, ensuring their technical requirements are met with high availability
Build and maintain high-velocity streaming and near-real-time pathways using Google Dataflow and Confluent Kafka to support real-time operational needs
Apply mastery of Dataform (SQLX) to build orchestrated, environment-agnostic transformations. You will ensure all models leverage JavaScript configuration and ref() functions to maintain strict code portability across our 16-project grid
Partner with the Governance team to implement "Privacy by Design" technical controls, including CCPA/GDPR compliance workflows, encrypted hashing, and integration with centralized Google Dataplex registries
Enforce mandatory resource labeling (env, domain, layer) across all BigQuery jobs and GCS buckets to ensure transparent cost attribution for the various teams we support
Lead the team’s Git-flow process via BitBucket. Manage code through feature branches and UAT release branches for deployment into our 16-project production environment
Perform high-quality code reviews and mentor junior engineers, ensuring the team’s output meets the rigorous engineering standards required to support our downstream partners
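A Dataform model of the kind described above might look like the following SQLX fragment, using the config block and ref() so the compiled SQL stays portable across environments; the schema, tag values, and table names are hypothetical.

```sqlx
config {
  type: "incremental",
  schema: "retail_curated",
  tags: ["domain:retail", "layer:curated"]
}

SELECT
  order_id,
  customer_key,
  order_ts
FROM ${ref("stg_orders")}  -- ref() resolves to the right project per environment
${when(incremental(),
  `WHERE order_ts > (SELECT MAX(order_ts) FROM ${self()})`)}
```

Because ref() and self() resolve at compile time, the same model file can deploy unchanged across a multi-project grid.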
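To make the SCD responsibility above concrete: a minimal, illustrative sketch of a Type 2 Slowly Changing Dimension update in Python (the posting does not specify the team's actual implementation; the row shape and field names here are assumptions).

```python
from datetime import date

def apply_scd2(dimension_rows, incoming, today=None):
    """Apply a Type 2 SCD update: expire the current row for a changed
    business key and append a new row with a fresh effective date,
    preserving full history instead of overwriting attributes."""
    today = today or date.today()
    key, attrs = incoming["key"], incoming["attrs"]
    updated = []
    for row in dimension_rows:
        if row["key"] == key and row["is_current"] and row["attrs"] != attrs:
            # Close out the old version rather than mutating it in place
            row = {**row, "is_current": False, "end_date": today}
        updated.append(row)
    if not any(r["key"] == key and r["is_current"] for r in updated):
        updated.append({"key": key, "attrs": attrs,
                        "start_date": today, "end_date": None,
                        "is_current": True})
    return updated
```

In production this logic would typically live in a BigQuery MERGE statement rather than application code; the sketch only shows the expire-and-append pattern itself.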
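The mandatory labeling requirement could be enforced with a small pre-submit check like the sketch below; the three label keys come from this posting, while the helper itself and the example values are illustrative assumptions.

```python
# Label keys mandated for cost attribution (per the role description)
REQUIRED_LABEL_KEYS = {"env", "domain", "layer"}

def validate_labels(labels):
    """Raise if a job/bucket label set is missing a mandatory key,
    so cost attribution stays complete before resources are created."""
    missing = REQUIRED_LABEL_KEYS - set(labels)
    if missing:
        raise ValueError(f"missing required labels: {sorted(missing)}")
    return labels

# Example: labels that would be attached to a BigQuery job or GCS bucket
job_labels = validate_labels({"env": "prod", "domain": "retail", "layer": "curated"})
```

The validated dict can then be passed to the `labels` argument of a BigQuery job config or set on a GCS bucket.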
What you'll bring to the table...
5+ years of dedicated Data Engineering experience, with at least 2 years in a lead or senior capacity
Deep proficiency in BigQuery, Google Dataflow, Google Cloud Storage, and Dataform.
Expert-level Python and SQL are mandatory. Python is our primary language for all custom processing and orchestration
Proficiency with BitBucket (Git-flow), Confluent Kafka, Dataplex, and Enterprise Orchestration tools (e.g., Tidal)
Understanding of PII and how to protect both customer and company data
Bachelor’s degree in Computer Science, Engineering, or a related technical field
We'd love to hear from you if you have...
Experience in a "Data Engineering as a Service" environment, building pipelines to satisfy requirements from Data Science or Analytics stakeholders
Experience working with high-volume retail data (e.g., e-commerce, supply chain, or loyalty)
Familiarity with Java for specific edge-case data processing tasks is a plus
Strong understanding of cloud cost optimization and resource attribution