Core Responsibilities:
Lead end-to-end design, development, and delivery of complex cloud-based data pipelines.
Collaborate with architects and stakeholders to translate business requirements into technical data solutions.
Ensure scalability, reliability, and performance of data systems across environments.
Provide mentorship and technical leadership to data engineering teams.
Define and enforce best practices for data modeling, transformation, and governance.
Optimize data ingestion and transformation frameworks for efficiency and cost management.
Contribute to data architecture design and review sessions across projects.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
8+ years of experience in data engineering with proven leadership in designing cloud-native data systems.
Strong expertise in Python, SQL, Apache Spark, and at least one cloud platform (Azure, AWS, or GCP).
Experience with big data, data lake, Delta Lake, and lakehouse architectures.
Proficiency in one or more database technologies (e.g., PostgreSQL, Redshift, Snowflake, or NoSQL databases).
Ability to recommend and implement scalable data pipelines.
Preferred Qualifications:
Cloud certification (AWS, Azure, or GCP).
Experience with Databricks, Snowflake, or Terraform.
Familiarity with data governance, lineage, and observability tools.
Strong collaboration skills and the ability to influence data-driven decisions across teams.