Job Title: Data Engineer (Azure Databricks / Apache Spark)
Overview
We are looking for an experienced Data Engineer with strong expertise in Databricks and Apache Spark to build and optimize scalable data pipelines on cloud platforms. The ideal candidate will have hands-on experience delivering ETL/ELT workflows, transforming large datasets, and supporting analytics and data platform initiatives. Experience with Azure is required, while exposure to GCP and modern orchestration tools is a plus.
Mandatory Skills
- Strong hands-on experience with Azure Databricks
- Expert-level proficiency in Apache Spark (PySpark/Scala)
- Solid understanding of ETL/ELT pipelines and batch and streaming data processing
- Proficiency in Python and SQL
Good-to-Have Skills
- Experience with GCP data services: BigQuery, Dataflow, Dataproc
- Knowledge of Airflow / Cloud Composer, including DAG creation and orchestration
- Familiarity with GCP IAM, storage, and networking concepts
- Exposure to data pipelines across multi-cloud environments
- Experience with orchestration, workflow automation, and CI/CD for data pipelines