We are seeking a skilled Data Engineer with 7–8 years of experience in designing, building, and optimizing scalable data pipelines. The ideal candidate will have strong expertise in Python, DBT, and Spark, and a passion for transforming complex data into actionable insights.
Responsibilities:
- Design, develop, and maintain efficient ETL pipelines and workflows.
- Build and optimize data models and transformations using DBT.
- Perform large-scale distributed data processing with Apache Spark.
- Ensure data quality, consistency, and performance across systems.
- Collaborate with analytics and business teams to enable data-driven decisions.
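To illustrate the kind of work the responsibilities above describe, here is a minimal ETL sketch in plain Python with a simple data-quality gate. The field names (`user_id`, `amount`) are hypothetical, and a production pipeline would typically run under an orchestrator and target a real warehouse rather than returning strings:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and drop rows that fail quality checks."""
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed rows
        out.append({"user_id": row["user_id"], "amount": round(amount, 2)})
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize to newline-delimited JSON for a warehouse stage."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "user_id,amount\nu1,19.99\nu2,not_a_number\nu3,5.5\n"
staged = load(transform(extract(raw)))
```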
Required Skills:
- Python: strong programming skills for data processing, integration, and automation.
- DBT (Data Build Tool): expertise in data modeling, transformation, and version-controlled development.
- Apache Spark: proficiency in distributed data processing and performance optimization.
- Python notebooks (e.g., Jupyter): experience with data exploration and prototyping.
- Polars: knowledge of modern, high-performance data processing frameworks.
- DPI: familiarity with data performance improvements and optimization techniques.
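For context on the DBT skill listed above: a dbt model is essentially a version-controlled SELECT statement that dbt materializes as a table or view. A rough local analogue of that idea, sketched with Python's built-in sqlite3 and hypothetical table and column names (dbt itself would read the SELECT from a `models/*.sql` file and handle materialization):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, status TEXT, amount REAL);
    INSERT INTO raw_orders VALUES (1, 'completed', 10.0),
                                  (2, 'cancelled', 99.0),
                                  (3, 'completed', 5.0);
""")

# Analogue of a dbt staging model: a SELECT that filters and renames
# source data, materialized here as a view.
conn.execute("""
    CREATE VIEW stg_orders AS
    SELECT order_id, amount
    FROM raw_orders
    WHERE status = 'completed'
""")

total = conn.execute("SELECT SUM(amount) FROM stg_orders").fetchone()[0]
```

Downstream models (and quality tests) would then build on `stg_orders` rather than on the raw table, which is the layering dbt encourages.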