We are seeking a skilled Senior Data Engineer to help modernize existing Talend-based ETL pipelines into a contemporary data engineering ecosystem built on Python, dbt, Kafka, Apache NiFi, and orchestration tools such as Airflow or Dagster.
You will work closely with senior engineers to migrate, build, test, and maintain high-quality data pipelines across the organization. This role is ideal for professionals with strong hands-on data engineering skills, a collaborative mindset, and an eagerness to work with modern data stack technologies.
Key Responsibilities:
Pipeline Migration & Development
● Assist in re-engineering legacy Talend pipelines into Python, dbt, and Airflow/Dagster workflows.
● Ensure pipeline logic, data mappings, and tests are accurately replicated and validated.
● Support both legacy and new pipeline environments during the transition period.
Data Ingestion & Processing
● Develop and maintain data ingestion flows using Kafka, Apache NiFi, and REST APIs.
● Work with batch and streaming data across structured, semi-structured, and unstructured formats.
● Implement data validation, quality checks, schema enforcement, and row-level transformations.
Transformation & Modeling
● Contribute to dbt development (models, tests, documentation, snapshots).
● Help ensure transformation logic remains accurate, maintainable, and traceable through lineage.
Monitoring & Maintenance
● Monitor daily ETL/ELT workflows for failures, bottlenecks, or data quality issues.
● Perform root-cause analysis and escalate complex issues when needed.
● Optimize performance across data ingestion, processing, and transformation layers.
Documentation & Collaboration
● Maintain well-structured documentation for pipeline logic, migration work, and data flows.
● Collaborate with senior engineers, QA, data analysts, architects, and platform teams.
● Participate in Agile ceremonies: stand-ups, planning, reviews, and retrospectives.
Required Experience & Qualifications:
● 5+ years of experience in Data Engineering or ETL development.
● Demonstrated ability to design, build, and maintain robust ELT/ETL pipelines.
● Proficiency writing production-grade code (preferably in Python).
● Hands-on with SQL (including analytical queries, CTEs, window functions, optimization).
● Experience building pipelines using orchestration tools (Airflow, Databricks Workflows, etc.).
● Proven comfort with version control, automated testing, code review, CI/CD for data.
● Practical experience with Databricks is a strong plus: can confidently use Spark APIs, Delta Lake features (ACID transactions, schema evolution, time travel), and Unity Catalog for data management and access governance.
● Familiarity with data quality frameworks is welcome.
● Skilled in designing scalable, maintainable, and performant data models (e.g., star/snowflake schemas, normalization, partitioning, incremental strategies).
● Can articulate and justify trade-offs in storage, compute, and access layer designs.
● Proactive in identifying and fixing pipeline/data quality issues.
● Strong troubleshooting, debugging, and root-cause analysis skills; goes beyond surface-level solutions.
● Able to reason about idempotency, error handling, recovery, backfilling, and other critical production concerns.
● Actively seeks to understand how and why things work; consistently dives deeper.
● Explains in depth: not just what was done, but why, and what trade-offs were considered.
● Excellent communicator: explains choices and solutions clearly and tailors depth for technical and non-technical audiences.
Job Type: Full-time
Work Location: In person