Qureos

ETIC, Azure and Databricks Data Engineer, Senior Associate

JOB_REQUIREMENTS

Hires in

Not specified

Employment Type

Not specified

Company Location

Not specified

Salary

Not specified

Line of Service

Advisory

Industry/Sector

Technology

Specialism

Advisory - Other

Management Level

Senior Associate

Job Description & Summary

As a Senior Associate in the Data Engineering team, you will play a key role in designing, building, and optimizing modern data platforms and pipelines on Azure and Databricks. You will work within cross-functional teams to deliver scalable, secure, and high-performing data solutions that enable advanced analytics, AI, and business insights for enterprise clients.
This role requires a strong understanding of cloud data architecture, hands-on experience with Azure data services (including Microsoft Fabric), and deep practical knowledge of Databricks for batch and streaming data engineering.

Key Responsibilities

  • Design, develop, and maintain end-to-end data pipelines across structured, semi-structured, and unstructured data sources.
  • Implement data ingestion, transformation, and orchestration frameworks leveraging Azure Data Factory, Synapse, and/or Microsoft Fabric Data Pipelines.
  • Develop and optimize ETL/ELT processes using Databricks (PySpark, SQL, Delta Lake) to ensure high performance and scalability.
  • Implement and enforce data quality, lineage, and governance practices.
  • Work closely with solution architects to design modern data architectures and ensure compliance with security and privacy standards.
  • Participate in client workshops and technical discussions to translate business needs into technical designs.
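The ingestion, transformation, and data-quality responsibilities above can be sketched in plain Python (names and quality rules are illustrative, not part of the posting; on Databricks the same steps would typically run as PySpark transformations over Delta tables):

```python
import json

def ingest(raw_lines):
    """Ingestion step: parse semi-structured JSON lines,
    setting malformed records aside for inspection."""
    good, bad = [], []
    for line in raw_lines:
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            bad.append(line)
    return good, bad

def transform(records):
    """Transformation step: normalize field names and types, and
    apply a simple data-quality gate (non-empty id)."""
    out = []
    for r in records:
        rid = str(r.get("id", "")).strip()
        if not rid:
            continue  # data-quality gate: reject records without an id
        out.append({"id": rid, "amount": float(r.get("amount", 0))})
    return out

# Hypothetical input: two parseable records (one failing quality
# checks) and one malformed line.
raw = ['{"id": "a1", "amount": "10.5"}', '{"id": ""}', 'not-json']
parsed, rejected = ingest(raw)
clean = transform(parsed)
```

In a production pipeline the rejected and filtered records would be logged for lineage and observability rather than silently dropped.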


Required Skills & Experience

  • 3–6 years of experience in data engineering, preferably in a consulting or enterprise environment.
  • Strong hands-on experience with:
    • Azure Data Platform: Data Factory, Synapse Analytics, Azure Data Lake Storage, Microsoft Fabric, Event Hub/IoT Hub, and Azure Functions.
    • Databricks: PySpark, Spark SQL, Delta Lake, Unity Catalog, and Databricks Workflows.
  • Proficiency in Python and SQL for large-scale data processing and transformation.
  • Solid understanding of data modeling, medallion architecture, and lakehouse principles.
  • Familiarity with CI/CD pipelines, DevOps, and version control (e.g., Git, Azure DevOps).
  • Knowledge of data governance, lineage, and observability tools.
  • Experience with performance optimization, cost control, and best practices in cloud environments.
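The medallion (bronze/silver/gold) architecture listed above can be illustrated with a minimal plain-Python sketch — table names and fields are assumptions for illustration; in a lakehouse each layer would be a Delta table:

```python
# Bronze layer: raw events exactly as landed (duplicates included).
bronze = [
    {"order_id": "1", "region": "EU", "total": "20"},
    {"order_id": "1", "region": "EU", "total": "20"},  # duplicate event
    {"order_id": "2", "region": "US", "total": "35"},
]

def to_silver(rows):
    """Silver layer: deduplicate on order_id and cast types."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({**r, "total": float(r["total"])})
    return out

def to_gold(rows):
    """Gold layer: aggregate revenue per region for reporting."""
    agg = {}
    for r in rows:
        agg[r["region"]] = agg.get(r["region"], 0.0) + r["total"]
    return agg

silver = to_silver(bronze)
gold = to_gold(silver)
```

Each layer refines the previous one: raw fidelity in bronze, cleaned and conformed data in silver, business-level aggregates in gold.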

Education (if blank, degree and/or field of study not specified)

Degrees/Field of Study required:

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills

Optional Skills

Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Desired Languages (If blank, desired languages not specified)

Travel Requirements

Not Specified

Available for Work Visa Sponsorship?

No

Government Clearance Required?

No

Job Posting End Date

© 2025 Qureos. All rights reserved.