Databricks Developer

Pune

About Us

We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics and data visualization.
Hoonartek is a leader in enterprise transformation, data engineering and an acknowledged world-class Ab Initio delivery partner.
Using centuries of cumulative experience, research and leadership, we help our clients eliminate the complexities & risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions and traditional data warehouses & marts.
At Hoonartek, we work to ensure that our customers, partners and employees all benefit from our unstinting commitment to delivery, quality and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value and integrity.

How We Work

Define, Design and Deliver (D3) is our in-house delivery philosophy. It’s culled from agile and rapid methodologies and focused on ‘just enough design’. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success.
We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value.

Job Description

A Databricks Developer designs, builds, and optimizes high-performance data pipelines and analytics solutions using Spark, Python (PySpark), and SQL within the Databricks Lakehouse platform. Key responsibilities include implementing ELT/ETL processes using Delta Lake (Medallion Architecture), optimizing cluster performance, and automating workflows.
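By way of illustration, a Medallion-style ELT step of the kind described above might look like the following minimal PySpark sketch. The paths, table names, and columns (for example /mnt/raw/orders/, bronze.orders, order_id) are illustrative assumptions, not part of this role's actual codebase.

# Minimal PySpark sketch of a bronze -> silver Medallion step on Delta Lake.
# All paths, table names, and columns below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-elt-sketch").getOrCreate()

# Bronze layer: land the raw source data as-is, tagged with ingestion time.
bronze_df = (
    spark.read.format("json")
    .load("/mnt/raw/orders/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze_df.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver layer: cleanse and conform the bronze data for downstream analytics.
silver_df = (
    spark.read.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.orders")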
Key Responsibilities
  • Data Pipeline Development: Design, develop, and maintain complex ELT/ETL pipelines using PySpark, Spark SQL, and Databricks notebooks to ingest data from various sources.
  • Performance Optimization: Tune Spark jobs, optimize SQL queries, and manage Databricks cluster configurations to improve efficiency and reduce costs.
  • Streaming & Batch Processing: Develop and deploy real-time data ingestion pipelines using Spark Structured Streaming.
  • Workflow Automation: Use tools like Azure Data Factory (ADF) or Databricks Workflows for scheduling and orchestrating data pipelines.
  • Collaboration & Governance: Work with data engineers, architects, and scientists to ensure data quality, security, and adherence to governance policies.
Required Technical Skills
  • Core Technologies: Strong proficiency in Databricks, Apache Spark, PySpark, and SQL.
  • Cloud Platforms: Experience with Azure Databricks (ADLS Gen2, ADF) or Databricks on AWS/GCP.
  • Programming & Tools: Proficiency in Python, Git, and CI/CD pipelines
Typical Qualifications
  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 3-5 years of experience in data engineering, data warehousing, or big data processing.
  • Proven ability to troubleshoot and resolve performance bottlenecks in distributed computing environments.
