Role: Databricks Engineer
Location: Remote
Experience: 5+ Years
About the Role:
We are looking for an experienced Databricks Engineer with strong expertise in big data processing and analytics. The ideal candidate will work closely with data engineers, data scientists, and business stakeholders to design, develop, and optimize scalable data solutions using Databricks.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks
- Work with Apache Spark (PySpark/Scala) for large-scale data processing
- Optimize Databricks jobs for performance, cost, and reliability
- Integrate data from multiple sources (structured & unstructured)
- Implement data transformation, cleansing, and validation processes
- Collaborate with cross-functional teams to deliver data-driven solutions
- Ensure adherence to data quality, security, and governance best practices
- Troubleshoot and resolve data pipeline and performance issues
Required Skills & Qualifications:
- 5+ years of experience in Data Engineering / Big Data roles
- Strong hands-on experience with Databricks
- Proficiency in Apache Spark using PySpark and/or Scala
- Experience with SQL and data modeling concepts
- Familiarity with cloud platforms (Azure / AWS / GCP)
- Experience working with ETL/ELT pipelines
- Knowledge of data lakes, Delta Lake, and big data ecosystems
- Good understanding of data warehousing and analytics concepts
Nice to Have:
- Databricks certifications
- Experience with Delta Live Tables and Unity Catalog
- Exposure to CI/CD for data pipelines
- Experience supporting analytics and ML workloads
Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Work Location: Remote