About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Core Responsibilities
- Design, build, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory, Delta Lake, and related Azure services.
- Architect, implement, and maintain ETL/ELT workflows to manage large and diverse datasets, both batch and streaming.
- Develop, test, and deploy data engineering solutions using PySpark, SQL, and Python in Databricks notebooks, applying medallion/lakehouse architectures.
- Integrate CI/CD and version control (Azure DevOps, GitHub Actions) into Databricks projects to ensure automated testing, deployment, and monitoring.
- Manage and optimize Databricks clusters for cost and performance; monitor, troubleshoot, and document data infrastructure.
- Implement data quality checks, data governance, and security best practices.
Required Skills & Qualifications
- 4–7 years' experience in data engineering, cloud data platforms, and big data technologies (Spark, Delta Lake).
- Hands-on experience with Azure Databricks, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Delta Live Tables.
- Proficiency in Python, PySpark, and SQL for large-scale data processing.
- Strong background in CI/CD, version control (Azure DevOps, Git), and automation for data platforms.
- Experience designing and supporting ETL/ELT pipelines; knowledge of medallion architectures and data modeling (star schema, data vault).
- Familiarity with orchestration, monitoring, and deployment tools in the Azure ecosystem.
- Excellent collaboration, problem-solving, and communication skills.
Typical Qualifications
- Bachelor's/Master's degree in Computer Science, IT, or a related field
- 4+ years in data engineering/DevOps
- Core Tech Stack: Azure Databricks, Data Factory, PySpark, SQL, Python, Azure DevOps
- Methodologies: CI/CD, Agile, DataOps