Job Overview
We are seeking an experienced Data Engineer with deep expertise in the Data Vault 2.0 methodology to design, develop, and maintain scalable data transformation pipelines. You will work with modern data stack technologies, including dbt, Snowflake, and Python, to build robust data models that support enterprise-wide analytics and reporting.
Key Responsibilities
- Develop and maintain data transformation pipelines using dbt and Python
- Design and manage scalable, efficient data models in Snowflake using SQL
- Build and maintain dbt models following Data Vault 2.0 and Star Schema principles (see the hub-model sketch after this list)
- Use VS Code and dbt CLI for local development and version control
- Collaborate with data architects to implement Enterprise Data Models (EDM) and ensure data integrity
- Optimize data workflows for performance, scalability, and maintainability
- Integrate best practices for data quality, testing, and transformation logic
- Work with cross-functional teams to support analytical and reporting needs
- Document data models, transformation logic, and pipeline processes
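For illustration, here is a minimal sketch of what a Data Vault 2.0 hub model in dbt might look like on Snowflake. All names (stg_customers, customer_id, hub_customer) are hypothetical, and the MD5 hashing and deduplication choices shown are one common convention, not a prescribed standard for this role.

```sql
-- models/raw_vault/hub_customer.sql
-- Hypothetical Data Vault 2.0 hub: one row per distinct business key.
{{ config(materialized='incremental', unique_key='hub_customer_hk') }}

with source as (

    select
        customer_id,          -- business key (illustrative)
        record_source,
        load_datetime
    from {{ ref('stg_customers') }}
    -- keep only the earliest occurrence of each business key (Snowflake QUALIFY)
    qualify row_number() over (partition by customer_id order by load_datetime) = 1

)

select
    md5(upper(trim(customer_id))) as hub_customer_hk,  -- deterministic hash key
    customer_id,
    load_datetime                 as load_dts,
    record_source                 as rec_src
from source

{% if is_incremental() %}
-- on incremental runs, insert only business keys not already in the hub
where md5(upper(trim(customer_id))) not in (select hub_customer_hk from {{ this }})
{% endif %}
```

Satellites and links follow the same pattern, with the hash key carried through as the join column.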
Mandatory Skills
Must-Have Technical Skills
- Strong expertise in SQL and Python programming
- Hands-on experience with dbt (Data Build Tool) for transformation and modeling
- Proficiency with VS Code and dbt CLI development workflows
- Experience building dbt models using Data Vault 2.0 and Star Schema design patterns
- Working knowledge of Snowflake data warehouse
- Strong understanding of data modeling concepts and best practices
- Experience with Enterprise Data Models (EDM)
- Proven ability in designing and optimizing scalable data workflows (a minimal incremental-model sketch follows this list)
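As one example of the kind of workflow optimization this role involves, the sketch below shows a dbt incremental model on Snowflake that processes only newly loaded rows on each run instead of rebuilding the full table. Model and column names (stg_orders, fct_orders, loaded_at) are hypothetical.

```sql
-- models/marts/fct_orders.sql
-- Hypothetical incremental fact model: full build on the first run,
-- then only rows newer than the current table maximum are merged in.
{{ config(
    materialized='incremental',
    unique_key='order_id',
    incremental_strategy='merge'
) }}

select
    order_id,
    customer_id,
    order_date,
    order_total,
    loaded_at
from {{ ref('stg_orders') }}

{% if is_incremental() %}
-- limit the scan to rows newer than what the target table already holds
where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```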
Professional Experience
- 6+ years of experience in data engineering or related roles
- Proven track record of delivering data solutions in production environments
- Experience working in Agile development environments
Good to Have
- Familiarity with CI/CD pipelines for data workflows
- Experience with cloud data platforms (AWS, Azure, GCP)
- Knowledge of data orchestration tools (Airflow, Dagster, Prefect)
- Understanding of DevOps practices for data engineering
- Experience with data visualization and BI tools
- Knowledge of other cloud warehouse and lakehouse platforms (Databricks, BigQuery, Redshift)
What You'll Bring
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Attention to detail and commitment to data quality
- Ability to work independently and manage multiple priorities
- Passion for learning new technologies and best practices
How to Apply
Apply now with your updated resume highlighting your experience in data modeling, dbt, and modern data engineering practices.
Send your resume to: sandesh@ekovits.com
Job Types: Full-time, Permanent
Pay: ₹1,400,000.00 - ₹1,600,000.00 per year
Benefits:
- Flexible schedule
- Health insurance
- Life insurance
- Paid sick time
- Provident Fund
- Work from home
Work Location: Remote