Experience: 7+ years
Work Mode: Remote
About the Role:
We are seeking a highly skilled Senior Data Engineer with strong expertise in Snowflake and Databricks to design, build, and optimize end-to-end data solutions across our modern data ecosystem. The ideal candidate will have deep technical experience in API integrations, data pipeline automation, and performance optimization in cloud environments (AWS or Azure).
You’ll collaborate closely with data architects, analysts, and business stakeholders to ensure reliable, scalable, and secure data delivery for analytics and reporting use cases.
Key Responsibilities:
- Design and implement data ingestion, transformation, and orchestration pipelines using Databricks (PySpark, Delta Lake) and Snowflake (a brief illustrative sketch follows this list).
- Develop and manage API-based data integrations between source systems, Databricks, and Snowflake.
- Build reusable components for data ingestion, validation, and enrichment leveraging REST APIs, Python, and Spark.
- Design and implement Snowflake data models and schemas, and tune them for performance.
- Automate data pipeline workflows using orchestration tools such as Airflow, Azure Data Factory, or AWS Glue.
- Collaborate with data scientists and BI teams to ensure efficient data accessibility and governance.
- Develop CI/CD pipelines for data workflows using Git, Jenkins, or similar tools.
- Monitor, troubleshoot, and improve existing pipelines to ensure high performance and reliability.
- Implement best practices for data security, lineage, and compliance across platforms.
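For illustration only, a minimal sketch of the kind of Databricks ingestion step described in the first responsibility above, assuming a Databricks runtime with Delta Lake available; the paths and table names are hypothetical placeholders:

```python
# Minimal PySpark ingestion sketch: read raw JSON, apply a light clean-up,
# and append to a Delta table for downstream modeling in Snowflake.
# The landing path and table name below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_events").getOrCreate()

raw = spark.read.json("/mnt/raw/events/")  # hypothetical landing path
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

cleaned.write.format("delta").mode("append").saveAsTable("bronze.events")
```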
Required Skills & Experience:
- 7+ years of experience in data engineering or ETL development.
- 3+ years of hands-on experience with Snowflake (data modeling, query tuning, stored procedures, Snowpipe, streams & tasks).
- 3+ years of hands-on experience with Databricks (PySpark, Delta Lake, data pipeline development).
- Strong experience in API development and integration (REST, JSON, Python SDKs); a short integration sketch follows this list.
- Proficiency in Python and SQL for data manipulation and automation.
- Deep understanding of cloud data ecosystems — AWS, Azure, or GCP.
- Experience with data orchestration tools (Airflow, ADF, Glue, etc.).
- Strong grasp of CI/CD, version control, and DevOps practices in data workflows.
- Experience working in agile teams and collaborating with cross-functional stakeholders.
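For illustration only, a minimal sketch of a REST-to-Snowflake load in the spirit of the API integration skills above, using the requests library and the Snowflake Python connector's write_pandas helper; the endpoint, credentials, and table names are hypothetical placeholders:

```python
# Sketch: pull records from a REST endpoint and bulk-load them into Snowflake.
# Endpoint, credentials, and table names are placeholders, not real systems.
import pandas as pd
import requests
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())

conn = snowflake.connector.connect(
    account="my_account",   # placeholder connection details
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
# Create the target table if it does not exist, then load the DataFrame.
write_pandas(conn, df, table_name="ORDERS_RAW", auto_create_table=True)
conn.close()
```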
Job Type: Full-time
Work Location: Remote