Role Overview
We are looking for a skilled Data Engineer with strong expertise in cloud-based data platforms, data engineering frameworks, and modern programming languages. The ideal candidate will have hands-on experience with Azure, Databricks, and large-scale data processing using Python and PySpark. The role also calls for close collaboration with stakeholders, sound program management, and the delivery of scalable data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Python, PySpark, and SQL.
- Build and optimize data workflows on Azure Cloud using Databricks.
- Develop and implement efficient data models to support analytics and reporting requirements.
- Write optimized SQL queries for data transformation, integration, and analysis.
- Collaborate with cross-functional teams to gather data requirements and translate them into technical solutions.
- Work closely with stakeholders to ensure data solutions align with business objectives.
- Develop and maintain components using TypeScript where required for data platform integrations.
- Ensure data quality, reliability, and performance across the data ecosystem.
- Lead or support program management activities, including planning, execution, and delivery of data initiatives.
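To illustrate the kind of pipeline work the responsibilities above describe: in this role the transform would typically be written as a PySpark DataFrame job on Databricks, but the shape of a filter-then-aggregate step can be sketched in plain Python (stdlib only; the field names and sample records below are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Hypothetical raw records; in practice these would be read into a
# PySpark DataFrame from cloud storage on Azure Databricks.
raw_orders = [
    {"region": "EMEA", "amount": "120.50", "status": "complete"},
    {"region": "EMEA", "amount": "80.00", "status": "cancelled"},
    {"region": "APAC", "amount": "200.00", "status": "complete"},
]

def transform(records):
    """Keep completed orders and cast amounts to float."""
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in records
        if r["status"] == "complete"
    ]

def aggregate(records):
    """Sum order amounts per region (a group-by, in PySpark/SQL terms)."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

totals = aggregate(transform(raw_orders))
```

In PySpark the same logic would be expressed roughly as `df.filter(col("status") == "complete").groupBy("region").agg(sum("amount"))`, letting Databricks distribute the work across the cluster.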
Required Skills
- Strong programming skills in Python, PySpark, and SQL.
- Hands-on experience with Azure Cloud Platform.
- Experience working with Azure Databricks for large-scale data processing.
- Solid understanding of data modeling concepts and data architecture.
- Proficiency in SQL query optimization and performance tuning.
- Experience with TypeScript development.
- Strong stakeholder management and communication skills.
- Experience in program management or managing complex data initiatives.
Preferred Qualifications
- Experience working with large-scale data platforms and modern data architectures.
- Familiarity with data warehousing and analytics solutions.
- Ability to work in a fast-paced, collaborative environment.