Job Description

Key Responsibilities:
- Maintain and monitor the Microsoft Fabric-based data lakehouse to ensure high availability and performance.
- Design, develop, and deploy robust, scalable data pipelines using Microsoft Fabric tools and services.
- Troubleshoot and resolve issues in existing pipelines to minimize downtime and data latency.
- Implement enhancements and optimizations in data workflows to improve efficiency and reduce costs.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.
- Document pipeline architecture, data flows, and operational procedures for transparency and maintainability.
- Stay current with the latest features and best practices in Microsoft Fabric and related technologies.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in data engineering or a similar role.
- Hands-on experience with Microsoft Fabric, including Data Factory, OneLake, and Synapse.
- Strong proficiency in SQL, Python, and data modeling.
- Experience with CI/CD pipelines, version control (e.g., Git), and DevOps practices.
- Familiarity with data governance, security, and compliance standards.