Data Engineer – Microsoft Fabric & Advanced Analytics
Summary
We are seeking a skilled Data Engineer with hands-on experience in Microsoft Fabric and advanced analytics platforms. The ideal candidate will design, optimize, and maintain data models and pipelines to support reporting, analytics, and data science needs across the enterprise. You will collaborate closely with data scientists, analytics developers, engineers, and business stakeholders to deliver robust, scalable solutions using tools such as Power BI, Dataflows, Lakehouses, SQL endpoints, and other modern data engineering technologies.
Key Responsibilities
- Work closely with data scientists, analytics developers, and business teams to establish data needs and architect scalable solutions.
- Design, develop, and optimize data pipelines using Microsoft Fabric and other relevant technologies to handle large-scale data workflows.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Create, manage, and automate ETL (Extract, Transform, Load) processes to integrate data from a variety of sources into unified data warehouses or lakes.
- Build and maintain efficient data models and schemas, ensuring data is structured for maximum performance and scalability.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability and performance.
- Implement best practices for ensuring data quality, consistency, and integrity across all systems.
- Ensure compliance with global data privacy, security, and regulatory standards.
- Collaborate with cross-functional teams to ensure smooth data flow and availability for analytics and reporting.
- Regularly optimize data processes for performance improvements and cost management.
- Utilize orchestration tools to automate workflows and scheduling of data processes.
- Document all data processes, architectures, and solutions to facilitate knowledge sharing and future maintenance.
- Other duties as assigned.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related quantitative field, or equivalent relevant work experience.
- Strong experience in Microsoft Fabric (e.g., OneLake, Data Pipelines, Dataflows, Lakehouses, SQL endpoints).
- Experience with cloud-based data services, including Azure Data Lake, Azure SQL Database, or Azure Data Factory.
- 3+ years’ experience with relational databases and data integration (SQL, Data Warehouse, Data Lake, ETL, Talend, Data Mapping, etc.).
- 2+ years’ experience using data science tools (Python) for data engineering.
- Familiarity with ETL tools and pipelines, with hands-on experience using Microsoft Fabric’s built-in ETL features.
- Knowledge of Data Lakehouse concepts and best practices.
- Experience with data integration, including REST APIs, file systems, and external databases.
- Strong knowledge of data lifecycle management, data governance, and global privacy practices.
- Experience with Git for version control, including branching and merging in a collaborative environment.
- Experience with CI/CD pipelines for data engineering projects.
- Strong interpersonal communication, problem-solving, troubleshooting, and organizational skills.
- Ability to work in an agile, collaborative environment with cross-functional teams.
- Proven ability to drive technical projects from initial concept to implementation with minimal oversight.
- Passionate about data quality and process improvement.
Preferred Qualifications
- Microsoft Azure certification, especially in data engineering or cloud technologies.
- Experience with Power BI and predictive modeling.
- Experience with Snowflake or other cloud data warehouse platforms.
- Familiarity with Striim or similar data integration tools.