Qureos


Data Engineer 4

Position Overview

Bloc Resources is seeking an experienced Data Engineer 4 to support enterprise-scale data systems within a modern analytics and data architecture environment. This role focuses on designing, building, and optimizing scalable data pipelines that support operational analytics, reporting, and advanced data science initiatives across the organization.

The ideal candidate will bring strong experience in both traditional enterprise data environments and modern lakehouse architectures, with hands-on expertise in SQL, ETL development, and cloud-based data platforms such as Databricks. This role also requires the ability to operate in a regulated, production-critical environment supporting utility and operations data systems.

Key Responsibilities

Data Pipeline Development & Engineering

  • Design, build, and maintain scalable batch and streaming data pipelines
  • Transform raw structured and unstructured data into curated, analytics-ready datasets
  • Develop robust ETL/ELT processes using SSIS or equivalent tools
  • Optimize data workflows for performance, reliability, and scalability
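
As a rough, tool-agnostic illustration of the batch transformation work described above, the sketch below turns raw records into a curated dataset in plain Python. The field names and cleaning rules (drop missing readings, deduplicate on a natural key, cast types) are hypothetical examples, not part of this posting.

```python
import csv
import io

# Hypothetical raw meter readings: untyped strings with gaps and duplicates.
RAW = """meter_id,reading_kwh,read_date
M-001,120.5,2024-01-01
M-001,120.5,2024-01-01
M-002,,2024-01-01
M-003,98.0,2024-01-02
"""

def extract(text):
    """Extract: parse the raw CSV into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows missing a reading, deduplicate, cast types."""
    seen = set()
    curated = []
    for row in rows:
        key = (row["meter_id"], row["read_date"])
        if not row["reading_kwh"] or key in seen:
            continue  # skip incomplete or duplicate records
        seen.add(key)
        curated.append({
            "meter_id": row["meter_id"],
            "reading_kwh": float(row["reading_kwh"]),
            "read_date": row["read_date"],
        })
    return curated

curated = transform(extract(RAW))
```

In practice the same extract/transform/load shape is expressed in SSIS packages or Spark jobs rather than hand-rolled Python, but the structure carries over.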

Data Architecture & Modeling

  • Develop and maintain data models, including star schema and dimensional models
  • Work with relational, NoSQL, and data lake environments
  • Support enterprise data architecture modernization initiatives toward Databricks Lakehouse
  • Ensure consistency, accuracy, and integrity of enterprise data assets
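
To make the star-schema modeling mentioned above concrete, here is a minimal sketch using SQLite: one fact table keyed to two dimension tables, plus the typical analytic join across them. All table and column names are illustrative assumptions, not taken from this role.

```python
import sqlite3

# Minimal star schema: one fact table referencing two dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,
    full_date TEXT,
    month     INTEGER
);
CREATE TABLE dim_meter (
    meter_key INTEGER PRIMARY KEY,
    meter_id  TEXT,
    region    TEXT
);
CREATE TABLE fact_usage (
    date_key  INTEGER REFERENCES dim_date(date_key),
    meter_key INTEGER REFERENCES dim_meter(meter_key),
    kwh       REAL
);
INSERT INTO dim_date  VALUES (20240101, '2024-01-01', 1);
INSERT INTO dim_meter VALUES (1, 'M-001', 'North');
INSERT INTO fact_usage VALUES (20240101, 1, 120.5);
""")

# A typical analytic query joins the fact table out to its dimensions.
row = conn.execute("""
    SELECT m.region, d.month, SUM(f.kwh)
    FROM fact_usage f
    JOIN dim_meter m ON f.meter_key = m.meter_key
    JOIN dim_date  d ON f.date_key  = d.date_key
    GROUP BY m.region, d.month
""").fetchone()
```

The same dimensional pattern underlies the Power BI and lakehouse reporting layers this role supports.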

Database & Analytics Support

  • Work extensively with SQL Server and complex SQL queries
  • Support analytics and reporting solutions using Power BI
  • Partner with business and analytics teams to deliver data solutions that meet reporting needs
  • Ensure high data quality through validation, testing, and monitoring
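
A small sketch of the kind of validation checks implied by the last bullet: non-null, uniqueness, and range rules run before a dataset is published. The specific rules and field names are generic examples, not a specification from this posting.

```python
# Illustrative data-quality checks run against a curated dataset.
def validate(rows):
    failures = []
    # Uniqueness: no duplicate (meter_id, read_date) natural keys.
    keys = [(r["meter_id"], r["read_date"]) for r in rows]
    if len(keys) != len(set(keys)):
        failures.append("duplicate_natural_key")
    for r in rows:
        # Completeness: every row must carry a reading.
        if r["reading_kwh"] is None:
            failures.append("null_reading")
        # Plausibility: readings fall in an assumed sane range.
        elif not (0 <= r["reading_kwh"] <= 100_000):
            failures.append("reading_out_of_range")
    return failures

good = [{"meter_id": "M-001", "read_date": "2024-01-01", "reading_kwh": 120.5}]
bad = good + [{"meter_id": "M-001", "read_date": "2024-01-01", "reading_kwh": -5.0}]
```

In production these checks would typically live in the pipeline itself (e.g. as Databricks expectations or SSIS data-flow constraints) rather than a standalone function.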

Cloud & Modern Data Platforms

  • Support migration and development efforts on Databricks / Lakehouse architecture
  • Work with distributed data processing frameworks such as Apache Spark
  • Collaborate on modernization of legacy ETL and data processing systems
  • Integrate cloud and on-premises data systems in hybrid environments

Data Engineering Operations

  • Monitor, troubleshoot, and optimize production data pipelines
  • Implement orchestration workflows using tools such as Airflow or equivalents
  • Ensure reliability and uptime of critical data systems
  • Participate in on-call or production support activities as needed
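
The orchestration bullet above is, at its core, about running tasks in dependency order. As a toy sketch of that idea (the mechanism behind tools like Airflow, not Airflow itself), the snippet below uses the standard-library `graphlib` to order a hypothetical extract → transform → validate → load chain:

```python
from graphlib import TopologicalSorter

ran = []

# Hypothetical task graph: each task maps to the set of tasks it depends on.
tasks = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

def run(name):
    ran.append(name)  # a real orchestrator would execute the task here

# static_order() yields tasks so that every dependency runs first.
for task in TopologicalSorter(tasks).static_order():
    run(task)
```

Airflow DAGs express the same graph declaratively and add scheduling, retries, and monitoring on top.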

Automation & AI-Assisted Engineering

  • Utilize AI tools and copilots to enhance productivity in:
      ◦ SQL development
      ◦ Pipeline creation
      ◦ Testing and validation
      ◦ Documentation generation
  • Explore automation opportunities using AI agents and modern engineering tools
  • Support continuous improvement of data engineering workflows

Collaboration & Stakeholder Engagement

  • Work closely with engineers, analysts, and business stakeholders
  • Translate business requirements into technical data solutions
  • Communicate technical concepts clearly to both technical and non-technical audiences
  • Operate effectively in a regulated, utility-focused enterprise environment

Required Education and Experience

  • Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or related field (preferred)
  • 5+ years of experience in data engineering or software/data development roles
  • Experience working in enterprise environments with hybrid (on-prem + cloud) systems

Required Skills and Abilities

  • Strong proficiency in SQL and data modeling (dimensional/star schema)
  • Hands-on experience with SQL Server
  • Experience building ETL pipelines using SSIS or similar tools
  • Strong understanding of data warehousing and analytics architecture
  • Experience supporting enterprise reporting solutions (Power BI preferred)
  • Ability to troubleshoot and optimize complex data systems
  • Strong analytical and problem-solving skills
  • Excellent communication skills and ability to collaborate across teams
  • Ability to work in regulated, production-critical environments

Technical Skills & Competencies

  • SQL (Advanced)
  • SQL Server (Advanced)
  • SSIS / ETL Tools
  • Databricks / Spark / Lakehouse Architecture
  • Power BI (Reporting & Analytics)
  • Data Modeling (Star Schema / Dimensional Modeling)
  • Python (preferred)
  • CI/CD and Git-based workflows
  • Orchestration tools (Airflow or equivalent)
  • Data lakes, relational databases, and NoSQL systems

Preferred Qualifications

  • Experience with cloud data platforms (Azure, Databricks, or similar)
  • Experience in utility, energy, or operations-based data environments
  • Exposure to AI-assisted development tools and automation workflows
  • Experience modernizing legacy ETL systems into cloud-native architectures

Work Environment

  • Hybrid or on-site work at the Atlanta, GA location
  • Collaborative enterprise environment supporting engineering and operations teams
  • Work within a regulated, mission-critical data ecosystem
  • Occasional production support responsibilities may be required

Job Title: Data Engineer 4 (Enterprise / Lakehouse / AI-Enabled Data Engineering)

Location: 241 Ralph McGill Blvd, Atlanta, GA 30308, United States
Pay Rate: $65.00 – $68.08 per hour
Job Type: Contract (On-Site / Hybrid)
Company: Bloc Resources



Compensation & Benefits

  • Hourly pay rate of $65.00 – $68.08 per hour
  • Competitive compensation for senior-level data engineering expertise
  • Access to Bloc Resources contractor support services, including onboarding and recruiter assistance
  • Opportunity to work on enterprise-scale modernization and lakehouse transformation initiatives
  • Exposure to cutting-edge data engineering and AI-assisted workflows
  • Potential for contract extension or long-term placement based on performance

Additional Screening Questions (for Candidates)

  • Describe your experience with data engineering tools and technologies (Azure, Databricks, SQL, Python, and on-prem ETL tools such as SSIS).
  • Provide an example of a data pipeline or project you’ve built that demonstrates your technical capabilities.
  • Are you currently authorized to work in the United States with permanent work authorization? (Student visas are not considered permanent status.)

About Bloc Resources

Bloc Resources is a trusted workforce solutions partner specializing in engineering, utilities, IT, and technical staffing. We connect top talent with leading organizations while providing career support, project opportunities, and long-term growth potential.
