Qureos


Data Engineer

Location: Charlotte, NC

Hybrid Schedule: 3 days onsite, 2 days remote

Hourly Pay Rate: $70/hour

W-2 with Brooksource. We are not able to provide sponsorship at this time.


**Not open to relocation at this time; we will be prioritizing local candidates**



Role Overview

We are seeking a Senior Data Engineer to design, build, and maintain scalable data engineering solutions that power analytics, reporting, and data products across the organization. This role works closely with product managers, data analysts, BI developers, and other engineers to deliver reliable, high‑quality data pipelines across both batch and real‑time streaming architectures.

The ideal candidate has deep experience building cloud‑native data solutions on AWS, strong fundamentals in distributed systems, and hands‑on experience with stream processing frameworks such as Apache Flink, working in Python and Java.


Key Responsibilities

Data Engineering & Platform Development

  • Design, develop, and maintain scalable data pipelines supporting batch and real‑time streaming workloads
  • Build and optimize data processing jobs using AWS Glue, PySpark, Python, and Java
  • Develop and maintain stream processing applications using Apache Flink
  • Implement reliable data ingestion solutions using AWS DMS, Kafka, and AWS Lambda
  • Design and manage data persistence layers using Amazon Aurora (PostgreSQL) and related AWS services
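At its core, much of the pipeline work above is transform-and-enrich: joining raw records against reference data before loading them downstream. As a rough, framework‑agnostic sketch in plain Python (the record fields and lookup table are illustrative assumptions, standing in for what a Glue/PySpark job would do at scale):

```python
def enrich_records(records, reference):
    """Join raw records against a reference lookup table,
    dropping records whose key has no match (inner-join semantics)."""
    enriched = []
    for rec in records:
        ref = reference.get(rec["customer_id"])
        if ref is None:
            continue  # no match: drop the record, as an inner join would
        enriched.append({**rec, "region": ref["region"]})
    return enriched

raw = [
    {"customer_id": 1, "amount": 40.0},
    {"customer_id": 2, "amount": 15.5},
    {"customer_id": 9, "amount": 3.0},   # unknown customer: dropped
]
lookup = {1: {"region": "US-East"}, 2: {"region": "EU-West"}}
print(enrich_records(raw, lookup))
```

In a real Glue job the same shape would typically be expressed as a PySpark join rather than a Python loop, so the work distributes across executors.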

Streaming & Batch Processing

  • Design and support real‑time and near‑real‑time streaming pipelines using Kafka and Apache Flink
  • Build and maintain stateful stream processing jobs, including windowing, aggregation, and event‑time processing
  • Develop efficient batch processing workflows for large‑scale data transformation and enrichment
  • Ensure data consistency, latency, fault tolerance, and reliability across streaming and batch systems
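The windowing, aggregation, and event‑time concepts named above are framework‑independent. A minimal sketch in plain Python of a tumbling‑window count keyed by event time (the 60‑second window and event names are illustrative assumptions; Flink's windowed operators add state backends, watermarks, and fault tolerance on top of this idea):

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative tumbling-window size

def tumbling_window_counts(events):
    """Aggregate (event_time, key) pairs into per-key counts per
    60-second tumbling window. Assigning windows by event time rather
    than arrival order is the essence of event-time processing."""
    counts = defaultdict(int)
    for event_time, key in events:
        window_start = (event_time // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, key)] += 1
    return dict(counts)

# Out-of-order arrivals still land in the correct window:
events = [(5, "clicks"), (130, "clicks"), (59, "clicks"), (61, "views")]
print(tumbling_window_counts(events))
# {(0, 'clicks'): 2, (120, 'clicks'): 1, (60, 'views'): 1}
```

A production Flink job would additionally use watermarks to decide when a window can be closed despite late events.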

Data Quality, Reliability & Performance

  • Implement monitoring, logging, and alerting for batch and streaming data pipelines
  • Diagnose and resolve data quality, performance, and scalability issues
  • Apply best practices for schema evolution, checkpointing, fault tolerance, and back‑pressure handling
  • Optimize pipelines for performance and cost efficiency
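Checkpointing, mentioned in the best‑practices bullet above, reduces to periodically persisting consumer progress so a restarted job resumes where it left off instead of reprocessing everything. A toy sketch in plain Python (the file path and batch size are illustrative; real systems such as Flink or Kafka consumer groups persist offsets atomically alongside state):

```python
import json
import os
import tempfile

def process_stream(events, checkpoint_path, checkpoint_every=2):
    """Process events in order, persisting the last-completed offset
    every `checkpoint_every` records; on restart, resume after the
    checkpointed offset instead of replaying from the beginning."""
    start = -1
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            start = json.load(f)["offset"]
    processed = []
    for offset, event in enumerate(events):
        if offset <= start:
            continue  # already covered by a previous run
        processed.append(event.upper())  # stand-in for real processing
        if (offset + 1) % checkpoint_every == 0:
            with open(checkpoint_path, "w") as f:
                json.dump({"offset": offset}, f)
    return processed

# Simulated restart: the second run resumes after the checkpointed offset.
ckpt = os.path.join(tempfile.mkdtemp(), "offsets.json")
print(process_stream(["a", "b", "c"], ckpt))  # first run processes all three
print(process_stream(["a", "b", "c"], ckpt))  # "restart" reprocesses only "c"
```

Note that checkpointing interacts with delivery semantics: a crash between processing and checkpointing causes reprocessing, which is why downstream writes should be idempotent.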

Analytics & Consumption

  • Enable downstream analytics and reporting through well‑modeled, well‑documented datasets
  • Partner with analytics and BI teams that consume data through tools such as Qlik
  • Support data consumers by improving data discoverability, usability, and trust

Collaboration & Engineering Excellence

  • Collaborate with product, analytics, and platform teams to translate requirements into technical solutions
  • Participate in architecture discussions, design reviews, and code reviews
  • Contribute to data engineering standards, reusable frameworks, and documentation
  • Mentor junior engineers and promote best practices across the team


Core Technologies

This role will work extensively with the following technologies:

  • AWS Glue
  • PySpark
  • Python
  • Java
  • Apache Flink (Stream Processing)
  • Kafka
  • AWS DMS
  • AWS Lambda
  • Amazon Aurora (PostgreSQL)
  • Streaming and Batch Data Processing
  • Qlik (analytics / BI consumption)


Preferred / Nice‑to‑Have Experience

  • Amazon Redshift
  • Data warehousing concepts (dimensional modeling, star/snowflake schemas)
  • Experience building or supporting enterprise data warehouses or data lakes
  • Familiarity with event‑driven architectures and real‑time analytics use cases
  • Experience with data governance, metadata management, or lineage tools


Required Qualifications

  • 5+ years of experience in data engineering or backend engineering
  • Strong hands‑on experience with Python and Java in distributed systems
  • Experience building streaming data applications using Apache Flink and Kafka
  • Solid experience with AWS data services
  • Strong SQL skills and understanding of relational data modeling
  • Experience working with large‑scale, distributed data systems


What Success Looks Like

  • Streaming and batch pipelines are reliable, scalable, and observable
  • Real‑time data is processed with low latency and high correctness
  • Data is trusted and easily consumable by analytics and downstream systems
  • Systems are designed with fault tolerance, performance, and maintainability in mind
  • Engineering best practices are consistently applied and shared
