Qureos

Data Streaming Engineer

JOB_REQUIREMENTS

Hires in

Not specified

Employment Type

Not specified

Company Location

Not specified

Salary

Not specified

About Company :

A global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. The company is a responsible and diverse group of 340,000 team members in more than 50 countries. With a strong heritage of over 55 years, it is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem. The Group reported 2024 global revenues of €22.1 billion.

Job Description:

As a Data Streaming Engineer, you will build and maintain robust, scalable Kubernetes-based event streaming platforms. You will play a pivotal role in ensuring the reliability and efficiency of our production and development environments, enabling smooth deployments and accurate event reporting.

Key Responsibilities:

Kubernetes Infrastructure Management:

  • Design, provision, and maintain Kubernetes clusters across various environments (cloud, on-premises, hybrid)
  • Monitor cluster performance and optimize resource utilization
  • Troubleshoot issues and implement solutions to ensure high availability and stability

Splunk Operations & Engineering:

  • Manage Splunk infrastructure, including indexers and search heads
  • Troubleshoot ingestion issues, latency, and indexing delays across distributed environments
  • Develop and maintain SPL queries, saved searches, alerts, and dashboards
  • Perform capacity planning, performance tuning, and upgrade planning for Splunk components
  • Implement data onboarding strategies and field extractions for new log sources

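To illustrate the kind of SPL work referenced above, a minimal saved-search sketch might look like the following (the index, sourcetype, field names, and threshold are hypothetical, not taken from the posting):

```
index=app_logs sourcetype=access_combined status>=500
| stats count AS error_count BY host
| where error_count > 10
| sort - error_count
```

A search like this, scheduled as an alert, would flag hosts producing an unusual volume of server errors; the same pattern underpins many dashboards and alerts in a Splunk deployment.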
Cribl Pipeline Management:

  • Cribl experience is advantageous but not required
  • Design, build, and optimize Cribl pipelines for log routing, transformation, filtering, and enrichment
  • Integrate Cribl with various data sources and destinations
  • Automate pipeline deployments and configuration using CI/CD and GitOps practices
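As a sketch of the GitOps-style automation mentioned above, a GitLab CI pipeline could validate committed Cribl configuration and deploy it only from the main branch (stage names, images, paths, and the deploy endpoint are all assumptions for illustration, not details from the posting):

```yaml
# .gitlab-ci.yml -- hypothetical pipeline that validates and deploys
# versioned Cribl pipeline configuration
stages:
  - validate
  - deploy

validate-config:
  stage: validate
  image: python:3.12-slim
  script:
    # Check that every committed pipeline definition is well-formed YAML
    - pip install yamllint
    - yamllint pipelines/

deploy-config:
  stage: deploy
  image: curlimages/curl:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # GitOps: deploy only from main
  script:
    # Push the committed configuration to the Cribl leader
    # (endpoint path and token variable are placeholders)
    - curl -fsS -X POST "$CRIBL_URL/api/v1/version/push" -H "Authorization: Bearer $CRIBL_TOKEN"
```

Keeping the deploy step gated on the main branch means the Git history, not an operator's workstation, is the source of truth for what runs in production.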

Production Support:

  • Participate in incident response and resolution, ensuring minimal downtime and disruption to services
  • Continuously analyze and improve the performance, reliability, and security of production environments

Requirements

Qualifications:

  • Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field
  • Minimum of 5 years of professional experience
  • Outstanding English communication, both verbal and written
  • Strong Kubernetes Expertise: Deep understanding of Kubernetes architecture, concepts, and best practices. Hands-on experience with deploying and managing Kubernetes clusters at scale
  • Splunk or ELK Familiarity: Experience working with log data platforms such as Splunk and ELK, including organizing, interpreting, and preparing data for analysis. Familiarity with tools and processes used to manage and streamline data flows across systems
  • GitLab Familiarity: Understanding of GitLab CI/CD features and capabilities. Ability to write efficient and maintainable pipeline scripts
  • Cloud Infrastructure Knowledge: Experience with cloud platforms (AWS, Azure, GCP) and cloud-native technologies
  • Troubleshooting and Problem-Solving: Ability to analyze complex problems, diagnose root causes, and implement effective solutions
  • Collaboration and Communication: Strong communication and interpersonal skills, able to collaborate effectively with cross-functional teams

The following details apply to the positions:

Work Model:

  • All positions will follow a hybrid work model: 3 days from the office (CFC) and 2 days from home
  • Work schedule is Monday to Friday

You must be willing to travel twice a year for short periods

Other Key Points:

  • Fluency in English: 90% of roles involve global interviews, so you must be able to communicate fluently in English with a clear, understandable accent
  • This position is open to Egyptian nationals only; foreign applicants will not be considered

© 2025 Qureos. All rights reserved.