Job Title: Lead Confluent Kafka Engineer/Architect
Location: Remote (Pakistan) / Hybrid (Riyadh, KSA)
Employment Type: Full-Time / Contract-to-Hire
Experience Level: 10+ years (Senior/Lead)

Role Summary
As a Lead Confluent Kafka Engineer, you'll architect, design, install, implement, and optimize high-throughput data streaming solutions using Confluent Platform and Apache Kafka. You'll lead a team of engineers in delivering production-grade pipelines, ensuring scalability, reliability, and security. The role involves hands-on development, mentoring, and collaboration with data architects, DevOps, and stakeholders to implement event-driven architectures. You'll champion best practices in real-time data processing, from proofs-of-concept to enterprise deployments, including full lifecycle management from installation to optimization.
Key Responsibilities
- Architecture & Design: Lead the design of scalable Kafka clusters and Confluent-based ecosystems (e.g., Kafka Streams, ksqlDB, Schema Registry, Connect) for on-prem, hybrid, and multi-cloud (e.g., GCP) environments.
- Implementation & Development: Build and maintain real-time data pipelines, integrations, and microservices using Kafka producers/consumers; integrate with tools like Flink, Spark, or ML frameworks for advanced analytics.
- Installation & Setup: Oversee the end-to-end installation and initial configuration of Confluent Platform and Apache Kafka clusters, including:
  - Deploying Confluent Enterprise/Community editions on Kubernetes (via Helm/Operator), bare-metal servers, or managed cloud services (e.g., Confluent Cloud, GCP).
  - Configuring brokers, ZooKeeper/KRaft mode, topics, partitions, replication factors, and security settings (e.g., SSL/TLS, SASL, ACLs) using Ansible, Terraform, or the Confluent CLI.
  - Setting up auxiliary components such as Schema Registry, Kafka Connect clusters, and monitoring agents (e.g., JMX exporters), with automated scripts for reproducible environments.
  - Performing initial health checks, load testing (e.g., with Kafka's built-in performance tools), and integration with existing infrastructure (e.g., VPC peering, load balancers).
- Operations & Maintenance: Oversee monitoring, troubleshooting, performance tuning, and lifecycle management (upgrades, patching) of Kafka/Confluent instances; implement DevSecOps practices for CI/CD pipelines.
- Team Leadership: Mentor junior engineers, conduct code reviews, and drive technical proofs-of-concept (POCs); gather requirements and define standards for Kafka as a managed service (e.g., access controls, documentation).
- Optimization & Innovation: Ensure high availability (>99.99%), fault tolerance, and cost efficiency; explore emerging features like Kafka Tiered Storage or Confluent Cloud integrations for AI workloads.
- Collaboration & Delivery: Partner with cross-functional teams (data engineers, architects, product owners) to align streaming solutions with business goals; provide thought leadership on event-driven patterns.
- Security & Compliance: Implement RBAC, encryption, and auditing; conduct root-cause analysis for incidents and ensure GDPR/HIPAA compliance in data flows.
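
To make the Installation & Setup expectations concrete, the broker-level security and durability settings this role covers might be captured in a `server.properties` fragment along these lines (a minimal sketch assuming a KRaft-mode cluster; all hostnames, ports, paths, and filenames are hypothetical placeholders):

```properties
# Hypothetical KRaft-mode broker: TLS + SASL/SCRAM with production durability defaults
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@kafka-1.internal:9093
controller.listener.names=CONTROLLER
listeners=SASL_SSL://kafka-1.internal:9092,CONTROLLER://kafka-1.internal:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL,CONTROLLER:SASL_SSL
inter.broker.listener.name=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
# ACL enforcement (KRaft-mode authorizer)
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
# Durability defaults for newly created topics
default.replication.factor=3
min.insync.replicas=2
```

In practice, fragments like this would be templated through Ansible or Terraform rather than edited by hand, and the equivalent settings can be expressed as Helm values when deploying via the Confluent operator on Kubernetes.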
Required Qualifications & Skills
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field; certifications such as Confluent Certified Developer/Administrator are a plus.
- 10+ years in software engineering; 5+ years hands-on with Apache Kafka and Confluent Platform (Cloud/Enterprise editions).
- Proficiency in Java (8/11+), Scala, or Python; Kafka Streams/Connect/ksqlDB; Schema Registry; REST/gRPC APIs.
- Event-driven/microservices design; data pipeline optimization; handling high-volume streams (TB/day scale).
- Expertise in containerization (Docker/Kubernetes); CI/CD (Jenkins/GitHub Actions); Terraform/Ansible for IaC.
- Multi-cloud experience (AWS, GCP, Azure); monitoring tools (Prometheus, Grafana, Confluent Control Center).
- Experience with streaming integrations (e.g., Flink, Spark Streaming for CDC).
- Contributions to open-source Kafka projects or publications on streaming architectures.
- Knowledge of AI/ML data pipelines (e.g., Kafka + TensorFlow/PyTorch).
- Familiarity with observability tools and security (OAuth, Kerberos).
- Strong problem-solving, communication, and leadership; experience leading POCs and cross-team projects.
- Agile/Scrum leadership in fast-paced environments.
- Experience in client-facing roles and leading teams.