Job Summary
We are looking for a Senior Data Engineer with strong experience in real-time data streaming using Apache Kafka. The ideal candidate will design and build scalable data pipelines, streaming architectures, and data platforms to support real-time analytics and data-driven applications.
Key Responsibilities
- Design, develop, and maintain real-time data pipelines using Apache Kafka
- Build scalable stream processing solutions using Kafka Streams, Spark Streaming, or Flink
- Develop and optimize ETL/ELT pipelines for large-scale data processing
- Integrate streaming pipelines with data lakes, data warehouses, and analytics platforms
- Work with cross-functional teams including Data Scientists, Product Managers, and Analysts
- Ensure data reliability, scalability, and performance optimization
- Implement data governance, monitoring, and alerting for streaming systems
- Troubleshoot production issues and optimize streaming architectures

Required Skills
- 8+ years of experience in Data Engineering
- Strong hands-on experience with Apache Kafka and Kafka Streams
- Experience with Python, Java, or Scala
- Strong knowledge of real-time streaming architectures
- Experience with Spark, Flink, or Kafka Streams
- Strong SQL and data modeling skills
- Experience working with cloud platforms (AWS, GCP, or Azure)
- Familiarity with data lake technologies (S3, Delta Lake, Snowflake, etc.)

Preferred Skills
- Experience with Databricks or Snowflake
- Experience with containerization (Docker, Kubernetes)
- Knowledge of CI/CD pipelines and DevOps practices
- Experience with monitoring tools such as Prometheus, Grafana, or Splunk

Nice to Have
- Experience with event-driven architecture
- Experience building high-throughput, low-latency data systems