India
Additional Comments:
Responsibilities:
• Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
• Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
• Demonstrate a strong understanding of the Kafka ecosystem, including producers, consumers, brokers, topics, and the schema registry.
• Design and implement scalable ETL/ELT workflows to process large volumes of data efficiently.
• Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
• Implement robust monitoring, testing, and observability practices to ensure data platform reliability and performance.
• Uphold data security, governance, and compliance standards across all data operations.

Requirements:
• Minimum of 5 years of experience in data engineering or related roles.
• Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
• Proficiency in Python, SQL, and Java (Java strongly preferred); candidates must be flexible enough to write code in either Python or Java.
• Experience with infrastructure-as-code tools (e.g., CloudFormation) and CI/CD pipelines.
• Excellent problem-solving skills and strong communication and collaboration abilities.
Skills: AWS, Kafka, Python