Job Title: Big Data Hadoop Developer
Location: Hyderabad
Work Mode: Work From Office (5 days/week)
Experience Required: 4+ years
Key Skills: Big Data, Hadoop, Spark, Hive
Job Overview
We are looking for an experienced Big Data Hadoop Developer to design, build, and optimize large-scale data processing systems. The ideal candidate should have strong expertise in the Hadoop ecosystem, hands-on experience with Spark and Hive, and the ability to develop efficient ETL pipelines.
Key Responsibilities
- Design, develop, and maintain scalable big data systems and data pipelines.
- Implement data processing frameworks using Hadoop, Spark, and Hive.
- Develop and manage ETL workflows to ensure data accuracy, consistency, and availability.
- Collaborate with data architects, analysts, and business teams to translate requirements into robust big data solutions.
- Perform data validation, quality checks, and issue resolution for data pipelines.
- Optimize data storage, query performance, and cluster utilization in the Hadoop ecosystem.
- Ensure compliance with security, governance, and data management standards.
Required Skills & Qualifications
- 4+ years of hands-on experience in Big Data technologies.
- Strong understanding of the Hadoop ecosystem (HDFS, MapReduce, YARN).
- Proficiency in Apache Spark (batch and streaming) and Hive.
- Experience in building and maintaining data pipelines and ETL processes.
- Strong knowledge of data optimization, partitioning, and performance tuning.
- Familiarity with NoSQL databases such as HBase, Cassandra, or MongoDB is an advantage.
- Experience with programming/scripting languages: Java, Scala, Python, or Shell.
- Strong analytical and problem-solving skills.
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,400,000.00 per year
Application Question(s):
- How many years of experience do you have with Big Data, Hadoop, Spark, and Hive?
- Please mention your notice period, current CTC, and expected CTC.
- How many years of experience do you have as a Big Data / Hadoop Developer?
Work Location: In person