Role: Hadoop Developer
To develop and maintain robust, scalable, and secure big data infrastructure that enables organizations to process, analyze, and derive insights from massive datasets efficiently.
Key Responsibilities:
- Design & Development: Build and implement Hadoop-based applications, data pipelines, and frameworks for large datasets.
- Data Processing: Write MapReduce jobs and Hive, Spark, and Scala code, and use tools such as Pig and NiFi for data ingestion, transformation, and analysis.
- Performance Optimization: Tune and optimize Hadoop cluster performance, applications, and queries (SQL/HiveQL).
- Cluster Management: Handle installation, configuration, monitoring, and troubleshooting of Hadoop/HBase clusters.
- Collaboration: Work with data analysts, scientists, and stakeholders to gather requirements and deliver solutions.
- Security & Compliance: Implement data security measures and ensure compliance with standards.
- Documentation: Produce detailed technical designs, process documentation, and best practices.
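The map/shuffle/reduce pattern behind the data-processing responsibilities above can be sketched in plain Python. This is an illustrative word-count example only, with no Hadoop cluster involved; the function names are this sketch's own, not Hadoop APIs:

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs from each input line.
def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

# Shuffle phase: group values by key, as Hadoop does between map and reduce.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts collected for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big insights", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"big": 2, "data": 2, "insights": 1, "pipelines": 1}
```

In a real Hadoop job the shuffle is handled by the framework between the mapper and reducer tasks; only the map and reduce logic is written by the developer.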
Key Skills & Requirements:
- Technical Expertise: Strong knowledge of Hadoop ecosystem (HDFS, YARN, MapReduce, Hive, Pig, HBase, Spark).
- Programming: Proficiency in Java, Scala, Python, and scripting (Pig Latin, HiveQL).
- Databases: Understanding of SQL, data warehousing, and database theory.
- Tools: Experience with big data tools such as Kafka, NiFi, and Elasticsearch.
- Analytical Skills: Strong problem-solving, analytical, and debugging abilities.
- Experience: Hands-on experience as a Big Data Engineer or Hadoop Developer.
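The streaming tools listed above (e.g. Kafka) follow a produce/consume pattern that can be sketched in plain Python with an in-memory queue standing in for a broker. This is a minimal illustration of the pattern only, not Kafka's actual client API:

```python
import queue
import threading

# In-memory stand-in for a Kafka topic (illustrative; no broker involved).
topic = queue.Queue()

def producer(events):
    for event in events:
        topic.put(event)
    topic.put(None)  # sentinel marking the end of the stream

def consumer(results):
    while True:
        event = topic.get()
        if event is None:
            break
        results.append(event.upper())  # trivial "transformation" step

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()
producer(["click", "view", "purchase"])
worker.join()
# results == ["CLICK", "VIEW", "PURCHASE"]
```

Real Kafka pipelines add partitioning, replication, and consumer groups on top of this basic pattern, but the producer/consumer decoupling shown here is the core idea.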
Desired Candidate Profile:
Qualifications: Bachelor of Technology (B.Tech)