We are seeking a Big Data Administrator to manage and support enterprise big data platforms. The role ensures platform availability, performance, security, and stability across production and non-production environments, working closely with data engineering, IAM, network, security, and infrastructure teams.
Key Responsibilities
- Deploy, configure, and administer Cloudera CDP/Hadoop clusters using Cloudera Manager.
- Operate and support Hadoop ecosystem services: HDFS, YARN, Spark, Hive/Impala, HBase, ZooKeeper.
- Monitor platform health and performance; perform capacity planning and performance tuning.
- Implement and manage security: Kerberos, Ranger policies, LDAP/AD integration, and TLS/SSL hardening.
- Troubleshoot production incidents (authentication issues, service failures, disk/log issues, stuck Spark/YARN jobs) and perform root cause analysis.
- Manage upgrades, patching, and configuration changes with minimal downtime.
- Build automation using Bash/Python for routine tasks and operational efficiency.
- Administer Kubernetes platforms (Red Hat OpenShift): basic cluster operations, namespace/project management, pod/service troubleshooting, log analysis, and support for platform applications.
- Maintain operational documentation, runbooks, and support procedures; participate in on-call support as required.
Requirements
- 3+ years of hands-on experience as a Cloudera/Hadoop Administrator in production.
- Strong Linux administration and command-line skills.
- Proven experience with Cloudera Manager and cluster operations.
- Working knowledge of Kubernetes/OpenShift administration (oc/kubectl, pods, deployments, services, logs, troubleshooting).
- Solid understanding of Kerberos, Ranger, LDAP/AD, and TLS/SSL.
- Experience with monitoring tools such as Prometheus and Grafana.
- Strong troubleshooting, communication, and collaboration skills.