Job Role: Data Ops Support Engineer
Job Responsibilities:
- Implement and manage continuous integration and deployment pipelines for various applications and services.
- Proactively monitor data pipelines and system performance, and troubleshoot issues to maintain high availability and reliability of the infrastructure.
- Collaborate with development and business teams to design and implement scalable and resilient solutions.
- Automate routine tasks and processes to streamline operations and improve efficiency.
- Conduct in-depth code reviews and debug issues in data pipelines across multiple applications in production environments, performing detailed code analysis to troubleshoot complex issues effectively.
- Implement security best practices and ensure compliance with relevant regulations.
- Participate in on-call rotation and respond to incidents promptly to minimize downtime.
- Document processes and procedures to maintain an up-to-date knowledge base.
- Stay updated with emerging technologies and industry trends to drive innovation and continuous improvement.
Job Requirements:
- Minimum of one year of development experience with big data tools and at least two years of experience in a DevOps or managed services operations role.
- Knowledge and understanding of big data tools and AWS cloud services such as MWAA, S3, Athena, and Lambda is a must.
- Experience with Apache NiFi is a must.
- Proficiency in scripting languages such as Python, Bash, or PowerShell is preferred.
- Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
- Familiarity with monitoring and logging tools such as YARN and CloudWatch.
- Experience with version control systems like Git/Bitbucket.
- Excellent communication and collaboration skills.
- Ability to work independently and prioritize tasks effectively in a fast-paced environment.
- Flexibility to operate across various time zones and schedules, including weekend shifts, is required.