Sr. Hadoop and Kubernetes Administrator - REMOTE

  • Infinity Consulting Solutions
  • Remote (Illinois, USA)
  • Jan 15, 2022
Telecommuting

Job Description

The team is seeking a highly motivated Hadoop and Kubernetes (k8s) Administrator with big data infrastructure administration experience. The incumbent will report to the Director of Big Data (DBD) and will support both the Hadoop cluster and the Kubernetes cluster. The incumbent will be part of the Big Data DevOps group, focusing on the day-to-day tasks of managing and maintaining on-prem environments, and will be hands-on with the CI/CD process, monitoring application servers, and deploying new applications. The candidate must be comfortable working in an agile environment. Hadoop and Kubernetes administration certifications are a plus.

Qualifications/Requirements

Bachelor of Science in Computer Science or a related field

4+ years' experience in the following:

  • Install MapR Hadoop clusters from the ground up following SDLC methodology, including Dev, Test, Production, and Disaster Recovery environments
  • Perform capacity planning, infrastructure planning, and version/fix selection to build Hadoop clusters
  • Perform upgrades to the MapR Hadoop cluster and provide ongoing support
  • Provide infrastructure and support that enable software developers to rapidly iterate on their products and services and deliver high-quality results, including infrastructure for automated builds and testing, continuous integration, software releases, and system deployment
  • Design and implement solutions that leverage a Kubernetes cluster: configure hardware, peripherals, and services, and manage cluster settings and storage
  • Research opportunities for automation and troubleshoot issues reported by users
  • Mentor junior team members in best practices
  • Collaborate with other members of the IT team, using tools like Git to promote the security, efficiency, and scalability of core services and capabilities
  • Design and implement a Backup and Disaster Recovery strategy for batch applications and for Kafka-based real-time streaming applications
  • Align with development and architecture teams to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
  • Support Kafka streaming to enable real-time streaming applications
  • Monitor and coordinate all data system operations, including security procedures, and liaise with the infrastructure, security, DevOps, Data Platform, and application teams
  • Ensure proper resource utilization between the different development teams and processes
  • Design and implement a toolset that simplifies provisioning and support of a large cluster environment
  • Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
  • Apply proper architecture guidelines to ensure highly available services
  • Review performance stats and query execution/explain plans; recommend changes for tuning
  • Create and maintain detailed, up-to-date technical documentation
  • Solve live performance and stability issues and prevent recurrence
  • Strong knowledge of scripting and automation tools and strategies, e.g., Shell, Python, Ansible
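As a concrete illustration of the scripting and automation work described above, here is a minimal Python sketch (not part of the posting) of a cluster health check: it parses `kubectl get nodes`-style text output and flags nodes whose status is not Ready. The function name and the sample output are assumptions for illustration only.

```python
# Illustrative sketch: flag unhealthy Kubernetes nodes from
# `kubectl get nodes` text output. The sample data is hypothetical.

def find_unhealthy_nodes(kubectl_output: str) -> list[str]:
    """Return names of nodes whose STATUS column is not 'Ready'."""
    unhealthy = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2:
            name, status = fields[0], fields[1]
            if status != "Ready":
                unhealthy.append(name)
    return unhealthy

# Hypothetical output captured from `kubectl get nodes`
sample = """\
NAME      STATUS     ROLES    AGE   VERSION
node-a    Ready      worker   90d   v1.23.4
node-b    NotReady   worker   90d   v1.23.4
node-c    Ready      master   90d   v1.23.4
"""

print(find_unhealthy_nodes(sample))  # ['node-b']
```

In practice a script like this would be wired into a scheduler or alerting pipeline (e.g., run via cron or an Ansible playbook) rather than printing to stdout.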
- provided by Dice