Hello,
Hope you are doing great!

Please find the job description below and reply with your updated resume
to *[email protected]*.

Need Only: H1/USC
If H1, a passport number is required.



*Position: DevOps & Data Cloud Lead*
*Location: San Jose, CA / RTP Durham, NC (Onsite from Day 1)*
*Duration: Long-Term Contract*


*Need Only Locals*

*Job Description:*

   - 10+ years of relevant experience.
   - You will design and lead the team in important architectural
   decisions, provide technical leadership for the team(s) you are
   associated with, and participate in key technical decisions. You will
   engage with customers on escalations, ensure continuous improvement in
   all areas, and participate in technical discussions within the team and
   with other groups in the Business Units associated with specified
   projects.
   - You design, develop, and maintain our real-time data processing and
   data lakehouse infrastructure.
   - You have experience using Python to write data pipelines and data
   processing layers.
   - You develop and maintain Ansible playbooks for infrastructure
   configuration and management.
   - You develop and maintain Kubernetes manifests, Helm charts, and other
   deployment artifacts.
   - You have hands-on experience with Docker and containerization,
   including how to manage/prune images in private registries.
   - You have hands-on experience with access control in Kubernetes
   clusters.
   - You have hands-on experience with Spark and maintaining Spark
   clusters.
   - You monitor and troubleshoot issues related to Kubernetes clusters
   and containerized applications.
   - You drive initiatives to containerize standalone apps and run them in
   Kubernetes.
   - You develop and maintain infrastructure as code (IaC) and collaborate
   with other teams to ensure consistent infrastructure management across
   the organization.
   - You use observability tools to perform capacity management of our
   services and infrastructure resources.
   - You are responsible for guiding the development and testing
   activities of other engineers that involve several interdependencies.
   - Experience with AWS ECS and EKS is an added advantage.
   - Experience with Dremio is an added advantage.
   - Experience with Dynatrace or any tracing, infrastructure, or
   real-time monitoring tool is an added advantage.

*High Level Job Responsibilities:*

   - Infrastructure knowledge of Kubernetes and Hadoop clusters, including
   capacity planning, scheduling, and their configurations.
   - Understanding and implementation of disaster recovery for distributed
   clusters that include Hadoop, Dremio, and S3.
   - Experience with Kubernetes, Helm, and Spark services, and with their
   migration, upgrade, and troubleshooting processes.
   - Understanding of Hadoop frameworks (Spark, Sqoop).
   - Need to sync up with the NB team daily to coordinate and share the
   handover.


-- 
*Warm Regards,*

*Mahesh G*
Sr. US IT Recruitment Lead
[email protected]
www.solioscorp.com
