Dear Candidate,
We have an urgent opening for a *Hadoop Admin* and have included the job
description below. Please go through it, let me know if you are comfortable
with it, and send me your consultant's updated resume ASAP.
*Job Title: Hadoop Admin*
*Location: NYC, NY*
*Start Date: Immediate*
*Duration: 4-6 Months Contract*
*Job Description:*
- Responsible for setup, administration, monitoring, tuning, optimization,
and governance of the Hadoop cluster and Hadoop components (on-premise/cloud)
to meet high availability/uptime requirements.
- Design and implement new components and various emerging technologies in
the Hadoop ecosystem, and successfully execute various Proofs of Technology
(PoT).
- Design and implement high-availability options for critical components
such as Kerberos, Ranger, Ambari, Resource Manager, and MySQL repositories.
- Collaborate with various cross-functional teams (infrastructure, network,
database, application) on activities such as deployment of new
hardware/software, environment builds, capacity uplifts, etc.
- Work with various teams to set up new Hadoop users, security, and
platform governance.
- Create and execute a capacity-planning strategy and process for the
Hadoop platform.
- Work on cluster maintenance, as well as creation and removal of nodes,
using tools like *Ganglia, Nagios, Cloudera Manager Enterprise, Ambari*,
etc.
- Performance tuning of Hadoop clusters and various Hadoop components
and routines.
- Monitor job performance, file system/disk-space management, cluster and
database connectivity, log files, and backup/security management, and
troubleshoot various user issues.
- Hadoop cluster performance monitoring and tuning, and disk-space management.
- Harden the cluster to support use cases and self-service in a 24x7 model,
and apply advanced troubleshooting techniques to critical, highly complex
customer problems.
- Contribute to the evolving Hadoop architecture of our services to meet
changing requirements for scaling, reliability, performance, manageability,
and price.
- Set up monitoring and alerts for the Hadoop cluster, including creation of
dashboards, alerts, and weekly status reports covering uptime, usage, issues,
etc.
- Design, implement, test, and document a performance-benchmarking strategy
for the platform as well as for each use case.
- Act as a liaison between the Hadoop cluster administrators and the
Hadoop application development team to identify and resolve issues
impacting application availability, scalability, performance, and data
throughput.
- Research Hadoop user issues in a timely manner and follow up directly
with the customer with recommendations and action plans.
- Work with project team members to help propagate knowledge and efficient
use of the Hadoop tool suite, and participate in technical communications
within the team to share best practices and learn about new technologies
and other ecosystem applications.
- Automate deployment and management of Hadoop services, including
implementing monitoring.
- Drive customer communication during critical events and
participate in or lead various operational improvement initiatives.
*What you need to succeed*
- Bachelor's Degree in Computer Science, Information Science,
Information Technology or Engineering/Related Field
- *5-7 years of strong Hadoop/Big Data experience.*
- Strong experience in *administration and management of large-scale
Hadoop production clusters* on bare metal and public cloud platforms
(IaaS/PaaS) such as *AWS (strongly preferred)*, Rackspace, OpenStack, GCE,
etc.
- Able to deploy a *Hadoop cluster*, add and remove nodes, keep track of
jobs, monitor critical parts of the cluster, configure high availability,
and schedule, configure, and take backups.
- Strong experience with the *Hortonworks (HDP) or Cloudera (CDH) Hadoop
distribution* and *core Hadoop ecosystem components: MapReduce and
HDFS*.
- Strong experience with *Hadoop cluster
management/administration/operations using Oozie, YARN, Ambari, ZooKeeper,
Tez, Slider*.
- Strong experience with *Hadoop security & governance using Ranger*,
Falcon, Kerberos, and security concepts/best practices.
- Strong experience with Hadoop ETL/data ingestion: *Sqoop, Flume, Hive,
Spark*.
- Experience with Hadoop data consumption and other components: *Hive,
HUE, HAWQ, MADlib, Spark, Mahout, Pig*.
- Prior working experience with *AWS: any or all of EC2, S3, EBS, ELB,
RDS*.
- Experience monitoring, troubleshooting, and tuning services and
applications, plus operational expertise such as good troubleshooting
skills and an understanding of system capacity, bottlenecks, and the basics
of memory, CPU, OS, storage, and networks.
- Experience with open-source configuration management and deployment
tools such as Puppet or Chef, and scripting using Python/Shell/Perl/Ruby/Bash.
- Good understanding of distributed computing environments
*Anupam Amita |* *Technical Recruiter* | *Apetan Consulting LLC*
Tel: 201-285-8031 x107 | Fax: 201-526-6869 | 72 Van
Reipen Avenue #255, Jersey City, NJ 07306
[email protected] | www.apetan.com
<http://www.facebook.com/Apetanconsulting>
<http://www.linkedin.com/company/apetan-consulting-llc?trk=top_nav_home>
<http://twitter.com/ApetanLLC>
*Disclaimer:* We respect your Online Privacy. This e-mail message,
including any attachments, is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any unauthorized
review, use, disclosure or distribution is prohibited. If you are not the
intended recipient, please contact the sender by reply e-mail and destroy
all copies of the original message. If you are not interested in receiving
our e-mails, please reply with "REMOVE" in the subject line to
[email protected] and mention all the e-mail addresses to be removed,
including any e-mail addresses that might be forwarding the e-mails to you.
We are sorry for the inconvenience.
--
You received this message because you are subscribed to the Google Groups
"Vendors" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/vendors.
For more options, visit https://groups.google.com/d/optout.