Hi,
Please find the requirement below and reply ASAP with your updated resume.
----------------------------------------------------------------------------------
Full Name of Candidate:
Email Address:
Contact Details:
Current Location:
Relocation (Yes/No):
Travelling (Yes/No):
Visa Status:
Last 4 Digits of SSN:
Availability for the Role:
Availability for Calls:
Any Interviews Lined Up (Yes/No):
Expected Rate (C2C/1099/W2):
Skype ID:

*Job Title: Hadoop Administrator*
*Location: Hoffman Estates, IL*
*Duration: 2+ Years*

This role is part of the growing team at our client, a Big Data startup company and wholly owned subsidiary of a Fortune 100 Company. This position is a ground-floor opportunity to help support and administer open source *Hadoop* environments for our client's customers, including, but not limited to, infrastructure planning, scaling, and administration. The administrator will work closely with infrastructure, network, and architectural teams to ensure the *Hadoop* environments are highly available and performing within agreed-upon service levels.

*Required:*
* Experience supporting open source Linux operating systems (RHEL, CentOS, Fedora) and hardware in an enterprise environment
* Experience installing and configuring Linux-based systems
* Strong scripting expertise, including Bash, PHP, Perl, JavaScript, and UNIX shell
* Experience with Java virtual machines (JVMs) and multithreaded processing
* Expertise in typical system administration and programming skills such as storage capacity management, performance tuning, system dump analysis, and server hardening (security)
* Demonstrated ability to apply problem analysis and resolution techniques to complex system problems
* Strong network background with a good understanding of TCP/IP, firewalls, and DNS
* Hands-on experience with the *Hadoop* stack (MapReduce, Sqoop, Pig, Hive, Flume)
* Strong knowledge of NoSQL platforms
* Hands-on experience deploying and administering MySQL databases
* Fluency in reading, understanding, and building Java code
* Hands-on experience with open source monitoring tools, including Nagios and Ganglia, is a must

*Responsibilities:*
* Provide day-to-day production support of our *Hadoop* infrastructure, including:
  * Implementing new *Hadoop* hardware infrastructure, OS integration, and application installation
  * Cluster maintenance
  * HDFS support and maintenance
  * Adding and removing cluster nodes
  * Backups and restores
* Assist with troubleshooting MapReduce jobs as required
* Cluster monitoring and troubleshooting
* Manage Nagios and Ganglia monitoring
* Manage and review *Hadoop* log files
* Filesystem management and monitoring
* *Hadoop* cluster capacity planning
* Implement and maintain security as designed by the *Hadoop* architects
* Manage and review data backups
* Execute system and disaster recovery processes as required
* Provide support and troubleshooting for BI issues associated with *Hadoop*
* Work with the *Hadoop* production support team to implement new business initiatives as they relate to *Hadoop*

Best Regards,
Amar
amarosai...@gmail.com
Voice: 630-566-7324