*Subject:* Very Urgent Requirement: BIG DATA Hadoop Administrator


*The candidate will work on an AT&T project, but the vendor will conduct the
interviews.*

Only GC or USC candidates (Green Card holders or U.S. citizens).

Interviews: phone and Skype.



Location: Plano, TX

Duration: 12+ months



Professional System Engineer - BIG DATA Hadoop Administrator

*Location: Plano, Texas*

Overall Purpose: Responsible for translating the core architecture for
business requirements into the final technical solution (consisting of
platform, network, software, etc.) through functional, performance, and
reliability analysis using engineering models and techniques, primarily
through software development across the vertical stack. This is a
hands-on role that ultimately results in the delivery of an application or
service.

Key Roles and Responsibilities: Designs, develops, documents, and analyzes
technology systems, maximizing reuse of target-state platforms such as API,
data fabric, or data router platforms. Engineers integrated hardware and
software solutions that meet performance, usability, scalability,
reliability, and security needs. Coordinates design, specification, and
integration of total systems and subsystems. Assesses (proof of concept)
and recommends solutions (algorithms and products) to improve the current
systems.

Job Contribution: Intermediate-level technical professional with
subject-matter technical knowledge within a discipline and a sound
understanding of telecommunication technologies.

*Required Qualifications*

·         *Bachelor's degree in Computer Science, Computer Engineering, or a
related technical field, plus 3-5 years of related technical architect experience.*

·         *Strong Linux and Hadoop background/experience*

·         *Experience monitoring & tuning HDFS for optimal performance and
uptime*

·         *Strong troubleshooting skills (Hadoop/HDFS/Linux)*

·         *Ability to tune Hadoop itself, resolving HDFS performance and
availability issues*

·         *Experience with HDFS access control methodologies for multiple
users in a large multi-tenant environment.*

·         *Experience designing a large Hadoop cluster for optimal
performance: hardware configuration and tuning, disk layouts, and physical
network architecture*

·         *In-depth knowledge of MapReduce & YARN (Hadoop 2.x)*

·         *Strong knowledge of and experience in tuning a Hadoop cluster for
multi-tenancy and for optimal resource utilization, performance, and high
availability*

·         *Experience troubleshooting Hadoop/YARN/MapReduce*

·         *Experience tuning and troubleshooting MapReduce and general YARN
jobs for optimal performance, both on the command line and via the provided
UIs.*

·         *Knowledge and experience providing High Availability NameNodes
using Quorum Journal Managers and ZooKeeper Failover Controllers. Extensive
knowledge of the configuration parameters available in mapred-site.xml and
yarn-site.xml, and of the locations of the various log files. Use of the
JobHistoryServer (and the newer Application Timeline Server) UI & API to
monitor cluster usage and overall job health, forensics, etc. (a
configuration sketch follows this list)*

·         *Deep knowledge of and experience with Hive, HiveServer2, and the
Hive Metastore, including tuning and optimization. Knowledge of Hive SQL
syntax and usage (selects, inserts, joins). Bonus points for experience with
Tez and with newer file formats such as ORC (Optimized Row Columnar): why
they may be superior to Text, Sequence, and RC formats, and what some of the
trade-offs are (see the ORC sketch after this list)*

·         *Strong experience using Ambari to administer large Hadoop
clusters (hundreds to thousands of physical nodes), including the Ambari
REST API for automating common tasks and monitoring overall cluster health
(see the REST sketch after this list).*

·         *Strong Oozie knowledge and experience developing and deploying
Oozie workflows, including coordinator flows and Oozie actions such as the
Hive action (see the workflow sketch after this list)*

·         *Experience deploying and maintaining a ZooKeeper ensemble, both
within and outside of Hadoop*

·         *Experience using Sqoop to transfer data between HDFS and an
RDBMS, in both directions (see the Sqoop sketch after this list)*

·         *Experience tuning & maintaining HBase*
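
A minimal sketch of the automatic NameNode HA setup referenced above,
assuming a nameservice named "mycluster", NameNodes nn1/nn2, and
JournalNodes jn1-jn3 (all names are illustrative placeholders, not from this
posting):

    # Illustrative hdfs-site.xml fragment for automatic NameNode HA with
    # Quorum Journal Managers and ZooKeeper Failover Controllers.
    cat > hdfs-site-ha-fragment.xml <<'EOF'
    <property><name>dfs.nameservices</name><value>mycluster</value></property>
    <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value></property>
    <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
    EOF

    # Initialize failover-controller state in ZooKeeper, then verify:
    hdfs zkfc -formatZK
    hdfs haadmin -getServiceState nn1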
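
A sketch of the ORC-versus-Text trade-off mentioned above: ORC's columnar
layout and built-in indexes allow column pruning and predicate pushdown on
reads, at the cost of heavier writes. The table and column names here are
hypothetical:

    # Convert a hypothetical text-format table to ORC and query it.
    hive -e "
      CREATE TABLE orders_orc STORED AS ORC AS SELECT * FROM orders_text;
      SELECT customer_id, SUM(total) FROM orders_orc GROUP BY customer_id;
    "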
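
A sketch of the kind of Ambari REST API automation referred to above,
assuming an Ambari server at ambari.example.com, a cluster named c1, and
admin/admin credentials (all placeholders):

    # List any CRITICAL alerts across the cluster.
    curl -u admin:admin -H 'X-Requested-By: ambari' \
      'http://ambari.example.com:8080/api/v1/clusters/c1/alerts?Alert/state=CRITICAL'

    # Start the HDFS service; Ambari answers asynchronously with a request id.
    curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
      -d '{"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
      'http://ambari.example.com:8080/api/v1/clusters/c1/services/HDFS'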
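
A minimal Oozie workflow sketch with a single Hive action; the workflow
name, script, and server URL are placeholders:

    # workflow.xml: run one Hive script, then succeed or fail.
    cat > workflow.xml <<'EOF'
    <workflow-app name="etl-wf" xmlns="uri:oozie:workflow:0.5">
      <start to="hive-node"/>
      <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.5">
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <script>etl.hql</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail"><message>Hive action failed</message></kill>
      <end name="end"/>
    </workflow-app>
    EOF

    # Submit and run against the Oozie server:
    oozie job -oozie http://oozie.example.com:11000/oozie -config job.properties -run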
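
And a Sqoop sketch covering both directions; the JDBC URL, table names, and
HDFS paths are placeholders:

    # RDBMS -> HDFS: import the "orders" table with four parallel mappers.
    sqoop import --connect jdbc:mysql://db.example.com/sales \
      --username etl -P --table orders \
      --target-dir /data/raw/orders --num-mappers 4

    # HDFS -> RDBMS: export aggregated results back out.
    sqoop export --connect jdbc:mysql://db.example.com/sales \
      --username etl -P --table orders_agg \
      --export-dir /data/out/orders_agg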

*Desired Qualifications*

·         Experience managing access to the data with XASecure/Ranger for
centralized Hadoop cluster authorization

·         Experience deploying, administering, and tuning MySQL and
Postgres databases, including developing HA and sound backup strategies,
plans, and implementations.

·         In-depth ability and experience using Linux and sysadmin tools,
with a strong ability to create and debug your own scripts.

·         Experience using deployment/configuration automation tools such
as Ansible (see the playbook sketch after this list)

·         Experience with SOLR for large-scale, low-latency text search

·         Experience with Kafka for real-time data ingestion

·         Experience with Storm for real-time stream processing

·         Experience tuning and troubleshooting Spark

·         Experience with Elasticsearch, Logstash, and Kibana for log
management

·         Experience with Graphite for graphing and dashboards

·         Java and Python development experience
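
A minimal sketch of the Ansible-style configuration automation mentioned
above, assuming an inventory group hadoop_workers and a locally staged
yarn-site.xml (both assumptions):

    # push-yarn-config.yml: copy a tuned yarn-site.xml to every worker node.
    cat > push-yarn-config.yml <<'EOF'
    ---
    - hosts: hadoop_workers
      become: yes
      tasks:
        - name: Deploy tuned yarn-site.xml
          copy:
            src: files/yarn-site.xml
            dest: /etc/hadoop/conf/yarn-site.xml
            owner: hdfs
            group: hadoop
            mode: '0644'
    EOF

    ansible-playbook -i inventory push-yarn-config.yml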

-- 

*Nick G.* | *Technical Recruiter* | *Apetan Consulting LLC*

*Tel: 201-620-9700 ext. 141 | 15 Union Avenue, Office #6, Rutherford,
New Jersey 07070*

*Mail: n...@apetan.com | www.apetan.com*

https://www.linkedin.com/in/nikhil-gupta-a4637391
