*Hi Folks,*


*Hope you’re doing great!*



*Kindly let me know if anyone is available with the required skills.*

*Please send resumes to r...@vstconsulting.com*



Hadoop Admins and Hadoop Engineer

Location: NYC

Duration: Long term

Rate: DOE

Telephonic interview followed by face-to-face




*Requirement 1: *



The ideal candidate will be responsible for administration and support of
the Hadoop ecosystem. The candidate will ensure that systems function
within the defined SLA, outages are addressed rapidly and efficiently,
upgrades and enhancements are non-disruptive, and appropriate planning is
performed and acted upon.



*Responsibilities*

·         Manage scalable Hadoop cluster environments.

·         Administer Hadoop core components and environment: Hive, JobTracker,
TaskTracker, Pig, Sqoop, Flume, etc.

·         Perform all operational functions: monitoring clusters & jobs,
troubleshooting, user onboarding, reporting application status, and incident
& outage management.

·         Manage the backup and disaster recovery for Hadoop data.

·         Optimize and tune the Hadoop environments to meet performance
requirements.

·         Install and configure monitoring tools.

·         Work with big data developers to design scalable, supportable
infrastructure.

·         Work with the Linux server admin team in administering the server
hardware and operating system.

·         Assist with developing and maintaining the system runbooks.

·         Create and publish various production metrics including system
performance and reliability information to systems owners and management.

·         Perform ongoing capacity management forecasts including timing
and budget considerations.


·         Coordinate root cause analysis (RCA) efforts to minimize future
system issues.

·         Mentor, develop and train junior staff members as needed.

*Qualifications*

·         BS Degree in Computer Science/Engineering required.

·         7+ years of IT experience

·         3-5 years of overall experience with Linux systems.

·         1-2 years of experience in deploying and administering Hadoop
Clusters

·         Well versed in installing & managing distributions of Hadoop
(Hortonworks, Cloudera, etc.).

·         Knowledge of performance troubleshooting and tuning of Hadoop
clusters.

·         Good knowledge of Hadoop cluster connectivity and security.

·         Linux system admin knowledge: understanding of storage, filesystems,
disks, mounts, NFS, and VIPs.

·         Development experience in Hive, Pig, Sqoop, Flume, and HBase
desired.

·         Excellent customer service attitude, communication skills
(written and verbal), and interpersonal skills.

·         Experience working in cross-functional, multi-location teams.

·         Excellent analytical and problem-solving skills.

·         Ability to be flexible and adapt to any given situation.

·         Ability to work under pressure and in high stress situations with
a calm demeanor.

·         Willingness to work occasional evenings and weekends in support
of deployments and resolution of production issues.  This is an "on call"
position.

*Requirement 2:*

A large global bank is considering ways to transform and leverage
proprietary data assets into opportunities for clients internally and
externally.  Protecting and managing intellectual property effectively as
well as utilizing it to develop solutions that are both customized and
scalable will enable the bank to create additional shareholder value.



This bank is looking for individuals with experience in Big Data
technologies such as HDFS, MapReduce, Greenplum, etc., who can provide the
technical leadership needed to architect a highly scalable, cost-effective,
and high-performing platform. The vision is to create a reliable and
scalable data platform that provides standard interfaces to query and
support analytics for our big data sets, and that is as transparent, secure,
efficient, and easy to access as possible for our varied applications.



Responsibilities will include:

   - Design, engineer and build data platforms over Big Data Technologies
   - Own and establish Engineering Frameworks for Big Data
   - Establish and communicate fit for purpose analytical platforms for
   business prototypes
   - Lead innovation by exploring, investigating, recommending,
   benchmarking, and implementing data-centric technologies for the platform.
   - Be the proactive technical point person for the data platform end to
   end.
   - Lead by example, coaching and mentoring peers and less experienced
   team members.



Qualifications

   - Full-stack engineering: able to lead and run conference calls, and to
   document and execute on an architecture vision.
   - 3-5 years of professional experience with an established track record
   as an engineer.
   - Deep knowledge of component systems architecture including but not
   limited to distributed systems architecture and multiple programming
   languages supporting such architecture.
   - Knowledge of LDAP, Postgres, and Linux.
   - Knowledge of various Big Data components, vendors, and technologies,
   including Hadoop, Greenplum, Tableau, GemFire, and low-latency solutions
   (networking, disk, etc.).

Positions would be located in NYC.



Thanks & Regards,



Ravi

VST Consulting, Inc

PH:  732-491-8663| Fax: 732-404-0045

r...@vstconsulting.com <pra...@vstconsulting.com>

http://www.vstconsulting.com

--- 
You received this message because you are subscribed to the Google Groups "CBE 
Software Engineer" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to cbe-software-engineer+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.