*Hi,*

*Hope you are doing great!*

*Please find the requirement below. If you are comfortable with the
requirement, please reply to this email id, s.dak...@esskay-inc.com
<s.dak...@esskay-inc.com>, with your updated resume and details, and I
will get back to you as soon as possible.*

*Position: Hadoop – Big Data Architect*
*Location: New York City, NY*
*Duration: 12+ Months*



*(Only Locals Preferred)*
*Job Description:*
Technical Architect with a deep understanding of Big Data analytical
products and data management methods and tools for Hadoop deployments.
The candidate should have experience engineering BI/analytical products
for large-scale deployments, as well as prior design, development,
architecture, and engineering responsibilities on a Big Data platform.

*Responsibilities:*

   - Strong understanding of distributed computing, analytics concepts,
   model development, large-scale ETL (Talend, Ab Initio), and BI
   architecture
   - Strong knowledge of *Hadoop ecosystem components* (e.g., Cloudera)
   in the open-source infrastructure stack; specifically HDFS, MapReduce,
   YARN, HBase, Oozie, Hive, Tez, Kafka, Storm, and Java, C++, Perl, or
   Python
   - *Design/deploy Hadoop architectures and Hadoop analytical/BI tools*
   (with features such as high availability, scalability, process
   isolation, load balancing, workload scheduling, Big Data clusters,
   etc.)
   - Experience developing *Hadoop integrations for data ingestion, data
   mapping, and data processing capabilities*
   - Experience with *statistical analysis software such as Weka, R,
   RapidMiner, MATLAB, SAS, and SPSS*
   - Experience with *optimized computing techniques, i.e., parallel
   processing, grid computing*, etc.
   - Publish and enforce best practices, configuration recommendations,
   usage design patterns, and cookbooks for the developer community
   - Good interpersonal and excellent communication skills
   - Able to work on client projects in cross-functional teams
   - Good team player who shares knowledge and shows interest in
   learning new technologies and products


*Mandatory Skills:*

   - Strong knowledge of Hadoop ecosystem components (e.g., Cloudera) in
   the open-source infrastructure stack; specifically HDFS, MapReduce,
   YARN, HBase, Oozie, Hive, Tez, Kafka, Storm, and Java, C++, Perl, or
   Python
   - Design/deploy *Hadoop architectures and Hadoop analytical*/BI tools
   (with features such as high availability, scalability, process
   isolation, load balancing, workload scheduling, Big Data clusters,
   etc.)
   - Experience developing *Hadoop integrations for data ingestion, data
   mapping, and data processing capabilities*


*Desired Skills:*

   - More than 10 years of experience in *Java / object-oriented*
   programming
   - Minimum 3 years of hands-on experience in *Hadoop*
   - Experience building, operating, tuning, and deploying
   enterprise-scale applications

*Thank you,*

-- 
You received this message because you are subscribed to the Google Groups 
"SureIT" group.