Currently in New York City, NY. Available to relocate and can start within two weeks' notice. Available for a phone screen with 24 hours' notice and for a face-to-face interview with three days' notice.
Professional Summary
• Around 8 years of professional IT experience, including 5+ years with Big Data ecosystem technologies. Expertise in Big Data technologies as a consultant, with proven capability in project-based team work as well as individual development, and good communication skills.
• Hands-on experience with major components of the Hadoop ecosystem such as MapReduce, HDFS, YARN, Hive, Pig, HBase, Sqoop, Oozie, Cassandra, Impala and Flume.
• Knowledge of installing, configuring and using Hadoop ecosystem components such as Hadoop MapReduce, HDFS, HBase, Oozie, Hive, Sqoop, Pig, Spark, Kafka, Storm, ZooKeeper and Flume.
• Experience with the Hadoop 2.0 YARN architecture and with developing YARN applications on it.
• Experience with Apache Spark Core, Spark SQL and Spark Streaming.
• Experience with distributed systems, large-scale non-relational data 
stores and multi-terabyte
data warehouses.
• Strong grasp of data modeling, database performance tuning and NoSQL map-reduce systems.
• Experience in managing and reviewing Hadoop log files
• Real-time experience with Hadoop/Big Data technologies for storage, querying, processing and analysis of data.
• Worked in multi-cluster environments and set up the Cloudera Hadoop ecosystem.
• Worked with data visualization tools like Tableau.
• Hands-on experience with Agile and Scrum methodologies.
• Imported and exported data into HDFS and Hive using Sqoop.
• Experience in processing semi-structured and unstructured datasets
• Responsible for setting up processes for Hadoop-based application design and implementation.
• Experience in managing HBase databases and using them to update/modify data.
• Experience in running MapReduce and Spark jobs over YARN (a Spark-on-YARN sketch follows this list).
• Experience with Cloudera distributions (CDH3/CDH4).
• Extended Hive and Pig core functionality by writing UDFs (a sample Hive UDF sketch follows this list).
• Used the Oozie engine to create workflow and coordinator jobs that schedule and execute various Hadoop jobs such as MapReduce, Hive, Pig and Sqoop operations.
• Used Hive to create tables in both delimited text and binary storage formats.
• Hands-on experience with the complete project life cycle (design, development, testing and implementation) of client-server and web applications.
• Experience in Object-Oriented Analysis and Design (OOAD) and software development using UML methodology.
• Solid experience in writing complex SQL queries. Also experienced in working with NoSQL databases like Cassandra 2.1.
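
The following is a minimal sketch of a Hive UDF of the kind referenced above, written against the classic org.apache.hadoop.hive.ql.exec.UDF API; the package, class and function names are illustrative only and not taken from any specific project.

package com.example.hive.udf;  // illustrative package name

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Minimal Hive UDF sketch: trims and lower-cases a string column.
// It could be registered in Hive with something like:
//   ADD JAR normalize-udf.jar;
//   CREATE TEMPORARY FUNCTION normalize_str AS 'com.example.hive.udf.NormalizeString';
public class NormalizeString extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;                                   // preserve SQL NULL semantics
        }
        String normalized = input.toString().trim().toLowerCase();
        return new Text(normalized);
    }
}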
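
And a minimal sketch of a Spark SQL job of the kind that would be run over YARN, assuming a Spark 2.x-style SparkSession with Hive support; the table and column names are hypothetical, purely for illustration.

package com.example.spark;  // illustrative package name

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Minimal Spark SQL sketch: reads a Hive table and writes a daily aggregate.
// Submitted over YARN with something like:
//   spark-submit --master yarn --deploy-mode cluster \
//     --class com.example.spark.DailyEventCounts app.jar
public class DailyEventCounts {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("DailyEventCounts")
                .enableHiveSupport()               // read tables from the Hive metastore
                .getOrCreate();

        // Hypothetical source table events.raw_events with event_date and event_type columns.
        Dataset<Row> counts = spark.sql(
                "SELECT event_date, event_type, COUNT(*) AS cnt "
              + "FROM events.raw_events "
              + "GROUP BY event_date, event_type");

        counts.write().mode("overwrite").saveAsTable("events.daily_event_counts");

        spark.stop();
    }
}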
