*Position: Hadoop ETL Developer*

*Location: Sacramento, CA*

*Assignment: 6+ months*


*Hadoop ETL Developer* with Hadoop and other Software as a Service (SaaS)
experience:

·         BS Degree in Computer Science or equivalent

·         8 years of extensive IT experience with Java and MapReduce-based ETL

·         Minimum 4 years of experience with Hadoop, developing Big Data /
Hadoop applications

·         Languages: Java and scripting

·         Unix/Linux Environment- scripting skills

·         Hortonworks or Cloudera commercial release experience

·         Streaming ingest (JSON entities) over HTTPS utilizing Flume

·         Batch ingest of domain data into HBase utilizing SFTP

·         Excellent working experience with the Hadoop ecosystem, including
MapReduce, HDFS, Sqoop, Pig scripts, Hive, HBase, Flume, Impala, Kafka,
Spark, Oozie, and ZooKeeper

·         Experience with distributed systems, large-scale non-relational
data stores, MapReduce systems, data modeling, and big data systems

·         Experience writing MapReduce programs and UDFs for both Hive and
Pig in Java: time-period analysis, aggregates, OLTP analysis, and churn
analysis

·         Experience implementing Storm topologies

·         Experience with Hive queries, Pig Latin scripts, and MapReduce
programs for analyzing and processing data and loading it into databases
for visualization

·         Experience working with Agile methodologies and suggesting Agile
process improvements

·         Expert knowledge of troubleshooting and performance tuning at
various levels, such as source, mapping, target, and sessions

·         Experience performing real-time analytics on NoSQL databases
such as HBase and Cassandra

·         Experience working with the Oozie workflow engine to schedule
time-based jobs that perform multiple actions

·         Experience importing and exporting data between relational
databases and HDFS, Hive, and HBase using Sqoop

·         Experience using Flume to channel data from different sources to HDFS

·         RDBMS experience: Oracle, SQL Server, MySQL

·         Experience writing against the Apache Spark API or Impala on a
Big Data distribution in an active cluster environment

·         Experience coding and testing standardization, normalization,
load, extract, and Avro models to filter, transform, and validate data



·         Highly adept at quickly and thoroughly mastering new
technologies, with a keen awareness of industry developments and
next-generation programming solutions





·         OpenStack or AWS experience preferred

-- 
*Thanks & Regards,*
*Vikas Kumar Singh*
*Ph: 408-722-9100 Ext: 112*
*Email: vi...@svsintegration.com <vi...@svsintegration.com>*

-- 
You received this message because you are subscribed to the Google Groups 
"US_IT.Groups" group.