*Hi,*

*Please look up the position below and, if you feel it is a good fit, please
send me your updated resume.*



*Position   : Hadoop Java Developer*

*Location   : Chicago, IL*

*Duration   : 3-6 Months*

*Interview  : Phone, then Face to Face*



*Job Description*

·         This position is responsible for the design of data movement into
and throughout the TIL, including but not limited to the Operational Data
Store, Atomic Data Warehouse, Dimensional Data Warehouse, and Master Data
Management.

·         Mentoring of designers on detailed design.

·         Development of an enterprise design view and its application at
the project level.

*Essential Functions:*

·         Review all Project Level Data Movement Designs for adherence to
Standards and Best Practices.

·         Suggest changes to Project Level Designs.

·         Develop new Data Movement Design Patterns where required.

·         Guide the coding and testing of standard, reusable data movement
components.



*Requirements: Strong analytical and problem-solving skills.*

   - Build distributed, scalable, and reliable data pipelines that ingest
   and process data at scale and in real time.
   - Collaborate with other teams to design and develop data tools that
   support both operations and product use cases.
   - Source huge volumes of data from diverse data platforms into the
   Hadoop platform.
   - Perform offline analysis of large data sets using components from the
   Hadoop ecosystem (a minimal Java sketch follows this list).
   - Evaluate big data technologies and prototype solutions to improve our
   data processing architecture.
   - Knowledge of the healthcare domain is an added advantage.
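
For illustration only, here is a minimal sketch of the kind of offline Hadoop
analysis mentioned in the list above: a plain Java MapReduce job that counts
records per key in comma-separated files on HDFS. The class names, the
assumption that the first column is the grouping key, and the input/output
paths are hypothetical, not details from the client's environment.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RecordCountByKey {

        // Mapper: emits (key, 1) for every input line, keyed on the first
        // comma-separated field (an assumption made for this example).
        public static class KeyMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text outKey = new Text();

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] fields = line.toString().split(",");
                outKey.set(fields[0]);
                context.write(outKey, ONE);
            }
        }

        // Reducer (also used as combiner): sums the counts per key.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values,
                    Context context) throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable value : values) {
                    sum += value.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "record-count-by-key");
            job.setJarByClass(RecordCountByKey.class);
            job.setMapperClass(KeyMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged as a jar, it would be launched with something like
"hadoop jar analysis.jar RecordCountByKey /data/in /data/out" (the jar name
and paths are placeholders).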

*Candidate Profile: 8+ years of hands-on programming experience, with 3+
years on the Hadoop platform*

   - Proficiency with Java and a scripting language such as Python.
   - J2EE, EJB, WAS deployments, RESTful services
   - Good grasp of data movement approaches and techniques and when to
   apply them
   - Strong hands-on experience with databases such as Db2 and Teradata
   - Flair for data, schemas, and data models, and for bringing efficiency
   to the big data life cycle
   - Ability to acquire, compute, store, and provision various types of
   datasets on the Hadoop platform
   - Understanding of various visualization platforms (Tableau, QlikView,
   others)
   - Strong object-oriented design and analysis skills
   - Excellent technical and organizational skills
   - Excellent written and verbal communication skills

*Top skill sets / technologies:*

   - Java / Python
   - Sqoop / Flume / Kafka / Pig / Hive / (DataStage or similar ETL tool) /
   HBase / NoSQL / Datameer / MapReduce / Spark (a minimal Kafka consumer
   sketch in Java follows this list)
   - Data Integration / Data Management / Data Visualization experience
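
Similarly, a minimal Java sketch of the Kafka leg of a real-time ingestion
pipeline like the one described under Requirements. The broker address,
consumer group, and topic name are placeholder assumptions, and a real
pipeline would land the records in HDFS or HBase rather than print them.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ClaimsEventReader {
        public static void main(String[] args) {
            // Placeholder connection settings -- not taken from the posting.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "claims-ingest");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Hypothetical topic name; a production job would write the
                // consumed records to HDFS or HBase instead of stdout.
                consumer.subscribe(Collections.singletonList("claims-events"));
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n",
                                record.offset(), record.value());
                    }
                }
            }
        }
    }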



Regards

*Abhishek Kumar*

*Technical Recruiter*

*VSG Business Solutions*

*221 Cornwell Dr, Bear, DE 19701*

*Contact No: 302-261-3207 x 101*

*Email: abhis...@vsgbusinesssolutions.com*

*Hangout: abhishek.vsg*

-- 
You received this message because you are subscribed to the Google Groups 
"Oracle-Projects" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to oracle-projects+unsubscr...@googlegroups.com.
To post to this group, send email to oracle-projects@googlegroups.com.
Visit this group at http://groups.google.com/group/oracle-projects.
For more options, visit https://groups.google.com/d/optout.

Reply via email to