*Spark Developer (Cloudera)* *Work Location*: Atlanta, GA
*Contract*: Long Term
*Great communication skills required!* *Local candidates only!*

*Job Description:* The client needs candidates with experience in *Spark* development (a must), *Sqoop*, and *Scala*. Teradata is the source database they will extract from using Teradata tools, then populate a Hadoop database using the Cloudera suite. The client would also like all candidates to take a Spark/Teradata online assessment test; please confirm that your candidates are willing to take it. Please do *whatever* you can to vet your candidates (technical and personality checks) before presenting. I know it can be tough, but we can only have so many rejections before the client decides we don't have a quality pipeline.

*1. First resource – senior Hadoop developer, strong with Spark, Scala, Sqoop, Hive, and the Cloudera environment – must be on-site in Atlanta (start immediately) for the rest of the year.*

*2. Second resource – senior Hadoop DevOps consultant – automated testing, automated code deployment, Bitbucket, Bamboo, SonarQube; experience with the best-practice policies and procedures needed for DevOps, and able to help implement them. Start immediately for the rest of 2017.*

Here is a sample resume:

· Around 8 years of total IT development experience in all phases of the SDLC.
· 3+ years of Scala/Apache Spark experience and 4+ years of Hadoop/Java developer experience in all phases of Hadoop and HDFS development.
· Extensive experience in, and actively involved in, requirements gathering, analysis, design, coding, code reviews, and unit and integration testing.
· Experience in designing use cases, class diagrams, and sequence and collaboration diagrams for multi-tiered object-oriented system architectures using Unified Modeling Language (UML) tools such as Rational Rose and the Rational Unified Process (RUP). Working knowledge of Agile development, Test-Driven Development (TDD), and Behavior-Driven Development (BDD) methodologies.
· Extensive knowledge of client–server technology, web-based n-tier architecture, database design, and development of applications using J2EE design patterns such as Singleton, Session Facade, Factory, and Business Delegate.
· Hands-on experience in the Hadoop ecosystem, including the Spark, Kafka, HBase, Scala, Pig, Impala, Sqoop, Oozie, Flume, Mahout, Storm, Tableau, and Talend big data technologies.
· Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
· Experience working with SQL, PL/SQL, and NoSQL databases such as Microsoft SQL Server, Oracle, HBase, Cassandra, and MongoDB.
· Experience importing and exporting data between HDFS and databases such as MySQL, Oracle, Netezza, Teradata, and DB2 using Sqoop and Talend.
· Involved in writing Pig scripts to transform raw data into baseline data.
· Worked on Amazon Redshift, the data warehouse product that is part of AWS.
· Good experience in designing jobs and transformations in Talend and loading data sequentially and in parallel for initial and incremental loads.
· Experience in developing and scheduling ETL workflows in Hadoop using Oozie.
· *Experience in deploying and managing Hadoop clusters using Cloudera Manager.*

*Thanks & Regards,*
*Abhishek Ojha*
*732-837-2138 ao...@sagetl.com <ao...@sagetl.com>*

-- You received this message because you are subscribed to the Google Groups "Android Developers" group.