Dear Associates,

Hope you are doing great!
Kindly share suitable profiles of your consultants who are a good fit for the jobs below. Please send recent, updated resumes along with your consultants' contact details to *vams...@svktechinc.com*.

*Role: Python/Perl Scripting* *(OPTs are also fine)*
*Work Location: Atlanta, GA*
*Duration: 6-12 Months*

*Job Description:*
• Extensive experience in Python/Perl scripting
• Experience with Bash and Linux tools
• Experience with REST or SOAP API services using Python/Perl scripting
• Basic networking and application protocol knowledge
• QA automation testing experience
• JavaScript/PHP experience is a plus
• SoapUI experience is a plus

*Position: Metadata Manager Engineer/Developer*
*Work Location: West Chester, PA*
*Duration: 6-12 Months*

*Overview:*
Seeking contractor resources to engineer an end-to-end solution for metadata management and data lineage using Informatica Metadata Manager and its connectors. The individual should be able to design and create custom meta-models to support enterprise needs, should be strong in Informatica and scripting tools, and must be a senior resource who is self-driven to work with the business to capture requirements and translate them into technical deliverables. Strong communication skills are required.

*Responsibilities:*
• Gather and translate business requirements into deployable software components using Informatica Metadata Manager
• Engineer metadata solutions to manage the business glossary and metadata for the enterprise
• Use xConnectors and built-in connectors to bring in metadata from various endpoints
• Automate metadata loads and refreshes for xConnectors and Business Glossaries
• Define and maintain taxonomies for managing and organizing metadata
• Establish and maintain end-to-end data lineage that accurately reflects production processes and data flows
• Establish and maintain technical definitions and business glossaries within the enterprise business glossary
• Organize business glossaries and map them to data fields
• Establish governance processes for publishing the Business Glossary
• Document standards and best practices around Metadata Manager
• Document M&Ps and operating procedures

*Requirements:*
• Strong experience with Informatica Metadata Manager
• Strong experience in modeling and loading custom meta-models using Informatica Metadata Manager
• Strong experience in interfacing Informatica Metadata Manager with metadata sources: RDBMS (Oracle, Teradata, SQL, etc.), BTEQ scripts, PowerDesigner, Informatica metadata, Manta xConnect, etc.
• Strong skills in shell scripting using Unix/Linux shell, Python, and Perl
• 2+ years of experience in metadata management using Informatica Metadata Manager
• 8+ years of experience in software engineering in related fields (ETL, Informatica)

*Role: Hadoop Architect*
*Work Location: Denver, CO*
*Duration: 6-12 Months*

*Job Description:*
• Understanding of various Big Data technologies such as Hive, Spark, HBase, Accumulo, Java, Scala, and Kafka
• Integrating technologies for optimum speed of ingestion and driving solutions on the Big Data platform
• Hands-on experience coding with these technologies and demonstrating how they work
• Coordinating performance reviews and suggesting improvements wherever possible
• Ability to document use cases, solutions, and recommendations
• Perform architecture design, data modeling, and implementation of the Big Data platform and analytic applications for Huawei consumer products
• Analyze the latest technologies and their innovative applications in both business intelligence analysis and new service offerings; bring these insights and best practices to Huawei's global consumer business
• Apply deep learning capabilities to improve understanding of user behavior and data
• Develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels
• Develop utilities to better monitor the cluster
• Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
• Manage large clusters with huge volumes of data
• Help architect and develop big data solutions (streaming/batch) using Hadoop technologies

*Role: IDQ Developer*
*Work Location: West Chester, PA*
*Duration: 6-12 Months*

*Technical Skills:*
• Experience in DQ on the Hadoop platform; knowledge of Hive, Pig, and data management in Hadoop
• Experience in Teradata is a must; good skills in writing SQL in Teradata to analyze data issues
• Informatica IDQ, IDE, Analyst: develop using IDQ; use Analyst for analysis
• Develop and enforce validation mechanisms and metrics collection to ensure quality
• Develop reusable Data Quality mappings and mapplets
• Use out-of-the-box features of the Informatica DQ suite
• Shell scripting: able to automate jobs using well-written scripts (shell, Python, etc.)
• SQL development: a must
• Data Quality methods and procedures

*Analytical Skills:*
• Identify quality issues and suggest solutions
• Articulate the importance of Data Quality and how it fits into the overall data strategy
• Perform data profiling on a regular basis
• Communicate findings to the business

*Process Skills:*
• Agile development / Scrum
• Work in an environment where teams are geographically dispersed
• Knowledge of Master Data Management is a plus
• Knowledge of Metadata Management is a plus

*General:*
• Be self-driven and work independently on a given task
• Be a team player; collaborate and build team spirit
• Maintain professional standards
• Follow sound engineering principles
• Ability to learn new skills

*Role: Data Scientist*
*Work Location: Denver, CO*
*Duration: 6-12 Months*

*Job Description:*
• The Data Scientist will dive into huge, noisy, and complex real-world behavioral data to produce innovative analyses and new types of predictive models of customer behavior and of cable and TV product performance
• Unleash your creativity to find hidden gems that improve our understanding of cable and internet and drive actionable business decisions
• Good understanding of experimental design, simulation, optimization, mixed methods, and market research
• Advanced knowledge of two or more analytics languages/toolkits such as R, SAS, SPSS, MATLAB, or Python with analytical extensions
• Discover and implement new tools and technologies to keep current with trends and advances in the data science community
• Serve as a subject matter expert in the capabilities of data science
• SQL and advanced data processing
• Solid experience in practical predictive modeling; forecasting is a plus
• Proficiency in at least one programming/scripting language such as Python, Scala, Julia, Ruby, Java, or C#
• Strong understanding of big data concepts and knowledge of big data languages/tools such as Hive, Pig, Mahout, or Spark
• Work with business partners to understand and translate business requirements into solution designs
• Understand complex ideas and break them down into logical steps
• Experience with Hadoop: Hive, Pig, Flume, Sqoop, Storm, Kafka, Accumulo, and HBase
• Integrating technologies for optimum speed of ingestion and driving solutions on the Big Data platform
• Hands-on experience coding with these technologies and demonstrating how they work
• Coordinating performance reviews and suggesting improvements wherever possible

*Thanks & Regards,*
*Vamsheedhar*
*SVK Technology Solutions, Inc*
vams...@svktechinc.com
200 Metroplex Dr, Suite 401, Edison, NJ 08817
Tel: 609-904-0020 | Fax: +1 732-391-4243 | www.svktechinc.com
*An E-Verified Company | An Equal Employment Opportunity Employer*