Hello *Associate*,

Hope you are doing well.

We have the below requirement open. Please send your genuine candidates to my
email ID  *ifthek...@canopyone.com <ifthek...@canopyone.com>*


*Position: Big Data/Hadoop Developer*
*Location: Dearborn, MI*
*Type: Contract*

*Job Description*
• Minimum 8+ years' working experience in Big Data, Hadoop, Spark, Python, Scala, Kafka, SQL, ETL development, and data modelling
• Hands-on experience with GCP: BigQuery, GCS buckets, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, gsutil, the bq command-line utility, Dataproc, and Stackdriver
• Experience writing a Cloud Function to load on-arrival files from a GCS bucket into BigQuery
• Experience writing a program to maintain raw file archival in a GCS bucket
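As an illustration of the requirements above, here is a minimal sketch of a GCS-triggered load into BigQuery, assuming the google-cloud-bigquery client library; the project, dataset, and table-naming scheme are placeholders, not part of this posting:

```python
# Hypothetical sketch: a Cloud Function triggered by a GCS object-finalize
# event that loads the newly arrived file into BigQuery. Project and
# dataset names ("my-project", "raw") are placeholder assumptions.

def table_for_blob(blob_name: str) -> str:
    """Map an arriving file like 'orders/2024-01-01.csv' to a table name."""
    stem = blob_name.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    return stem.replace("-", "_")

def load_to_bigquery(event: dict) -> str:
    """Load the file described by a GCS event ({'bucket': ..., 'name': ...})."""
    # Deferred import so the naming helper stays testable without GCP installed.
    from google.cloud import bigquery

    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"
    table_id = f"my-project.raw.{table_for_blob(event['name'])}"
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
        write_disposition="WRITE_APPEND",
    )
    # Blocks until the load job completes or raises on failure.
    client.load_table_from_uri(uri, table_id, job_config=job_config).result()
    return table_id
```

In practice the function would be registered as the entry point of the Cloud Function, with the event payload supplied by the GCS trigger.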

*Experience: 5 to 9 yrs*

*Required Technical Skills:* Python, Hadoop Administration, Big Data Management

*Domain Skills:* Industrial Manufacturing, Manufacturing Oper-Manlog

*Nice-to-have Technical Skills:* Big Data, Java

*Roles & Responsibilities:*
• Minimum 8+ years' working experience in Big Data, Hadoop, Spark, Python, Scala, Kafka, SQL, ETL development, and data modelling
• Hands-on experience with GCP: BigQuery, GCS buckets, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, gsutil, the bq command-line utility, Dataproc, and Stackdriver
• Experience writing a Cloud Function to load on-arrival files from a GCS bucket into BigQuery
• Experience writing a program to maintain raw file archival in a GCS bucket
• Designing schemas in BigQuery: full scans for OLAP/BI use cases, experience with technologies for disk I/O throughput and cloud-platform economies of scale, and combining MapReduce with BigQuery for better performance
• Loading data incrementally into the BigQuery raw and UDM layers using SOQL, Google Dataproc, GCS buckets, Hive, Spark, Scala, Python, gsutil, and shell scripts
• Experience writing a program to download database dumps (SQL Server, Oracle, DB2), load them into a GCS bucket, move them from the GCS bucket to a database hosted in Google Cloud, and load them into BigQuery using Python/Spark/Scala on Dataproc
• Experience processing and loading bounded and unbounded data from Google Pub/Sub into BigQuery using Cloud Dataflow with a scripting language
• Using the BigQuery REST API with Python/Spark/Scala to ingest data from other sites into BigQuery, and building App Engine-based dashboards
• Participating in the architecture council for database architecture recommendations
• Deep analysis of SQL execution plans, recommending hints, restructuring, indexes, or materialized views for better performance
• Opening an SSH tunnel to Google Dataproc to access the YARN manager and monitor Spark jobs
• Submitting Spark jobs with gsutil and spark-submit to be executed on the Dataproc cluster

*Qualifications:*
• 2+ years' experience with Google Cloud Platform technologies
• Experience with private, hybrid, or public cloud technologies
• Process GCP and other cloud implementation an
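One of the responsibilities above, submitting Spark jobs for execution on a Dataproc cluster, could look roughly like the following, assuming the google-cloud-dataproc client library; the project, region, cluster, and file names are hypothetical placeholders:

```python
# Hypothetical sketch: submit a PySpark job to a Dataproc cluster via the
# Job Controller API. All identifiers here are illustrative assumptions.

def build_pyspark_job(cluster: str, main_uri: str, args: list) -> dict:
    """Assemble the job payload the Dataproc Job Controller API expects."""
    return {
        "placement": {"cluster_name": cluster},
        "pyspark_job": {"main_python_file_uri": main_uri, "args": args},
    }

def submit(project: str, region: str, job: dict):
    # Deferred import so the payload helper stays testable without GCP installed.
    from google.cloud import dataproc_v1

    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    # Returns a Job resource describing the submitted job.
    return client.submit_job(
        request={"project_id": project, "region": region, "job": job}
    )
```

The main script itself would typically be staged to a GCS bucket first (e.g. with gsutil), then referenced by its `gs://` URI in the payload.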


*Thanks & Regards*

*Mohammed Ifthekharuddin*

 Tel: 703-831-8282 Ext 223 Cell: 323-825-5662
  Email: ifthek...@canopyone.com  Web: www.canopyone.com
<http://canopyone.com/>

-- 
You received this message because you are subscribed to "rtc-linux".
Membership options at http://groups.google.com/group/rtc-linux .
Please read http://groups.google.com/group/rtc-linux/web/checklist
before submitting a driver.