[ 
https://issues.apache.org/jira/browse/AIRFLOW-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034360#comment-17034360
 ] 

Kamil Bregula commented on AIRFLOW-6649:
----------------------------------------

I agree, but I know that Snowflake's internal implementation is most efficient 
when it reads data from GCS/S3. Writing data directly is simply much slower: 
Snowflake can read data in parallel across multiple nodes when doing COPY INTO.

I support your idea, but I'm afraid that this might be problematic. 

I wonder about a dedicated operator for copying data using COPY INTO.  It 
could start the copy, then monitor its progress and surface the logs 
directly in Airflow.  We have a similar solution in the KubernetesPodOperator.

> Google storage to Snowflake
> ---------------------------
>
>                 Key: AIRFLOW-6649
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-6649
>             Project: Apache Airflow
>          Issue Type: New Feature
>          Components: gcp, operators
>    Affects Versions: 1.10.6
>            Reporter: nexoriv
>            Priority: Major
>              Labels: snowflake
>
> can someone share google storage to snowflake operator?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
