[jira] [Comment Edited] (SPARK-33605) Add GCS FS/connector to the dependencies akin to S3

2020-11-30 Thread Rafal Wojdyla (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240863#comment-17240863 ]

Rafal Wojdyla edited comment on SPARK-33605 at 11/30/20, 5:33 PM:
--

Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue: [HADOOP-17402|https://issues.apache.org/jira/browse/HADOOP-17402]. 
But I still wonder if pyspark could make it easier for users to package extra 
shaded FS jars, though I understand that would add extra complexity and 
increase the size of the package. An alternative could be to add extras to the 
pyspark package, like {{pyspark[gcs]}} and {{pyspark[s3]}}, that would pull in 
the extra dependencies on request.
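
For illustration, here is a minimal sketch of how such extras might be declared 
in pyspark's {{setup.py}}. The extra names come from the comment above, but the 
distributions they map to are hypothetical placeholders; each would have to be 
a Python package that bundles the corresponding shaded connector jar.

{code:python}
# Hypothetical sketch only: the packages listed under each extra are
# placeholders, not real distributions.
from setuptools import setup

setup(
    name="pyspark",
    # ... existing setup() arguments elided ...
    extras_require={
        # Each extra would pull in a package shipping the shaded FS jar.
        "gcs": ["pyspark-gcs-connector-jars"],  # placeholder name
        "s3": ["pyspark-s3-connector-jars"],    # placeholder name
    },
)
{code}

A user would then opt in with e.g. {{pip install pyspark[gcs]}}, so only those 
who need a given connector pay the extra download size.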


was (Author: ravwojdyla):
Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue. But I still wonder if pyspark could make it easier for users to 
package extra shaded FS jars, though I understand that would add extra 
complexity and increase the size of the package. An alternative could be to add 
extras to the pyspark package, like {{pyspark[gcs]}} and {{pyspark[s3]}}, that 
would pull in the extra dependencies on request.

> Add GCS FS/connector to the dependencies akin to S3
> ---
>
> Key: SPARK-33605
> URL: https://issues.apache.org/jira/browse/SPARK-33605
> Project: Spark
> Issue Type: Improvement
> Components: PySpark, Spark Core
> Affects Versions: 3.0.1
> Reporter: Rafal Wojdyla
> Priority: Major
>
> Spark comes with some S3 batteries included, which makes it easier to use 
> with S3; for GCS to work, users are required to manually configure the jars. 
> This is especially problematic for Python users, who may not be accustomed to 
> Java dependencies etc. This is an example of a workaround for pyspark: 
> [pyspark_gcs|https://github.com/ravwojdyla/pyspark_gcs]. If we included the 
> [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage],
> it would make things easier for GCS users.
> Please let me know what you think.
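
For context, this is roughly the manual configuration the description refers 
to; a minimal sketch only, assuming the gcs-connector artifact from Maven 
Central and service-account authentication, with the connector version, bucket 
name and key path being illustrative:

{code:python}
# Sketch of configuring PySpark for GCS by hand; version, bucket and key path
# are illustrative. A shaded connector build is typically needed to avoid
# Guava conflicts with Spark's own classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gcs-example")
    # Fetch the GCS connector at runtime.
    .config("spark.jars.packages",
            "com.google.cloud.bigdataoss:gcs-connector:hadoop3-2.1.6")
    # Register the GCS FileSystem implementations with Hadoop.
    .config("spark.hadoop.fs.gs.impl",
            "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
    .config("spark.hadoop.fs.AbstractFileSystem.gs.impl",
            "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
    # Authenticate with a service-account key file.
    .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
            "/path/to/key.json")
    .getOrCreate()
)

df = spark.read.text("gs://some-bucket/some-file.txt")
{code}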






[jira] [Comment Edited] (SPARK-33605) Add GCS FS/connector to the dependencies akin to S3

2020-11-30 Thread Rafal Wojdyla (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240863#comment-17240863 ]

Rafal Wojdyla edited comment on SPARK-33605 at 11/30/20, 4:45 PM:
--

Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue. But I still wonder if pyspark could make it easier for users to 
package extra shaded FS jars, though I understand that would add extra 
complexity and increase the size of the package. An alternative could be to add 
extras to the pyspark package, like `pyspark[gcs]` and `pyspark[s3]`, that 
would pull in the extra dependencies on request.


was (Author: ravwojdyla):
Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue. But I still wonder if pyspark could make it easier for users to 
package extra shaded FS jars, though I understand that would add extra 
complexity and increase the size of the package.

> Add GCS FS/connector to the dependencies akin to S3
> ---
>
> Key: SPARK-33605
> URL: https://issues.apache.org/jira/browse/SPARK-33605
> Project: Spark
> Issue Type: Improvement
> Components: PySpark, Spark Core
> Affects Versions: 3.0.1
> Reporter: Rafal Wojdyla
> Priority: Major
>
> Spark comes with some S3 batteries included, which makes it easier to use 
> with S3; for GCS to work, users are required to manually configure the jars. 
> This is especially problematic for Python users, who may not be accustomed to 
> Java dependencies etc. This is an example of a workaround for pyspark: 
> [pyspark_gcs|https://github.com/ravwojdyla/pyspark_gcs]. If we included the 
> [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage],
> it would make things easier for GCS users.
> The fix could be to:
>  * add the [gcs-connector dependency|https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector] to the {{hadoop-cloud}} module
>  * test that there are no problematic classpath conflicts
>  * test that the pyspark package includes the GCS connector in its jars (see the sketch below)
> Please let me know what you think.
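
As an example of what the last check could look like, here is a small sketch, 
assuming a pip-installed pyspark whose bundled jars live next to the package; 
the jar name pattern is an assumption:

{code:python}
# Hypothetical sanity check that the built pyspark package ships a GCS
# connector jar; the "gcs-connector*.jar" pattern is an assumption.
import glob
import os

import pyspark

jars_dir = os.path.join(os.path.dirname(pyspark.__file__), "jars")
matches = glob.glob(os.path.join(jars_dir, "gcs-connector*.jar"))
assert matches, "no gcs-connector jar found in %s" % jars_dir
print(matches)
{code}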






[jira] [Comment Edited] (SPARK-33605) Add GCS FS/connector to the dependencies akin to S3

2020-11-30 Thread Rafal Wojdyla (Jira)


[ https://issues.apache.org/jira/browse/SPARK-33605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240863#comment-17240863 ]

Rafal Wojdyla edited comment on SPARK-33605 at 11/30/20, 4:45 PM:
--

Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue. But I still wonder if pyspark could make it easier for users to 
package extra shaded FS jars, though I understand that would add extra 
complexity and increase the size of the package. An alternative could be to add 
extras to the pyspark package, like {{pyspark[gcs]}} and {{pyspark[s3]}}, that 
would pull in the extra dependencies on request.


was (Author: ravwojdyla):
Actually, the pyspark package includes the config for S3 via 
{{core-default.xml}}, which comes from {{hadoop-common}}, but not the AWS jars. 
Further, {{core-default.xml}} doesn't include defaults for GCS, which is a 
Hadoop issue. But I still wonder if pyspark could make it easier for users to 
package extra shaded FS jars, though I understand that would add extra 
complexity and increase the size of the package. An alternative could be to add 
extras to the pyspark package, like `pyspark[gcs]` and `pyspark[s3]`, that 
would pull in the extra dependencies on request.

> Add GCS FS/connector to the dependencies akin to S3
> ---
>
> Key: SPARK-33605
> URL: https://issues.apache.org/jira/browse/SPARK-33605
> Project: Spark
> Issue Type: Improvement
> Components: PySpark, Spark Core
> Affects Versions: 3.0.1
> Reporter: Rafal Wojdyla
> Priority: Major
>
> Spark comes with some S3 batteries included, which makes it easier to use 
> with S3; for GCS to work, users are required to manually configure the jars. 
> This is especially problematic for Python users, who may not be accustomed to 
> Java dependencies etc. This is an example of a workaround for pyspark: 
> [pyspark_gcs|https://github.com/ravwojdyla/pyspark_gcs]. If we included the 
> [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage],
> it would make things easier for GCS users.
> The fix could be to:
>  * add the [gcs-connector dependency|https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector] to the {{hadoop-cloud}} module
>  * test that there are no problematic classpath conflicts
>  * test that the pyspark package includes the GCS connector in its jars
> Please let me know what you think.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org