[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-07-25 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570914#comment-17570914
 ] 

Apache Spark commented on SPARK-38904:
--

User 'ravwojdyla' has created a pull request for this issue:
https://github.com/apache/spark/pull/37277

> Low cost DataFrame schema swap util
> ---
>
> Key: SPARK-38904
> URL: https://issues.apache.org/jira/browse/SPARK-38904
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.2.1
>Reporter: Rafal Wojdyla
>Assignee: Wenchen Fan
>Priority: Major
> Fix For: 3.4.0
>
>
> This question is related to [https://stackoverflow.com/a/37090151/1661491]. 
> Let's assume I have a PySpark DataFrame with a certain schema, and I would 
> like to overwrite that schema with a new schema that I *know* is compatible; 
> I could do:
> {code:python}
> df: DataFrame
> new_schema = ...
> df.rdd.toDF(schema=new_schema)
> {code}
> Unfortunately, this triggers computation, as described in the link above. Is 
> there a way to do this at the metadata level (or lazily), without eagerly 
> triggering computation or conversions?
> Edit, note:
>  * the schema can be arbitrarily complicated (nested, etc.)
>  * the new schema includes updates to the description, nullability, and 
> additional metadata (bonus points for updates to the type)
>  * I would like to avoid writing a custom query expression generator, 
> *unless* there's one already built into Spark that can generate a query 
> based on the schema/{{StructType}}
> Copied from: 
> [https://stackoverflow.com/questions/71610435/how-to-overwrite-pyspark-dataframe-schema-without-data-scan]
> See POC of workaround/util in 
> [https://github.com/ravwojdyla/spark-schema-utils]
> Also posted in 
> [https://lists.apache.org/thread/5ds0f7chzp1s3h10tvjm3r96g769rvpj]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org






[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-07-04 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17562224#comment-17562224
 ] 

Apache Spark commented on SPARK-38904:
--

User 'cloud-fan' has created a pull request for this issue:
https://github.com/apache/spark/pull/37011







[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-05-02 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17530695#comment-17530695
 ] 

Apache Spark commented on SPARK-38904:
--

User 'ravwojdyla' has created a pull request for this issue:
https://github.com/apache/spark/pull/36430




[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-04-18 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523636#comment-17523636
 ] 

Rafal Wojdyla commented on SPARK-38904:
---

[~hyukjin.kwon] ok, I will give it a shot and ping you if I get stuck. Also, 
if you have any immediate tips, I would appreciate them.




[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-04-17 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523469#comment-17523469
 ] 

Hyukjin Kwon commented on SPARK-38904:
--

It would actually be great to have such an API. Feel free to go ahead with a 
PR if you're interested.




[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-04-14 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17522610#comment-17522610
 ] 

Rafal Wojdyla commented on SPARK-38904:
---

[~hyukjin.kwon] thanks for the comment; sounds good to me. I just want to 
point out that, at least in my case, it's important that the metadata of the 
columns gets "updated".




[jira] [Commented] (SPARK-38904) Low cost DataFrame schema swap util

2022-04-14 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17522599#comment-17522599
 ] 

Hyukjin Kwon commented on SPARK-38904:
--

I think we should have an API like DataFrame.select(StructType) so that we 
don't need to trigger another round of ser/de via conversion between RDD and 
DataFrame.
