[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300264#comment-17300264 ]

Hyukjin Kwon commented on SPARK-34588:
--------------------------------------

If it's a different issue from this one, let's discuss it on the mailing list instead of here.

> Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-34588
>                 URL: https://issues.apache.org/jira/browse/SPARK-34588
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.0.2
>         Environment: Hadoop part:
>  * spark 3.0.2
>  * java 1.8.0_77
>  * scala 2.12.10
> Python part:
>  * cython 0.29.22
>  * numpy 1.19.5
>  * pandas 1.1.5
>  * pyarrow 2.0.0
>            Reporter: Dmitry Kravchuk
>            Priority: Major
>             Fix For: 3.1.1
>
> This issue is an extension of an [arrow issue|https://issues.apache.org/jira/browse/ARROW-10957] and aims to make it possible to use pyspark Pandas UDF functions on more than 2 GB of data per group. Here is the deal: arrow [supports|https://github.com/apache/arrow/commit/9742007c463e253e2b916e65f668146953456a00#diff-2e086b32ec292aae20695dd4341c647c9a9d7d3d77816bf849f7fbf68e9fa6cfR209] the long type for data serialization between Java and Python, but Spark doesn't. This causes problems whenever someone applies a Pandas UDF to a dataset in which any single group exceeds 2^31-1 bytes (2 GB). Solving this would allow far more data per Pandas UDF group, up to 2^63-1 bytes.
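For readers who want to see where the limit bites, here is a minimal sketch (not from the ticket) of the grouped-map Pandas UDF pattern in question; the input path, column names, and schema below are illustrative assumptions:

{code:python}
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pandas-udf-2gb").getOrCreate()

# Hypothetical input with columns group_id and value.
df = spark.read.parquet("/data/events")

def summarize(pdf: pd.DataFrame) -> pd.DataFrame:
    # The whole group is materialized as one pandas DataFrame on a worker;
    # with Spark <= 3.0.x its Arrow serialization fails past 2^31-1 bytes.
    return pdf.assign(value=pdf["value"] - pdf["value"].mean())

result = df.groupBy("group_id").applyInPandas(summarize, schema=df.schema)
{code}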
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300178#comment-17300178 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

[~hyukjin.kwon] hey there. I have some issues with Zeppelin 0.9.0 and the spark3 interpreter. I set up Zeppelin according to the [documentation|https://zeppelin.apache.org/docs/latest/interpreter/spark.html] but ran into issues caused by a missing spark-yarn-archive.tgz file. Can you help me out and advise where I can actually download this archive for spark3?
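Side note for anyone hitting the same wall: spark-yarn-archive.tgz is generally not downloaded but built from the jars of your own Spark distribution, then uploaded to HDFS and referenced via spark.yarn.archive. A minimal sketch, assuming SPARK_HOME points at your Spark 3 install (all paths here are illustrative):

{code:python}
import os
import tarfile

# Bundle the jars shipped with the local Spark distribution into the
# archive the Zeppelin docs refer to; SPARK_HOME and paths are assumptions.
spark_home = os.environ.get("SPARK_HOME", "/opt/spark")
jars_dir = os.path.join(spark_home, "jars")

with tarfile.open("spark-yarn-archive.tgz", "w:gz") as tar:
    for name in sorted(os.listdir(jars_dir)):
        tar.add(os.path.join(jars_dir, name), arcname=name)

# Then upload it and point the interpreter at it, e.g.
#   hdfs dfs -put spark-yarn-archive.tgz /spark/
#   spark.yarn.archive=hdfs:///spark/spark-yarn-archive.tgz
{code}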
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17299215#comment-17299215 ]

Hyukjin Kwon commented on SPARK-34588:
--------------------------------------

That's incredible [~dishka_krauch]!
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298897#comment-17298897 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

Hi! The negative length error has gone away after upgrading Spark to 3.1.1. Sometimes I get errors with the Hadoop nodes' RAM, but that is not related to this Spark issue, btw.
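For the record, a rough sketch of the kind of check that used to fail (not the reporter's exact test); the row count is an assumption, and pushing a single >2 GB group through Arrow needs generously sized executors:

{code:python}
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("buffer-limit-check").getOrCreate()

# ~3.2 GB of int64 values in one group: 400M rows * 8 bytes > 2^31-1 bytes.
df = spark.range(400 * 1000 * 1000).withColumn("group_id", F.lit(0))

def identity(pdf: pd.DataFrame) -> pd.DataFrame:
    # Forces the whole group through Arrow serialization and back.
    return pdf

# On Spark <= 3.0.x this raised a negative buffer length error; on 3.1.1
# the int64 length path should let it complete (given enough memory).
out = df.groupBy("group_id").applyInPandas(identity, schema=df.schema)
print(out.count())
{code}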
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295763#comment-17295763 ]

Hyukjin Kwon commented on SPARK-34588:
--------------------------------------

Thank you [~dishka_krauch].
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295118#comment-17295118 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

Okay, gonna do it. My test cluster is really small (16 GB per data node), which is why I need to reconfigure it a little bit. I will return here with test results in a week, thx.
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294939#comment-17294939 ]

Hyukjin Kwon commented on SPARK-34588:
--------------------------------------

Yeah, it's released. Yes, it would be great if we could verify whether the issue still stands in the latest Spark.
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293975#comment-17293975 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

[~emkornfield] [~gurwls223] Looks like Spark 3.1.1 is released, right? https://spark.apache.org/releases/spark-release-3-1-1.html
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293817#comment-17293817 ]

Micah Kornfield commented on SPARK-34588:
-----------------------------------------

It looks like 3.1.0 should have the change in it. We should be able to verify against that.
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293713#comment-17293713 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

[~hyukjin.kwon] I've searched through the apache spark GitHub repo and found your [commit|https://github.com/apache/spark/commit/c2caf2522b2e65a93a797580f08ac36461000969#diff-9c5fb3d1b7e3b0f54bc5c4182965c4fe1f9023d449017cece3005d3f90e8e4d8] for version 3.1.1. Is it right that the buffer size on the Spark side will be expanded in the Spark 3.1.1 release?
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293695#comment-17293695 ]

Dmitry Kravchuk commented on SPARK-34588:
-----------------------------------------

[~hyukjin.kwon] As you can see in the last comment of https://issues.apache.org/jira/browse/ARROW-4890, I've created a new issue related to pyarrow 2.0.0 and spark 3.0.2, where [~emkornfield] noted that the arrow buffer size issue was resolved [here|https://github.com/apache/arrow/commit/9742007c463e253e2b916e65f668146953456a00#diff-2e086b32ec292aae20695dd4341c647c9a9d7d3d77816bf849f7fbf68e9fa6cfR209], but the [spark|https://github.com/apache/spark/blob/branch-3.0/pom.xml#L209] side still pins an arrow version below 0.16. Could you please take this issue on?
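As a general aside, you can compare the pyarrow and pandas versions your PySpark build expects against what is installed; the helpers below live in pyspark.sql.pandas.utils in Spark 3.x (treat the exact module path as an assumption on other branches):

{code:python}
import pandas
import pyarrow

# These helpers raise ImportError when the installed versions are older
# than what this PySpark build was tested against (Spark 3.x layout).
from pyspark.sql.pandas.utils import (
    require_minimum_pandas_version,
    require_minimum_pyarrow_version,
)

print("pyarrow:", pyarrow.__version__)  # 2.0.0 in this ticket's environment
print("pandas:", pandas.__version__)    # 1.1.5 in this ticket's environment
require_minimum_pandas_version()
require_minimum_pyarrow_version()
{code}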
[jira] [Commented] (SPARK-34588) Support int64 buffer lengths in Java for pyspark Pandas UDF as buffer expanding
[ https://issues.apache.org/jira/browse/SPARK-34588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293645#comment-17293645 ]

Hyukjin Kwon commented on SPARK-34588:
--------------------------------------

Isn't it an Arrow-side issue that's not yet resolved? - https://issues.apache.org/jira/browse/ARROW-4890