[jira] [Commented] (SPARK-26412) Allow Pandas UDF to take an iterator of pd.DataFrames for the entire partition
[ https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822521#comment-16822521 ] Xiangrui Meng commented on SPARK-26412:
--
[~bryanc] It handles the data exchange for DL model inference use cases, not so much for training.

> Allow Pandas UDF to take an iterator of pd.DataFrames for the entire partition
> --
>
> Key: SPARK-26412
> URL: https://issues.apache.org/jira/browse/SPARK-26412
> Project: Spark
> Issue Type: New Feature
> Components: PySpark
> Affects Versions: 3.0.0
> Reporter: Xiangrui Meng
> Priority: Major
>
> Pandas UDF is the ideal connection between PySpark and DL model inference
> workloads. However, users need to load the model file first to make
> predictions. It is common to see models of size ~100MB or bigger. If the
> Pandas UDF execution is limited to batch scope, users need to repeatedly load
> the same model for every batch in the same Python worker process, which is
> inefficient. I created this JIRA to discuss possible solutions.
> Essentially we need to support "start()" and "finish()" besides "apply". We
> can either provide those interfaces or simply provide users the iterator of
> batches in pd.DataFrame and let user code handle it.
> cc: [~icexelloss] [~bryanc] [~holdenk] [~hyukjin.kwon] [~ueshin] [~smilegator]

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
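The inefficiency described in the issue can be illustrated with a plain-Python sketch (no Spark required). The batch format (lists of ints) and `load_model` are hypothetical stand-ins for pd.DataFrame batches and a real ~100MB model file:

```python
# Sketch of the two execution scopes discussed above. Batch-scoped UDFs
# must reload the model for every batch; an iterator-scoped UDF loads it
# once per partition and reuses it.

LOAD_COUNT = 0

def load_model():
    """Simulate loading a large model file; counts how often it runs."""
    global LOAD_COUNT
    LOAD_COUNT += 1
    return lambda batch: [x * 2 for x in batch]  # dummy predict()

def per_batch_udf(batch):
    # Batch scope: the model is reloaded for every single batch.
    model = load_model()
    return model(batch)

def per_partition_udf(batches):
    # Iterator scope: load once ("start()"), reuse for every batch the
    # iterator yields, with room for cleanup ("finish()") after the loop.
    model = load_model()
    for batch in batches:
        yield model(batch)

partition = [[1, 2], [3, 4], [5, 6]]

batch_results = [per_batch_udf(b) for b in partition]
batch_loads = LOAD_COUNT  # one load per batch

LOAD_COUNT = 0
iter_results = list(per_partition_udf(iter(partition)))
iter_loads = LOAD_COUNT  # a single load for the whole partition
```

Both variants produce the same results; only the number of model loads differs, which is exactly the cost the iterator interface is meant to avoid.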
[jira] [Commented] (SPARK-26412) Allow Pandas UDF to take an iterator of pd.DataFrames for the entire partition
[ https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751785#comment-16751785 ] Bryan Cutler commented on SPARK-26412:
--
[~mengxr] I think Arrow record batches would be a better way to connect with other frameworks. The conversion to Pandas carries some overhead, and the Arrow format/types are more solidly defined. Arrow is also better suited for use with an iterator - most of the Arrow IPC mechanisms operate on streams of record batches. Is this proposal a replacement for SPARK-24579 (SPIP: Standardize Optimized Data Exchange between Spark and DL/AI frameworks)?
[jira] [Commented] (SPARK-26412) Allow Pandas UDF to take an iterator of pd.DataFrames for the entire partition
[ https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16743691#comment-16743691 ] Hyukjin Kwon commented on SPARK-26412:
--
[~mengxr], it looks like this could be a subset of SPARK-26413. Did I understand correctly?
[jira] [Commented] (SPARK-26412) Allow Pandas UDF to take an iterator of pd.DataFrames for the entire partition
[ https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725957#comment-16725957 ] Li Jin commented on SPARK-26412:
--
So this is similar to the mapPartitions API in Scala, but instead of an iterator of records, here we want an iterator of pd.DataFrames?
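The mapPartitions analogy can be sketched with plain pandas, outside Spark. `double_udf` is a hypothetical UDF of the proposed shape: it takes an iterator of pd.DataFrames and yields an iterator of pd.DataFrames:

```python
import pandas as pd

def double_udf(batches):
    # Setup that should run once per partition (e.g. loading a model)
    # would go here, before consuming the iterator ("start()").
    for batch in batches:                 # each batch is a pd.DataFrame
        yield batch.assign(y=batch["x"] * 2)
    # Per-partition teardown ("finish()") would go here.

# A partition arriving as a sequence of pd.DataFrame batches.
partition = [pd.DataFrame({"x": [1, 2]}), pd.DataFrame({"x": [3]})]
out = pd.concat(double_udf(iter(partition)), ignore_index=True)
```

This is exactly mapPartitions with pd.DataFrames in place of individual records: the function sees the whole partition as an iterator, so setup and teardown naturally wrap the loop.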