[jira] [Created] (SPARK-45379) Allow the daily tests of branch-3.3 to use the new test group tags
Yang Jie created SPARK-45379:
---------------------------------

             Summary: Allow the daily tests of branch-3.3 to use the new test group tags
                 Key: SPARK-45379
                 URL: https://issues.apache.org/jira/browse/SPARK-45379
             Project: Spark
          Issue Type: Improvement
          Components: Project Infra
    Affects Versions: 4.0.0
            Reporter: Yang Jie
[jira] [Updated] (SPARK-45379) Allow the daily tests of branch-3.3 to use the new test group tags
[ https://issues.apache.org/jira/browse/SPARK-45379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45379:
-----------------------------------
    Labels: pull-request-available  (was: )

> Allow the daily tests of branch-3.3 to use the new test group tags
> -------------------------------------------------------------------
>
>                 Key: SPARK-45379
>                 URL: https://issues.apache.org/jira/browse/SPARK-45379
>             Project: Spark
>          Issue Type: Improvement
>          Components: Project Infra
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Created] (SPARK-45380) Replace mutable.WrappedArray with mutable.ArraySeq
Yang Jie created SPARK-45380:
---------------------------------

             Summary: Replace mutable.WrappedArray with mutable.ArraySeq
                 Key: SPARK-45380
                 URL: https://issues.apache.org/jira/browse/SPARK-45380
             Project: Spark
          Issue Type: Sub-task
          Components: Connect, Spark Core, SQL
    Affects Versions: 4.0.0
            Reporter: Yang Jie

{code:java}
@deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
type WrappedArray[X] = ArraySeq[X]

@deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
val WrappedArray = ArraySeq {code}
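For readers tracking this migration, here is a minimal before/after sketch on plain Scala 2.13 (hypothetical call sites, not code from the Spark change itself):

{code:scala}
import scala.collection.mutable

// Before: mutable.WrappedArray is a deprecated alias in Scala 2.13 and
// triggers a deprecation warning on every use.
val wrapped: mutable.WrappedArray[Int] = mutable.WrappedArray.make(Array(1, 2, 3))

// After: mutable.ArraySeq is the same runtime type under its current name,
// so the replacement is a mechanical rename.
val replaced: mutable.ArraySeq[Int] = mutable.ArraySeq.make(Array(1, 2, 3))
{code}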
[jira] [Updated] (SPARK-45380) Replace mutable.WrappedArray with mutable.ArraySeq
[ https://issues.apache.org/jira/browse/SPARK-45380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45380:
-----------------------------------
    Labels: pull-request-available  (was: )

> Replace mutable.WrappedArray with mutable.ArraySeq
> --------------------------------------------------
>
>                 Key: SPARK-45380
>                 URL: https://issues.apache.org/jira/browse/SPARK-45380
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect, Spark Core, SQL
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
> {code:java}
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> type WrappedArray[X] = ArraySeq[X]
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> val WrappedArray = ArraySeq {code}
[jira] [Updated] (SPARK-8489) Add regression tests for SPARK-8470
[ https://issues.apache.org/jira/browse/SPARK-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-8489:
----------------------------------
    Labels: pull-request-available  (was: )

> Add regression tests for SPARK-8470
> -----------------------------------
>
>                 Key: SPARK-8489
>                 URL: https://issues.apache.org/jira/browse/SPARK-8489
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL, Tests
>    Affects Versions: 1.4.0
>            Reporter: Andrew Or
>            Assignee: Andrew Or
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.4.1, 1.5.0
>
> See SPARK-8470 for more detail. Basically the Spark Hive code silently
> overwrites the context class loader populated in SparkSubmit, resulting in
> certain classes missing when we do reflection in `SQLContext#createDataFrame`.
> That issue is already resolved in https://github.com/apache/spark/pull/6891,
> but we should add a regression test for the specific manifestation of the bug
> in SPARK-8470.
[jira] [Created] (SPARK-45381) Incorrect COUNT bug handling in scalar subqueries
Andrey Gubichev created SPARK-45381:
---------------------------------------

             Summary: Incorrect COUNT bug handling in scalar subqueries
                 Key: SPARK-45381
                 URL: https://issues.apache.org/jira/browse/SPARK-45381
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.5.0
            Reporter: Andrey Gubichev

This query has incorrect results:

create temp view l (a, b) as values
  (1, 2.0), (1, 2.0), (2, 1.0), (2, 1.0), (3, 3.0), (null, null), (null, 5.0), (6, null);
create temp view r (c, d) as values
  (2, 3.0), (2, 3.0), (3, 2.0), (4, 1.0), (null, null), (null, 5.0), (6, null);

select (
  select sum(cnt)
  from (select count(*) cnt from r where l.a = r.c)
) from l;

It returns

-- !query output
1
1
2
2
NULL
NULL
NULL
NULL

NULLs in the output should be zeros.
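For context, the "COUNT bug" is the classic issue where a correlated scalar subquery over an aggregate must return 0, not NULL, for outer rows with no matching inner rows. A minimal spark-shell reproduction, shortened from the views above (assumed environment; same expected semantics):

{code:scala}
// When l.a has no match in r, the inner count(*) aggregates zero rows and
// must evaluate to 0, so sum(cnt) should be 0 rather than NULL.
spark.sql("create temp view l (a, b) as values (1, 2.0), (3, 3.0)")
spark.sql("create temp view r (c, d) as values (3, 2.0)")
spark.sql("""
  select (
    select sum(cnt)
    from (select count(*) cnt from r where l.a = r.c)
  ) from l
""").show()
// Correct output: 0 for a = 1 (no match) and 1 for a = 3; a NULL in the
// first row is the bug described here.
{code}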
[jira] [Updated] (SPARK-45381) Incorrect COUNT bug handling in scalar subqueries
[ https://issues.apache.org/jira/browse/SPARK-45381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrey Gubichev updated SPARK-45381:
------------------------------------
    Description:
This query has incorrect results:

create temp view l (a, b) as values
  (1, 2.0), (1, 2.0), (2, 1.0), (2, 1.0), (3, 3.0), (null, null), (null, 5.0), (6, null);
create temp view r (c, d) as values
  (2, 3.0), (2, 3.0), (3, 2.0), (4, 1.0), (null, null), (null, 5.0), (6, null);

select (
  select sum(cnt)
  from (select count ( * ) cnt from r where l.a = r.c)
) from l;

It returns

-- !query output
1
1
2
2
NULL
NULL
NULL
NULL

NULLs in the output should be zeros.

  was: the same description, with {{count(*)}} written without the surrounding spaces.

> Incorrect COUNT bug handling in scalar subqueries
> --------------------------------------------------
>
>                 Key: SPARK-45381
>                 URL: https://issues.apache.org/jira/browse/SPARK-45381
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.5.0
>            Reporter: Andrey Gubichev
>            Priority: Major
>
[jira] [Created] (SPARK-45382) Upgrade Netty to 4.1.99.Final
Dongjoon Hyun created SPARK-45382:
-------------------------------------

             Summary: Upgrade Netty to 4.1.99.Final
                 Key: SPARK-45382
                 URL: https://issues.apache.org/jira/browse/SPARK-45382
             Project: Spark
          Issue Type: Sub-task
          Components: Build
    Affects Versions: 4.0.0
            Reporter: Dongjoon Hyun
[jira] [Updated] (SPARK-45382) Upgrade Netty to 4.1.99.Final
[ https://issues.apache.org/jira/browse/SPARK-45382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45382:
-----------------------------------
    Labels: pull-request-available  (was: )

> Upgrade Netty to 4.1.99.Final
> -----------------------------
>
>                 Key: SPARK-45382
>                 URL: https://issues.apache.org/jira/browse/SPARK-45382
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Updated] (SPARK-45376) [CORE] Add netty-tcnative-boringssl-static dependency
[ https://issues.apache.org/jira/browse/SPARK-45376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45376:
-----------------------------------
    Labels: pull-request-available  (was: )

> [CORE] Add netty-tcnative-boringssl-static dependency
> -----------------------------------------------------
>
>                 Key: SPARK-45376
>                 URL: https://issues.apache.org/jira/browse/SPARK-45376
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core
>    Affects Versions: 4.0.0
>            Reporter: Hasnain Lakhani
>            Priority: Major
>              Labels: pull-request-available
>
> Add the boringssl dependency which is needed for SSL functionality to work,
> and provide the network common test helper to other test modules which need
> to test SSL functionality
[jira] [Assigned] (SPARK-45379) Allow the daily tests of branch-3.3 to use the new test group tags
[ https://issues.apache.org/jira/browse/SPARK-45379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-45379:
-------------------------------------
    Assignee: Yang Jie

> Allow the daily tests of branch-3.3 to use the new test group tags
> -------------------------------------------------------------------
>
>                 Key: SPARK-45379
>                 URL: https://issues.apache.org/jira/browse/SPARK-45379
>             Project: Spark
>          Issue Type: Improvement
>          Components: Project Infra
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
[jira] [Resolved] (SPARK-45379) Allow the daily tests of branch-3.3 to use the new test group tags
[ https://issues.apache.org/jira/browse/SPARK-45379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-45379.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 43177
[https://github.com/apache/spark/pull/43177]

> Allow the daily tests of branch-3.3 to use the new test group tags
> -------------------------------------------------------------------
>
>                 Key: SPARK-45379
>                 URL: https://issues.apache.org/jira/browse/SPARK-45379
>             Project: Spark
>          Issue Type: Improvement
>          Components: Project Infra
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
[jira] [Updated] (SPARK-44074) `Logging plan changes for execution` test failed
[ https://issues.apache.org/jira/browse/SPARK-44074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-44074:
----------------------------------
    Fix Version/s: 3.4.2
                   3.3.4

> `Logging plan changes for execution` test failed
> -------------------------------------------------
>
>                 Key: SPARK-44074
>                 URL: https://issues.apache.org/jira/browse/SPARK-44074
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL, Tests
>    Affects Versions: 3.4.2, 3.5.0, 3.3.4
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.2, 3.5.0, 3.3.4
>
> Run {{build/sbt clean "sql/test" -Dtest.exclude.tags=org.apache.spark.tags.ExtendedSQLTest,org.apache.spark.tags.SlowSQLTest}}:
> {code:java}
> [info] QueryExecutionSuite:
> [info] - dumping query execution info to a file (77 milliseconds)
> [info] - dumping query execution info to an existing file (49 milliseconds)
> [info] - dumping query execution info to non-existing folder (25 milliseconds)
> [info] - dumping query execution info by invalid path (4 milliseconds)
> [info] - dumping query execution info to a file - explainMode=formatted (28 milliseconds)
> [info] - limit number of fields by sql config (66 milliseconds)
> [info] - check maximum fields restriction (34 milliseconds)
> [info] - toString() exception/error handling (11 milliseconds)
> [info] - SPARK-28346: clone the query plan between different stages (6 milliseconds)
> [info] - Logging plan changes for execution *** FAILED *** (12 milliseconds)
> [info]   testAppender.loggingEvents.exists(((x$10: org.apache.logging.log4j.core.LogEvent) => x$10.getMessage().getFormattedMessage().contains(expectedMsg))) was false (QueryExecutionSuite.scala:232)
> {code}
> However, running {{build/sbt "sql/testOnly *QueryExecutionSuite"}} alone does
> not reproduce the failure; this needs investigation.
[jira] [Resolved] (SPARK-42205) Remove logging of Accumulables in Task/Stage start events in JsonProtocol
[ https://issues.apache.org/jira/browse/SPARK-42205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-42205.
--------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 39767
[https://github.com/apache/spark/pull/39767]

> Remove logging of Accumulables in Task/Stage start events in JsonProtocol
> --------------------------------------------------------------------------
>
>                 Key: SPARK-42205
>                 URL: https://issues.apache.org/jira/browse/SPARK-42205
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
> Spark's JsonProtocol event logs (used by the history server) are impacted by
> a race condition when tasks / stages finish very quickly:
>
> The SparkListenerTaskStart and SparkListenerStageSubmitted events contain
> mutable TaskInfo and StageInfo objects, which in turn contain Accumulables
> fields. When a task or stage is submitted, Accumulables is initially empty.
> When the task or stage finishes, this field is updated with values from the
> task.
>
> If a task or stage finishes before the start event has been logged by the
> event logging listener then the _start_ event will contain the Accumulable
> values from the task or stage _end_ event.
>
> This information isn't used by the History Server and contributes to wasteful
> bloat in event log sizes. In one real-world log, I found that ~10% of the
> uncompressed log size was due to these redundant Accumulable fields.
>
> I propose that we update JsonProtocol to skip the logging of this field for
> Start/Submitted events.
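A self-contained sketch of the proposed direction, with simplified stand-in types rather than the real JsonProtocol API: serialization takes a flag so start/submitted events write an empty accumulables list while end events keep the full one.

{code:scala}
// Simplified stand-ins for Spark's TaskInfo/AccumulableInfo; not the real classes.
case class AccumulableInfo(id: Long, name: String, value: String)
case class TaskInfo(taskId: Long, accumulables: Seq[AccumulableInfo])

def taskInfoToJson(info: TaskInfo, includeAccumulables: Boolean): String = {
  // Start events racily observe end-event accumulables, and the history
  // server never reads them from start events, so drop them there.
  val accums = if (includeAccumulables) info.accumulables else Seq.empty
  val accumsJson = accums
    .map(a => s"""{"id":${a.id},"name":"${a.name}","value":"${a.value}"}""")
    .mkString(",")
  s"""{"taskId":${info.taskId},"accumulables":[$accumsJson]}"""
}

val info = TaskInfo(1L, Seq(AccumulableInfo(7L, "bytesRead", "1024")))
taskInfoToJson(info, includeAccumulables = false) // task start: empty list
taskInfoToJson(info, includeAccumulables = true)  // task end: full list
{code}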
[jira] [Updated] (SPARK-44855) Small tweaks to attaching ExecuteGrpcResponseSender to ExecuteResponseObserver
[ https://issues.apache.org/jira/browse/SPARK-44855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-44855:
-----------------------------------
    Labels: pull-request-available  (was: )

> Small tweaks to attaching ExecuteGrpcResponseSender to ExecuteResponseObserver
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-44855
>                 URL: https://issues.apache.org/jira/browse/SPARK-44855
>             Project: Spark
>          Issue Type: Improvement
>          Components: Connect
>    Affects Versions: 4.0.0
>            Reporter: Juliusz Sompolski
>            Priority: Major
>              Labels: pull-request-available
>
> Small improvements can be made to the way a new ExecuteGrpcResponseSender is
> attached to the observer:
> * Since we now have addGrpcResponseSender in ExecuteHolder, it should be
> ExecuteHolder's responsibility to interrupt the old sender and to guarantee
> that there is only one at a time, rather than ExecuteResponseObserver's.
> * executeObserver is used as a lock for synchronization. An explicit lock
> object could be better.
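A minimal sketch of the second suggestion, synchronizing on a dedicated lock object instead of on the observer instance itself (all names hypothetical, not the Connect server code):

{code:scala}
class ExecuteHolderSketch {
  // Explicit monitor: the locking protocol is visible here and cannot
  // collide with other code that happens to synchronize on the observer.
  private val senderLock = new Object
  private var currentSender: Option[String] = None // stand-in for the sender type

  def addGrpcResponseSender(sender: String): Unit = senderLock.synchronized {
    // ExecuteHolder owns interrupting the previous sender, guaranteeing
    // at most one attached sender at a time.
    currentSender.foreach(old => println(s"interrupting sender $old"))
    currentSender = Some(sender)
  }
}
{code}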
[jira] [Updated] (SPARK-44762) Add more documentation and examples for using job tags for interrupt
[ https://issues.apache.org/jira/browse/SPARK-44762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-44762:
-----------------------------------
    Labels: pull-request-available  (was: )

> Add more documentation and examples for using job tags for interrupt
> ---------------------------------------------------------------------
>
>                 Key: SPARK-44762
>                 URL: https://issues.apache.org/jira/browse/SPARK-44762
>             Project: Spark
>          Issue Type: Improvement
>          Components: Connect
>    Affects Versions: 3.5.0
>            Reporter: Juliusz Sompolski
>            Priority: Major
>              Labels: pull-request-available
>
> Add documentation to {{spark.addTag}} with examples and an explanation
> similar to SparkContext.setJobGroup.
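For reference, a hedged usage sketch of the job-tag API on a Spark Connect session as of Spark 3.5; treat the documentation this ticket adds as authoritative for the exact names and semantics:

{code:scala}
// Tag every job started from this thread, then clean the tag up afterwards.
spark.addTag("nightly-etl")
try {
  spark.range(1000000L).selectExpr("sum(id)").collect()
} finally {
  spark.removeTag("nightly-etl")
}

// From another thread, interrupt everything still running under the tag:
spark.interruptTag("nightly-etl")
{code}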
[jira] [Commented] (SPARK-30665) Eliminate pypandoc dependency
[ https://issues.apache.org/jira/browse/SPARK-30665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17770575#comment-17770575 ]

Colin Dean commented on SPARK-30665:
------------------------------------

While researching a problem we're encountering while trying to install any of
PySpark 2.3-2.4 (upgrading past that is not an option presently), we're finding
that {{pypandoc}} changed its API in a recent release, thereby breaking
setup.py on these old PySpark releases, because those releases did not specify
an upper bound on the dependency.

What are the chances that the patch that removes the {{pypandoc}} requirement
could be accepted as a backport to a 2.3 or 2.4 release? I'm happy to put up
the PR to the branches if there's a process for nudging a 2.3.5 and 2.4.9 out
to make those minor releases installable again.

> Eliminate pypandoc dependency
> -----------------------------
>
>                 Key: SPARK-30665
>                 URL: https://issues.apache.org/jira/browse/SPARK-30665
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build, Documentation, PySpark
>    Affects Versions: 3.0.0
>            Reporter: Nicholas Chammas
>            Assignee: Nicholas Chammas
>            Priority: Minor
>             Fix For: 3.0.0
>
> PyPI now supports Markdown project descriptions, so we no longer need to
> convert the Spark README into ReStructuredText and thus no longer need
> pypandoc.
> Removing pypandoc has the added benefit of eliminating the failure mode
> described in [this PR|https://github.com/apache/spark/pull/18981].
[jira] [Created] (SPARK-45383) Missing case for RelationTimeTravel in CheckAnalysis
Ryan Johnson created SPARK-45383:
------------------------------------

             Summary: Missing case for RelationTimeTravel in CheckAnalysis
                 Key: SPARK-45383
                 URL: https://issues.apache.org/jira/browse/SPARK-45383
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 3.5.0
            Reporter: Ryan Johnson

{{CheckAnalysis.checkAnalysis0}} lacks a case for {{RelationTimeTravel}}, and
since the latter is (intentionally) an {{UnresolvedLeafNode}} rather than a
{{UnaryNode}}, the existing checks do not traverse it.

Result: attempting time travel over a non-existent table produces a Spark
internal error from the [default case|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L818],
rather than the expected {{AnalysisException}}:

{code:java}
[info] Cause: org.apache.spark.SparkException: [INTERNAL_ERROR] Found the unresolved operator: 'RelationTimeTravel 'UnresolvedRelation [not_exists], [], false, 0
[info] at org.apache.spark.SparkException$.internalError(SparkException.scala:77)
[info] at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54(CheckAnalysis.scala:753)
{code}

The solution should be simple enough:

{code:java}
case tt: RelationTimeTravel =>
  checkAnalysis0(tt.table)
{code}
[jira] [Updated] (SPARK-45383) Missing case for RelationTimeTravel in CheckAnalysis
[ https://issues.apache.org/jira/browse/SPARK-45383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Johnson updated SPARK-45383:
---------------------------------
    Description:
{{CheckAnalysis.checkAnalysis0}} lacks a case for {{RelationTimeTravel}}, and
since the latter is (intentionally) an {{UnresolvedLeafNode}} rather than a
{{UnaryNode}}, the existing checks do not traverse it.

Result: attempting time travel over a non-existent table produces a Spark
internal error from the [default case|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L818],
rather than the expected {{AnalysisException}}:

{code:java}
[info] Cause: org.apache.spark.SparkException: [INTERNAL_ERROR] Found the unresolved operator: 'RelationTimeTravel 'UnresolvedRelation [not_exists], [], false, 0
[info] at org.apache.spark.SparkException$.internalError(SparkException.scala:77)
[info] at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54(CheckAnalysis.scala:753)
{code}

The fix should be simple enough:

{code:java}
case tt: RelationTimeTravel =>
  checkAnalysis0(tt.table)
{code}

  was: the same description, with "Solution" in place of "Fix" in the last sentence.

> Missing case for RelationTimeTravel in CheckAnalysis
> -----------------------------------------------------
>
>                 Key: SPARK-45383
>                 URL: https://issues.apache.org/jira/browse/SPARK-45383
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.5.0
>            Reporter: Ryan Johnson
>            Priority: Major
>
[jira] [Resolved] (SPARK-45382) Upgrade Netty to 4.1.99.Final
[ https://issues.apache.org/jira/browse/SPARK-45382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-45382.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 43180
[https://github.com/apache/spark/pull/43180]

> Upgrade Netty to 4.1.99.Final
> -----------------------------
>
>                 Key: SPARK-45382
>                 URL: https://issues.apache.org/jira/browse/SPARK-45382
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
[jira] [Assigned] (SPARK-45382) Upgrade Netty to 4.1.99.Final
[ https://issues.apache.org/jira/browse/SPARK-45382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-45382:
-------------------------------------
    Assignee: Dongjoon Hyun

> Upgrade Netty to 4.1.99.Final
> -----------------------------
>
>                 Key: SPARK-45382
>                 URL: https://issues.apache.org/jira/browse/SPARK-45382
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Updated] (SPARK-45373) Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
[ https://issues.apache.org/jira/browse/SPARK-45373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45373:
-----------------------------------
    Labels: pull-request-available  (was: )

> Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-45373
>                 URL: https://issues.apache.org/jira/browse/SPARK-45373
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.5.1
>            Reporter: Asif
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.5.1
>
> In the rule PruneFileSourcePartitions, where the CatalogFileIndex gets
> converted to an InMemoryFileIndex, the HMS calls can get very expensive if:
> 1) the translated filter string pushed down to the HMS layer becomes empty,
> resulting in all partitions being fetched, and the same table is referenced
> multiple times in the query; or
> 2) the same table is referenced multiple times in the query with different
> partition filters.
> In such cases the current code results in multiple calls to the HMS layer.
> This can be avoided by grouping the table scans by CatalogFileIndex, passing
> a common minimum filter (filter1 || filter2), and obtaining a base
> PrunedInMemoryFileIndex that each individual table scan can then build on.
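A toy illustration of the grouping idea (simplified types, not the actual rule): call the metastore once per table with the OR of the per-scan filters, then prune the shared result locally for each scan.

{code:scala}
case class Scan(table: String, partitionFilter: String)

def planPartitionFetches(scans: Seq[Scan]): Unit = {
  scans.groupBy(_.table).foreach { case (table, group) =>
    // One HMS call per table with the common minimum filter (f1 || f2 || ...),
    // instead of one call per table reference in the query.
    val combined = group.map(s => s"(${s.partitionFilter})").distinct.mkString(" OR ")
    println(s"listPartitions($table, $combined)")
    // Each scan in `group` then prunes the shared partition list in memory.
  }
}

planPartitionFetches(Seq(Scan("t", "p = 1"), Scan("t", "p = 2"), Scan("u", "d = '2023'")))
{code}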
[jira] [Updated] (SPARK-44120) Support Python 3.12
[ https://issues.apache.org/jira/browse/SPARK-44120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-44120:
-----------------------------------
    Labels: pull-request-available  (was: )

> Support Python 3.12
> -------------------
>
>                 Key: SPARK-44120
>                 URL: https://issues.apache.org/jira/browse/SPARK-44120
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 4.0.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>
[jira] (SPARK-45373) Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
[ https://issues.apache.org/jira/browse/SPARK-45373 ]

Asif deleted comment on SPARK-45373:
------------------------------------

was (Author: ashahid7): Will be generating a PR for this.

> Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-45373
>                 URL: https://issues.apache.org/jira/browse/SPARK-45373
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.5.1
>            Reporter: Asif
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.5.1
>
> In the rule PruneFileSourcePartitions, where the CatalogFileIndex gets
> converted to an InMemoryFileIndex, the HMS calls can get very expensive if:
> 1) the translated filter string pushed down to the HMS layer becomes empty,
> resulting in all partitions being fetched, and the same table is referenced
> multiple times in the query; or
> 2) the same table is referenced multiple times in the query with different
> partition filters.
> In such cases the current code results in multiple calls to the HMS layer.
> This can be avoided by grouping the table scans by CatalogFileIndex, passing
> a common minimum filter (filter1 || filter2), and obtaining a base
> PrunedInMemoryFileIndex that each individual table scan can then build on.
> Opened the following PR for this ticket:
> [SPARK-45373-PR|https://github.com/apache/spark/pull/43183]
[jira] [Updated] (SPARK-45373) Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
[ https://issues.apache.org/jira/browse/SPARK-45373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Asif updated SPARK-45373:
-------------------------
    Description:
In the rule PruneFileSourcePartitions, where the CatalogFileIndex gets
converted to an InMemoryFileIndex, the HMS calls can get very expensive if:
1) the translated filter string pushed down to the HMS layer becomes empty,
resulting in all partitions being fetched, and the same table is referenced
multiple times in the query; or
2) the same table is referenced multiple times in the query with different
partition filters.
In such cases the current code results in multiple calls to the HMS layer.
This can be avoided by grouping the table scans by CatalogFileIndex, passing
a common minimum filter (filter1 || filter2), and obtaining a base
PrunedInMemoryFileIndex that each individual table scan can then build on.

Opened the following PR for this ticket:
[SPARK-45373-PR|https://github.com/apache/spark/pull/43183]

  was: the same description without the final PR link.

> Minimizing calls to HiveMetaStore layer for getting partitions, when tables are repeated
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-45373
>                 URL: https://issues.apache.org/jira/browse/SPARK-45373
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.5.1
>            Reporter: Asif
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.5.1
>
[jira] [Resolved] (SPARK-45380) Replace mutable.WrappedArray with mutable.ArraySeq
[ https://issues.apache.org/jira/browse/SPARK-45380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-45380.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 43178
[https://github.com/apache/spark/pull/43178]

> Replace mutable.WrappedArray with mutable.ArraySeq
> --------------------------------------------------
>
>                 Key: SPARK-45380
>                 URL: https://issues.apache.org/jira/browse/SPARK-45380
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect, Spark Core, SQL
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
> {code:java}
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> type WrappedArray[X] = ArraySeq[X]
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> val WrappedArray = ArraySeq {code}
[jira] [Assigned] (SPARK-45380) Replace mutable.WrappedArray with mutable.ArraySeq
[ https://issues.apache.org/jira/browse/SPARK-45380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-45380:
-------------------------------------
    Assignee: Yang Jie

> Replace mutable.WrappedArray with mutable.ArraySeq
> --------------------------------------------------
>
>                 Key: SPARK-45380
>                 URL: https://issues.apache.org/jira/browse/SPARK-45380
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect, Spark Core, SQL
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
> {code:java}
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> type WrappedArray[X] = ArraySeq[X]
> @deprecated("Use ArraySeq instead of WrappedArray; it can represent both, boxed and unboxed arrays", "2.13.0")
> val WrappedArray = ArraySeq {code}
[jira] [Assigned] (SPARK-44120) Support Python 3.12
[ https://issues.apache.org/jira/browse/SPARK-44120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-44120:
-------------------------------------
    Assignee: Dongjoon Hyun

> Support Python 3.12
> -------------------
>
>                 Key: SPARK-44120
>                 URL: https://issues.apache.org/jira/browse/SPARK-44120
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 4.0.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Resolved] (SPARK-44913) DS V2 supports push down V2 UDF that has magic method
[ https://issues.apache.org/jira/browse/SPARK-44913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun resolved SPARK-44913.
------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 42612
[https://github.com/apache/spark/pull/42612]

> DS V2 supports push down V2 UDF that has magic method
> ------------------------------------------------------
>
>                 Key: SPARK-44913
>                 URL: https://issues.apache.org/jira/browse/SPARK-44913
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.4.1
>            Reporter: Xianyang Liu
>            Assignee: Xianyang Liu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
> Right now we only support pushing down a V2 UDF that does not have a magic
> method, because such a UDF is analyzed into an `ApplyFunctionExpression`,
> which can be translated and pushed down. A V2 UDF that has a magic method,
> however, is analyzed into a `StaticInvoke` or `Invoke`, which cannot be
> translated into a V2 expression and therefore cannot be pushed down to the
> data source. Since the magic method is the recommended approach, this PR adds
> support for pushing down V2 UDFs that have a magic method.
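For context, a minimal V2 UDF with a magic method, based on the DataSource V2 function API (org.apache.spark.sql.connector.catalog.functions.ScalarFunction); treat it as an illustrative sketch rather than code from this PR. Calls to such a function are compiled into StaticInvoke/Invoke on the magic method, which is the expression shape this change teaches the pushdown to translate.

{code:scala}
import org.apache.spark.sql.connector.catalog.functions.ScalarFunction
import org.apache.spark.sql.types.{DataType, LongType}

class LongAdd extends ScalarFunction[Long] {
  override def inputTypes(): Array[DataType] = Array(LongType, LongType)
  override def resultType(): DataType = LongType
  override def name(): String = "long_add"

  // The "magic method": resolved by name and argument types, letting the
  // analyzer generate a direct Invoke instead of boxing rows through
  // produceResult(InternalRow).
  def invoke(x: Long, y: Long): Long = x + y
}
{code}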
[jira] [Assigned] (SPARK-44913) DS V2 supports push down V2 UDF that has magic method
[ https://issues.apache.org/jira/browse/SPARK-44913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun reassigned SPARK-44913:
--------------------------------
    Assignee: Xianyang Liu

> DS V2 supports push down V2 UDF that has magic method
> ------------------------------------------------------
>
>                 Key: SPARK-44913
>                 URL: https://issues.apache.org/jira/browse/SPARK-44913
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.4.1
>            Reporter: Xianyang Liu
>            Assignee: Xianyang Liu
>            Priority: Major
>              Labels: pull-request-available
>
> Right now we only support pushing down a V2 UDF that does not have a magic
> method, because such a UDF is analyzed into an `ApplyFunctionExpression`,
> which can be translated and pushed down. A V2 UDF that has a magic method,
> however, is analyzed into a `StaticInvoke` or `Invoke`, which cannot be
> translated into a V2 expression and therefore cannot be pushed down to the
> data source. Since the magic method is the recommended approach, this PR adds
> support for pushing down V2 UDFs that have a magic method.
[jira] [Commented] (SPARK-36321) Do not fail application in kubernetes if name is too long
[ https://issues.apache.org/jira/browse/SPARK-36321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17770619#comment-17770619 ]

Wing Yew Poon commented on SPARK-36321:
---------------------------------------

[~dongjoon], is this fixed by SPARK-39614?

> Do not fail application in kubernetes if name is too long
> ----------------------------------------------------------
>
>                 Key: SPARK-36321
>                 URL: https://issues.apache.org/jira/browse/SPARK-36321
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.3.0
>            Reporter: XiDuo You
>            Priority: Major
>              Labels: pull-request-available
>
> If we have a long Spark app name and start with the k8s master, we get the
> following exception:
> {code:java}
> java.lang.IllegalArgumentException: 'a-89fe2f7ae71c3570' in spark.kubernetes.executor.podNamePrefix is invalid. must conform https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names and the value length <= 47
>     at org.apache.spark.internal.config.TypedConfigBuilder.$anonfun$checkValue$1(ConfigBuilder.scala:108)
>     at org.apache.spark.internal.config.TypedConfigBuilder.$anonfun$transform$1(ConfigBuilder.scala:101)
>     at scala.Option.map(Option.scala:230)
>     at org.apache.spark.internal.config.OptionalConfigEntry.readFrom(ConfigEntry.scala:239)
>     at org.apache.spark.internal.config.OptionalConfigEntry.readFrom(ConfigEntry.scala:214)
>     at org.apache.spark.SparkConf.get(SparkConf.scala:261)
>     at org.apache.spark.deploy.k8s.KubernetesConf.get(KubernetesConf.scala:67)
>     at org.apache.spark.deploy.k8s.KubernetesExecutorConf.<init>(KubernetesConf.scala:147)
>     at org.apache.spark.deploy.k8s.KubernetesConf$.createExecutorConf(KubernetesConf.scala:231)
>     at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$requestNewExecutors$2(ExecutorPodsAllocator.scala:367)
> {code}
> Using the app name in the executor pod name is internal Spark behavior, and
> it should not cause application failure.
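If this turns out not to be fixed on the affected versions, one possible (untested) workaround suggested by the error message itself is to set the executor pod name prefix explicitly, rather than letting it be derived from the long app name; the config key and the 47-character bound are both quoted in the exception above:

{code:scala}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("a-deliberately-very-long-application-name-that-overflows-pod-names")
  // Keep the prefix short and DNS-label safe so executor pod names stay valid.
  .set("spark.kubernetes.executor.podNamePrefix", "myapp")
{code}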
[jira] [Assigned] (SPARK-45330) Upgrade ammonite to 2.5.11
[ https://issues.apache.org/jira/browse/SPARK-45330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-45330:
-------------------------------------
    Assignee: Yang Jie

> Upgrade ammonite to 2.5.11
> --------------------------
>
>                 Key: SPARK-45330
>                 URL: https://issues.apache.org/jira/browse/SPARK-45330
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Resolved] (SPARK-45330) Upgrade ammonite to 2.5.11
[ https://issues.apache.org/jira/browse/SPARK-45330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-45330.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 43058
[https://github.com/apache/spark/pull/43058]

> Upgrade ammonite to 2.5.11
> --------------------------
>
>                 Key: SPARK-45330
>                 URL: https://issues.apache.org/jira/browse/SPARK-45330
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
[jira] [Created] (SPARK-45384) Replace `TraversableOnce` with `IterableOnce`
Yang Jie created SPARK-45384:
---------------------------------

             Summary: Replace `TraversableOnce` with `IterableOnce`
                 Key: SPARK-45384
                 URL: https://issues.apache.org/jira/browse/SPARK-45384
             Project: Spark
          Issue Type: Sub-task
          Components: DStreams, Spark Core, SQL
    Affects Versions: 4.0.0
            Reporter: Yang Jie

{code:java}
@deprecated("Use IterableOnce instead of TraversableOnce", "2.13.0")
type TraversableOnce[+A] = scala.collection.IterableOnce[A]

type IterableOnce[+A] = scala.collection.IterableOnce[A] {code}
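As with SPARK-45380, a minimal before/after sketch of the rename on Scala 2.13 (hypothetical signatures, not Spark code):

{code:scala}
// Before: TraversableOnce is a deprecated alias and warns on 2.13.
def toListOld[A](xs: TraversableOnce[A]): List[A] = xs.iterator.toList

// After: IterableOnce is the current name; behavior is identical.
def toListNew[A](xs: IterableOnce[A]): List[A] = xs.iterator.toList
{code}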
[jira] [Updated] (SPARK-45331) Upgrade Scala to 2.13.12
[ https://issues.apache.org/jira/browse/SPARK-45331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45331:
-----------------------------------
    Labels: pull-request-available  (was: )

> Upgrade Scala to 2.13.12
> ------------------------
>
>                 Key: SPARK-45331
>                 URL: https://issues.apache.org/jira/browse/SPARK-45331
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
> Waits on SPARK-45330.
[jira] [Updated] (SPARK-45384) Replace `TraversableOnce` with `IterableOnce`
[ https://issues.apache.org/jira/browse/SPARK-45384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-45384:
-----------------------------------
    Labels: pull-request-available  (was: )

> Replace `TraversableOnce` with `IterableOnce`
> ----------------------------------------------
>
>                 Key: SPARK-45384
>                 URL: https://issues.apache.org/jira/browse/SPARK-45384
>             Project: Spark
>          Issue Type: Sub-task
>          Components: DStreams, Spark Core, SQL
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Major
>              Labels: pull-request-available
>
> {code:java}
> @deprecated("Use IterableOnce instead of TraversableOnce", "2.13.0")
> type TraversableOnce[+A] = scala.collection.IterableOnce[A]
> type IterableOnce[+A] = scala.collection.IterableOnce[A] {code}