[spark] branch master updated: [SPARK-37282][TESTS][FOLLOWUP] Extract `Utils.isMacOnAppleSilicon` for reuse in UTs

2021-11-19 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new c6c72a4  [SPARK-37282][TESTS][FOLLOWUP] Extract 
`Utils.isMacOnAppleSilicon` for reuse in UTs
c6c72a4 is described below

commit c6c72a453b9958c32a16be3f199e256dc7bc17ae
Author: yangjie01 
AuthorDate: Fri Nov 19 18:31:00 2021 -0800

[SPARK-37282][TESTS][FOLLOWUP] Extract `Utils.isMacOnAppleSilicon` for 
reuse in UTs

### What changes were proposed in this pull request?
SPARK-37282 used code similar to `SystemUtils.IS_OS_MAC_OSX && SystemUtils.OS_ARCH.equals("aarch64")` to determine whether the current test environment is a Mac on Apple Silicon. This PR extracts that duplicated code into `Utils.isMacOnAppleSilicon` and reuses it in UTs.
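
A minimal sketch (not the exact Spark sources) of the extracted helper and how a Scala suite reuses it; the Java kvstore suites keep their inline `SystemUtils` check, presumably because that module sits below `spark-core` and cannot use `Utils`:

```
import org.apache.commons.lang3.SystemUtils

object UtilsSketch {
  // One shared definition replacing the per-suite
  // `IS_OS_MAC_OSX && OS_ARCH == "aarch64"` checks.
  val isMacOnAppleSilicon: Boolean =
    SystemUtils.IS_OS_MAC_OSX && SystemUtils.OS_ARCH.equals("aarch64")
}

// In a LevelDB-backed Scala test, the case is then skipped on Apple Silicon:
// assume(!UtilsSketch.isMacOnAppleSilicon)
```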

### Why are the changes needed?
Remove duplicate code.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- Pass the Jenkins or GitHub Action
- Manual test: the changed UTs pass on Apple Silicon

Closes #34648 from LuciferYang/SPARK-37282-FOLLOWUP.

Authored-by: yangjie01 
Signed-off-by: Dongjoon Hyun 
---
 .../java/org/apache/spark/util/kvstore/LevelDBIteratorSuite.java | 2 +-
 .../src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java| 2 +-
 core/src/main/scala/org/apache/spark/util/Utils.scala| 5 +
 .../org/apache/spark/deploy/history/FsHistoryProviderSuite.scala | 2 +-
 .../scala/org/apache/spark/deploy/history/HistoryServerSuite.scala   | 4 ++--
 .../src/test/scala/org/apache/spark/status/AppStatusStoreSuite.scala | 2 +-
 .../org/apache/spark/sql/streaming/StreamingSessionWindowSuite.scala | 2 +-
 7 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBIteratorSuite.java b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBIteratorSuite.java
index ea814cb..ceab771 100644
--- a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBIteratorSuite.java
+++ b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBIteratorSuite.java
@@ -41,7 +41,7 @@ public class LevelDBIteratorSuite extends DBIteratorSuite {
 
   @Override
   protected KVStore createStore() throws Exception {
-    assumeFalse(SystemUtils.IS_OS_MAC_OSX && System.getProperty("os.arch").equals("aarch64"));
+    assumeFalse(SystemUtils.IS_OS_MAC_OSX && SystemUtils.OS_ARCH.equals("aarch64"));
     dbpath = File.createTempFile("test.", ".ldb");
     dbpath.delete();
     db = new LevelDB(dbpath);
diff --git a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
index 1134ec2..ef92a6c 100644
--- a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
+++ b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
@@ -52,7 +52,7 @@ public class LevelDBSuite {
 
   @Before
   public void setup() throws Exception {
-    assumeFalse(SystemUtils.IS_OS_MAC_OSX && System.getProperty("os.arch").equals("aarch64"));
+    assumeFalse(SystemUtils.IS_OS_MAC_OSX && SystemUtils.OS_ARCH.equals("aarch64"));
     dbpath = File.createTempFile("test.", ".ldb");
     dbpath.delete();
     db = new LevelDB(dbpath);
diff --git a/core/src/main/scala/org/apache/spark/util/Utils.scala b/core/src/main/scala/org/apache/spark/util/Utils.scala
index 0029bbd..27496d6 100644
--- a/core/src/main/scala/org/apache/spark/util/Utils.scala
+++ b/core/src/main/scala/org/apache/spark/util/Utils.scala
@@ -1962,6 +1962,11 @@ private[spark] object Utils extends Logging {
   val isMac = SystemUtils.IS_OS_MAC_OSX
 
   /**
+   * Whether the underlying operating system is Mac OS X and processor is Apple Silicon.
+   */
+  val isMacOnAppleSilicon = SystemUtils.IS_OS_MAC_OSX && SystemUtils.OS_ARCH.equals("aarch64")
+
+  /**
    * Pattern for matching a Windows drive, which contains only a single alphabet character.
    */
   val windowsDrive = "([a-zA-Z])".r
diff --git a/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala b/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
index c3d524e..b05b9de 100644
--- a/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
+++ b/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
@@ -1652,7 +1652,7 @@ class FsHistoryProviderSuite extends SparkFunSuite with Matchers with Logging {
 
     if (!inMemory) {
       // LevelDB doesn't support Apple Silicon yet
-      assume(!(Utils.isMac && System.getProperty("os.arch").equals("aarch64")))
+      assume(!Utils.isMacOnAppleSilicon)
       conf.set(LOCAL_STORE_DIR, 

[spark] branch master updated: [SPARK-37379][SQL] Add tree pattern pruning to CTESubstitution rule

2021-11-19 Thread joshrosen
This is an automated email from the ASF dual-hosted git repository.

joshrosen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b4eb1f  [SPARK-37379][SQL] Add tree pattern pruning to 
CTESubstitution rule
3b4eb1f is described below

commit 3b4eb1fbd8a351c29a12bfd94ec4cdbee803f416
Author: Josh Rosen 
AuthorDate: Fri Nov 19 15:24:52 2021 -0800

[SPARK-37379][SQL] Add tree pattern pruning to CTESubstitution rule

### What changes were proposed in this pull request?

This PR adds tree pattern pruning to the `CTESubstitution` analyzer rule. 
The rule will now exit early if the tree does not contain an `UnresolvedWith` 
node.
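
Roughly, tree pattern pruning works by having every node declare the patterns it introduces and caching the union over its subtree, so a rule can test for a pattern without walking the plan. A hedged, self-contained sketch of that idea (simplified types, not Catalyst's actual classes):

```
object TreePatternSketch {
  sealed trait TreePattern
  case object UNRESOLVED_WITH extends TreePattern

  abstract class Node {
    def children: Seq[Node]
    // Patterns introduced by this node itself (empty for most nodes).
    protected def nodePatterns: Seq[TreePattern] = Nil
    // Catalyst caches this per node (as a BitSet); a Set is enough for the sketch.
    lazy val treePatterns: Set[TreePattern] =
      nodePatterns.toSet ++ children.flatMap(_.treePatterns)
    def containsPattern(p: TreePattern): Boolean = treePatterns.contains(p)
  }

  case class Leaf() extends Node { def children: Seq[Node] = Nil }
  case class UnresolvedWith(child: Node) extends Node {
    def children: Seq[Node] = Seq(child)
    override protected def nodePatterns: Seq[TreePattern] = Seq(UNRESOLVED_WITH)
  }

  // A rule can now bail out without traversing the plan at all:
  def applyRule(plan: Node): Node = {
    if (!plan.containsPattern(UNRESOLVED_WITH)) return plan
    plan // ... the expensive CTE rewrite would run only here
  }
}
```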

### Why are the changes needed?

Analysis is eagerly performed after every DataFrame transformation. If a 
user's program performs a long chain of _n_ transformations to construct a 
large query plan then this can lead to _O(n^2)_ performance costs from 
`CTESubstitution` because it is applied _n_ times and each application 
traverses the entire logical plan tree (which contains _O(n)_ nodes). In the 
case of chained `withColumn` calls (leading to stacked `Project` nodes) it's 
possible to see _O(n^3)_ slowdowns where _n_  [...]

Very large DataFrame plans typically do not use CTEs because there is no DataFrame syntax for them (although they might appear in the plan if `sql(someQueryWithCTE)` is used). As a result, this PR's proposed optimization to skip `CTESubstitution` can greatly reduce the analysis cost for such plans.
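
For illustration (a hedged example; `spark` is an assumed SparkSession), a CTE only enters a plan through SQL text, so a plan built purely with the DataFrame API never carries an `UnresolvedWith` node and the rule now returns immediately for it:

```
// The SQL query is parsed into a plan containing an UnresolvedWith node,
// so CTESubstitution still has real work to do here.
val withCte = spark.sql("WITH t AS (SELECT id FROM range(10)) SELECT id FROM t")

// The equivalent DataFrame chain never produces UnresolvedWith, so after this
// change CTESubstitution exits immediately on each of its analysis passes.
val noCte = spark.range(10).selectExpr("id")
```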

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

I believe that optimizer correctness is covered by existing tests.

As a toy benchmark, I ran

```
import org.apache.spark.sql.DataFrame
org.apache.spark.sql.catalyst.rules.RuleExecutor.resetMetrics()
(1 to 600).foldLeft(spark.range(100).toDF)((df: DataFrame, i: Int) => df.withColumn(s"col$i", $"id" % i))
println(org.apache.spark.sql.catalyst.rules.RuleExecutor.dumpTimeSpent())
```

on my laptop before and after this PR's changes (simulating a _O(n^3)_ 
case). Skipping `CTESubstitution` cut the running time from ~28.4 seconds to 
~15.5 seconds.

The bulk of the remaining time comes from `DeduplicateRelations`, for which 
I plan to submit a separate optimization PR.

Closes #34658 from JoshRosen/CTESubstitution-tree-pattern-pruning.

Authored-by: Josh Rosen 
Signed-off-by: Josh Rosen 
---
 .../scala/org/apache/spark/sql/catalyst/analysis/CTESubstitution.scala | 3 +++
 .../spark/sql/catalyst/plans/logical/basicLogicalOperators.scala   | 2 ++
 .../main/scala/org/apache/spark/sql/catalyst/trees/TreePatterns.scala  | 1 +
 3 files changed, 6 insertions(+)

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CTESubstitution.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CTESubstitution.scala
index ec3d957..2e2d415 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CTESubstitution.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CTESubstitution.scala
@@ -48,6 +48,9 @@ import org.apache.spark.sql.internal.SQLConf.{LEGACY_CTE_PRECEDENCE_POLICY, Lega
  */
 object CTESubstitution extends Rule[LogicalPlan] {
   def apply(plan: LogicalPlan): LogicalPlan = {
+    if (!plan.containsPattern(UNRESOLVED_WITH)) {
+      return plan
+    }
     val isCommand = plan.find {
       case _: Command | _: ParsedStatement | _: InsertIntoDir => true
       case _ => false
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
index f1b954d..e8a632d 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
@@ -626,6 +626,8 @@ object View {
 case class UnresolvedWith(
     child: LogicalPlan,
     cteRelations: Seq[(String, SubqueryAlias)]) extends UnaryNode {
+  final override val nodePatterns: Seq[TreePattern] = Seq(UNRESOLVED_WITH)
+
   override def output: Seq[Attribute] = child.output
 
   override def simpleString(maxFields: Int): String = {
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreePatterns.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreePatterns.scala
index 6c1b64d..aad90ff 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreePatterns.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreePatterns.scala
@@ -111,6 +111,7 @@ object TreePattern extends Enumeration 

[spark] branch branch-3.2 updated: [SPARK-36900][SPARK-36464][CORE][TEST] Refactor `: size returns correct positive number even with over 2GB data` to pass with Java 8, 11 and 17

2021-11-19 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 31faa59  [SPARK-36900][SPARK-36464][CORE][TEST] Refactor `: size 
returns correct positive number even with over 2GB data` to pass with Java 8, 
11 and 17
31faa59 is described below

commit 31faa597edec1b92e78087e55815c85fdffae6dc
Author: yangjie01 
AuthorDate: Sat Oct 16 09:10:06 2021 -0500

[SPARK-36900][SPARK-36464][CORE][TEST] Refactor `: size returns correct 
positive number even with over 2GB data` to pass with Java 8, 11 and 17

### What changes were proposed in this pull request?
Refactor `SPARK-36464: size returns correct positive number even with over 2GB data` in `ChunkedByteBufferOutputStreamSuite` to reduce the total memory used by this test case, so that it passes with Java 8, Java 11 and Java 17 using `-Xmx4g`.
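
The saving comes from replacing the 1 GiB source array with a 4 MiB buffer written 513 times: the stream still accumulates just over 2 GiB of chunks, but roughly 1 GiB less heap is needed on the input side. A quick, hedged check of the arithmetic behind the constants in the diff below:

```
// 513 writes of a 4 MiB buffer total 2,151,677,952 bytes, just past the
// 2 GiB (2,147,483,648 bytes) boundary that the overflow assertion exercises,
// while only a single 4 MiB source array has to be resident at any time.
val data4M = 1024 * 1024 * 4
val writeTimes = 513
val totalBytes = data4M.toLong * writeTimes
assert(totalBytes > 2L * 1024 * 1024 * 1024)
```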

### Why are the changes needed?
`SPARK-36464: size returns correct positive number even with over 2GB data` passes with Java 8 but OOMs with Java 11 and Java 17.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

- Pass the Jenkins or GitHub Action
- Manual test
```
mvn clean install -pl core -am -Dtest=none -DwildcardSuites=org.apache.spark.util.io.ChunkedByteBufferOutputStreamSuite
```
with Java 8, Java 11 and Java 17, all tests passed.

Closes #34284 from LuciferYang/SPARK-36900.

Authored-by: yangjie01 
Signed-off-by: Sean Owen 
(cherry picked from commit cf436233072b75e083a4455dc53b22edba0b3957)
Signed-off-by: Dongjoon Hyun 
---
 .../spark/util/io/ChunkedByteBufferOutputStreamSuite.scala| 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/core/src/test/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStreamSuite.scala b/core/src/test/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStreamSuite.scala
index 29443e2..0a61488 100644
--- a/core/src/test/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStreamSuite.scala
+++ b/core/src/test/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStreamSuite.scala
@@ -121,12 +121,13 @@ class ChunkedByteBufferOutputStreamSuite extends SparkFunSuite {
   }
 
   test("SPARK-36464: size returns correct positive number even with over 2GB data") {
-    val ref = new Array[Byte](1024 * 1024 * 1024)
-    val o = new ChunkedByteBufferOutputStream(1024 * 1024, ByteBuffer.allocate)
-    o.write(ref)
-    o.write(ref)
+    val data4M = 1024 * 1024 * 4
+    val writeTimes = 513
+    val ref = new Array[Byte](data4M)
+    val o = new ChunkedByteBufferOutputStream(data4M, ByteBuffer.allocate)
+    (0 until writeTimes).foreach(_ => o.write(ref))
     o.close()
     assert(o.size > 0L) // make sure it is not overflowing
-    assert(o.size == ref.length.toLong * 2)
+    assert(o.size == ref.length.toLong * writeTimes)
   }
 }

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-36997][PYTHON][TESTS] Run mypy tests against ml, sql, streaming and core examples

2021-11-19 Thread zero323
This is an automated email from the ASF dual-hosted git repository.

zero323 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 53f9334  [SPARK-36997][PYTHON][TESTS] Run mypy tests against ml, sql, 
streaming and core examples
53f9334 is described below

commit 53f9334ef3e0f1f2b9669bedc7711c31e2ece120
Author: zero323 
AuthorDate: Fri Nov 19 22:43:18 2021 +0100

[SPARK-36997][PYTHON][TESTS] Run mypy tests against ml, sql, streaming and 
core examples

### What changes were proposed in this pull request?

This PR:

- Adds `mypy_examples_test` and `mypy_annotation_test` and refactors 
`mypy_test` in `dev/lint-python` to enable testing PySpark examples with mypy.
- Adjusts examples for `ml`, `sql`, `streaming` and core to address 
detected type checking issues.

### Why are the changes needed?

The goal of this PR is to improve test coverage of type hints.

### Does this PR introduce _any_ user-facing change?

In general, no.

The only changes directly visible to the end user are small adjustments to the example scripts.

### How was this patch tested?

Existing tests with additions listed above.

Closes #34273 from zero323/SPARK-36997.

Authored-by: zero323 
Signed-off-by: zero323 
---
 dev/lint-python| 35 --
 examples/src/main/python/{sort.py => __init__.py}  | 27 -
 examples/src/main/python/als.py| 20 ++---
 examples/src/main/python/avro_inputformat.py   |  4 ++-
 .../src/main/python/{sort.py => ml/__init__,py}| 27 -
 .../src/main/python/ml/chi_square_test_example.py  |  5 
 examples/src/main/python/ml/correlation_example.py |  8 +
 .../src/main/python/{sort.py => mllib/__init__.py} | 27 -
 examples/src/main/python/parquet_inputformat.py|  4 ++-
 examples/src/main/python/sort.py   |  4 ++-
 .../src/main/python/{sort.py => sql/__init__.py}   | 27 -
 .../python/{sort.py => sql/streaming/__init__,py}  | 27 -
 .../structured_network_wordcount_windowed.py   |  2 +-
 .../main/python/{sort.py => streaming/__init__.py} | 27 -
 .../python/streaming/network_wordjoinsentiments.py | 15 --
 15 files changed, 71 insertions(+), 188 deletions(-)

diff --git a/dev/lint-python b/dev/lint-python
index 851edd6..9b7a139 100755
--- a/dev/lint-python
+++ b/dev/lint-python
@@ -182,23 +182,40 @@ function mypy_data_test {
     fi
 }

+function mypy_examples_test {
+    local MYPY_REPORT=
+    local MYPY_STATUS=

-function mypy_test {
-    if ! hash "$MYPY_BUILD" 2> /dev/null; then
-        echo "The $MYPY_BUILD command was not found. Skipping for now."
-        return
+    echo "starting mypy examples test..."
+
+    MYPY_REPORT=$( (MYPYPATH=python $MYPY_BUILD \
+      --allow-untyped-defs \
+      --config-file python/mypy.ini \
+      --exclude "mllib/*" \
+      examples/src/main/python/) 2>&1)
+
+    MYPY_STATUS=$?
+
+    if [ "$MYPY_STATUS" -ne 0 ]; then
+        echo "examples failed mypy checks:"
+        echo "$MYPY_REPORT"
+        echo "$MYPY_STATUS"
+        exit "$MYPY_STATUS"
+    else
+        echo "examples passed mypy checks."
+        echo
     fi
+}

-    _MYPY_VERSION=($($MYPY_BUILD --version))
-    MYPY_VERSION="${_MYPY_VERSION[1]}"
-    EXPECTED_MYPY="$(satisfies_min_version $MYPY_VERSION $MINIMUM_MYPY)"

-    if [[ "$EXPECTED_MYPY" == "False" ]]; then
-        echo "The minimum mypy version needs to be $MINIMUM_MYPY. Your current version is $MYPY_VERSION. Skipping for now."
+function mypy_test {
+    if ! hash "$MYPY_BUILD" 2> /dev/null; then
+        echo "The $MYPY_BUILD command was not found. Skipping for now."
         return
     fi

     mypy_annotation_test
+    mypy_examples_test
     mypy_data_test
 }
 
diff --git a/examples/src/main/python/sort.py b/examples/src/main/python/__init__.py
old mode 100755
new mode 100644
similarity index 51%
copy from examples/src/main/python/sort.py
copy to examples/src/main/python/__init__.py
index 9efb00a..cce3aca
--- a/examples/src/main/python/sort.py
+++ b/examples/src/main/python/__init__.py
@@ -14,30 +14,3 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-
-import sys
-
-from pyspark.sql import SparkSession
-
-
-if __name__ == "__main__":
-    if len(sys.argv) != 2:
-        print("Usage: sort <file>", file=sys.stderr)
-        sys.exit(-1)
-
-    spark = SparkSession\
-        .builder\
-        .appName("PythonSort")\
-        .getOrCreate()
-
-    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
-    sortedCount = lines.flatMap(lambda x: x.split(' ')) \
-        .map(lambda x: (int(x), 1)) \
-        .sortByKey()
-    # This is just a demo 

[spark] branch master updated (5815dddb -> 9e408f6)

2021-11-19 Thread viirya
This is an automated email from the ASF dual-hosted git repository.

viirya pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5815dddb [SPARK-37385][SQL][TESTS] Add tests for TimestampNTZ and 
TimestampLTZ for Parquet data source
 add 9e408f6  [SPARK-37224][SS][FOLLOWUP] Add benchmark on basic state 
store operations

No new revisions were added by this update.

Summary of changes:
 .../StateStoreBasicOperationsBenchmark-results.txt | 183 ++
 .../StateStoreBasicOperationsBenchmark.scala   | 370 +
 2 files changed, 553 insertions(+)
 create mode 100644 
sql/core/benchmarks/StateStoreBasicOperationsBenchmark-results.txt
 create mode 100644 
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/StateStoreBasicOperationsBenchmark.scala

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (4f20898 -> 5815dddb)

2021-11-19 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4f20898  [SPARK-35672][FOLLOWUP][TESTS] Add more exclusion rules to 
MimaExcludes.scala for Scala 2.13
 add 5815dddb [SPARK-37385][SQL][TESTS] Add tests for TimestampNTZ and 
TimestampLTZ for Parquet data source

No new revisions were added by this update.

Summary of changes:
 .../datasources/parquet/ParquetIOSuite.scala   | 58 +-
 .../datasources/parquet/ParquetQuerySuite.scala| 42 ++--
 2 files changed, 83 insertions(+), 17 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-35672][FOLLOWUP][TESTS] Add more exclusion rules to MimaExcludes.scala for Scala 2.13

2021-11-19 Thread sarutak
This is an automated email from the ASF dual-hosted git repository.

sarutak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 4f20898  [SPARK-35672][FOLLOWUP][TESTS] Add more exclusion rules to 
MimaExcludes.scala for Scala 2.13
4f20898 is described below

commit 4f2089899dd7f21ba41c9ccfc0453a93afa1e7eb
Author: Kousuke Saruta 
AuthorDate: Fri Nov 19 20:33:23 2021 +0900

[SPARK-35672][FOLLOWUP][TESTS] Add more exclusion rules to 
MimaExcludes.scala for Scala 2.13

### What changes were proposed in this pull request?

This PR adds more MiMa exclusion rules for Scala 2.13. #34649 partially resolved the compatibility issue, but 3 additional compatibility problems are raised:

```
$ build/sbt clean
$ dev/change-scala-version.sh 2.13
$ build/sbt -Pscala-2.13 clean
$ dev/mima

...
[error] spark-core: Failed binary compatibility check against org.apache.spark:spark-core_2.13:3.2.0! Found 3 potential problems (filtered 910)
[error]  * synthetic method copy$default$8()scala.collection.mutable.ListBuffer in class org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments has a different result type in current version, where it is scala.Option rather than scala.collection.mutable.ListBuffer
[error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.copy$default$8")
[error]  * synthetic method copy$default$9()scala.Option in class org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments has a different result type in current version, where it is Int rather than scala.Option
[error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.copy$default$9")
[error]  * the type hierarchy of object org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments is different in current version. Missing types {scala.runtime.AbstractFunction10}
[error]    filter with: ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend$Arguments$")
...
```
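
All three problems stem from changing the shape of the `Arguments` case class; a hedged, self-contained illustration of why such a change breaks binary compatibility (hypothetical two-field case class, not the real `CoarseGrainedExecutorBackend.Arguments`):

```
// For each constructor parameter the Scala 2 compiler synthesizes a
// copy$default$N method (supplying that field's current value as the default
// argument of copy), and the companion object of an N-parameter case class
// extends scala.runtime.AbstractFunctionN. Changing a parameter's type or the
// arity changes those synthetic signatures, which MiMa reports as binary
// incompatibilities against the previously released artifact.
case class Arguments(host: String, port: Int)
// If a later version becomes:
//   case class Arguments(host: String, cores: Option[Int], port: Int)
// then copy$default$2 changes its result type (Int -> Option[Int])
//   -> IncompatibleResultTypeProblem
// and the companion moves from AbstractFunction2 to AbstractFunction3
//   -> MissingTypesProblem
```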

### Why are the changes needed?

To keep the build stable.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Confirmed MiMa passed.
```
$ build/sbt clean
$ dev/change-scala-version.sh 2.13
$ build/sbt -Pscala-2.13 clean
$ dev/mima
```

Closes #34664 from sarutak/followup-SPARK-35672-mima-take2.

Authored-by: Kousuke Saruta 
Signed-off-by: Kousuke Saruta 
---
 project/MimaExcludes.scala | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 15df3d4..75fa001 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -37,8 +37,10 @@ object MimaExcludes {
   // Exclude rules for 3.3.x from 3.2.0
   lazy val v33excludes = v32excludes ++ Seq(
     // [SPARK-35672][CORE][YARN] Pass user classpath entries to executors using config instead of command line
-    // This is necessary for Scala 2.13.
+    // The followings are necessary for Scala 2.13.
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.*"),
+    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.*"),
+    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend$Arguments$")
   )

   // Exclude rules for 3.2.x from 3.1.1

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org