This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 4c800478d5a [SPARK-43830][BUILD] Update scalatest and scalatestplus related dependencies to newest version
4c800478d5a is described below

commit 4c800478d5a6c76dca3ed1fc945de71182fd65e3
Author: panbingkun <pbk1...@gmail.com>
AuthorDate: Sat May 27 18:12:52 2023 -0500

    [SPARK-43830][BUILD] Update scalatest and scalatestplus related dependencies to newest version
    
    ### What changes were proposed in this pull request?
    This PR aims to update the scalatest and scalatestplus related test dependencies to their newest versions:
     - scalatest: upgrade from 3.2.15 to 3.2.16
    
     - mockito
       - mockito-core: upgrade from 4.6.1 to 4.11.0
       - mockito-inline: upgrade from 4.6.1 to 4.11.0
    
     - selenium-java: upgrade from 4.7.2 to 4.9.1
    
     - htmlunit-driver: upgrade from 4.7.2 to 4.9.1
    
     - htmlunit: upgrade from 2.67.0 to 2.70.0
    
     - scalatestplus
       - scalacheck-1-17: upgrade from 3.2.15.0 to 3.2.16.0
       - mockito: upgrade from `mockito-4-6` 3.2.15.0 to `mockito-4-11` 3.2.16.0
       - selenium: upgrade from `selenium-4-7` 3.2.15.0 to `selenium-4-9` 3.2.16.0
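
    Alongside the dependency upgrades, the diff below also adjusts the affected test suites so that Mockito's `spy` is called with the spied type written out explicitly, e.g. `spy[FsHistoryProvider](...)` instead of `spy(...)`. A minimal, self-contained sketch of that pattern, assuming mockito-core is on the test classpath (the `Greeter` class is purely illustrative and not part of the Spark code base):

    ```scala
    import org.mockito.Mockito.spy

    object SpyExample {
      // Illustrative class, used only to demonstrate the spy call pattern.
      class Greeter {
        def greet(name: String): String = s"Hello, $name"
      }

      def main(args: Array[String]): Unit = {
        // Before: the spied type was left to inference -> spy(new Greeter)
        // After this patch the type argument is spelled out explicitly:
        val greeter = spy[Greeter](new Greeter)
        // The spy delegates to the real method unless stubbed.
        assert(greeter.greet("spark") == "Hello, spark")
      }
    }
    ```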
    
    ### Why are the changes needed?
    The relevant release notes are as follows:
     - scalatest:
       - https://github.com/scalatest/scalatest/releases/tag/release-3.2.16
    
     - [mockito](https://github.com/mockito/mockito)
       - https://github.com/mockito/mockito/releases/tag/v4.11.0
       - https://github.com/mockito/mockito/releases/tag/v4.10.0
       - https://github.com/mockito/mockito/releases/tag/v4.9.0
       - https://github.com/mockito/mockito/releases/tag/v4.8.1
       - https://github.com/mockito/mockito/releases/tag/v4.8.0
       - https://github.com/mockito/mockito/releases/tag/v4.7.0
    
     - [selenium-java](https://github.com/SeleniumHQ/selenium)
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.9.1
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.9.0
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.8.3-java
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.8.2-java
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.8.1
       - https://github.com/SeleniumHQ/selenium/releases/tag/selenium-4.8.0
    
     - [htmlunit-driver](https://github.com/SeleniumHQ/htmlunit-driver)
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/htmlunit-driver-4.9.1
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/htmlunit-driver-4.9.0
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/htmlunit-driver-4.8.3
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/htmlunit-driver-4.8.1.1
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/4.8.1
       - https://github.com/SeleniumHQ/htmlunit-driver/releases/tag/4.8.0
    
     - [htmlunit](https://github.com/HtmlUnit/htmlunit)
       - https://github.com/HtmlUnit/htmlunit/releases/tag/2.70.0
       - Why this version: Selenium 4.9.1 depends on it, see https://github.com/SeleniumHQ/selenium/blob/selenium-4.9.1/java/maven_deps.bzl#L83
    
     - [org.scalatestplus:scalacheck-1-17](https://github.com/scalatest/scalatestplus-scalacheck)
       - https://github.com/scalatest/scalatestplus-scalacheck/releases/tag/release-3.2.16.0-for-scalacheck-1.17

     - [org.scalatestplus:mockito-4-11](https://github.com/scalatest/scalatestplus-mockito)
       - https://github.com/scalatest/scalatestplus-mockito/releases/tag/release-3.2.16.0-for-mockito-4.11

     - [org.scalatestplus:selenium-4-9](https://github.com/scalatest/scalatestplus-selenium)
       - https://github.com/scalatest/scalatestplus-selenium/releases/tag/release-3.2.16.0-for-selenium-4.9
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    - Passed GitHub Actions
    - Manual tests:
       - ChromeUISeleniumSuite
       - RocksDBBackendChromeUIHistoryServerSuite
    
    ```
        build/sbt -Dguava.version=31.1-jre -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver -Dtest.default.exclude.tags="" -Phive -Phive-thriftserver "core/testOnly org.apache.spark.ui.ChromeUISeleniumSuite"

        build/sbt -Dguava.version=31.1-jre -Dspark.test.webdriver.chrome.driver=/path/to/chromedriver -Dtest.default.exclude.tags="" -Phive -Phive-thriftserver "core/testOnly org.apache.spark.deploy.history.RocksDBBackendChromeUIHistoryServerSuite"
    ```
    <img width="856" alt="image" src="https://github.com/apache/spark/assets/15246973/73349ffb-4198-4371-a741-411712d14712">
    
    Closes #41341 from panbingkun/upgrade_scalatest.
    
    Authored-by: panbingkun <pbk1...@gmail.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 .../org/apache/spark/HeartbeatReceiverSuite.scala  |  2 +-
 .../org/apache/spark/api/java/JavaUtilsSuite.scala |  4 +++-
 .../deploy/history/FsHistoryProviderSuite.scala    | 10 ++++-----
 .../history/HistoryServerDiskManagerSuite.scala    |  3 ++-
 .../spark/deploy/worker/DriverRunnerTest.scala     |  2 +-
 .../CoarseGrainedExecutorBackendSuite.scala        |  6 +++--
 .../spark/executor/ProcfsMetricsGetterSuite.scala  |  2 +-
 .../internal/plugin/PluginContainerSuite.scala     |  4 ++--
 .../apache/spark/scheduler/DAGSchedulerSuite.scala |  7 +++---
 .../scheduler/OutputCommitCoordinatorSuite.scala   |  6 +++--
 .../spark/scheduler/TaskResultGetterSuite.scala    |  4 ++--
 .../spark/scheduler/TaskSchedulerImplSuite.scala   |  4 ++--
 .../spark/scheduler/TaskSetManagerSuite.scala      |  6 ++---
 .../storage/BlockManagerReplicationSuite.scala     |  2 +-
 .../apache/spark/storage/BlockManagerSuite.scala   | 12 +++++-----
 .../storage/PartiallySerializedBlockSuite.scala    | 14 +++++++-----
 dev/deps/spark-deps-hadoop-3-hive-2.3              |  2 +-
 pom.xml                                            | 26 +++++++++++-----------
 .../KubernetesClusterSchedulerBackendSuite.scala   |  2 +-
 .../org/apache/spark/deploy/yarn/ClientSuite.scala |  7 +++---
 .../spark/deploy/yarn/YarnAllocatorSuite.scala     |  4 ++--
 .../analysis/AnalysisExternalCatalogSuite.scala    |  4 ++--
 .../catalyst/analysis/TableLookupCacheSuite.scala  |  4 ++--
 .../scala/org/apache/spark/sql/JoinSuite.scala     |  2 +-
 .../sql/errors/QueryExecutionErrorsSuite.scala     |  6 ++---
 .../SparkExecuteStatementOperationSuite.scala      |  2 +-
 .../streaming/ReceivedBlockTrackerSuite.scala      |  2 +-
 27 files changed, 81 insertions(+), 68 deletions(-)

diff --git a/core/src/test/scala/org/apache/spark/HeartbeatReceiverSuite.scala 
b/core/src/test/scala/org/apache/spark/HeartbeatReceiverSuite.scala
index 879ce558406..ee0a5773692 100644
--- a/core/src/test/scala/org/apache/spark/HeartbeatReceiverSuite.scala
+++ b/core/src/test/scala/org/apache/spark/HeartbeatReceiverSuite.scala
@@ -73,7 +73,7 @@ class HeartbeatReceiverSuite
       .setMaster("local[2]")
       .setAppName("test")
       .set(DYN_ALLOCATION_TESTING, true)
-    sc = spy(new SparkContext(conf))
+    sc = spy[SparkContext](new SparkContext(conf))
     scheduler = mock(classOf[TaskSchedulerImpl])
     when(sc.taskScheduler).thenReturn(scheduler)
     when(scheduler.excludedNodes).thenReturn(Predef.Set[String]())
diff --git a/core/src/test/scala/org/apache/spark/api/java/JavaUtilsSuite.scala 
b/core/src/test/scala/org/apache/spark/api/java/JavaUtilsSuite.scala
index 8e6e3e09686..ee20d51b892 100644
--- a/core/src/test/scala/org/apache/spark/api/java/JavaUtilsSuite.scala
+++ b/core/src/test/scala/org/apache/spark/api/java/JavaUtilsSuite.scala
@@ -22,6 +22,7 @@ import java.io.Serializable
 import org.mockito.Mockito._
 
 import org.apache.spark.SparkFunSuite
+import org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
 
 
 class JavaUtilsSuite extends SparkFunSuite {
@@ -33,7 +34,8 @@ class JavaUtilsSuite extends SparkFunSuite {
 
     src.put(key, "42")
 
-    val map: java.util.Map[Double, String] = 
spy(JavaUtils.mapAsSerializableJavaMap(src))
+    val map: java.util.Map[Double, String] = 
spy[SerializableMapWrapper[Double, String]](
+      JavaUtils.mapAsSerializableJavaMap(src))
 
     assert(map.containsKey(key))
 
diff --git 
a/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
 
b/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
index 4e026486e84..893f1083357 100644
--- 
a/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
@@ -788,7 +788,7 @@ abstract class FsHistoryProviderSuite extends SparkFunSuite 
with Matchers with P
   }
 
   test("provider correctly checks whether fs is in safe mode") {
-    val provider = spy(new FsHistoryProvider(createTestConf()))
+    val provider = spy[FsHistoryProvider](new 
FsHistoryProvider(createTestConf()))
     val dfs = mock(classOf[DistributedFileSystem])
     // Asserts that safe mode is false because we can't really control the 
return value of the mock,
     // since the API is different between hadoop 1 and 2.
@@ -1032,7 +1032,7 @@ abstract class FsHistoryProviderSuite extends 
SparkFunSuite with Matchers with P
     withTempDir { storeDir =>
       val conf = createTestConf().set(LOCAL_STORE_DIR, 
storeDir.getAbsolutePath())
       val clock = new ManualClock()
-      val provider = spy(new FsHistoryProvider(conf, clock))
+      val provider = spy[FsHistoryProvider](new FsHistoryProvider(conf, clock))
       val appId = "new1"
 
       // Write logs for two app attempts.
@@ -1196,11 +1196,11 @@ abstract class FsHistoryProviderSuite extends 
SparkFunSuite with Matchers with P
       SparkListenerApplicationStart("accessGranted", Some("accessGranted"), 
1L, "test", None),
       SparkListenerApplicationEnd(5L))
     var isReadable = false
-    val mockedFs = spy(provider.fs)
+    val mockedFs = spy[FileSystem](provider.fs)
     doThrow(new AccessControlException("Cannot read accessDenied 
file")).when(mockedFs).open(
       argThat((path: Path) => path.getName.toLowerCase(Locale.ROOT) == 
"accessdenied" &&
         !isReadable))
-    val mockedProvider = spy(provider)
+    val mockedProvider = spy[FsHistoryProvider](provider)
     when(mockedProvider.fs).thenReturn(mockedFs)
     updateAndCheck(mockedProvider) { list =>
       list.size should be(1)
@@ -1225,7 +1225,7 @@ abstract class FsHistoryProviderSuite extends 
SparkFunSuite with Matchers with P
   test("check in-progress event logs absolute length") {
     val path = new Path("testapp.inprogress")
     val provider = new FsHistoryProvider(createTestConf())
-    val mockedProvider = spy(provider)
+    val mockedProvider = spy[FsHistoryProvider](provider)
     val mockedFs = mock(classOf[FileSystem])
     val in = mock(classOf[FSDataInputStream])
     val dfsIn = mock(classOf[DFSInputStream])
diff --git 
a/core/src/test/scala/org/apache/spark/deploy/history/HistoryServerDiskManagerSuite.scala
 
b/core/src/test/scala/org/apache/spark/deploy/history/HistoryServerDiskManagerSuite.scala
index 373d1c557fc..e4248a49b90 100644
--- 
a/core/src/test/scala/org/apache/spark/deploy/history/HistoryServerDiskManagerSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/deploy/history/HistoryServerDiskManagerSuite.scala
@@ -62,7 +62,8 @@ abstract class HistoryServerDiskManagerSuite extends 
SparkFunSuite with BeforeAn
 
   private def mockManager(): HistoryServerDiskManager = {
     val conf = new SparkConf().set(MAX_LOCAL_DISK_USAGE, MAX_USAGE)
-    val manager = spy(new HistoryServerDiskManager(conf, testDir, store, new 
ManualClock()))
+    val manager = spy[HistoryServerDiskManager](
+      new HistoryServerDiskManager(conf, testDir, store, new ManualClock()))
     doAnswer(AdditionalAnswers.returnsFirstArg[Long]()).when(manager)
       .approximateSize(anyLong(), anyBoolean())
     manager
diff --git 
a/core/src/test/scala/org/apache/spark/deploy/worker/DriverRunnerTest.scala 
b/core/src/test/scala/org/apache/spark/deploy/worker/DriverRunnerTest.scala
index e429ddfd570..e97196084e0 100644
--- a/core/src/test/scala/org/apache/spark/deploy/worker/DriverRunnerTest.scala
+++ b/core/src/test/scala/org/apache/spark/deploy/worker/DriverRunnerTest.scala
@@ -39,7 +39,7 @@ class DriverRunnerTest extends SparkFunSuite {
     val conf = new SparkConf()
     val worker = mock(classOf[RpcEndpointRef])
     doNothing().when(worker).send(any())
-    spy(new DriverRunner(conf, "driverId", new File("workDir"), new 
File("sparkHome"),
+    spy[DriverRunner](new DriverRunner(conf, "driverId", new File("workDir"), 
new File("sparkHome"),
       driverDescription, worker, "spark://1.2.3.4/worker/", 
"http://publicAddress:80";,
       new SecurityManager(conf)))
   }
diff --git 
a/core/src/test/scala/org/apache/spark/executor/CoarseGrainedExecutorBackendSuite.scala
 
b/core/src/test/scala/org/apache/spark/executor/CoarseGrainedExecutorBackendSuite.scala
index 7b8b7cf4cdd..7ba5dd4793b 100644
--- 
a/core/src/test/scala/org/apache/spark/executor/CoarseGrainedExecutorBackendSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/executor/CoarseGrainedExecutorBackendSuite.scala
@@ -408,7 +408,8 @@ class CoarseGrainedExecutorBackendSuite extends 
SparkFunSuite
       val executor = backend.executor
       // Mock the executor.
       when(executor.threadPool).thenReturn(threadPool)
-      val runningTasks = spy(new ConcurrentHashMap[Long, Executor#TaskRunner])
+      val runningTasks = spy[ConcurrentHashMap[Long, Executor#TaskRunner]](
+        new ConcurrentHashMap[Long, Executor#TaskRunner])
       when(executor.runningTasks).thenAnswer(_ => runningTasks)
       when(executor.conf).thenReturn(conf)
 
@@ -496,7 +497,8 @@ class CoarseGrainedExecutorBackendSuite extends 
SparkFunSuite
       val executor = backend.executor
       // Mock the executor.
       when(executor.threadPool).thenReturn(threadPool)
-      val runningTasks = spy(new ConcurrentHashMap[Long, Executor#TaskRunner])
+      val runningTasks = spy[ConcurrentHashMap[Long, Executor#TaskRunner]](
+        new ConcurrentHashMap[Long, Executor#TaskRunner])
       when(executor.runningTasks).thenAnswer(_ => runningTasks)
       when(executor.conf).thenReturn(conf)
 
diff --git 
a/core/src/test/scala/org/apache/spark/executor/ProcfsMetricsGetterSuite.scala 
b/core/src/test/scala/org/apache/spark/executor/ProcfsMetricsGetterSuite.scala
index ff0374da1bc..d583afdf07c 100644
--- 
a/core/src/test/scala/org/apache/spark/executor/ProcfsMetricsGetterSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/executor/ProcfsMetricsGetterSuite.scala
@@ -43,7 +43,7 @@ class ProcfsMetricsGetterSuite extends SparkFunSuite {
 
   test("SPARK-34845: partial metrics shouldn't be returned") {
     val p = new ProcfsMetricsGetter(getTestResourcePath("ProcfsMetrics"))
-    val mockedP = spy(p)
+    val mockedP = spy[ProcfsMetricsGetter](p)
 
     var ptree: Set[Int] = Set(26109, 22763)
     when(mockedP.computeProcessTree).thenReturn(ptree)
diff --git 
a/core/src/test/scala/org/apache/spark/internal/plugin/PluginContainerSuite.scala
 
b/core/src/test/scala/org/apache/spark/internal/plugin/PluginContainerSuite.scala
index 2b8515d52d1..e7959c8f742 100644
--- 
a/core/src/test/scala/org/apache/spark/internal/plugin/PluginContainerSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/internal/plugin/PluginContainerSuite.scala
@@ -307,14 +307,14 @@ class TestSparkPlugin extends SparkPlugin {
   override def driverPlugin(): DriverPlugin = {
     val p = new TestDriverPlugin()
     require(TestSparkPlugin.driverPlugin == null, "Driver plugin already 
initialized.")
-    TestSparkPlugin.driverPlugin = spy(p)
+    TestSparkPlugin.driverPlugin = spy[TestDriverPlugin](p)
     TestSparkPlugin.driverPlugin
   }
 
   override def executorPlugin(): ExecutorPlugin = {
     val p = new TestExecutorPlugin()
     require(TestSparkPlugin.executorPlugin == null, "Executor plugin already 
initialized.")
-    TestSparkPlugin.executorPlugin = spy(p)
+    TestSparkPlugin.executorPlugin = spy[TestExecutorPlugin](p)
     TestSparkPlugin.executorPlugin
   }
 
diff --git 
a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala 
b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
index 34bc8e31bbd..73ee879ad53 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
@@ -402,10 +402,11 @@ class DAGSchedulerSuite extends SparkFunSuite with 
TempLocalSparkContext with Ti
     results.clear()
     securityMgr = new SecurityManager(sc.getConf)
     broadcastManager = new BroadcastManager(true, sc.getConf)
-    mapOutputTracker = spy(new MyMapOutputTrackerMaster(sc.getConf, 
broadcastManager))
-    blockManagerMaster = spy(new MyBlockManagerMaster(sc.getConf))
+    mapOutputTracker = spy[MyMapOutputTrackerMaster](
+      new MyMapOutputTrackerMaster(sc.getConf, broadcastManager))
+    blockManagerMaster = spy[MyBlockManagerMaster](new 
MyBlockManagerMaster(sc.getConf))
     doNothing().when(blockManagerMaster).updateRDDBlockVisibility(any(), any())
-    scheduler = spy(new MyDAGScheduler(
+    scheduler = spy[MyDAGScheduler](new MyDAGScheduler(
       sc,
       taskScheduler,
       sc.listenerBus,
diff --git 
a/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
 
b/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
index 95e2429ea58..44dc9a5f97d 100644
--- 
a/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
@@ -87,7 +87,8 @@ class OutputCommitCoordinatorSuite extends SparkFunSuite with 
BeforeAndAfter {
           isLocal: Boolean,
           listenerBus: LiveListenerBus): SparkEnv = {
         outputCommitCoordinator =
-          spy(new OutputCommitCoordinator(conf, isDriver = true, Option(this)))
+          spy[OutputCommitCoordinator](
+            new OutputCommitCoordinator(conf, isDriver = true, Option(this)))
         // Use Mockito.spy() to maintain the default infrastructure everywhere 
else.
         // This mocking allows us to control the coordinator responses in test 
cases.
         SparkEnv.createDriverEnv(conf, isLocal, listenerBus,
@@ -95,7 +96,8 @@ class OutputCommitCoordinatorSuite extends SparkFunSuite with 
BeforeAndAfter {
       }
     }
     // Use Mockito.spy() to maintain the default infrastructure everywhere else
-    val mockTaskScheduler = 
spy(sc.taskScheduler.asInstanceOf[TaskSchedulerImpl])
+    val mockTaskScheduler = spy[TaskSchedulerImpl](
+      sc.taskScheduler.asInstanceOf[TaskSchedulerImpl])
 
     doAnswer { (invoke: InvocationOnMock) =>
       // Submit the tasks, then force the task scheduler to dequeue the
diff --git 
a/core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala 
b/core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala
index 1f61fab3e07..3ea084307cb 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala
@@ -145,7 +145,7 @@ class TaskResultGetterSuite extends SparkFunSuite with 
BeforeAndAfter with Local
   test("handling total size of results larger than maxResultSize") {
     sc = new SparkContext("local", "test", conf)
     val scheduler = new DummyTaskSchedulerImpl(sc)
-    val spyScheduler = spy(scheduler)
+    val spyScheduler = spy[DummyTaskSchedulerImpl](scheduler)
     val resultGetter = new TaskResultGetter(sc.env, spyScheduler)
     scheduler.taskResultGetter = resultGetter
     val myTsm = new TaskSetManager(spyScheduler, FakeTask.createTaskSet(2), 1) 
{
@@ -258,7 +258,7 @@ class TaskResultGetterSuite extends SparkFunSuite with 
BeforeAndAfter with Local
     // Set up custom TaskResultGetter and TaskSchedulerImpl spy
     sc = new SparkContext("local", "test", conf)
     val scheduler = sc.taskScheduler.asInstanceOf[TaskSchedulerImpl]
-    val spyScheduler = spy(scheduler)
+    val spyScheduler = spy[TaskSchedulerImpl](scheduler)
     val resultGetter = new MyTaskResultGetter(sc.env, spyScheduler)
     val newDAGScheduler = new DAGScheduler(sc, spyScheduler)
     scheduler.taskResultGetter = resultGetter
diff --git 
a/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala 
b/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
index a484dae6f80..7d2b4f5221a 100644
--- 
a/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
@@ -109,7 +109,7 @@ class TaskSchedulerImplSuite extends SparkFunSuite with 
LocalSparkContext
         override def createTaskSetManager(taskSet: TaskSet, maxFailures: Int): 
TaskSetManager = {
           val tsm = super.createTaskSetManager(taskSet, maxFailures)
           // we need to create a spied tsm just so we can set the 
TaskSetExcludelist
-          val tsmSpy = spy(tsm)
+          val tsmSpy = spy[TaskSetManager](tsm)
           val taskSetExcludelist = mock[TaskSetExcludelist]
           
when(tsmSpy.taskSetExcludelistHelperOpt).thenReturn(Some(taskSetExcludelist))
           stageToMockTaskSetManager(taskSet.stageId) = tsmSpy
@@ -1946,7 +1946,7 @@ class TaskSchedulerImplSuite extends SparkFunSuite with 
LocalSparkContext
       override def createTaskSetManager(taskSet: TaskSet, maxFailures: Int): 
TaskSetManager = {
         val tsm = super.createTaskSetManager(taskSet, maxFailures)
         // we need to create a spied tsm so that we can see the copies running
-        val tsmSpy = spy(tsm)
+        val tsmSpy = spy[TaskSetManager](tsm)
         stageToMockTaskSetManager(taskSet.stageId) = tsmSpy
         tsmSpy
       }
diff --git 
a/core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala 
b/core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala
index 45360f486ed..cb70dbb0289 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala
@@ -389,7 +389,7 @@ class TaskSetManagerSuite
     manager.isZombie = false
 
     // offers not accepted due to excludelist are not delay schedule rejects
-    val tsmSpy = spy(manager)
+    val tsmSpy = spy[TaskSetManager](manager)
     val excludelist = mock(classOf[TaskSetExcludelist])
     when(tsmSpy.taskSetExcludelistHelperOpt).thenReturn(Some(excludelist))
     when(excludelist.isNodeExcludedForTaskSet(any())).thenReturn(true)
@@ -1416,7 +1416,7 @@ class TaskSetManagerSuite
     val taskSet = FakeTask.createTaskSet(4)
     val tsm = new TaskSetManager(sched, taskSet, 4)
     // we need a spy so we can attach our mock excludelist
-    val tsmSpy = spy(tsm)
+    val tsmSpy = spy[TaskSetManager](tsm)
     val excludelist = mock(classOf[TaskSetExcludelist])
     when(tsmSpy.taskSetExcludelistHelperOpt).thenReturn(Some(excludelist))
 
@@ -1497,7 +1497,7 @@ class TaskSetManagerSuite
     val mockListenerBus = mock(classOf[LiveListenerBus])
     val healthTracker = new HealthTracker(mockListenerBus, conf, None, clock)
     val taskSetManager = new TaskSetManager(sched, taskSet, 1, 
Some(healthTracker))
-    val taskSetManagerSpy = spy(taskSetManager)
+    val taskSetManagerSpy = spy[TaskSetManager](taskSetManager)
 
     val taskDesc = taskSetManagerSpy.resourceOffer(exec, host, 
TaskLocality.ANY)._1
 
diff --git 
a/core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala
 
b/core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala
index 8729ae1edfb..38a669bc857 100644
--- 
a/core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala
@@ -275,7 +275,7 @@ trait BlockManagerReplicationBehavior extends SparkFunSuite
 
       // create 1 faulty block manager by injecting faulty memory manager
       val memManager = UnifiedMemoryManager(conf, numCores = 1)
-      val mockedMemoryManager = spy(memManager)
+      val mockedMemoryManager = spy[UnifiedMemoryManager](memManager)
       doAnswer(_ => 
false).when(mockedMemoryManager).acquireStorageMemory(any(), any(), any())
       val store2 = makeBlockManager(10000, "host-2", Some(mockedMemoryManager))
 
diff --git 
a/core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala 
b/core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala
index 29592434765..ab6c2693b0e 100644
--- a/core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala
@@ -186,8 +186,8 @@ class BlockManagerSuite extends SparkFunSuite with Matchers 
with PrivateMethodTe
     when(sc.conf).thenReturn(conf)
 
     val blockManagerInfo = new mutable.HashMap[BlockManagerId, 
BlockManagerInfo]()
-    liveListenerBus = spy(new LiveListenerBus(conf))
-    master = spy(new BlockManagerMaster(rpcEnv.setupEndpoint("blockmanager",
+    liveListenerBus = spy[LiveListenerBus](new LiveListenerBus(conf))
+    master = spy[BlockManagerMaster](new 
BlockManagerMaster(rpcEnv.setupEndpoint("blockmanager",
       new BlockManagerMasterEndpoint(rpcEnv, true, conf,
         liveListenerBus, None, blockManagerInfo, mapOutputTracker, 
shuffleManager,
         isDriver = true)),
@@ -873,7 +873,7 @@ class BlockManagerSuite extends SparkFunSuite with Matchers 
with PrivateMethodTe
       conf.set("spark.shuffle.io.maxRetries", "0")
       val sameHostBm = makeBlockManager(8000, "sameHost", master)
 
-      val otherHostTransferSrv = spy(sameHostBm.blockTransferService)
+      val otherHostTransferSrv = 
spy[BlockTransferService](sameHostBm.blockTransferService)
       doAnswer { _ =>
          "otherHost"
       }.when(otherHostTransferSrv).hostName
@@ -888,7 +888,7 @@ class BlockManagerSuite extends SparkFunSuite with Matchers 
with PrivateMethodTe
       val blockId = "list"
       bmToPutBlock.putIterator(blockId, List(array).iterator, storageLevel, 
tellMaster = true)
 
-      val sameHostTransferSrv = spy(sameHostBm.blockTransferService)
+      val sameHostTransferSrv = 
spy[BlockTransferService](sameHostBm.blockTransferService)
       doAnswer { _ =>
          fail("Fetching over network is not expected when the block is 
requested from same host")
       }.when(sameHostTransferSrv).fetchBlockSync(mc.any(), mc.any(), mc.any(), 
mc.any(), mc.any())
@@ -935,7 +935,7 @@ class BlockManagerSuite extends SparkFunSuite with Matchers 
with PrivateMethodTe
         }
       }
       val store1 = makeBlockManager(8000, "executor1", this.master, 
Some(mockTransferService))
-      val spiedStore1 = spy(store1)
+      val spiedStore1 = spy[BlockManager](store1)
       doAnswer { inv =>
         val blockId = inv.getArguments()(0).asInstanceOf[BlockId]
         val localDirs = inv.getArguments()(1).asInstanceOf[Array[String]]
@@ -974,7 +974,7 @@ class BlockManagerSuite extends SparkFunSuite with Matchers 
with PrivateMethodTe
   }
 
   test("SPARK-14252: getOrElseUpdate should still read from remote storage") {
-    val store = spy(makeBlockManager(8000, "executor1"))
+    val store = spy[BlockManager](makeBlockManager(8000, "executor1"))
     val store2 = makeBlockManager(8000, "executor2")
     val list1 = List(new Array[Byte](4000))
     val blockId = RDDBlockId(0, 0)
diff --git 
a/core/src/test/scala/org/apache/spark/storage/PartiallySerializedBlockSuite.scala
 
b/core/src/test/scala/org/apache/spark/storage/PartiallySerializedBlockSuite.scala
index 8177ef6e140..9753b483153 100644
--- 
a/core/src/test/scala/org/apache/spark/storage/PartiallySerializedBlockSuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/storage/PartiallySerializedBlockSuite.scala
@@ -58,18 +58,22 @@ class PartiallySerializedBlockSuite
       numItemsToBuffer: Int): PartiallySerializedBlock[T] = {
 
     val bbos: ChunkedByteBufferOutputStream = {
-      val spy = Mockito.spy(new ChunkedByteBufferOutputStream(128, 
ByteBuffer.allocate))
+      val spy = Mockito.spy[ChunkedByteBufferOutputStream](
+        new ChunkedByteBufferOutputStream(128, ByteBuffer.allocate))
       Mockito.doAnswer { (invocationOnMock: InvocationOnMock) =>
-        
Mockito.spy(invocationOnMock.callRealMethod().asInstanceOf[ChunkedByteBuffer])
+        Mockito.spy[ChunkedByteBuffer](
+          invocationOnMock.callRealMethod().asInstanceOf[ChunkedByteBuffer])
       }.when(spy).toChunkedByteBuffer
       spy
     }
 
     val serializer = serializerManager
       .getSerializer(implicitly[ClassTag[T]], autoPick = true).newInstance()
-    val redirectableOutputStream = Mockito.spy(new RedirectableOutputStream)
+    val redirectableOutputStream = Mockito.spy[RedirectableOutputStream](
+      new RedirectableOutputStream)
     redirectableOutputStream.setOutputStream(bbos)
-    val serializationStream = 
Mockito.spy(serializer.serializeStream(redirectableOutputStream))
+    val serializationStream = Mockito.spy[SerializationStream](
+      serializer.serializeStream(redirectableOutputStream))
 
     (1 to numItemsToBuffer).foreach { _ =>
       assert(iter.hasNext)
@@ -170,7 +174,7 @@ class PartiallySerializedBlockSuite
 
     test(s"$testCaseName with finishWritingToStream() and numBuffered = 
$numItemsToBuffer") {
       val partiallySerializedBlock = partiallyUnroll(items.iterator, 
numItemsToBuffer)
-      val bbos = Mockito.spy(new ByteBufferOutputStream())
+      val bbos = Mockito.spy[ByteBufferOutputStream](new 
ByteBufferOutputStream())
       partiallySerializedBlock.finishWritingToStream(bbos)
 
       Mockito.verify(memoryStore).releaseUnrollMemoryForThisTask(
diff --git a/dev/deps/spark-deps-hadoop-3-hive-2.3 
b/dev/deps/spark-deps-hadoop-3-hive-2.3
index 9f6a8f2573b..d275c827ef1 100644
--- a/dev/deps/spark-deps-hadoop-3-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3-hive-2.3
@@ -201,7 +201,7 @@ 
netty-transport-native-kqueue/4.1.92.Final/osx-aarch_64/netty-transport-native-k
 
netty-transport-native-kqueue/4.1.92.Final/osx-x86_64/netty-transport-native-kqueue-4.1.92.Final-osx-x86_64.jar
 
netty-transport-native-unix-common/4.1.92.Final//netty-transport-native-unix-common-4.1.92.Final.jar
 netty-transport/4.1.92.Final//netty-transport-4.1.92.Final.jar
-objenesis/3.2//objenesis-3.2.jar
+objenesis/3.3//objenesis-3.3.jar
 okhttp/3.12.12//okhttp-3.12.12.jar
 okio/1.15.0//okio-1.15.0.jar
 opencsv/2.3//opencsv-2.3.jar
diff --git a/pom.xml b/pom.xml
index bc6a49c44c2..2394e429218 100644
--- a/pom.xml
+++ b/pom.xml
@@ -203,9 +203,9 @@
     <!-- Please don't upgrade the version to 4.10+, it depends on JDK 11 -->
     <antlr4.version>4.9.3</antlr4.version>
     <jpam.version>1.1</jpam.version>
-    <selenium.version>4.7.2</selenium.version>
-    <htmlunit-driver.version>4.7.2</htmlunit-driver.version>
-    <htmlunit.version>2.67.0</htmlunit.version>
+    <selenium.version>4.9.1</selenium.version>
+    <htmlunit-driver.version>4.9.1</htmlunit-driver.version>
+    <htmlunit.version>2.70.0</htmlunit.version>
     <maven-antrun.version>3.1.0</maven-antrun.version>
     <commons-crypto.version>1.1.0</commons-crypto.version>
     <commons-cli.version>1.5.0</commons-cli.version>
@@ -390,12 +390,12 @@
     </dependency>
     <dependency>
       <groupId>org.scalatestplus</groupId>
-      <artifactId>mockito-4-6_${scala.binary.version}</artifactId>
+      <artifactId>mockito-4-11_${scala.binary.version}</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>org.scalatestplus</groupId>
-      <artifactId>selenium-4-7_${scala.binary.version}</artifactId>
+      <artifactId>selenium-4-9_${scala.binary.version}</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
@@ -1100,37 +1100,37 @@
       <dependency>
         <groupId>org.scalatest</groupId>
         <artifactId>scalatest_${scala.binary.version}</artifactId>
-        <version>3.2.15</version>
+        <version>3.2.16</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>org.scalatestplus</groupId>
         <artifactId>scalacheck-1-17_${scala.binary.version}</artifactId>
-        <version>3.2.15.0</version>
+        <version>3.2.16.0</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>org.scalatestplus</groupId>
-        <artifactId>mockito-4-6_${scala.binary.version}</artifactId>
-        <version>3.2.15.0</version>
+        <artifactId>mockito-4-11_${scala.binary.version}</artifactId>
+        <version>3.2.16.0</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>org.scalatestplus</groupId>
-        <artifactId>selenium-4-7_${scala.binary.version}</artifactId>
-        <version>3.2.15.0</version>
+        <artifactId>selenium-4-9_${scala.binary.version}</artifactId>
+        <version>3.2.16.0</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>org.mockito</groupId>
         <artifactId>mockito-core</artifactId>
-        <version>4.6.1</version>
+        <version>4.11.0</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>org.mockito</groupId>
         <artifactId>mockito-inline</artifactId>
-        <version>4.6.1</version>
+        <version>4.11.0</version>
         <scope>test</scope>
       </dependency>
       <dependency>
diff --git 
a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackendSuite.scala
 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackendSuite.scala
index bb5e93c92ac..b2e4a7182a7 100644
--- 
a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackendSuite.scala
+++ 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackendSuite.scala
@@ -174,7 +174,7 @@ class KubernetesClusterSchedulerBackendSuite extends 
SparkFunSuite with BeforeAn
   }
 
   test("Remove executor") {
-    val backend = spy(schedulerBackendUnderTest)
+    val backend = 
spy[KubernetesClusterSchedulerBackend](schedulerBackendUnderTest)
     when(backend.isExecutorActive(any())).thenReturn(false)
     when(backend.isExecutorActive(mockitoEq("2"))).thenReturn(true)
 
diff --git 
a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
 
b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
index 512359d0d6b..ba116c27716 100644
--- 
a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
+++ 
b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
@@ -228,7 +228,8 @@ class ClientSuite extends SparkFunSuite with Matchers {
       val sparkConf = new SparkConf().set("spark.yarn.applicationType", 
sourceType)
       val args = new ClientArguments(Array())
 
-      val appContext = 
spy(Records.newRecord(classOf[ApplicationSubmissionContext]))
+      val appContext = spy[ApplicationSubmissionContext](
+        Records.newRecord(classOf[ApplicationSubmissionContext]))
       val appId = ApplicationId.newInstance(123456, id)
       appContext.setApplicationId(appId)
       val getNewApplicationResponse = 
Records.newRecord(classOf[GetNewApplicationResponse])
@@ -248,7 +249,7 @@ class ClientSuite extends SparkFunSuite with Matchers {
         request.setApplicationSubmissionContext(subContext)
 
         val rmContext = mock(classOf[RMContext])
-        val conf = spy(classOf[Configuration])
+        val conf = spy[Configuration](classOf[Configuration])
         val map = new ConcurrentHashMap[ApplicationId, RMApp]()
         when(rmContext.getRMApps).thenReturn(map)
         val dispatcher = mock(classOf[Dispatcher])
@@ -732,7 +733,7 @@ class ClientSuite extends SparkFunSuite with Matchers {
       sparkConf: SparkConf,
       args: Array[String] = Array()): Client = {
     val clientArgs = new ClientArguments(args)
-    spy(new Client(clientArgs, sparkConf, null))
+    spy[Client](new Client(clientArgs, sparkConf, null))
   }
 
   private def classpath(client: Client): Array[String] = {
diff --git 
a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala
 
b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala
index 88c08abdca3..ed591fd9e36 100644
--- 
a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala
+++ 
b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala
@@ -638,7 +638,7 @@ class YarnAllocatorSuite extends SparkFunSuite with 
Matchers {
   }
 
   test("SPARK-26269: YarnAllocator should have same excludeOnFailure behaviour 
with YARN") {
-    val rmClientSpy = spy(rmClient)
+    val rmClientSpy = spy[AMRMClient[ContainerRequest]](rmClient)
     val maxExecutors = 11
 
     val (handler, _) = createAllocator(
@@ -763,7 +763,7 @@ class YarnAllocatorSuite extends SparkFunSuite with 
Matchers {
 
   test("Test YARN container decommissioning") {
     val rmClient: AMRMClient[ContainerRequest] = AMRMClient.createAMRMClient()
-    val rmClientSpy = spy(rmClient)
+    val rmClientSpy = spy[AMRMClient[ContainerRequest]](rmClient)
     val allocateResponse = mock(classOf[AllocateResponse])
     val (handler, sparkConfClone) = createAllocator(3, rmClientSpy)
 
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisExternalCatalogSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisExternalCatalogSuite.scala
index df99cd851cc..95233b13438 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisExternalCatalogSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisExternalCatalogSuite.scala
@@ -48,7 +48,7 @@ class AnalysisExternalCatalogSuite extends AnalysisTest with 
Matchers {
   test("query builtin functions don't call the external catalog") {
     withTempDir { tempDir =>
       val inMemoryCatalog = new InMemoryCatalog
-      val catalog = spy(inMemoryCatalog)
+      val catalog = spy[InMemoryCatalog](inMemoryCatalog)
       val analyzer = getAnalyzer(catalog, tempDir)
       reset(catalog)
       val testRelation = LocalRelation(AttributeReference("a", IntegerType, 
nullable = true)())
@@ -63,7 +63,7 @@ class AnalysisExternalCatalogSuite extends AnalysisTest with 
Matchers {
   test("check the existence of builtin functions don't call the external 
catalog") {
     withTempDir { tempDir =>
       val inMemoryCatalog = new InMemoryCatalog
-      val externCatalog = spy(inMemoryCatalog)
+      val externCatalog = spy[InMemoryCatalog](inMemoryCatalog)
       val catalog = new SessionCatalog(externCatalog, FunctionRegistry.builtin)
       catalog.createDatabase(
         CatalogDatabase("default", "", new URI(tempDir.toString), Map.empty),
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TableLookupCacheSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TableLookupCacheSuite.scala
index 7d6ad3bc609..399799983fd 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TableLookupCacheSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TableLookupCacheSuite.scala
@@ -74,7 +74,7 @@ class TableLookupCacheSuite extends AnalysisTest with 
Matchers {
   test("table lookups to external catalog are cached") {
     withTempDir { tempDir =>
       val inMemoryCatalog = new InMemoryCatalog
-      val catalog = spy(inMemoryCatalog)
+      val catalog = spy[InMemoryCatalog](inMemoryCatalog)
       val analyzer = getAnalyzer(catalog, tempDir)
       reset(catalog)
       analyzer.execute(table("t1").join(table("t1")).join(table("t1")))
@@ -85,7 +85,7 @@ class TableLookupCacheSuite extends AnalysisTest with 
Matchers {
   test("table lookups via nested views are cached") {
     withTempDir { tempDir =>
       val inMemoryCatalog = new InMemoryCatalog
-      val catalog = spy(inMemoryCatalog)
+      val catalog = spy[InMemoryCatalog](inMemoryCatalog)
       val analyzer = getAnalyzer(catalog, tempDir)
       val viewDef = CatalogTable(
         TableIdentifier("view", Some("default")),
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
index 062814e58b9..2c242965339 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
@@ -46,7 +46,7 @@ class JoinSuite extends QueryTest with SharedSparkSession 
with AdaptiveSparkPlan
     // test case
     plan.foreachUp {
       case s: SortExec =>
-        val sortExec = spy(s)
+        val sortExec = spy[SortExec](s)
         verify(sortExec, atLeastOnce).cleanupResources()
         verify(sortExec.rowSorter, atLeastOnce).cleanupResources()
       case _ =>
diff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
 
b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
index c37722133cb..377596466db 100644
--- 
a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
@@ -19,7 +19,7 @@ package org.apache.spark.sql.errors
 
 import java.io.{File, IOException}
 import java.net.{URI, URL}
-import java.sql.{Connection, Driver, DriverManager, PreparedStatement, 
ResultSet, ResultSetMetaData}
+import java.sql.{Connection, DatabaseMetaData, Driver, DriverManager, 
PreparedStatement, ResultSet, ResultSetMetaData}
 import java.util.{Locale, Properties, ServiceConfigurationError}
 
 import org.apache.hadoop.fs.{LocalFileSystem, Path}
@@ -743,8 +743,8 @@ class QueryExecutionErrorsSuite
             val driver: Driver = DriverRegistry.get(driverClass)
             val connection = ConnectionProvider.create(
               driver, options.parameters, options.connectionProviderName)
-            val spyConnection = spy(connection)
-            val spyMetaData = spy(connection.getMetaData)
+            val spyConnection = spy[Connection](connection)
+            val spyMetaData = spy[DatabaseMetaData](connection.getMetaData)
             when(spyConnection.getMetaData).thenReturn(spyMetaData)
             when(spyMetaData.supportsTransactions()).thenReturn(false)
 
diff --git 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperationSuite.scala
 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperationSuite.scala
index b61c91f3109..d085f596397 100644
--- 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperationSuite.scala
+++ 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperationSuite.scala
@@ -78,7 +78,7 @@ class SparkExecuteStatementOperationSuite extends 
SparkFunSuite with SharedSpark
 
       HiveThriftServer2.eventManager = 
mock(classOf[HiveThriftServer2EventManager])
 
-      val spySqlContext = spy(sqlContext)
+      val spySqlContext = spy[SQLContext](sqlContext)
 
       // When cancel() is called on the operation, cleanup causes an exception 
to be thrown inside
       // of execute(). This should not cause the state to become ERROR. The 
exception here will be
diff --git 
a/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockTrackerSuite.scala
 
b/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockTrackerSuite.scala
index 7bdb8bd9a02..ada6a9a4cb6 100644
--- 
a/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockTrackerSuite.scala
+++ 
b/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockTrackerSuite.scala
@@ -139,7 +139,7 @@ class ReceivedBlockTrackerSuite extends SparkFunSuite with 
BeforeAndAfter with M
   }
 
   test("block allocation to batch should not loose blocks from received 
queue") {
-    val tracker1 = spy(createTracker())
+    val tracker1 = spy[ReceivedBlockTracker](createTracker())
     tracker1.isWriteAheadLogEnabled should be (true)
     tracker1.getUnallocatedBlocks(streamId) shouldEqual Seq.empty
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org

