This is an automated email from the ASF dual-hosted git repository.

yangjie01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new cdcf12b9fc7 [SPARK-44601][BUILD] Add `jackson-mapper-asl` as test dependency to `hive-thriftserver` module to make Maven test pass
cdcf12b9fc7 is described below

commit cdcf12b9fc7026c77d4e7c2b5506e5daa3472ff0
Author: yangjie01 <yangji...@baidu.com>
AuthorDate: Wed Aug 2 04:56:10 2023 +0800

    [SPARK-44601][BUILD] Add `jackson-mapper-asl` as test dependency to `hive-thriftserver` module to make Maven test pass
    
    ### What changes were proposed in this pull request?
    Run the following Maven commands to test the `hive-thriftserver` module:
    
    ```
    ./build/mvn -DskipTests -Pyarn -Pmesos -Pkubernetes -Pvolcano -Phive -Phive-thriftserver -Phadoop-cloud -Pspark-ganglia-lgpl clean install
    ./build/mvn -pl sql/hive-thriftserver -Phive -Phive-thriftserver clean install
    ```
    
    There are many similar test failures, such as:
    
    ```
    2023-08-01 02:46:59.286 - stderr> Setting default log level to "WARN".
    2023-08-01 02:46:59.287 - stderr> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    2023-08-01 02:47:00.191 - stderr> 2023-08-01 17:47:00.190:INFO::main: Logging initialized 2591ms to org.eclipse.jetty.util.log.StdErrLog
    2023-08-01 02:47:03.673 - stderr> Spark master: local, Application Id: local-1690883219913
    2023-08-01 02:47:04.582 - stdout> spark-sql> CREATE TABLE t1(key string, val string)
    2023-08-01 02:47:04.594 - stdout>          > ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';
    2023-08-01 02:47:05.615 - stderr> org/codehaus/jackson/JsonParseException
    2023-08-01 02:47:05.616 - stderr> java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonParseException
    2023-08-01 02:47:05.616 - stderr>     at java.lang.Class.forName0(Native Method)
    2023-08-01 02:47:05.616 - stderr>     at java.lang.Class.forName(Class.java:348)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2630)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2595)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:447)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:440)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:263)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:624)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:838)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:874)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.Shim_v0_12.createTable(HiveShim.scala:614)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:573)
    2023-08-01 02:47:05.616 - stderr>     at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:303)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:234)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:233)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:283)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:571)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$createTable$1(HiveExternalCatalog.scala:288)
    2023-08-01 02:47:05.616 - stderr>     at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:245)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createTable(ExternalCatalogWithListener.scala:94)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:402)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:170)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:132)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:210)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:115)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.Dataset.<init>(Dataset.scala:221)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:101)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:68)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:501)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:619)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:613)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.Iterator.foreach(Iterator.scala:943)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.Iterator.foreach$(Iterator.scala:943)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.IterableLike.foreach(IterableLike.scala:74)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
    2023-08-01 02:47:05.616 - stderr>     at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:613)
    2023-08-01 02:47:05.616 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:310)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    2023-08-01 02:47:05.617 - stderr>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    2023-08-01 02:47:05.617 - stderr>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    2023-08-01 02:47:05.617 - stderr>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    2023-08-01 02:47:05.617 - stderr>     at java.lang.reflect.Method.invoke(Method.java:498)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129)
    2023-08-01 02:47:05.617 - stderr>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    2023-08-01 02:47:05.617 - stderr> Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.JsonParseException
    2023-08-01 02:47:05.617 - stderr>     at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    2023-08-01 02:47:05.617 - stderr>     at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
    2023-08-01 02:47:05.617 - stderr>     at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
    2023-08-01 02:47:05.617 - stderr>     ... 85 more
    ===========================
    End CliSuite failure output
    =========================== (CliSuite.scala:216)
    ```
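
    The stack trace shows where the class is needed: Hive resolves the SERDE class named in the DDL reflectively (`MetaStoreUtils.getDeserializer` calls `Configuration.getClassByName`, which calls `Class.forName`), and loading `org.apache.hive.hcatalog.data.JsonSerDe` drags in the old Codehaus Jackson classes, which are missing from the Maven test classpath. A minimal sketch of the failure mode (illustrative only, not code from this PR):

    ```
    // Illustration: Hive loads the SERDE class via reflection; resolving
    // JsonSerDe references org.codehaus.jackson.* classes, so the load fails
    // with NoClassDefFoundError when jackson-mapper-asl is not on the classpath.
    val serde = Class.forName("org.apache.hive.hcatalog.data.JsonSerDe")
    ```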
    
    So this PR adds `jackson-mapper-asl` as a test dependency to the `hive-thriftserver` module so that the Maven tests pass.
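
    To double-check that the dependency actually lands on the module's test classpath, the standard `dependency:tree` goal of the maven-dependency-plugin can be used (a sketch; the `-Dincludes` filter takes `groupId:artifactId`):

    ```
    ./build/mvn -pl sql/hive-thriftserver -Phive -Phive-thriftserver \
      dependency:tree -Dincludes=org.codehaus.jackson:jackson-mapper-asl
    ```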
    
    ### Why are the changes needed?
    Make the `hive-thriftserver` module's Maven tests pass.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    - Pass GitHub Actions
    - Manually checked with this PR:
    
    ```
    ./build/mvn -DskipTests -Pyarn -Pmesos -Pkubernetes -Pvolcano -Phive -Phive-thriftserver -Phadoop-cloud -Pspark-ganglia-lgpl clean install
    ./build/mvn -pl sql/hive-thriftserver -Phive -Phive-thriftserver clean install
    ```
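
    To iterate on just the suite that was failing (`CliSuite`, per the failure output above), the per-suite selectors used elsewhere in this repo should also work (a sketch; `-DwildcardSuites` selects ScalaTest suites and `-Dtest=none` skips the Java tests):

    ```
    ./build/mvn -pl sql/hive-thriftserver -Phive -Phive-thriftserver test \
      -Dtest=none -DwildcardSuites=org.apache.spark.sql.hive.thriftserver.CliSuite
    ```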
    
    ```
    Run completed in 17 minutes, 0 seconds.
    Total number of tests run: 601
    Suites: completed 20, aborted 0
    Tests: succeeded 601, failed 0, canceled 0, ignored 20, pending 0
    All tests passed.
    ```
    
    Closes #42260 from LuciferYang/SPARK-44601.
    
    Authored-by: yangjie01 <yangji...@baidu.com>
    Signed-off-by: yangjie01 <yangji...@baidu.com>
---
 sql/hive-thriftserver/pom.xml | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/sql/hive-thriftserver/pom.xml b/sql/hive-thriftserver/pom.xml
index 76a1037f1cb..3d9f6e3b2e9 100644
--- a/sql/hive-thriftserver/pom.xml
+++ b/sql/hive-thriftserver/pom.xml
@@ -147,6 +147,15 @@
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpcore</artifactId>
     </dependency>
+    <!--
+      SPARK-44601: Add this test dependency to ensure that `hive-thriftserver` module
+      can be tested using Maven
+    -->
+    <dependency>
+      <groupId>org.codehaus.jackson</groupId>
+      <artifactId>jackson-mapper-asl</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
  <build>
    <outputDirectory>target/scala-${scala.binary.version}/classes</outputDirectory>


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
