[ https://issues.apache.org/jira/browse/SPARK-10890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Kadner updated SPARK-10890:
-------------------------------------
    Description: 
I get the following error when I run this test:

{noformat}
mvn -Dhadoop.version=2.4.0 -DwildcardSuites=org.apache.spark.sql.jdbc.JDBCWriteSuite test
{noformat}

{noformat}
JDBCWriteSuite:
13:22:15.603 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13:22:16.506 WARN org.apache.spark.metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
- Basic CREATE
- CREATE with overwrite
- CREATE then INSERT to append
- CREATE then INSERT to truncate
13:22:19.312 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 23.0 (TID 31)
org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
        at org.h2.message.DbException.get(DbException.java:179)
        at org.h2.message.DbException.get(DbException.java:155)
        at org.h2.message.DbException.get(DbException.java:144)
        at org.h2.command.dml.Insert.prepare(Insert.java:265)
        at org.h2.command.Parser.prepareCommand(Parser.java:247)
        at org.h2.engine.Session.prepareLocal(Session.java:446)
        at org.h2.engine.Session.prepareCommand(Session.java:388)
        at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
        at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
        at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
13:22:19.312 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 23.0 (TID 32)
org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
        at org.h2.message.DbException.get(DbException.java:179)
        at org.h2.message.DbException.get(DbException.java:155)
        at org.h2.message.DbException.get(DbException.java:144)
        at org.h2.command.dml.Insert.prepare(Insert.java:265)
        at org.h2.command.Parser.prepareCommand(Parser.java:247)
        at org.h2.engine.Session.prepareLocal(Session.java:446)
        at org.h2.engine.Session.prepareCommand(Session.java:388)
        at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
        at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
        at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
13:22:19.325 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 23.0 (TID 32, localhost): org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
        at org.h2.message.DbException.get(DbException.java:179)
        at org.h2.message.DbException.get(DbException.java:155)
        at org.h2.message.DbException.get(DbException.java:144)
        at org.h2.command.dml.Insert.prepare(Insert.java:265)
        at org.h2.command.Parser.prepareCommand(Parser.java:247)
        at org.h2.engine.Session.prepareLocal(Session.java:446)
        at org.h2.engine.Session.prepareCommand(Session.java:388)
        at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
        at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
        at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

13:22:19.327 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 23.0 failed 1 times; aborting job
- Incompatible INSERT to append
- INSERT to JDBC Datasource
- INSERT to JDBC Datasource with overwrite
Run completed in 6 seconds, 390 milliseconds.
Total number of tests run: 7
Suites: completed 2, aborted 0
Tests: succeeded 7, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
{noformat}

All seven tests pass, but the run spits out an alarming stack trace for the exception that the "Incompatible INSERT to append" test deliberately provokes. It would be better if that expected stack trace were swallowed.
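One way the noise could be suppressed (an illustrative sketch, not the actual Spark test code) is to temporarily raise the offending logger's threshold around the statement that is expected to fail, restoring it afterwards. The sketch below uses `java.util.logging` so it is self-contained; the Spark suite would instead adjust its log4j loggers, and the names `withQuietLogger` and `demo.executor` are hypothetical:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietLoggerDemo {

    // Run `body` with the named logger silenced, restoring the previous
    // level afterwards even if `body` throws.
    static <T> T withQuietLogger(String loggerName, Supplier<T> body) {
        Logger logger = Logger.getLogger(loggerName);
        Level saved = logger.getLevel();   // may be null (inherits from parent)
        logger.setLevel(Level.OFF);
        try {
            return body.get();
        } finally {
            logger.setLevel(saved);
        }
    }

    public static void main(String[] args) {
        Logger executorLog = Logger.getLogger("demo.executor");
        String result = withQuietLogger("demo.executor", () -> {
            // Stand-in for the expected failure: logged at SEVERE,
            // but suppressed by the OFF level while inside the block.
            executorLog.severe("Column count does not match");
            return "test passed";
        });
        System.out.println(result);
    }
}
```

The same pattern would wrap only the `intercept`-style assertion that provokes the exception, so genuinely unexpected errors elsewhere in the suite still reach the console.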

---

*_Edit 2017/01/14:_* This issue may have been addressed by another fix. The 
reported ERROR messages and stack traces are no longer printed to the console:

{code:none|title=$ mvn scalatest:test -pl sql/core -Dsuites='org.apache.spark.sql.jdbc.JDBCWriteSuite' -q|titleBGColor=#eee}
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Run starting. Expected test count is: 20
JDBCWriteSuite:
16:33:54.654 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Basic CREATE
- Basic CREATE with illegal batchsize
- Basic CREATE with batchsize
- CREATE with ignore
- CREATE with overwrite
- CREATE then INSERT to append
- SPARK-18123 Append with column names with different cases
- Truncate
- createTableOptions
- Incompatible INSERT to append
- INSERT to JDBC Datasource
- INSERT to JDBC Datasource with overwrite
- save works for format("jdbc") if url and dbtable are set
- save API with SaveMode.Overwrite
- save errors if url is not specified
- save errors if dbtable is not specified
- save errors if wrong user/password combination
- save errors if partitionColumn and numPartitions and bounds not set
- SPARK-18433: Improve DataSource option keys to be more case-insensitive
- SPARK-18413: Use `numPartitions` JDBCOption
Run completed in 6 seconds, 541 milliseconds.
Total number of tests run: 20
Suites: completed 1, aborted 0
Tests: succeeded 20, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
{code}



> "Column count does not match; SQL statement:" error in JDBCWriteSuite
> ---------------------------------------------------------------------
>
>                 Key: SPARK-10890
>                 URL: https://issues.apache.org/jira/browse/SPARK-10890
>             Project: Spark
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.5.0
>            Reporter: Rick Hillegas
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
