appleyuchi created SPARK-31629:
----------------------------------

             Summary: "py4j.protocol.Py4JJavaError: An error occurred while 
calling o90.save" in pyspark 2.3.1
                 Key: SPARK-31629
                 URL: https://issues.apache.org/jira/browse/SPARK-31629
             Project: Spark
          Issue Type: Bug
          Components: Java API
    Affects Versions: 2.3.1
         Environment: Ubuntu 19.10

anaconda3-python3.6.10

scala 2.11.8

apache-hive-3.0.0-bin

hadoop-2.7.7

spark-2.3.1-bin-hadoop2.7

java version "1.8.0_131"

MySQL server version: 8.0.19-0ubuntu0.19.10.3 (Ubuntu)

Driver: mysql-connector-java-8.0.20.jar

[Driver link|https://mvnrepository.com/artifact/mysql/mysql-connector-java/8.0.20]

 
            Reporter: appleyuchi
             Fix For: 1.5.0, 1.4.1


I have searched the forum:

[SPARK-8365|https://issues.apache.org/jira/browse/SPARK-8365] reported the same issue in Spark 1.4.0, and

[SPARK-8368|https://issues.apache.org/jira/browse/SPARK-8368] fixed it in Spark 1.4.1 and 1.5.0.

However, in Spark 2.3.1, this bug occurs again.

Please help, thanks!

#----------------------------------------------------------------------

test.py: [https://paste.ubuntu.com/p/HJfbcQ2zq3/]
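
In case the paste link expires, here is a minimal sketch of the kind of write the traceback points at. The database name 'leaf', the user, and the final mode('append').save() call come from the traceback below; the host, port, and table name are assumptions, not the actual test.py:

{code:python}
# Hypothetical reconstruction of test.py -- the real script is at the paste link above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql_write_test").getOrCreate()

# Toy DataFrame; the real schema and data are in test.py.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# The traceback ends in ...password="appleyuchi").mode('append').save(),
# i.e. a JDBC write. Host, port, and table name below are assumptions;
# the database name 'leaf' comes from the SQLSyntaxErrorException.
df.write.format("jdbc").options(
    url="jdbc:mysql://localhost:3306/leaf",
    driver="com.mysql.cj.jdbc.Driver",
    dbtable="some_table",
    user="appleyuchi",
    password="appleyuchi",
).mode('append').save()
{code}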

 

The command used to run it is:

spark-submit --master yarn --deploy-mode cluster test.py
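
For reference, SPARK-8365 was about the JDBC driver class not being visible on the cluster classpath in this kind of setup. A submission that ships the driver jar explicitly via --jars would look like this (the jar path is an assumption; how the driver was actually supplied to the cluster is not shown above):

{code:bash}
# Assumed path to the driver jar; adjust to the actual location.
spark-submit --master yarn --deploy-mode cluster \
  --jars /path/to/mysql-connector-java-8.0.20.jar \
  test.py
{code}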

 

Then I got:

Traceback (most recent call last):
 File "test.py", line 45, in <module>
 password="appleyuchi").mode('append').save()
 File "/home/appleyuchi/bigdata/hadoop_tmp/nm-local-dir/usercache/appleyuchi/appcache/application_1588504345289_0003/container_1588504345289_0003_01_000001/pyspark.zip/pyspark/sql/readwriter.py", line 703, in save
 File "/home/appleyuchi/bigdata/hadoop_tmp/nm-local-dir/usercache/appleyuchi/appcache/application_1588504345289_0003/container_1588504345289_0003_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
 File "/home/appleyuchi/bigdata/hadoop_tmp/nm-local-dir/usercache/appleyuchi/appcache/application_1588504345289_0003/container_1588504345289_0003_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
 File "/home/appleyuchi/bigdata/hadoop_tmp/nm-local-dir/usercache/appleyuchi/appcache/application_1588504345289_0003/container_1588504345289_0003_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
*py4j.protocol.Py4JJavaError: An error occurred while calling o90.save.*
: java.sql.SQLSyntaxErrorException: Unknown database 'leaf'
 at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
 at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
 at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
 at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
 at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
 at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
 at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197)
 at org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper.connect(DriverWrapper.scala:45)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:63)
 at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
 at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
 at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
 at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
 at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
 at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
 at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
 at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
 at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
 at py4j.Gateway.invoke(Gateway.java:282)
 at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
 at py4j.commands.CallCommand.execute(CallCommand.java:79)
 at py4j.GatewayConnection.run(GatewayConnection.java:238)
 at java.lang.Thread.run(Thread.java:748)


