[ https://issues.apache.org/jira/browse/SPARK-16741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394501#comment-15394501 ]
Zoltan Fedor commented on SPARK-16741:
--------------------------------------

Thanks Sean. I agree, it is debatable whether this is a bug or not. My thinking is that since I don't see a scenario where df.write.jdbc() would need to run in speculative mode, it should automatically turn speculation off when called. Whether df.write.jdbc() not turning speculation off automatically is a bug or a missing feature is debatable. I am okay with changing this to a feature request.

> spark.speculation causes duplicate rows in df.write.jdbc()
> ----------------------------------------------------------
>
>                 Key: SPARK-16741
>                 URL: https://issues.apache.org/jira/browse/SPARK-16741
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.2
>         Environment: PySpark 1.6.2, Oracle Linux 6.5, Oracle 11.2
>            Reporter: Zoltan Fedor
>
> Since a fix added in Spark 1.6.2 we can write string data back into an Oracle
> database, so I went to try it out and found that rows showed up duplicated in
> the database table after they were inserted into our Oracle database.
> The code we use is very simple:
> df = sqlContext.sql("SELECT * FROM example_temp_table")
> df.write.jdbc("jdbc:oracle:thin:"+connection_script, "target_table")
> The 'target_table' in the database ends up with twice as many rows as the
> 'df' dataframe in SparkSQL.
> After some investigation it turned out that this is caused by our
> spark.speculation setting being set to True.
> As soon as we turned it off, no more duplicates were generated.
> This somewhat makes sense - spark.speculation runs a second copy of map
> tasks - resulting in every row being inserted into our Oracle database
> twice.
> Probably the df.write.jdbc() method does not account for a Spark context
> running in speculative mode, so the inserts coming from the speculative
> tasks also get committed - causing every record to be inserted twice.
> This bug is likely independent of the database type (we use Oracle) and of
> whether PySpark, Scala, or Java is used.
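For reference, a minimal PySpark 1.6 sketch of the workaround described above: turning spark.speculation off before the SparkContext is created, since the scheduler typically reads that setting at startup rather than per-job. The connection string value below is a hypothetical placeholder standing in for the reporter's connection_script variable:

    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SQLContext

    # Disable speculative execution up front; the scheduler reads
    # spark.speculation when the SparkContext starts, so flipping it
    # right before the write would come too late.
    conf = SparkConf().set("spark.speculation", "false")
    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)

    # Hypothetical Oracle connection string, standing in for the
    # reporter's connection_script variable.
    connection_script = "user/password@//dbhost:1521/service"

    df = sqlContext.sql("SELECT * FROM example_temp_table")
    df.write.jdbc("jdbc:oracle:thin:" + connection_script, "target_table")

With speculation disabled, each partition is written by exactly one task attempt, so the duplicate inserts described in the report should not occur.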