[
https://issues.apache.org/jira/browse/SPARK-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hyukjin Kwon updated SPARK-18626:
---------------------------------
Labels: bug bulk-closed (was: bug)
> Concurrent write to table fails from spark
> ------------------------------------------
>
> Key: SPARK-18626
> URL: https://issues.apache.org/jira/browse/SPARK-18626
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.5.0
> Reporter: Thomas Sebastian
> Priority: Major
> Labels: bug, bulk-closed
>
> When the same Spark job is submitted twice via spark-submit so that the two
> runs execute concurrently, both jobs fail: concurrent writes (Append mode)
> to a Hive table are not possible.
> ERROR InsertIntoHadoopFsRelation: Aborting job.
> java.io.IOException: Failed to rename
> FileStatus{path=hdfs://nameservice1/user/hive/warehouse/aaa.db/table1/_temporary/0/task_201611210639_0017_m_000050/part-r-00050-00e873af-e3ab-4730-881f-e8a1b22077e0.gz.parquet;
> isDirectory=false; length=492; replication=3; blocksize=134217728;
> modification_time=1479728364366; access_time=1479728364062;
> owner=name; group=hive; permission=rw-rw-r--; isSymlink=false} to
> hdfs://nameservice1/user/hive/warehouse/aaa.db/table1t/part-r-00050-00e873af-e3ab-4730-881f-e8a1b22077e0.gz.parquet
> at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:371)
> at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:384)
> at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:326)
> at parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
>
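The failure mode in the stack trace can be sketched outside of Spark. Hadoop's FileOutputCommitter stages all task output under a single `_temporary/0` directory inside the destination table path; when two jobs append to the same table concurrently, whichever job commits first renames its files into place and then deletes the whole `_temporary` tree, removing the second job's staged files. The sketch below is a minimal local-filesystem simulation of that layout (the directory and file names are hypothetical stand-ins for the HDFS paths in the log, not Spark or Hadoop API calls):

```python
import os
import shutil
import tempfile

# Hypothetical local stand-in for the table directory on HDFS. Both
# concurrent jobs stage output under the SAME <table>/_temporary/0 path,
# which is the layout FileOutputCommitter uses.
table = tempfile.mkdtemp(prefix="table1_")
staging = os.path.join(table, "_temporary", "0")

def write_task(job_id):
    # Each job's task writes its part file under the shared staging dir.
    task_dir = os.path.join(staging, f"task_{job_id}")
    os.makedirs(task_dir, exist_ok=True)
    part = os.path.join(task_dir, f"part-r-00050-{job_id}.gz.parquet")
    open(part, "w").close()
    return part

def commit_job(part):
    # Simulated commitJob: rename the part file into the table root,
    # then clean up the entire _temporary tree -- including the staged
    # output of any OTHER job still running against this table.
    os.rename(part, os.path.join(table, os.path.basename(part)))
    shutil.rmtree(os.path.join(table, "_temporary"))

part_a = write_task("jobA")
part_b = write_task("jobB")

commit_job(part_a)           # job A commits and wipes _temporary
try:
    commit_job(part_b)       # job B's rename fails: its source is gone
    failed = False
except FileNotFoundError:
    failed = True

committed = os.path.basename(part_a) in os.listdir(table)
shutil.rmtree(table)
```

In the real job the same race surfaces as the `java.io.IOException: Failed to rename` above, raised from `FileOutputCommitter.mergePaths` during `commitJob`.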
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]