[ https://issues.apache.org/jira/browse/HIVE-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698807#comment-16698807 ]

Peter Vary commented on HIVE-20969:
-----------------------------------

My current theory is that HIVE-19008 changed the sparkSessionId generation, which 
affected scratchDir creation.

[~stakiar]: Could you help me out here? What was the original intention? I 
would assume it is good to connect the Spark session to the Hive session in 
every log message, so it would be good if the sparkSessionId contained the Hive 
session id too. Otherwise, when multiple HoS queries are running on the same 
HS2 instance, we will have a hard time differentiating between the multiple 
Spark sessions with id="1".

[~ngangam]: Any thoughts on this?
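To illustrate the idea above, here is a minimal sketch of a generator that embeds the Hive session id in the Spark session id. The class and method names are hypothetical, not the actual Hive APIs; the point is only that two concurrent HoS sessions on one HS2 instance would no longer both end up as id="0" under {{_spark_session_dir}}:

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: derive the Spark session id from the owning Hive
// session id plus a per-process counter, so concurrent sessions map to
// distinct scratch directories and log messages stay correlated.
public class SparkSessionIdSketch {

    // Monotonic counter shared across the HS2 process.
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    // Combine the Hive session id with the next counter value.
    public static String newSparkSessionId(String hiveSessionId) {
        return hiveSessionId + "_" + COUNTER.getAndIncrement();
    }

    public static void main(String[] args) {
        // Two sessions started from the same Hive session still differ,
        // and both remain traceable back to that Hive session in logs.
        System.out.println(newSparkSessionId("hive-session-abc"));
        System.out.println(newSparkSessionId("hive-session-abc"));
    }
}
{code}

With ids of this shape, the scratch path would look like {{/tmp/hive/_spark_session_dir/<hiveSessionId>_<n>/}} instead of a bare counter, which avoids two sessions racing on the same directory when uploading files such as the hive-exec jar.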

 

> HoS sessionId generation can cause race conditions when uploading files to 
> HDFS
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-20969
>                 URL: https://issues.apache.org/jira/browse/HIVE-20969
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 4.0.0
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>            Priority: Major
>
> The observed exception is:
> {code}
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/hive/_spark_session_dir/0/hive-exec-2.1.1-SNAPSHOT.jar (inode 21140) 
> [Lease.  Holder: DFSClient_NONMAPREDUCE_304217459_39, pending creates: 1]
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2781)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2660)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
