[jira] [Resolved] (HDFS-198) org.apache.hadoop.dfs.LeaseExpiredException during dfs write

2014-01-20 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-198.
--

Resolution: Not A Problem

This one has gone very stale, and we have not seen any substantiated reports 
recently of lease renewals going amiss during long-waiting tasks. Marking as 
'Not a Problem' (anymore). If there is a proper new report of this behaviour, 
please file a new JIRA with the newer data.

[~bugcy013] - Your problem is quite different from what the OP appears to have 
reported on an older version. It arises from MR tasks not using an attempt-ID 
based output directory (which Hive appears to skip sometimes), in which case 
two concurrently running attempts (from speculative execution or otherwise) can 
cause one of them to hit this error when the other overwrites the file. Best to 
investigate further on a mailing list rather than here.
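[Editorial illustration, not part of the original thread.] The race described above can be sketched in plain Java: when every attempt writes to the same shared path, the second create steals the first attempt's lease; attempt-scoped `_temporary/<attemptId>/` subdirectories (the layout Hadoop's committers use) keep the paths disjoint. The class and method names below are hypothetical, with no Hadoop dependency.

```java
// Sketch of why per-attempt output directories avoid the lease-stealing
// overwrite race. Hypothetical helper names; not Hadoop's actual API.
public class AttemptPaths {

    // Shared path: two speculative attempts for the same partition collide
    // on one file, so the second create revokes the first attempt's lease.
    static String sharedOutputPath(String jobDir, int partition) {
        return String.format("%s/part-%05d", jobDir, partition);
    }

    // Attempt-scoped path: each attempt writes under its own
    // _temporary/<attemptId>/ subtree, so no two attempts ever hold a
    // lease on the same file; the winner's directory is promoted on commit.
    static String attemptOutputPath(String jobDir, String attemptId, int partition) {
        return String.format("%s/_temporary/%s/part-%05d", jobDir, attemptId, partition);
    }

    public static void main(String[] args) {
        String a0 = attemptOutputPath("/xxx", "attempt_200810232126_0001_m_000033_0", 33);
        String a1 = attemptOutputPath("/xxx", "attempt_200810232126_0001_m_000033_1", 33);
        System.out.println(a0);
        System.out.println(a1);
        // The two attempt paths differ, while the shared path is identical
        // for both attempts -- that identical path is the collision point.
        System.out.println(sharedOutputPath("/xxx", 33));
    }
}
```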

> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> 
>
> Key: HDFS-198
> URL: https://issues.apache.org/jira/browse/HDFS-198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Reporter: Runping Qi
>
> Many long running cpu intensive map tasks failed due to 
> org.apache.hadoop.dfs.LeaseExpiredException.
> See [a comment 
> below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298]
>  for the exceptions from the log:



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] Resolved: (HDFS-198) org.apache.hadoop.dfs.LeaseExpiredException during dfs write

2010-09-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-198.
-

Resolution: Not A Problem

I believe this issue went stale.  Closing.

> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> 
>
> Key: HDFS-198
> URL: https://issues.apache.org/jira/browse/HDFS-198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Runping Qi
>
> Many long running cpu intensive map tasks failed due to 
> org.apache.hadoop.dfs.LeaseExpiredException.
> Here is an excerpt from the log:
> 2008-10-26 11:54:17,282 INFO org.apache.hadoop.dfs.DFSClient: 
> org.apache.hadoop.ipc.RemoteException: 
> org.apache.hadoop.dfs.LeaseExpiredException: No lease on 
> /xxx/_temporary/_task_200810232126_0001_m_33_0/part-00033 File does not 
> exist. [Lease.  Holder: 44 46 53 43 6c 69 65 6e 74 5f 74 61 73 6b 5f 32 30 30 
> 38 31 30 32 33 32 31 32 36 5f 30 30 30 31 5f 6d 5f 30 30 30 30 33 33 5f 30, 
> heldlocks: 0, pendingcreates: 1]
>   at org.apache.hadoop.dfs.FSNamesystem.checkLease(FSNamesystem.java:1194)
>   at 
> org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1125)
>   at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:300)
>   at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
>   at org.apache.hadoop.ipc.Client.call(Client.java:557)
>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>   at org.apache.hadoop.dfs.$Proxy1.addBlock(Unknown Source)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>   at org.apache.hadoop.dfs.$Proxy1.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2335)
>   at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2220)
>   at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1700(DFSClient.java:1702)
>   at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1842)
> 2008-10-26 11:54:17,282 WARN org.apache.hadoop.dfs.DFSClient: 
> NotReplicatedYetException sleeping 
> /xxx/_temporary/_task_200810232126_0001_m_33_0/part-00033 retries left 2
> 2008-10-26 11:54:18,886 INFO org.apache.hadoop.dfs.DFSClient: 
> org.apache.hadoop.ipc.RemoteException: 
> org.apache.hadoop.dfs.LeaseExpiredException: No lease on 
> /xxx/_temporary/_task_200810232126_0001_m_33_0/part-00033 File does not 
> exist. [Lease.  Holder: 44 46 53 43 6c 69 65 6e 74 5f 74 61 73 6b 5f 32 30 30 
> 38 31 30 32 33 32 31 32 36 5f 30 30 30 31 5f 6d 5f 30 30 30 30 33 33 5f 30, 
> heldlocks: 0, pendingcreates: 1]
>   at org.apache.hadoop.dfs.FSNamesystem.checkLease(FSNamesystem.java:1194)
>   at 
> org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1125)
>   at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:300)
>   at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
>   at org.apache.hadoop.ipc.Client.call(Client.java:557)
>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>   at org.apache.hadoop.dfs.$Proxy1.addBlock(Unknown Source)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>   at org.apache.hadoop.dfs.$Proxy1.a
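
[Editorial note, not part of the original thread.] The `Holder:` field in the traces above is the DFS client name printed as space-separated hex bytes. A quick decode (plain Java, hypothetical helper, no Hadoop dependency) recovers the readable name:

```java
// Decode the hex-encoded lease holder from the LeaseExpiredException message.
public class LeaseHolderDecode {

    // Turn "44 46 53 ..." into the ASCII string it encodes.
    static String decode(String hexBytes) {
        StringBuilder sb = new StringBuilder();
        for (String h : hexBytes.trim().split("\\s+")) {
            sb.append((char) Integer.parseInt(h, 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String holder = "44 46 53 43 6c 69 65 6e 74 5f 74 61 73 6b 5f 32 30 30 "
                + "38 31 30 32 33 32 31 32 36 5f 30 30 30 31 5f 6d 5f 30 30 30 30 33 33 5f 30";
        System.out.println(decode(holder));
        // prints DFSClient_task_200810232126_0001_m_000033_0
    }
}
```

The decoded holder matches the failing task attempt in the log path, which is what ties the lost lease to that specific map attempt.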