Since I did not get any response, I am reposting this in the hope of getting some attention.

On Fri, May 27, 2011 at 7:57 PM, sudhanshu arora
<sudhanshu.ar...@gmail.com>wrote:

> I am writing multiple files to HDFS using multiple FSOutputStreams from
> different threads. All the files are written properly, and the namenode and
> datanode logs show no errors. The namenode log suggests that all the files
> have been closed.
>
> However, close() on one of my streams consistently fails with the following
> exception:
>
> java.io.IOException: Call to flint/<ipaddress>:9000 failed on local exception: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/<ipaddress>:57082 remote=flint/10.1.41.176:9000]. 59979 millis timeout left.
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.complete(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
> Caused by: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/<ipaddress>:57082 remote=flint/<ipaddress>:9000]. 59979 millis timeout left.
>         at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>         at java.io.FilterInputStream.read(FilterInputStream.java:116)
>         at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
>         at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>         at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
>
> Any help is highly appreciated.
>
> Thanks,
> Sudhanshu
>
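For what it's worth, java.io.InterruptedIOException is the generic exception Java IO raises when a thread is interrupted while blocked on IO, which is what the trace shows happening to the RPC thread waiting on the namenode during close(). The same failure mode can be reproduced without Hadoop at all; here is a minimal sketch (the class name InterruptDemo is made up for illustration):

```java
import java.io.InterruptedIOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class InterruptDemo {

    // Block a thread in PipedInputStream.read(), interrupt it, and
    // return the name of the exception the blocked read raises.
    static String interruptBlockedRead() throws Exception {
        PipedInputStream in = new PipedInputStream(new PipedOutputStream());
        final String[] caught = new String[1];
        Thread reader = new Thread(() -> {
            try {
                in.read();                    // blocks: nothing is ever written
            } catch (InterruptedIOException e) {
                caught[0] = "InterruptedIOException";
            } catch (Exception e) {
                caught[0] = e.getClass().getSimpleName();
            }
        });
        reader.start();
        Thread.sleep(500);   // give the reader time to block inside read()
        reader.interrupt();  // interrupt it while it is blocked on IO
        reader.join();
        return caught[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read failed with: " + interruptBlockedRead());
    }
}
```

So if anything in the application (an executor shutdownNow(), a timeout, a cancelled Future) is interrupting the thread that calls close(), that would be consistent with this exception even though the namenode side looks clean.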
