Hi Team,

We are running the HDFS mover on our cluster to move blocks from DISK to
ARCHIVE storage, but while it runs, DataNodes are shutting down with the
error *"read(2) error: Resource temporarily unavailable"*.
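For reference, this is roughly how we kick off the migration (the path and
policy name below are illustrative, not our actual ones):

    # tag the data with a colder storage policy so its target tier is ARCHIVE
    hdfs storagepolicies -setStoragePolicy -path /data/warehouse -policy COLD
    # run the mover so existing blocks are migrated to match the new policy
    hdfs mover -p /data/warehouse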

Since this error comes from a system call, it does not say much about which
exact resource is unavailable. I have already tried increasing the open file
limit, but with no success.
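This is roughly what I checked and changed for the file limit (values are
illustrative, and it assumes a single DataNode process per host):

    # check the limit the running DataNode process actually sees
    grep 'open files' /proc/$(pgrep -f 'datanode.DataNode')/limits
    # raised nofile for the hdfs user in /etc/security/limits.conf, e.g.:
    #   hdfs  -  nofile  128000
    # then restarted the DataNodes so the new limit is picked up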


Here is the stack trace:


8215463-10.10.220.2-1445859014971:blk_5370724851_1103954669487): I/O error requesting file descriptors.  Disabling domain socket DomainSocket(fd=861,path=/var/run/hadoop/dn_socket)
java.net.SocketTimeoutException: read(2) error: Resource temporarily unavailable
        at org.apache.hadoop.net.unix.DomainSocket.readArray0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocket.access$000(DomainSocket.java:45)
        at org.apache.hadoop.net.unix.DomainSocket$DomainInputStream.read(DomainSocket.java:532)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2278)
        at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:542)
        at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:490)
        at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:782)
        at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:716)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:635)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.util.LimitInputStream.read(LimitInputStream.java:68)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.PushbackInputStream.read(PushbackInputStream.java:139)
        at io.netty.handler.stream.ChunkedStream.isEndOfInput(ChunkedStream.java:87)
        at io.netty.handler.stream.ChunkedStream.readChunk(ChunkedStream.java:104)
        at io.netty.handler.stream.ChunkedStream.readChunk(ChunkedStream.java:34)
        at io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:225)
        at io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:139)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:663)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:693)
        at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:681)
        at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:716)
        at org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:221)
        at org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:131)
        at org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:113)
        at org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:110)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:110)
        at org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
        at org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:748)
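
The socket in the trace is the short-circuit read domain socket
(/var/run/hadoop/dn_socket). In case those settings are relevant, this is how
I pull the values in effect on the DataNodes (keys only, actual values
omitted here):

    hdfs getconf -confKey dfs.client.read.shortcircuit
    hdfs getconf -confKey dfs.domain.socket.path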




-- 

Regards,
Akash Mishra.


"It's not our abilities that make us, but our decisions."--Albus Dumbledore
