Hey all,

We're working on setting up a Spark 1.6.1 cluster on Amazon EC2 and are
running into problems related to preemption.  We have followed all the
instructions for setting up dynamic allocation, including enabling the
external Spark shuffle service in the YARN NodeManagers.
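
For reference, the relevant settings look roughly like the following
(values are illustrative rather than our exact ones).  In
spark-defaults.conf:

    spark.dynamicAllocation.enabled    true
    spark.shuffle.service.enabled      true
    # default port; we have not overridden it
    spark.shuffle.service.port         7337

and in yarn-site.xml on each NodeManager:

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>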

When a container is preempted, we see RPC calls continue to be made to
its host/port; these, of course, fail.  Eventually a combination of
exceptions manages to kill our job outright.  Interestingly, downgrading
to 1.4.1 and switching spark.shuffle.blockTransferService to nio fixes
the problem completely; but we need functionality added in later
versions, so going back to Spark 1.4.1 isn't an option.
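
(For concreteness, the 1.4.1 workaround was a single line in
spark-defaults.conf:

    # fall back to the NIO-based transport instead of netty
    spark.shuffle.blockTransferService    nio

As far as we can tell, that setting and the NIO transport were removed
in Spark 1.6, so this escape hatch is no longer available to us.)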

Is this a network configuration issue, a resource allocation issue, or
something along those lines?  Or is there actually a bug here?  Have any
of you run a production cluster on Spark 1.6.1, Hadoop 2.7.2, and the
YARN FairScheduler with preemption and dynamic allocation enabled?
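
In case the scheduler setup matters, ours is the stock configuration;
roughly, in yarn-site.xml:

    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
      <name>yarn.scheduler.fair.preemption</name>
      <value>true</value>
    </property>

(queue definitions in fair-scheduler.xml omitted).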

Some examples of the exceptions we're seeing are below.

Thanks!
Nick

---------------------------

16/06/22 08:13:30 ERROR spark.ContextCleaner: Error cleaning RDD 49
java.io.IOException: Failed to send RPC 5721681506291542850 to nodexx.xx.xxxx.ddns.xx.com/xx.xx.xx.xx:42857: java.nio.channels.ClosedChannelException
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
        at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
        at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException

------------

16/06/19 22:33:14 INFO storage.BlockManager: Removing RDD 122
16/06/19 22:33:14 WARN server.TransportChannelHandler: Exception in connection from nodexx-xx-xx.xx.ddns.xx.com/xx.xx.xx.xx:56618
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:192)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
16/06/19 22:33:14 ERROR client.TransportResponseHandler: Still have 2 requests outstanding when connection from nodexx-xx-xx.xxxx.ddns.xx.com/xx.xx.xx.xx:56618 is closed.
