OK, running on Tomcat 8.0.8 with Spring 4.0.5 and Reactor 1.1.2, my log
fills up with these exceptions within minutes.

Seems like it could be a Tomcat issue after all:
11:35:37,922 ERROR http-nio-80-exec-37 handler.LoggingWebSocketHandlerDecorator:61 - Transport error for SockJS session id=hv4ncrvg
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:135)
        at org.apache.coyote.http11.upgrade.NioServletInputStream.fillReadBuffer(NioServletInputStream.java:136)
        at org.apache.coyote.http11.upgrade.NioServletInputStream.doRead(NioServletInputStream.java:80)
        at org.apache.coyote.http11.upgrade.AbstractServletInputStream.read(AbstractServletInputStream.java:120)
        at org.apache.tomcat.websocket.server.WsFrameServer.onDataAvailable(WsFrameServer.java:46)
        at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler$WsReadListener.onDataAvailable(WsHttpUpgradeHandler.java:194)
        at org.apache.coyote.http11.upgrade.AbstractServletInputStream.onDataAvailable(AbstractServletInputStream.java:194)
        at org.apache.coyote.http11.upgrade.AbstractProcessor.upgradeDispatch(AbstractProcessor.java:95)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:650)
        at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:222)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1566)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1523)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:744)
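
In the meantime, since "Connection reset by peer" on a WebSocket read
usually just means the client dropped the connection abruptly, one possible
stopgap for the log noise is a custom handler decorator that downgrades
those transport errors. This is only a sketch under assumptions: the
QuietTransportErrorDecorator name is made up, and Spring's own
LoggingWebSocketHandlerDecorator will still log at ERROR unless that
logger's level is also raised in the logging config.

import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.WebSocketHandlerDecorator;

// Sketch: treat client resets as routine churn, not server errors.
public class QuietTransportErrorDecorator extends WebSocketHandlerDecorator {

    private static final Logger log =
            LoggerFactory.getLogger(QuietTransportErrorDecorator.class);

    public QuietTransportErrorDecorator(WebSocketHandler delegate) {
        super(delegate);
    }

    @Override
    public void handleTransportError(WebSocketSession session, Throwable ex)
            throws Exception {
        if (ex instanceof IOException) {
            // The peer went away abruptly; log quietly but still let the
            // delegate run its own cleanup.
            log.debug("Client dropped session {}: {}",
                    session.getId(), ex.getMessage());
        }
        super.handleTransportError(session, ex);
    }
}

Wrapping the handler at registration time (e.g. new
QuietTransportErrorDecorator(myHandler)) keeps behavior unchanged apart
from how those resets get logged.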


Prashant


On Mon, Jun 16, 2014 at 5:26 PM, Prashant Deva <prashant.d...@gmail.com>
wrote:

> Just noticed this: you tried running on Tomcat 8, while we are running
> 7.0.53...
>
> Prashant
>
>
> On Mon, Jun 16, 2014 at 1:36 PM, Prashant Deva <prashant.d...@gmail.com>
> wrote:
>
>> Our production instance (which we are running into issues with) has about
>> 2500 concurrent users.
>>
>> Prashant
>>
>>
>> On Mon, Jun 16, 2014 at 1:26 PM, Rossen Stoyanchev <
>> rstoyanc...@gopivotal.com> wrote:
>>
>>> On Mon, Jun 16, 2014 at 4:04 PM, Prashant Deva <prashant.d...@gmail.com>
>>> wrote:
>>>
>>> > Rossen,
>>> >  Did you use an external queue?
>>>
>>>
>>> Yes, I did have the sample configured to use RabbitMQ for broadcasting
>>> messages. That's running as a separate process, though, so it shouldn't
>>> change the output of "lsof".
>>>
>>> > How many clients were connected at the same time?
>>>
>>> In the sample, just one. I also ran a load test with 500 concurrent users
>>> (1 million messages) and the file descriptor count remains stable (around
>>> 500).
>>>
>>> I'd reverse the question: how many users do you have to run with to
>>> demonstrate the issue?
>>>
>>> Rossen
>>>
>>
>>
>
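
An aside on the "lsof" checks above: the open-descriptor count can also be
sampled from inside the JVM, which makes it easy to watch for a leak during
a load test. A minimal sketch, assuming a HotSpot/OpenJDK JVM on Unix (the
FdMonitor class name is just illustrative; the com.sun.management calls are
the standard platform MXBean API):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {

    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            System.err.println("Not a Unix JVM; fall back to lsof.");
            return;
        }
        UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
        // Poll every 10 seconds. A count that climbs steadily under a
        // constant number of clients points at a descriptor leak; a count
        // that tracks the client count (as in the 500-user test) is fine.
        while (true) {
            System.out.printf("open fds: %d (max %d)%n",
                    unix.getOpenFileDescriptorCount(),
                    unix.getMaxFileDescriptorCount());
            Thread.sleep(10000);
        }
    }
}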
