André, Christopher,
thanks for the response.
The requirement is that the system must be able to process 160 req/sec
(200 would be even better, to have some headroom), and the system itself
is a kind of failover proxy.

There are 2 backing webservices; each can take at most 20 s to answer. If
there is a timeout on the first, I must call the second, and if there is a
timeout on the second I send a SOAP fault to the client.
So usually it shouldn't be more than 20 s per request; the guys say that
normally it is 7-10 seconds per request,
but in the worst case it is 2 * 20 s * 160 requests/s ~= 6400 pending
requests (and according to the deal we must fulfill the worst case).
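
Just to show the timeout part concretely (a sketch only, not my actual
code - the class name and the applyTimeouts helper are made up, and the
5 s connect timeout is an assumption), the 20 s limit per backing call
can be set on each CXF client proxy through its HTTP conduit:

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public final class BackingClientConfig {

    // "port" is the generated JAX-WS proxy for one of the two backing services.
    public static void applyTimeouts(Object port) {
        Client client = ClientProxy.getClient(port);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setConnectionTimeout(5000);  // assumed connect budget, not from the requirements
        policy.setReceiveTimeout(20000);    // the 20 s answer limit per backing service
        conduit.setClient(policy);
    }
}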

Even if there are that many requests, they are just pending on sockets.
I am trying to do it with NIO, asynchronous servlets and async CXF on
both sides - the webservice I expose is an async CXF endpoint, and I also
call the backing webservices with the async CXF client.
I think even one Tomcat on one server should be able to serve those 6400
pending requests at 160 req/s.
Apart from the proxying there are also 4-6 inserts into the database
(client request and response; 1st WS call and response; 2nd WS call and
response).
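
Below is a stripped-down sketch of the flow I have in mind, written as a
plain Servlet 3.0 async servlet instead of the real CXF-generated endpoint
just to show the failover idea - every name in it (FailoverProxyServlet,
BackingPort, the /proxy mapping, the 45 s async timeout) is invented, and
the 4-6 database inserts are left out:

import java.io.IOException;
import java.util.concurrent.Future;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Response;

// Sketch only: a plain async servlet standing in for the CXF-exposed endpoint,
// calling two backing ports asynchronously with timeout-based failover.
@WebServlet(urlPatterns = "/proxy", asyncSupported = true)
public class FailoverProxyServlet extends HttpServlet {

    // Rough shape of a generated async port (hypothetical; the real one comes from wsdl2java).
    public interface BackingPort {
        Future<?> processAsync(String request, AsyncHandler<String> handler);
    }

    private BackingPort first;   // wired up elsewhere, 20 s receive timeout on its conduit
    private BackingPort second;  // same configuration

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        final AsyncContext ctx = req.startAsync();
        ctx.setTimeout(45 * 1000L);           // container safety net: 2 * 20 s plus slack
        final String payload = "...";         // real code would read/unmarshal the SOAP body

        first.processAsync(payload, new AsyncHandler<String>() {
            public void handleResponse(Response<String> res) {
                try {
                    complete(ctx, res.get()); // first backing service answered in time
                } catch (Exception timeoutOrFault) {
                    failover(ctx, payload);   // 20 s timeout (or fault) -> try the second one
                }
            }
        });
    }

    private void failover(final AsyncContext ctx, final String payload) {
        second.processAsync(payload, new AsyncHandler<String>() {
            public void handleResponse(Response<String> res) {
                try {
                    complete(ctx, res.get());
                } catch (Exception timeoutOrFault) {
                    fault(ctx);               // both timed out -> SOAP fault to the client
                }
            }
        });
    }

    private void complete(AsyncContext ctx, String body) {
        try {
            ctx.getResponse().getWriter().write(body);
        } catch (IOException ignored) {
        }
        ctx.complete();
    }

    private void fault(AsyncContext ctx) {
        ((HttpServletResponse) ctx.getResponse()).setStatus(500);
        complete(ctx, "<soap:Fault>...</soap:Fault>"); // placeholder fault body
    }
}

The point is that no container thread is held while either backing call is
pending; the request just waits on the NIO socket until one of the handlers
completes the AsyncContext.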

How do you assess such an architecture/approach?
Do you expect problems with an async exposed webservice based on async
servlets and NIO, combined with an async CXF WS client?
AFAIK CXF uses thread locals - are they OK with Tomcat async servlets?
(I don't define any thread locals myself, only CXF possibly does.)

Regards
Jakub

PS
I didn't really expect to be able to serialize a TCP socket to a DB, but
if it were possible, then I would have request replication.


On Tue, Jun 11, 2013 at 9:57 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> André,
>
> On 6/11/13 11:32 AM, André Warnier wrote:
> > Christopher Schultz wrote:
> >>
> >> Ja,
> >>
> >> On 6/11/13 9:54 AM, Ja kub wrote:
> >>> What can be done to guarantee failover in below scenario:
> >>>
> >>> 2 tomcats behind a cisco loadbalancer; 1 http request can last
> >>> very long, about 50 seconds - the response from the webservice can
> >>> take that long; load is 200 requests per second; I must respond in
> >>> at most 4 seconds more than the backing webservice
> >>>
> >>> is there something like http request replication?
> >>>
> >>> 50 s * 200 req/s = 10.000 pending requests
> >>>
> >>> if one tomcat is e.g. killed, can the other tomcat serve its
> >>> requests in any way?
> >>>
> >>> is there any out-of-the-box solution, e.g. similar to session
> >>> replication?
> >>
> >> The best way to do this is to configure your load balancer to
> >> buffer responses and re-try another cluster node in the case of
> >> an unexpected disconnect.
> >>
> >> If you can't buffer the response, then it is entirely
> >> inappropriate to re-process a request: instead, you should let
> >> the failure propagate all the way back to the client and let them
> >> decide what to do.
> >>
> >>> is it possible to save socket to database, or send it via
> >>> network?
> >>
> >> No. I think you are confused about what a socket is.
> >>
> >
> > Is it just me, or does this look like a *massive* imbalance
> > between the load and the resources dedicated to serving that load?
>
> +1
>
> 200 req/sec * 50 seconds per request?
>
> I get that some folks do high-volume, high-response-time transactions.
> Thankfully, not I ;)
>
> > I somehow have trouble envisioning any system working in any stable
> > way when, right from the start, it is assumed to have 10,000
> > requests simultaneously in various stages of processing.
> > Unless one had some Google-like server farm behind the thing
> > anyway.
>
> 10k concurrent requests isn't really that insane. It's just having
> them for nearly a minute each that's quite extraordinary.
>
> -chris
