yes, most hardware load balancers handle sticky sessions. this was back in
2001-2002. I don't know which model number it was, but it was part of
Cisco's LocalDirector line of routers.

peter

On Jan 31, 2008 3:46 AM, andrey.morskoy <[EMAIL PROTECTED]>
wrote:

> About cisco: Peter Lin, what was the model in your case?
> Was it able to replicate sessions (sticky sessions, maybe)?
>
> Peter Lin wrote:
> > from past experience, it's much better to use hardware load balancing.
> > At a previous job, we had anywhere from 12-24 servers load balanced
> > behind a cisco local director.
> >
> > Any load balancing router today can do the job; it doesn't have to be
> > cisco. What I did in the past was to take production logs and run them
> > in jmeter against a cluster of tomcat servers. All the servers were
> > behind the load balancer.
> >
> > After the test was run, we collected the logs to make sure the load was
> > distributed evenly, and we generated reports. From those reports, we
> > compared the results of the new system against the system we were
> > replacing.
> >
> > That established the baseline performance for untuned tomcat. I then
> > spent a week going through 10 different jvm settings and running a full
> > set of benchmarks on each. At the end, I looked at what concurrent loads
> > the entire system needed to support and chose the settings that matched
> > those needs. In my test suite, I varied the number of concurrent
> > connections, the think time between requests, the ramp-up time, and the
> > mix of requests.
> >
> > the key to tuning the system correctly is understanding exactly what
> > kind of traffic you expect. of course that isn't always possible, so
> > you might have to take a guess.
> >
> > peter
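Peter's check above (replay production logs through the balancer, then count requests per backend to confirm the load was spread evenly) can be sketched roughly as follows. This is a minimal illustration only: the server names, sample log lines, and the 10% tolerance are made up, not his actual tooling.

```python
from collections import Counter

# Hypothetical per-backend access logs: one combined-log-format line per
# request each Tomcat instance served (contents and counts are invented).
SAMPLE_LOGS = {
    "tomcat1": ['10.0.0.1 - - [30/Jan/2008:10:00:00] "GET /app/a HTTP/1.1" 200 512'] * 480,
    "tomcat2": ['10.0.0.2 - - [30/Jan/2008:10:00:01] "GET /app/b HTTP/1.1" 200 1024'] * 520,
}

def request_counts(logs_by_server):
    """Count how many requests each backend served, from its access log."""
    return Counter({server: len(lines) for server, lines in logs_by_server.items()})

def is_balanced(counts, tolerance=0.10):
    """True if every server's share is within `tolerance` of an even split."""
    total = sum(counts.values())
    even_share = total / len(counts)
    return all(abs(c - even_share) / even_share <= tolerance
               for c in counts.values())

counts = request_counts(SAMPLE_LOGS)
print(counts)               # Counter({'tomcat2': 520, 'tomcat1': 480})
print(is_balanced(counts))  # True: both within 10% of the even share of 500
```

In a real run the log lines would come from each backend's access log after the JMeter replay, and a skewed distribution would point at a misconfigured balancer before any JVM tuning starts.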
> >
> > On 1/30/08, David Brown <[EMAIL PROTECTED]> wrote:
> >
> >> Hello Andrew, reading your email, Alan's email, and Mladen's email
> >> piqued my interest, because I am currently working on a gig to improve
> >> the performance and monitoring of two Tomcat instances supporting 3 web
> >> applications and one web service. I am inclined to agree with Alan.
> >> And did you read the ML replies to the Xmx and Xms Subject line emails?
> >> I must agree with the ML contributors who answered that email: baseline
> >> test, apply Eden parameters to the JVM, then monitor the results of the
> >> load testing. Once you have all your monitoring results, including
> >> logs, the next step is to create a new metric by comparing the newly
> >> acquired data to your initial baseline test. There are very good points
> >> made on both sides, but I have to believe that tuning Tomcat a priori
> >> is like trying to predict the weather. In my gig there are too many
> >> unknowns to resolve:
> >> (1) This is a legacy system, which means differing versions of JDKs,
> >> Tomcat instances, and web apps or web services built with framework
> >> versions no longer supported, e.g. Axis 1.3.
> >> (2) Commercial vendors have taken FOSS and re-packaged it as
> >> proprietary software, and as a result there is no direct support from
> >> the vendors for SLA source code or updated binaries that were written
> >> in this century.
> >> (3) I know my client wants everything upgraded and migrated if
> >> possible, when in reality I will have to improve the monitoring and
> >> performance issues with the current servlet containers, web services,
> >> and network topology as it stands now.
> >>
> >> I know the rest of the world is moving away from clustering (horizontal
> >> scaling) and more toward virtualization (vertical scaling). In my case
> >> I will have to settle for horizontal scaling and Tomcat software load
> >> balancing. I welcome anyone wanting to expound on Tomcat load
> >> balancing: say, a comparison between Tomcat JK connector load balancing
> >> and using an appliance like BIG-IP.
> >>
> >> Like you, Andrew, I would cheer a <calculated> solution if it existed:
> >> just dump in the number of nodes, instances, network(s), applications,
> >> web services, bandwidth, and client users, and voila! out comes the
> >> network diagram with annotations. Discussion, suggestions, advice,
> >> solutions, rants, and raves welcomed.
> >>
> >> Andrew Hole wrote:
> >>
> >>> Hello
> >>>
> >>> I read an interesting document from Mladen Turk (with whom I want to
> >>> speak directly, but I don't have a direct contact) saying that there
> >>> is a formula to calculate the number of concurrent requests:
> >>> http://people.apache.org/~mturk/docs/article/ftwai.html
> >>>
> >>> Calculating Load
> >>>
> >>> When determining the number of Tomcat servers that you will need to
> >>> satisfy the client load, the first and major task is determining the
> >>> Average Application Response Time (hereafter AART). As said before, to
> >>> satisfy the user experience, the application has to respond within
> >>> half a second. The content received by the client browser usually
> >>> triggers a couple of physical requests to the Web server (e.g.
> >>> images). The web page usually consists of html and image data, so the
> >>> client issues a series of requests, and the time in which all this
> >>> gets processed and delivered is called AART. To get the most out of
> >>> Tomcat, you should limit the number of concurrent requests to 200 per
> >>> CPU. So we can come up with a simple formula to calculate the maximum
> >>> number of concurrent connections a physical box can handle:
> >>>
> >>>                               500
> >>>     Concurrent requests = ( ---------- max 200 ) * Number of CPUs
> >>>                             AART (ms)
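The formula quoted above translates almost directly into code. This is a minimal sketch, not part of the article: the 500 ms response budget and the 200-requests-per-CPU cap come from the quoted text, while the function and parameter names are mine.

```python
def concurrent_requests(aart_ms, num_cpus, budget_ms=500, cap_per_cpu=200):
    """Max concurrent requests per the quoted formula:
    (budget / AART, capped at 200 per CPU) * number of CPUs."""
    per_cpu = min(budget_ms / aart_ms, cap_per_cpu)
    return per_cpu * num_cpus

# A 50 ms AART on a 2-CPU box: 500/50 = 10 per CPU (under the cap), * 2 CPUs
print(concurrent_requests(aart_ms=50, num_cpus=2))   # 20.0

# A very fast 1 ms AART hits the 200-per-CPU cap: 200 * 4 CPUs
print(concurrent_requests(aart_ms=1, num_cpus=4))    # 800
```

Note the cap means CPU count, not response time, becomes the limit once AART drops below 2.5 ms in this model.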
> >>>
> >>> The other thing that you must take care of is the network throughput
> >>> between the Web server and the Tomcat instances. This introduces a new
> >>> variable called Average Application Response Size (hereafter AARS),
> >>> which is the number of bytes of all the content on a web page
> >>> presented to the user. On a standard 100Mbps network card, at 8 bits
> >>> per byte, the maximum theoretical throughput is 12.5 MBytes per
> >>> second.
> >>>
> >>>                                12500
> >>>     Concurrent requests = ---------------
> >>>                             AARS (KBytes)
> >>>
> >>> For a 20KB AARS this gives a theoretical maximum of 625 concurrent
> >>> requests. You can add more cards or use faster 1Gbps hardware if you
> >>> need to handle more load.
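The network-throughput bound quoted above can be sketched the same way. Again a minimal illustration: the 12,500 KBytes/s default corresponds to the article's 100Mbps card, and the function name is mine.

```python
def network_limited_requests(aars_kbytes, throughput_kbytes_per_s=12500):
    """Concurrent requests bounded by NIC throughput, per the quoted
    formula: throughput / AARS. Default is a 100Mbps card (12.5 MB/s)."""
    return throughput_kbytes_per_s / aars_kbytes

print(network_limited_requests(20))           # 625.0, the article's example
print(network_limited_requests(20, 125000))   # 6250.0 on a 1Gbps card
```

In practice a box is limited by whichever bound is lower, the CPU/AART figure or this network figure, so a sizing estimate would take the minimum of the two.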
> >>>
> >>> The formulas above will give you a rudimentary estimate of the number
> >>> of Tomcat boxes and CPUs that you will need to handle the desired
> >>> number of concurrent client requests. If you have to plan the
> >>> configuration without having the actual hardware, the closest you can
> >>> get is to measure the AART on a test platform and then compare the
> >>> hardware vendors' Specmarks.
> >>>
> >>> I would like to launch a discussion on the validity of this formula
> >>> and, in case it is inappropriate, to try to get a more accurate
> >>> formula.
> >>>
> >>> Thanks a lot
> >>>
> >> ---------------------------------------------------------------------
> >> To start a new topic, e-mail: users@tomcat.apache.org
> >> To unsubscribe, e-mail: [EMAIL PROTECTED]
> >> For additional commands, e-mail: [EMAIL PROTECTED]
> >>
> >>
> >>
> >
> >
>
>
> --
> Best regards,
> Andrey Morskoy
> System Manager
>
> Negeso Kiev
> 19 M. Raskovoy St., 7th floor, office 719
> 02002 Kiev, Ukraine
> Tel: +380-44-516 83 84
> Fax: +380-44-516 83 84
> MSN: [EMAIL PROTECTED]
> www.negeso.com.ua
>
>
> Mobile:+380-95-490-29-65
> E-mail: [EMAIL PROTECTED]
>
>
>
