Re: Serious mod_jk performance issue

2006-12-18 Thread Jess Holle

Thanks for the efforts, Rainer.

I'll have to look deeper -- perhaps there is some aspect of our
configuration that is causing us these issues, as they seem to be quite
repeatable.


--
Jess Holle



Re: Serious mod_jk performance issue

2006-12-18 Thread Jess Holle

Do your results differ when Windows XP is used as the client?

I looked back at all our notes, and though we tested with both Solaris and
Windows servers, the client was always XP.



Re: Serious mod_jk performance issue

2006-12-17 Thread Rainer Jung
Hi Jess,

I did some simple tests and was not able to reproduce your performance
observations. Nevertheless, I did observe a couple of strange things, but
I doubt they are relevant to most use cases.

First my setup:

Apache 2.0.59 (worker MPM) with mod_jk 1.2.20 and Tomcat 5.5.17 with the
normal (non-APR) connectors, using Java 1.5.0_06 on an early release of
Solaris 10. The hardware was a Sun T2000 (Niagara), which means relatively
slow CPUs but good scalability.

I didn't have the system exclusively, but it was rather idle during the
test.

The client was ab from Apache 2.0.59. All ab measurements were verified
against %D in the Apache access log. There were no restarts between
measurements, so the file was most likely served from the file system cache.

The client ran either on the same machine or on a SLES 9 SP2, 64-bit AMD
Opteron box connected via 100MBit Ethernet.

Apache and mod_jk were compiled with -mcpu=v9 -O2 -g -Wall. Apache, mod_jk
and Tomcat used their default configuration (apart from ports and log
format); the JVM for Tomcat was started with a couple of non-default values:

-server \
-Xms64m -Xmx64m \
-XX:NewSize=8m -XX:MaxNewSize=8m \
-XX:SurvivorRatio=6 -XX:MaxTenuringThreshold=31 \
-XX:+UseConcMarkSweepGC -XX:-UseAdaptiveSizePolicy

The file used to test throughput was 316702480 bytes in size (some .tar.gz
I found lying around).

1) local client, i.e. client running on the same machine as Apache and
Tomcat

A single request took 15.71 sec via mod_jk (= 153.8 MBit/sec) and 15.61 sec
via direct Tomcat HTTP (= 154.8 MBit/sec). The same test with 10 consecutive,
non-parallel requests took 157.1 sec resp. 156.8 sec, so this result seems
to be stable.
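
(Side note: these MBit/sec figures are consistent with "MBit" meaning 2^20
bits; with decimal megabits the single-request numbers would come out around
161-162 MBit/sec. A minimal, purely illustrative check in Java:)

// Illustrative only -- recomputes the reported single-request throughput
// from the file size and the measured times, using binary megabits.
public class ThroughputCheck {
    public static void main(String[] args) {
        long bytes = 316702480L;           // size of the test file
        double[] seconds = {15.71, 15.61}; // mod_jk, direct Tomcat HTTP
        for (double s : seconds) {
            double mbit = (bytes * 8.0) / (1 << 20) / s;
            System.out.printf("%.2f sec -> %.1f MBit/sec%n", s, mbit);
        }
    }
}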

Now parallel requests: I used concurrency (-c with ab) of 2, 4, 8, 16 and 32,
with twice that number of total requests (4, 8, ...):

Throughput results in MBit/sec, depending on concurrency:

conc.   mod_jk    http
  1      153.8   154.8
  2      306.3   303.6
  4      605.5   627.7
  8     1090.0  1185.5
 16     1137.7  1161.8
 32     1210.7  1114.3

mod_jk and HTTP direct behave almost the same for the huge file. We
saturate the system at about 1100 MBit/second (going via loopback). CPU
was busy at most 60% during these tests.

This also shows that mod_jk and HTTP throughput are enough to saturate a
lot of bandwidth, as long as your IP stack doesn't add too much overhead.

2) remote client, i.e. ab running on the SLES 9 SP2 x86_64 machine,
connected via 100MBit to Apache and Tomcat.

Throughput results in MBit/sec, depending on concurrency:

conc.   mod_jk   http
  1       88.6   89.1
  2       88.9   89.1

So even with only one request we already saturate the network, and it does
not make sense to measure more than two parallel requests.

3) Dependency on file size:

Measuring with the local client without concurrency for 50, 100, 200, 300,
400, 500, ..., 1000 and 2000 MB:

  MB   mod_jk   http
  50   167.5   234.9 (5 consecutive requests)
 100   168.8   170.1 (5 consecutive requests)
 200   168.6   169.8 (2 consecutive requests)
 300   169.1   169.7 (2 consecutive requests)
 400   168.9   169.7 (2 consecutive requests)
 500   168.8   169.4 (2 consecutive requests)
 600   167.9   168.0 (2 consecutive requests)
 700   167.8   168.9 (2 consecutive requests)
 800   168.1   168.6 (2 consecutive requests)
 900   168.0   168.0 (2 consecutive requests)
1000   156.2   214.9 (2 consecutive requests)
2000   156.9   214.7 (1 request)

Interestingly, the results for 1000MB and 2000MB are reproducible. But as
soon as I switch the client from ab to wget or curl (writing the output to
/dev/null), mod_jk gives the same numbers as before, while direct HTTP now
gives the same result as mod_jk!

The numbers are slightly better than in the first test, I guess because
this test used a file in the webapps file system, whereas the first test
used a file in another file system symlinked from within webapps (but still
a local fs). Another possibility would be that a mkfile-generated file has
a better block layout in the file system than a usual file that grew over
time.

All in all I think that throughput for huge files is very good in both
cases. I would expect that most often it would be much more interesting to
inspect scalability and system load (CPU/memory) under massive concurrency.
When serving large files, downloads will run a long time, because most
often the client side of the connection is not a fat pipe. As a result,
users will pile up in parallel, so one might need to serve a few thousand
users at once.

Regards,

Rainer


Re: Serious mod_jk performance issue

2006-12-17 Thread Henri Gomez

Rainer Jung wrote:

All in all I think that throughput for huge files is very good in both
cases. I would expect that most often it would be much more interesting to
inspect scalability and system load (CPU/memory) under massive concurrency.
When serving large files, downloads will run a long time, because most
often the client side of the connection is not a fat pipe. As a result,
users will pile up in parallel, so one might need to serve a few thousand
users at once.


Yes, if we were able to determine the CPU/memory load of a Tomcat instance
in a Tomcat group, it would help construct a true load-balancing system.

Any ideas how we could grab such information in a portable way from Java?
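
One possibility, as a minimal sketch only: the standard java.lang.management
API (available since Java 5) exposes heap usage and processor count portably.
A real system CPU load figure is not part of the Java 5 standard API, so
anything beyond the numbers below is JVM- or OS-specific. The class name
LoadSnapshot is purely illustrative.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

// Sketch: collect the portable figures a balancer could use as a weight.
public class LoadSnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        System.out.println("heap used/max: "
                + heap.getUsed() + " / " + heap.getMax() + " bytes");
        System.out.println("available processors: " + os.getAvailableProcessors());
        // OperatingSystemMXBean.getSystemLoadAverage() exists only from
        // Java 6 on, and returns -1 where the platform cannot provide it.
    }
}

How a balancer would then consume such numbers (for example by polling them
over JMX) is a separate question.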

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Serious mod_jk performance issue

2006-12-14 Thread Mladen Turk

Jess Holle wrote:
 We're seeing a *serious* performance issue with mod_jk and large (e.g.
500MB+) file transfers.  [This is with Apache 2.0.55, Tomcat 5.0.30, and various 
recent mod_jk including 1.2.20.]

SunOS dev12.qa.atl.jboss.com 5.9 Generic_118558-25 sun4u sparc 
SUNW,Sun-Fire-V210

Tomcat:8080
Total transferred:  1782932700 bytes
HTML transferred:   1782908800 bytes
Requests per second:    5.60 [#/sec] (mean)

Apache-mod_jk-Tomcat:8009
Total transferred:  1782935400 bytes
HTML transferred:   1782908800 bytes
Requests per second:    3.68 [#/sec] (mean)

So I see no performance degradation beyond what would be needed to transfer
the data twice (5.60 vs. 3.68 requests/sec is a slowdown of roughly 1.5x,
less than 2x).

Anyhow, why would you want to serve the 500+ MB files through mod_jk?
The entire point is that you have the option to separate the static and
dynamic content.

Regards,
Mladen




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Serious mod_jk performance issue

2006-12-14 Thread Andy Wang
FWIW, this was also seen in less scientific testing from a Linux system to
an XP client on the same 100Mbit network.


Andy





Re: Serious mod_jk performance issue

2006-12-13 Thread Jess Holle

Apache and Tomcat are both on the same Solaris 10 box, and the network
between the client (XP) and Apache is 100Mbit.

--
Jess Holle

Rainer Jung wrote:

If no one finds a reason for it, I can look into it over the weekend. I
would try to reproduce and investigate on Solaris. Concerning your data for
Solaris: were Apache and Tomcat both on Solaris? The same machine or
different ones? Was the network between the client (browser?) and Apache
100MBit or 1GBit?

Regards,

Rainer



Serious mod_jk performance issue

2006-12-12 Thread Jess Holle
We're seeing a *serious* performance issue with mod_jk and large (e.g.
500MB+) file transfers.  [This is with Apache 2.0.55, Tomcat 5.0.30, and 
various recent mod_jk including 1.2.20.]


The performance of downloading the file via Apache is good, as is the 
performance when downloading directly from Tomcat.  The performance when 
downloading from Tomcat through Apache via mod_jk is, however, quite 
abysmal.  I'd obviously expect *some* degradation due to the extra 
interprocess hop, but given that this is just a single-user, 
single-request test, I'd expect that the network would still be the 
limiting factor -- or at least that the degradation would be on the 
order of 25% or less.  What we're seeing, however, is far worse:


   On Windows:

   * Apache 2.0.55, Tomcat 5.0.30, and mod_jk 1.2.20 - Started at
     10 MB/sec and ended at 3 MB/sec with mod_deflate disabled (1.5
     MB/sec with mod_deflate enabled)
   * Apache 2.0.55, Tomcat 5.0.30, and mod_jk 1.2.19 - Disabling
     JkFlushPackets only slightly improved performance.
   * Apache 2.2.3 with Tomcat 5.5.20 w/ the native connector -
     Didn't work, period.  I didn't have a chance to look into it,
     but the download failed after getting several packets (!)
   * Apache 2.2.3 with Tomcat 5.5.20 w/o the native connector - Was
     only slightly slower than going straight through Apache,
     at about 7-8 MB/sec

   On Solaris:

   * Apache 2.0.55, Tomcat 5.0.30, recent mod_jk - Fairly constant
     4 MB/sec when going through mod_jk, 10 MB/sec when just
     downloading via Apache

   [This issue originally was thought to be Windows-specific, which is
   why we have many more results for Windows.]

Obviously, if our end goal were simple static file transfers, we'd just 
share/mirror them to Apache to solve this (we need the load-balancing 
flexibility, etc., of mod_jk, so directly using Tomcat is not really an 
option -- nor is doing non-AJP proxying).  The static file case is the 
simplified reproduction of our real issue, however, which is large file 
downloads from our (Java-based) content store.
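
For concreteness, a minimal, hypothetical sketch of the kind of servlet
download path in question; the class name, file location and buffer size
are illustrative only, not the actual content store code:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: streams a large file through the servlet response,
// i.e. the path that then runs Tomcat -> AJP -> mod_jk -> Apache -> client.
public class DownloadServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Illustrative location; a real mapping would validate getPathInfo().
        String path = "/data/store" + req.getPathInfo();
        resp.setContentType("application/octet-stream");
        InputStream in = new FileInputStream(path);
        try {
            OutputStream out = resp.getOutputStream();
            byte[] buf = new byte[8192]; // 8 KB copy buffer
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
        }
    }
}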


We had much better results with Apache 2.2.3 and Tomcat 5.5.20 with 
tcnative, but we really don't want to force a move to 2.2.x and Tomcat 
5.5.x in this case and we've had issues with tcnative (which we *hope* 
may be resolved with 1.1.8).  Overall we'd much prefer to get mod_jk 
working reasonably than to force a disruptive move to 2.2.x right now.


Is this a known issue?  Any pointers as to where/how to look for the 
performance bottleneck?  Some VTune examination showed that almost all 
of Apache's CPU time during this time was in libapr.dll, but that's 
obviously not terribly specific.


--
Jess Holle



Re: Serious mod_jk performance issue

2006-12-12 Thread Rainer Jung
If no one finds a reason for it, I can look into it over the weekend. I
would try to reproduce and investigate on Solaris. Concerning your data for
Solaris: were Apache and Tomcat both on Solaris? The same machine or
different ones? Was the network between the client (browser?) and Apache
100MBit or 1GBit?

Regards,

Rainer
