Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-21 Thread William A. Rowe, Jr.
At 04:43 AM 2/19/2005, Remy Maucherat wrote:
William A. Rowe, Jr. wrote:
It definitely seems like j-t-c should be a first candidate
for svn conversion.  The other jakarta-tomcat repositories
are considerably more complex.
But it would be good to have line endings straightened out
beforehand.

I find svn quite confusing to work with. In particular, browsing a
revision tree seems unusable (because revisions are global, TortoiseSVN
cannot build a graph in less than 4 hours :( ), and it's an important
tool for me.

Unfortunately, this means I'll have to veto a move to svn for the time being, 
until I figure out how to use it.

Sadly, I agree with you - my biggest hiccup is the mess that
moving from cvs to svn creates if you want to see annotated source
files - knowing that a line changed in 1.99.3.1 is hugely important
to me.

Bill


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-19 Thread Remy Maucherat
William A. Rowe, Jr. wrote:
It definitely seems like j-t-c should be a first candidate
for svn conversion.  The other jakarta-tomcat repositories
are considerably more complex.
But it would be good to have line endings straightened out
beforehand.
I find svn quite confusing to work with. In particular, browsing a
revision tree seems unusable (because revisions are global, TortoiseSVN
cannot build a graph in less than 4 hours :( ), and it's an important
tool for me.

Unfortunately, this means I'll have to veto a move to svn for the time 
being, until I figure out how to use it.

Rémy


AW: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Hans Schmid
Hi,

I just want to describe our use case because we make heavy use of the
local_worker and local_worker_only flags right now.

We use those flags for 'maintenance' mode and failover very successfully.

But please see our setup and usecase below.

 -Original Message-
 From: Mladen Turk [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 17 February 2005 20:34
 To: Tomcat Developers List
 Subject: Re: mod_jk release policy - was: JK 1.2.9-dev test results


 Rainer Jung wrote:
  Hi,
 
  first: thanks a lot to Mladen for adding all the beautiful features [and
  removing CRLF :) ]. Big leap forward!
 

 Still, I cope with those on a daily basis.

  I think that until Monday we were still in the process of adding
  features and fixing bugs. 1.2.8 changed a lot internally, but most was
  functionally compatible with 1.2.6. Release 1.2.9 still supported all
  features of 1.2.6.
 

 I already explained something similar when discussing with the guys
 interested in the NetWare platform.

 Something needed to be done, and the obvious solution was not to
 reinvent the wheel, but rather to use all the code and knowledge about
 the subject already present.

 To be able to use some new features like dynamic config, some things
 had to be changed internally, but nothing was touched at the protocol
 level, only how that protocol is managed.

 So I don't see the point of forking 1.3. Both config and core features
 are the same. Of course some advanced configuration properties were
 changed, and lots of new ones added, but from the outside it's still
 the old mod_jk.

 Furthermore, I see adding shared memory and dynamic config as the final
 design change for mod_jk.

  Now we are in the discussion of dropping features (and we even did drop
  some, like locality support) and I have the impression there should be a
  separate discussion thread about the future of mod_jk:
 


 The other thing is 'deprecating' certain things.
 By that I don't mean deleting them or something like that, but rather
 marking them as 'no longer developed'.
 The reason for that is pure fact. For example we have a Lotus Domino
 connector that works only with Domino 5. I think later versions don't
 even have a compatible API. I'm not aware of anyone in the
 world using jk to connect Domino with Tomcat (at least I never saw a
 Bugzilla entry on that). So it is deprecated by that fact.
 The same applies to JNI. Who uses that?

 Regarding locality, you mean the local_worker and local_worker_only flags?
 IMHO that was one of the fuzziest things about jk that no one ever
 understood, not to mention that it never actually worked.
 Take for example the current documentation about local_worker:

 If local_worker is set to True it is marked as local worker. If in
 minimum one worker is marked as local worker, lb_worker is in local
 worker mode. All local workers are moved to the beginning of the
 internal worker list in lb_worker during validation.

 Now what does that mean to the actual user? I read that a zillion times
 and never understood it.
 And furthermore:

This one is crucial for our Maintenance switchover see later.


 We need a graceful shut down of a node for maintenance. The balancer in
 front asks a special port on each node periodically. If we want to
 remove a node from the cluster, we switch off this port.

 WTF!? How? Which port? How do you switch off this port?

 What counts the most is that you were unable to mark the node for
 shutdown and not accept new connections without a session id.
 I suppose that was the purpose of those two directives, but I was
 never able to set up jk in that direction.



First, we use TC 3.3.2 (moving to 5.5.7) behind Apache 1.3 on Solaris.
The mod_jk version is a patched version based on mod_jk 1.2.5.

We only use one tomcat at a time to get traffic, with a standby tomcat
for maintenance. This scenario also covers failover. We do not use the
loadbalancer to actually balance by factors.


We use sticky_sessions=true

This is our mod_jk setup if Tomcat-01 is serving the requests:

worker.list=loadbalancer
worker.loadbalancer.balanced_workers=ajp13-01, ajp13-02
worker.loadbalancer.local_worker_only=0

worker.ajp13-01.port=8009
worker.ajp13-01.host=tomcat-01
worker.ajp13-01.type=ajp13
worker.ajp13-01.lbfactor=1
worker.ajp13-01.local_worker=1

worker.ajp13-02.port=8019
worker.ajp13-02.host=tomcat-02
worker.ajp13-02.type=ajp13
worker.ajp13-02.lbfactor=1
worker.ajp13-02.local_worker=0


Now, all requests go to worker.ajp13-01, since local_worker=1 only for
tomcat-01, so it is first in the queue.

Failover (in case tomcat-01 crashes) works, since local_worker_only=0,
meaning it also distributes the requests to the other machine if
ajp13-01 is in error state.
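The ordering and failover behaviour just described can be modelled roughly as follows (an illustrative sketch only, not the actual mod_jk C code; the worker records and the in_error flag are hypothetical names):

```python
# Illustrative model of local_worker ordering and failover in the
# lb worker (not the real mod_jk implementation).

def pick_worker(workers, local_worker_only=False):
    """workers: list of dicts with 'name', 'local_worker', 'in_error'.
    Local workers are moved to the front of the list; non-local workers
    serve only as failover, and only if local_worker_only is off."""
    ordered = sorted(workers, key=lambda w: not w["local_worker"])
    for w in ordered:
        if w["in_error"]:
            continue
        if local_worker_only and not w["local_worker"]:
            break  # local_worker_only=1: never spill to non-local workers
        return w["name"]
    return None  # no usable worker: the request fails

workers = [
    {"name": "ajp13-02", "local_worker": False, "in_error": False},
    {"name": "ajp13-01", "local_worker": True,  "in_error": False},
]
print(pick_worker(workers))   # ajp13-01: all traffic goes to the local worker
workers[1]["in_error"] = True
print(pick_worker(workers))   # ajp13-02: failover, since local_worker_only=0
```

With local_worker_only=1 the same failure would yield no worker at all, which is the distinction debated later in this thread.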


Now let's do maintenance (tomcat-01 should be shut down, tomcat-02 shall
take the load):

What we do is just link in another worker.properties file on the
webserver and gracefully restart Apache for it to take effect.

The second worker.properties looks like this (almost the same):

worker.list=loadbalancer

Re: AW: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Mladen Turk
Hans Schmid wrote:
Hi,
I just want to describe our use case because we make heavy use of the
local_worker and local_worker_only flags right now.

We use those flags for 'maintenance' mode and failover very successfully.
Cool ;).

But please see our setup and use case below.
We only use one tomcat at a time to get traffic, with a standby tomcat
for maintenance.
This scenario also covers failover. We do not use the loadbalancer to
actually balance by factors.
OK. So basically you have two tomcat boxes where the second is used
only when you wish to put the first on maintenance?
Using new config:
worker.list=loadbalancer
worker.loadbalancer.balanced_workers=ajp13-01,ajp13-02
worker.loadbalancer.sticky_session=True
worker.ajp13-01.disabled=0
...
worker.ajp13-02.disabled=1
The disabled flag initially marks the worker as disabled.
It will not be used until you use the jkstatus console and set:
worker.ajp13-02.disabled=0
and
worker.ajp13-01.disabled=1
And that's it.
Existing sessions will be forwarded to ajp13-01,
while new ones will go to ajp13-02.
No need for tricks with symlinks, graceful restarts, etc.
What's more, it works on all platforms and all web servers.
Also take a look at:
http://jakarta.apache.org/tomcat/connectors-doc/config/workers.html
(Big red warning about worker names)
Regards,
Mladen.


Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Rainer Jung
So I don't see the point of forking 1.3. Both config and core features
are the same. Of course some advanced configuration properties were
changed, and lots of new ones added, but from the outside it's still the
old mod_jk.
OK, understood from below. I agree concerning JNI deprecation. But read 
comments about local_worker.

Regarding locality, you mean the local_worker and local_worker_only flags?
IMHO that was one of the fuzziest things about jk that no one ever
understood, not to mention that it never actually worked.
You are totally right about the bad documentation (at least concerning 
the status before you gave it a refresh). But I have the feeling that 
more people were using it like me, by studying the code (at least it's 
open source) and learning the functionality from there. So local_worker 
is a feature that I assume is useful and used.

So locality is not deprecated. Quite the opposite: now it works, just
local_worker_only has changed to sticky_session_force.
IMHO this is a clearer and more descriptive directive than the previous one.
My understanding of the use case: the term local_worker historically 
most likely comes from the idea that if you use multiple systems, each 
with apache and tomcat on them, then a call from an apache to the tomcat 
on the same system would be faster than going to a remote tomcat. 
local_worker should have indicated a preference for this worker (until 
1.2.6 only one worker would work as a local_worker), unless a request 
carries a session id and stickiness is on, or the local_worker is in 
error state. A more general, better term would have been 
preferred_worker, or just preferred, and that's the way it is used 
today. At the moment there seems to be no more possibility to map a 
preference (I don't mean load balancing weights).

Still I know cases where it makes sense to have a distinction for 
requests without session id/stickiness between:

- preferred (one or more)
- failover for the preferred (your redirect)
- maybe allowed (although the first two cases should be enough not to 
need more)
- the rest

The rest is there because some workers may only be used in case 
stickiness comes in.

With stickiness and a session id one would have:
- sticky worker (the correct one)
- failover for the preferred (your redirect)
- any other in the same replication cluster (domain)
- the rest (lose the session but can start the app again from the beginning)
Your redirect concept and my older domain patch share some use cases.
On the other hand, local_worker_only only makes a difference if you 
configure local and non-local workers in a load balancer and all local 
workers go into error state. With local_worker_only, all further 
requests will fail. Without local_worker_only the non-local workers will 
be used. I always had the impression that only very few - if any - 
people will need this kind of feature.

You indicated in a separate answer that one could use the disabled 
attribute instead. But I assume there is no failover to a disabled 
worker, whereas there should be to a non-preferred worker.



Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Mladen Turk
Rainer Jung wrote:
With stickiness and a session id one would have:
- sticky worker (the correct one)
- failover for the preferred (your redirect)
- any other in the same replication cluster (domain)
- the rest (lose the session but can start the app again from the beginning)
Your redirect concept and my older domain patch share some use cases.
Yes, I took your patch as a conceptual starting point.

On the other hand, local_worker_only only makes a difference if you 
configure local and non-local workers in a load balancer and all local 
workers go into error state. With local_worker_only, all further 
requests will fail. Without local_worker_only the non-local workers will 
be used. I always had the impression that only very few - if any - 
people will need this kind of feature.

Well, you have sticky_session_force. If the worker that has that session
is in error state, then first the redirect (preferred) will be checked.
If this one is in error state too, the domain will be checked, and
if it is not set or all are in error state, 500 will be returned.
(Meaning: we don't have session replication and wish to break in
case of failure.)
If sticky_session_force is not set, then another worker will be chosen,
losing the session.
(Meaning: we don't have session replication but wish to continue anyhow.)
So you have two basic sticky_session concepts:
with or without session replication,
and that is what sticky_session_force determines, as well as how
session loss is treated.
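That failover chain can be restated as a small model (a sketch under assumed data structures, not mod_jk's actual C code; the worker names and record fields here are invented for illustration):

```python
# Model of the sticky-session failover chain:
# session worker -> its redirect -> its domain -> 500, or any worker
# (losing the session) when sticky_session_force is off.

def route(session_worker, workers, sticky_session_force):
    """workers: name -> {'in_error', 'redirect', 'domain'}."""
    w = workers[session_worker]
    if not w["in_error"]:
        return session_worker
    # 1. the preferred failover node: the redirect target
    r = w["redirect"]
    if r and not workers[r]["in_error"]:
        return r
    # 2. any worker in the same replication domain
    for name, cand in workers.items():
        if (name != session_worker and cand["domain"]
                and cand["domain"] == w["domain"] and not cand["in_error"]):
            return name
    # 3. break (500) or lose the session, per sticky_session_force
    if sticky_session_force:
        return "HTTP 500"
    for name, cand in workers.items():
        if not cand["in_error"]:
            return name  # session lost; the app starts from the beginning
    return "HTTP 500"

workers = {
    "tc01": {"in_error": True,  "redirect": "tc02", "domain": "d1"},
    "tc02": {"in_error": True,  "redirect": None,   "domain": "d1"},
    "tc03": {"in_error": False, "redirect": None,   "domain": None},
}
print(route("tc01", workers, sticky_session_force=True))   # HTTP 500
print(route("tc01", workers, sticky_session_force=False))  # tc03, session lost
```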
The same applies when you have multiple domains. Then each domain
or group is treated as a single worker, and all the rules mentioned above
are in place.
What is missing perhaps is a flag to indicate whether the worker name
or the domain name will be used as the service jvm route.
That way you could group workers without session replication in place
(the putting-workers-on-top-of-the-list concept). But then you don't need
the domains in the first place.
If you think some nodes (like the local one) are faster, then simply use
a higher lb_factor for those nodes.
You indicated in a separate answer that one could use the disabled 
attribute instead. But I assume there is no failover to a disabled 
worker, whereas there should be to a non-preferred worker.

Disabled can be set initially for hot standby. If set to on, only
requests with a matching session id will be processed.
Regards,
Mladen.


AW: AW: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Hans Schmid
Thanks, Mladen,

as long as this disabled feature does not prevent the failover case, I am
fine ;)

See inline ...

 -Original Message-
 From: Mladen Turk [mailto:[EMAIL PROTECTED]
 Sent: Friday, 18 February 2005 10:36
 To: Tomcat Developers List
 Subject: Re: AW: mod_jk release policy - was: JK 1.2.9-dev test results


 Hans Schmid wrote:
  Hi,
 
  I just want to describe our use case because we make heavy use of the
  local_worker and local_worker_only flags right now.
 


  We use those flags for 'maintenance' mode and failover very successfully.
 

 Cool ;).


  But please see our setup and use case below.
 
  We only use one tomcat at a time to get traffic, with a standby tomcat
  for maintenance. This scenario also covers failover. We do not use the
  loadbalancer to actually balance by factors.
 

 OK. So basically you have two tomcat boxes where the second is used
 only when you wish to put the first on maintenance?

Both Tomcats are always running, but the second one is used only for:
1.) Failover
2.) Maintenance switch - after which the roles of both Tomcats are
switched (TC-01 becomes standby)


In fact our scenario is a little bit more complex (I just did not want to
explain it in the first place). This brings in loadbalancing as well:

We actually have between 3 and 6 Tomcats running at the same time,
depending on our load, which has high seasonal peaks. So November is
usually 20 times as much as February.
We are talking about 500 concurrent users in our webapp, plus many more
on the static apache pages.

Example: 4 Tomcats are running in parallel. Only TC-01 has
local_worker=1; the other ones have local_worker=0. Every 30 minutes we
switch our worker.properties to activate a different tomcat by setting
its local_worker=1 and the old one to 0.
The new tomcat has just been restarted beforehand.

TC-01 - 30min. - TC-02 - 30min. - TC-03 - 30min. - TC-04 - 30min. -
TC-01 again

That way, every Tomcat gets new sessions for about 30 minutes. The
long-lasting old sticky sessions of our users (avg. session time 30min.)
stay active on the Tomcat which was active before, for the rest of their
lives.

This effectively generates a loadbalancing distribution of about

TC-01 = 55% (the currently active Tomcat)
TC-04 = 35% (the one which was active before but still handles sticky sessions)
TC-03 = 10% (the one before TC-04, handling really long-lasting old sessions)
TC-02 = 0%  (this one is the next candidate to restart and become active)


We can easily scale this approach by bringing in even more tomcats and
shorter roll times (or fewer and longer times).

Works really well with our highly variable but well-known traffic ;)
(and handles memory leaks as well ...)
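A rotation like the one described could be scripted along these lines (a sketch only; the worker names, ports, and file handling are illustrative, not the actual production setup):

```python
# Sketch: render a worker.properties where exactly one Tomcat is the
# active node (local_worker=1) and the rest are standby/sticky-only.

TOMCATS = ["tc-01", "tc-02", "tc-03", "tc-04"]  # illustrative names

def render_workers(active):
    lines = [
        "worker.list=loadbalancer",
        "worker.loadbalancer.type=lb",
        "worker.loadbalancer.balanced_workers=" + ",".join(TOMCATS),
        "worker.loadbalancer.local_worker_only=0",
    ]
    for i, name in enumerate(TOMCATS):
        lines += [
            f"worker.{name}.port={8009 + 10 * i}",
            f"worker.{name}.host={name}",
            f"worker.{name}.type=ajp13",
            f"worker.{name}.lbfactor=1",
            f"worker.{name}.local_worker={1 if name == active else 0}",
        ]
    return "\n".join(lines) + "\n"

# Every roll interval: restart the next Tomcat, write the new file,
# swap it into place, then gracefully restart Apache (not shown).
conf = render_workers("tc-02")
print("worker.tc-02.local_worker=1" in conf)  # True
print("worker.tc-01.local_worker=0" in conf)  # True
```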

Cheers Hans


 Using new config:

 worker.list=loadbalancer
 worker.loadbalancer.balanced_workers=ajp13-01,ajp13-02
 worker.loadbalancer.sticky_session=True

 worker.ajp13-01.disabled=0
 ...
 worker.ajp13-02.disabled=1


 The disabled flag initially marks the worker as disabled.
 It will not be used until:

 you use the jkstatus console and set:
 worker.ajp13-02.disabled=0
 and
 worker.ajp13-01.disabled=1

 And that's it.
 Existing sessions will be forwarded to ajp13-01,
 while new ones will go to ajp13-02.
 No need for tricks with symlinks, graceful restarts, etc.
 What's more, it works on all platforms and all web servers.


 Also take a look at:
 http://jakarta.apache.org/tomcat/connectors-doc/config/workers.html
 (Big red warning about worker names)

 Regards,
 Mladen.









Re: AW: AW: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Mladen Turk
Hans Schmid wrote:
Thanks, Mladen,
as long as this disabled feature does not prevent the failover case, I am
fine ;)

OK. So basically you have two tomcat boxes where the second is used
only when you wish to put the first on maintenance?

Both Tomcats are always running, but the second one is used only for:
1.) Failover
2.) Maintenance switch - after this the roles of both Tomcats have switched
(TC-01 becomes standby)
Ah, now I see your point.
A disabled worker will never be used unless enabled again,
but for failover you will need to set
'redirect' to match that failover node.
Then, regardless of whether it's disabled, it will be used (provided it
is not in error state too).
redirect is meant to be used for that. You can even make redirect point
to a group, thus having not only one, but rather n hot-standby nodes.
In short: an initially disabled worker will never be used (not even
for failover) unless some other worker has a redirect pointing to it,
all that until the worker is enabled again.
If disabled during runtime, the worker will not accept connections
without a matching session id, while preserving existing ones.
Regards,
Mladen.


Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread William A. Rowe, Jr.
At 12:56 PM 2/17/2005, Rainer Jung wrote:
Hi,

first: thanks a lot to Mladen for adding all the beautiful features [and 
removing CRLF :) ]. Big leap forward!

Here's a list of all mixed up line endings currently 
in jakarta-tomcat-connectors/jk/ ...

The Mismatch'ed files all represent files with mixed line endings
(some cr/lf, some cr/cr/lf.)

Fixed lines ./native/apache-1.3/mod_jk.dsp
Fixed lines ./native/apache-2.0/bldjk.qclsrc
Fixed lines ./native/apache-2.0/mod_jk.dsp
Fixed lines ./native/common/portable.h
Fixed lines ./native/domino/dsapi.dsp
Fixed lines ./native/iis/isapi.dsp
Fixed lines ./native/iis/isapi_redirect.reg
Fixed lines ./native/iis/installer/isapi-redirector-win32-msi.ism
Fixed lines ./native/iis/installer/License.rtf
Fixed lines ./native/isapi/tomcat_redirector.reg
Fixed lines ./native/netscape/nsapi.dsp
Mismatch in ./native2/CHANGES.txt:2 expected 1
Mismatch in ./native2/README.txt:2 expected 1
Mismatch in ./native2/STATUS.txt:2 expected 1
Fixed lines ./support/jk_exec.m4
Mismatch in ./xdocs/changelog.xml:2 expected 1
Mismatch in ./xdocs/index.xml:2 expected 1
Mismatch in ./xdocs/style.css:2 expected 1
Mismatch in ./xdocs/config/iis.xml:2 expected 1
Mismatch in ./xdocs/config/workers.xml:2 expected 1
Mismatch in ./xdocs/install/apache1.xml:2 expected 1
Mismatch in ./xdocs/install/iis.xml:2 expected 1
Mismatch in ./xdocs/news/20041100.xml:2 expected 1

Attached is my current lineendings script, if it's helpful.

Bill
#!/usr/local/bin/perl
#
#  Heuristically converts line endings to the current OS's preferred format
#
#  All existing line endings must be identical (e.g. lf's only, or even
#  the accidental cr.cr.lf sequence.)  If some lines end lf, and others as
#  cr.lf, the file is presumed binary.  If the cr character appears anywhere
#  except prefixed to an lf, the file is presumed binary.  If there is no
#  change in the resulting file size, or the file is binary, the conversion
#  is discarded.
#
#  Todo: Handle NULL stdin characters gracefully.
#

use IO::File;
use File::Find;

# The ignore list is '-' separated, with this leading hyphen and
# trailing hyphens in every concatenated list below.
$ignore = "-";

# Image formats
$ignore .= "gif-jpg-jpeg-png-ico-bmp-";

# Archive formats
$ignore .= "tar-gz-z-zip-jar-war-";

# Many document formats
$ignore .= "eps-psd-pdf-ai-";

# Some encodings
$ignore .= "ucs2-ucs4-";

# Some binary objects
$ignore .= "class-so-dll-exe-obj-";

# Some build env files in NW/Win32
$ignore .= "mcp-xdc-ncb-opt-pdb-ilk-sbr-";

$preservedate = 1;

$forceending = 0;

$givenpaths = 0;

$notnative = 0;

while (defined @ARGV[0]) {
    if (@ARGV[0] eq '--touch') {
        $preservedate = 0;
    }
    elsif (@ARGV[0] eq '--nocr') {
        $notnative = -1;
    }
    elsif (@ARGV[0] eq '--cr') {
        $notnative = 1;
    }
    elsif (@ARGV[0] eq '--force') {
        $forceending = 1;
    }
    elsif (@ARGV[0] eq '--FORCE') {
        $forceending = 2;
    }
    elsif (@ARGV[0] =~ m/^-/) {
        die "What is " . @ARGV[0] . " supposed to mean?\n\n"
          . "Syntax:\t$0 [option(s)] [path(s)]\n\n" . <<'OUTCH';
Where:  paths specifies the top level directory to convert (default of '.')
options are;

  --cr     keep/add one ^M
  --nocr   remove ^M's
  --touch  the datestamp (default: keeps date/attribs)
  --force  mismatched corrections (unbalanced ^M's)
  --FORCE  all files regardless of file name!

OUTCH
    }
    else {
        find(\&totxt, @ARGV[0]);
        print "scanned " . @ARGV[0] . "\n";
        $givenpaths = 1;
    }
    shift @ARGV;
}

if (!$givenpaths) {
    find(\&totxt, '.');
    print "did .\n";
}

sub totxt {
    $oname = $_;
    $tname = '.#' . $_;
    if (!-f) {
        return;
    }
    @exts = split /\./;
    if ($forceending < 2) {
        while ($#exts && ($ext = pop(@exts))) {
            if ($ignore =~ m|-$ext-|i) {
                return;
            }
        }
    }
    if (($File::Find::dir . "/") =~ m|/\.svn/|i) {
        return;
    }
    if (($File::Find::dir . "/") =~ m|/CVS/|i) {
        return;
    }
    @ostat = stat($oname);
    $srcfl = new IO::File $oname, "r" or die;
    $dstfl = new IO::File $tname, "w" or die;
    binmode $srcfl;
    if ($notnative) {
        binmode $dstfl;
    }
    undef $t;
    while (<$srcfl>) {
        if (s/(\r*)\n$/\n/) {
            $n = length $1;
            if (!defined $t) {
                $t = $n;
            }
            if (!$forceending && (($n != $t) || m/\r/)) {
                print "Mismatch in " . $File::Find::dir . "/" . $oname
                    . ":" . $n . " expected " . $t . "\n";
                undef $t;
                last;
            }
            elsif ($notnative > 0) {
                s/\n$/\r\n/;
            }
        }
        print $dstfl $_;
    }
    if (defined $t && (tell $srcfl == tell $dstfl)) {
   

Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Mladen Turk
William A. Rowe, Jr. wrote:
Here's a list of all mixed up line endings currently 
in jakarta-tomcat-connectors/jk/ ...

The Mismatch'ed files all represent files with mixed line endings
(some cr/lf, some cr/cr/lf.)

Two things.
I see no CRLFs for any .h or .c inside j-t-c.
Also Bill, will you be OK and ready to push
j-t-c to svn?
Regards,
Mladen.
Fixed lines ./native/apache-1.3/mod_jk.dsp
Fixed lines ./native/apache-2.0/bldjk.qclsrc
Fixed lines ./native/apache-2.0/mod_jk.dsp


Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread William A. Rowe, Jr.
It definitely seems like j-t-c should be a first candidate
for svn conversion.  The other jakarta-tomcat repositories
are considerably more complex.

But it would be good to have line endings straightened out
beforehand.

This checkout was with the cvs Win32 client.  It seems, from
all the troubles you have, that you are using the cygwin cvs
client?  The cygwin client checks out Unix text because it is
a unix shell, and shouldn't be expected to check out with Win32 
semantics (that combo would be an oxymoron.)

One nice advantage of SVN is that you can force an LF checkout
on win32, or CRLF checkout on unix, if that is what you desire.
Either is predicated on storing text files as (of all things)
text files - with which the files I mentioned were not conformant.

Here are the results from checking out under unix (FYI - you can
force win32 or unix semantics with --cr or --nocr using my
lineends.pl script, and --force will ignore the mixed up line
endings when the file contains a mix of LF, CR/LF and CR/CR/LF
line endings);

Fixed lines ./jni/native/libtcnative.dsp
Fixed lines ./jni/native/libtcnative.dsw
Fixed lines ./jni/native/tcnative.dsp
Mismatch in ./jni/native/src/pool.c:1 expected 0
Mismatch in ./jni/native/src/shm.c:1 expected 0
Fixed lines ./jni/native/src/ssl.c
Fixed lines ./jni/native/build/win32ver.awk
Mismatch in ./jni/java/org/apache/tomcat/jni/OS.java:1 expected 0
Mismatch in ./jk/xdocs/changelog.xml:1 expected 0
Mismatch in ./jk/xdocs/index.xml:1 expected 0
Mismatch in ./jk/xdocs/style.css:1 expected 0
Mismatch in ./jk/xdocs/news/20041100.xml:1 expected 0
Mismatch in ./jk/xdocs/install/apache1.xml:1 expected 0
Mismatch in ./jk/xdocs/install/iis.xml:1 expected 0
Mismatch in ./jk/xdocs/config/iis.xml:1 expected 0
Mismatch in ./jk/xdocs/config/workers.xml:1 expected 0
Mismatch in ./jk/native2/CHANGES.txt:1 expected 0
Mismatch in ./jk/native2/README.txt:1 expected 0
Mismatch in ./jk/native2/STATUS.txt:1 expected 0
Fixed lines ./jk/native2/server/isapi/install4iis.js
Fixed lines ./jk/native2/server/apache2/bldjk2.qclsrc
Fixed lines ./jk/native/nt_service/nt_service.dsp
Fixed lines ./jk/native/netscape/nsapi.dsp
Fixed lines ./jk/native/isapi/tomcat_redirector.reg
Fixed lines ./jk/native/iis/isapi.dsp
Fixed lines ./jk/native/iis/isapi_redirect.reg
Fixed lines ./jk/native/iis/installer/isapi-redirector-win32-msi.ism
Fixed lines ./jk/native/domino/dsapi.dsp
Fixed lines ./jk/native/apache-2.0/mod_jk.dsp
Fixed lines ./jk/native/apache-1.3/mod_jk.dsp
Fixed lines ./ajp/ajplib/test/test.sln
Fixed lines ./ajp/ajplib/test/testajp.vcproj

(Just to re-clarify, ':1 expected 0' means the module first encountered
0 CR's - just an LF - and deeper in the file encountered CR/LF - one
CR found.)
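The heuristic the script applies can be restated compactly (a Python restatement for illustration only, not part of Bill's script): the CR count before the first LF sets the expectation, a later line with a different count makes the file mixed, and a CR not attached to an LF makes it binary.

```python
def classify(data: bytes):
    """Classify per the lineends heuristic: 'uniform' if every LF is
    preceded by the same number of CRs, 'mixed' if the counts differ,
    'binary' if a CR appears anywhere except prefixed to an LF."""
    expected = None  # CR count seen before the first LF
    run = 0          # CRs accumulated since the last ordinary byte
    for byte in data:
        if byte == 0x0D:      # CR
            run += 1
        elif byte == 0x0A:    # LF: compare this line's CR count
            if expected is None:
                expected = run
            elif run != expected:
                return "mixed"
            run = 0
        else:
            if run:
                return "binary"  # stray CR not attached to an LF
    return "binary" if run else "uniform"

print(classify(b"a\r\nb\r\n"))  # uniform
print(classify(b"a\nb\r\n"))    # mixed
print(classify(b"a\rb\n"))      # binary
```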

At 02:52 PM 2/18/2005, Mladen Turk wrote:
William A. Rowe, Jr. wrote:
Here's a list of all mixed up line endings currently in 
jakarta-tomcat-connectors/jk/ ...
The Mismatch'ed files all represent files with mixed line endings
(some cr/lf, some cr/cr/lf.)


Two things.
I see no CRLFs for any .h or .c inside j-t-c.
Also Bill, will you be OK and ready to push
j-t-c to svn?

Regards,
Mladen.

Fixed lines ./native/apache-1.3/mod_jk.dsp
Fixed lines ./native/apache-2.0/bldjk.qclsrc
Fixed lines ./native/apache-2.0/mod_jk.dsp








Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-18 Thread Mladen Turk
William A. Rowe, Jr. wrote:
 It definitely seems like j-t-c should be a first candidate
 for svn conversion.  The other jakarta-tomcat repositories
 are considerably more complex.

Yes, if everyone else agrees, we should consider moving to svn.
The only problem is with the Tomcat build process. If ant can have an
svn task, then we can move. Without that it will be impossible.
 But it would be good to have line endings straightened out
 beforehand.

Sure.
 Fixed lines ./jni/native/libtcnative.dsp
 Fixed lines ./jni/native/libtcnative.dsw
Well, I intentionally changed (which was probably wrong) only
Windows-specific files to have CRLFs. Both .dsp and .dsw
files are usable only if they have CRLF line endings.
Each time I check out, I have to convert them if
they were in LF-only mode.
So I'm not sure. What do you suggest?
 Mismatch in ./jni/java/org/apache/tomcat/jni/OS.java:1 expected 0
Those should be fixed, of course.
Regards,
Mladen.


JK 1.2.9-dev test results

2005-02-17 Thread Mladen Turk
Hi,
Henri said that he noticed the current dev version
of mod_jk being quite a bit faster than the previous one (1.2.8).
Although it was not the primary intention to
be faster, I think no one will object :).
So here are some benchmark results from my side:
JK 1.2.8 single thread
Requests per second: 784.31 [#/sec] (mean)
JK 1.2.9-dev single thread
Requests per second: 798.01 [#/sec] (mean)
JK 1.2.9-dev 10 concurrent threads
Requests per second: 918.22 [#/sec] (mean)
JK 1.2.9-dev 10 concurrent threads with socket_timeout
Requests per second: 910.38 [#/sec] (mean)
So. Is this a speedup or not ;)?
Interestingly, the new socket_timeout implementation
does not slow things down that much. After all, it sets the
socket to nonblocking mode before each request, checks if
the socket is still connected, and then sets it to blocking mode again.
Compared to the cping/cpong prepost, the system is almost twice
as fast. Of course it will not detect a hung tomcat,
only whether tomcat broke down or some other network problem
happened.
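The probe described here could look roughly like this in Python (an approximation of the idea, not the mod_jk C code): flip the socket to nonblocking, peek one byte, and restore blocking mode. A clean zero-byte read means the peer closed; EWOULDBLOCK means the connection is idle but alive.

```python
import socket

def still_connected(sock):
    """Rough model of jk's socket_timeout check: peek at the socket in
    nonblocking mode; a clean 0-byte read means the peer has closed."""
    sock.setblocking(False)
    try:
        data = sock.recv(1, socket.MSG_PEEK)
        alive = len(data) > 0    # pending data: connection is up
    except BlockingIOError:
        alive = True             # nothing to read: idle but connected
    except OSError:
        alive = False            # reset or other network failure
    finally:
        sock.setblocking(True)
    return alive

# Demo with a local socketpair: alive while open, dead after close.
a, b = socket.socketpair()
print(still_connected(a))  # True
b.close()
print(still_connected(a))  # False
```

As the mail notes, a check like this detects a dead peer or network failure, not a Tomcat that is hung while still holding the connection open.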
Cheers,
Mladen.


Re: JK 1.2.9-dev test results

2005-02-17 Thread Henri Gomez
Good work, Mladen.

I found jk a bit faster and it's good to see that we could speed it up a
little.

The next step could be to use larger AJP packets (4k is too small).

On Thu, 17 Feb 2005 14:11:28 +0100, Mladen Turk [EMAIL PROTECTED] wrote:
 Hi,
 
 Henri said that he noticed the current dev version
 of mod_jk being quite a bit faster than the previous one (1.2.8).
 
 Although it was not the primary intention to
 be faster, I think no one will object :).
 So here are some benchmark results from my side:
 
 JK 1.2.8 single thread
 Requests per second: 784.31 [#/sec] (mean)
 
 JK 1.2.9-dev single thread
 Requests per second: 798.01 [#/sec] (mean)
 
 JK 1.2.9-dev 10 concurrent threads
 Requests per second: 918.22 [#/sec] (mean)
 
 JK 1.2.9-dev 10 concurrent threads with socket_timeout
 Requests per second: 910.38 [#/sec] (mean)
 
 So. Is this a speedup or not ;)?
 
 Interestingly, the new socket_timeout implementation
 does not slow things down that much. After all, it sets the
 socket to nonblocking mode before each request, checks if
 the socket is still connected, and then sets it to blocking mode again.
 Compared to the cping/cpong prepost, the system is almost twice
 as fast. Of course it will not detect a hung tomcat,
 only whether tomcat broke down or some other network problem
 happened.
 
 Cheers,
 Mladen.
 
 





Re: JK 1.2.9-dev test results

2005-02-17 Thread Mladen Turk
Henri Gomez wrote:
Good work, Mladen.
I found jk a bit faster and it's good to see that we could speed it up a little.
The next step could be to use larger AJP packets (4k is too small).
Sure ;).
For a 100K file the speed is the same, as expected.
On large files we are measuring the network throughput,
not the speed of jk itself.
Anyhow, what is more important than speed is the fact
that the endpoint cache is working as expected on threaded
servers.
BTW, what do you think of deprecating the JNI connector?
Since it can (theoretically) be used only on Windows and NetWare,
I wonder if it makes sense to continue the support.
Regards,
Mladen


mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-17 Thread Rainer Jung
Hi,
first: thanks a lot to Mladen for adding all the beautiful features [and 
removing CRLF :) ]. Big leap forward!

I think that until Monday we were still in the process of adding 
features and fixing bugs. 1.2.8 changed a lot internally, but most of it 
was functionally compatible with 1.2.6. Release 1.2.9 still supported all 
features of 1.2.6.

Now we are in the discussion of dropping features (and we even did drop 
some, like locality support), and I have the impression there should be a 
separate discussion thread about the future of mod_jk:

Do we need to reflect the incompatible changes by shifting to 1.3? By 
this I mean: will we still need to maintain bug fixes in a parallel 1.2 tree?

Stated differently:
Which features can be dropped without further maintenance for older 
releases?

Usually one would deprecate by first announcing the deprecation but still 
supporting the feature for some time to allow migration. Then, after e.g. 
6 months, one could drop the functionality entirely.

People were told only a few months ago that mod_jk2 is no longer 
supported and that they should move to mod_jk. Mladen helps them by 
reimplementing valuable mod_jk2 features inside mod_jk, but we should 
not kick out long-time mod_jk users by dropping features without a 
visible discussion of the matter.

Regards,
Rainer
Mladen Turk wrote:
Henri Gomez wrote:
Good work, Mladen.
I found jk a bit faster and it's good to see that we could speed it up 
a little.

The next step could be to use larger AJP packets (4k too small)
Sure ;).
For a 100 KB file the speed is the same, as expected.
On large files we are measuring network throughput,
not the speed of jk itself.
Anyhow, what is more important than speed is the fact
that the endpoint cache is working as expected on threaded
servers.
BTW, what do you think of deprecating the JNI connector?
Since it can (theoretically) be used only on Windows and NetWare,
I wonder if it makes sense to continue supporting it.
Regards,
Mladen


Re: JK 1.2.9-dev test results

2005-02-17 Thread Bill Barker

- Original Message -
From: Mladen Turk [EMAIL PROTECTED]
To: Tomcat Developers List tomcat-dev@jakarta.apache.org
Sent: Thursday, February 17, 2005 7:27 AM
Subject: Re: JK 1.2.9-dev test results


 Henri Gomez wrote:
  Good work, Mladen.
 
  I found jk a bit faster and it's good to see that we could speed it up a
little.
 
  The next step could be to use larger AJP packets (4k too small)
 

 Sure ;).

 For a 100 KB file the speed is the same, as expected.
 On large files we are measuring network throughput,
 not the speed of jk itself.

 Anyhow, what is more important than speed is the fact
 that the endpoint cache is working as expected on threaded
 servers.

 BTW, what do you think of deprecating the JNI connector?
 Since it can (theoretically) be used only on Windows and NetWare,
 I wonder if it makes sense to continue supporting it.


Not only that, but the mod_jk version can only be used by TC 3.3.x.  I don't
think that it was ever really supported, so deprecating it is just a
formality.

 Regards,
 Mladen









Re: mod_jk release policy - was: JK 1.2.9-dev test results

2005-02-17 Thread Mladen Turk
Rainer Jung wrote:
Hi,
first: thanks a lot to Mladen for adding all the beautiful features [and 
removing CRLF :) ]. Big leap forward!

Still, I cope with those on a daily basis.
I think that until Monday we were still in the process of adding 
features and fixing bugs. 1.2.8 changed a lot internally, but most of it 
was functionally compatible with 1.2.6. Release 1.2.9 still supported all 
features of 1.2.6.
 
I already explained something similar when discussing with the guys
interested in the Netware platform.
Something needed to be done, and the obvious solution was not to reinvent
the wheel, but rather to use all the code and knowledge about the subject
already present.
To be able to use some new features like dynamic config, some things
had to be changed internally, but nothing was touched at the protocol
level, only how that protocol is managed.
So I don't see the point of forking 1.3. Both config and core features
are the same. Of course some advanced configuration properties were
changed, and a lot of new ones were added, but from the outside it is
still the old mod_jk.
Furthermore, I see adding shared memory and dynamic config as the final
design change for mod_jk.
Now we are in the discussion of dropping features (and we even did drop 
some, like locality support), and I have the impression there should be a 
separate discussion thread about the future of mod_jk:


The other thing is 'deprecating' certain things.
By that I don't mean deleting them or anything like that, but rather
marking them as 'no longer developed'.
The reason for that is pure fact. For example, we have a Lotus Domino
connector that works only with Domino 5. I think later versions don't
even have a compatible API. I'm not aware of anyone in the
world using jk to connect Domino with Tomcat (at least I never saw
a Bugzilla entry on that). So it is deprecated by that fact.
The same applies to JNI. Who uses that?
Regarding locality, you mean the local_worker and local_worker_only flags?
IMHO that was one of the fuzziest things about jk, one that no one ever
understood, not to mention that it never actually worked.
Take for example the current documentation about local_worker:
If local_worker is set to True it is marked as local worker. If in 
minimum one worker is marked as local worker, lb_worker is in local 
worker mode. All local workers are moved to the beginning of the 
internal worker list in lb_worker during validation.

Now, what does that mean to the actual user? I read that a zillion times
and never understood it.
And further:
We need a graceful shut down of a node for maintenance. The balancer in 
front asks a special port on each node periodically. If we want to 
remove a node from the cluster, we switch off this port.

WTF!? How? Which port? How does one switch off this port?
What counts most is that you were unable to mark a node for
shutdown so that it stops accepting new connections without a session id.
I suppose that was the purpose of those two directives, but I was
never able to set up jk in that direction.
So locality is not deprecated. Quite the opposite: now it works, it's just
that local_worker_only has been changed to sticky_session_force.
IMHO this is a clearer and more descriptive directive than the previous one.
New things like 'domain' (present since 1.2.8) and 'redirect' are just
extra goodies for finer tuning of the cluster topology.
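The directives mentioned in this thread combine roughly as follows in workers.properties; the worker and host names are made up for illustration, and the syntax follows the mod_jk 1.2.x documentation:

```properties
# Hypothetical two-node balancer illustrating sticky_session_force,
# domain, and redirect
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
# Send node1's sticky sessions to node2 when node1 is being drained
worker.node1.redirect=node2

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.domain=cluster1

worker.lb.type=lb
worker.lb.balance_workers=node1,node2
worker.lb.sticky_session=1
# Reject (rather than fail over) requests whose session node is down
worker.lb.sticky_session_force=1
```

With sticky_session_force set, a request carrying a session id for a worker that is in error is answered with an error instead of being silently moved to another node.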
Regards,
Mladen.
