Re: Fwd: An unexpected error has been detected by HotSpot Virtual Machine:

2008-07-10 Thread Filip Hanik - Dev Lists

Not a Geronimo error; the crash is in the Sun JDBC-ODBC bridge:
Stack: [0x33f5,0x33f9),  sp=0x33f8f104,  free space=252k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, 
C=native code)

C  [ntdll.dll+0x910e]
C  [ODBC32.dll+0xa3ec]
j  sun.jdbc.odbc.JdbcOdbc.allocConnect(J[B)J+0
j  sun.jdbc.odbc.JdbcOdbc.SQLAllocConnect(J)J+30
j  sun.jdbc.odbc.JdbcOdbcDriver.allocConnection(J)J+6

Use another JDBC driver to connect directly to the DB instead of going
through the ODBC bridge.

Filip


ravi naik wrote:



Note: forwarded message attached.


Subject:
Fwd: An unexpected error has been detected by HotSpot Virtual Machine:
From:
ravi naik [EMAIL PROTECTED]
Date:
Tue, 8 Jul 2008 10:34:44 +0100 (BST)
To:
[EMAIL PROTECTED]

To:
[EMAIL PROTECTED]




Note: forwarded message attached.


Subject:
An unexpected error has been detected by HotSpot Virtual Machine:
From:
ravi naik [EMAIL PROTECTED]
Date:
Mon, 30 Jun 2008 14:09:07 +0100 (BST)
To:
[EMAIL PROTECTED]

To:
[EMAIL PROTECTED]


Hi Jose,
this is Ravi from India.
I am using Apache Geronimo, and
I am facing "An unexpected error has been detected by HotSpot Virtual
Machine".

I am attaching the log for this; please help.
 
 







Re: Fwd: svn commit: r573772 - in /tomcat: sandbox/gdev6x/ trunk/

2007-09-11 Thread Filip Hanik - Dev Lists
The whole debate was a huge fiasco, and unfortunately the one who
screams the loudest and makes up the best stories wins, not necessarily what
is best for the community or the product.
I had no choice but to follow what was going on. I'm not sure what is
going to happen to this code base at this point; there is a lot more bs
to be resolved before anything happens.


I'm waiting to see if the board will say anything about the issue; if
they stay dormant, then most likely that codebase will too. There is no
way an API change like that will make its way into a stable branch like
6.0.x.


Sorry about the hassle, but I had little support for keeping it in trunk.
The decision to move it to the sandbox had nothing to do with comet, or
anything else - simply egos that got in the way of rational decisions.


Once again, my apologies, but you can't say I didn't try :|

Filip

Paul McMahan wrote:
FYI,  the Tomcat team decided to move tomcat/trunk to sandbox while 
details over RTC vs. CTR and their comet implementation are being 
resolved.  The Geronimo 2.x Tomcat assemblies use a patched version of 
what was in tomcat/trunk for the enhanced annotation support.   See 
https://issues.apache.org/jira/browse/GERONIMO-3206 for details on how 
Geronimo's patched version of Tomcat is built.


I raised a concern about moving trunk to sandbox since it's the only 
branch that contains the annotation support needed by Geronimo.  A 
committer responded that he will think about maintaining a patchset 
with this annotation support.  See http://tinyurl.com/2enw25



Best wishes,
Paul

Begin forwarded message:


From: [EMAIL PROTECTED]
Date: September 7, 2007 10:35:34 PM EDT
To: [EMAIL PROTECTED]
Subject: svn commit: r573772 - in /tomcat: sandbox/gdev6x/ trunk/
Reply-To: Tomcat Developers List [EMAIL PROTECTED]

Author: fhanik
Date: Fri Sep  7 19:35:33 2007
New Revision: 573772

URL: http://svn.apache.org/viewvc?rev=573772&view=rev
Log: (empty)

Added:
tomcat/sandbox/gdev6x/
  - copied from r573771, tomcat/trunk/
Removed:
tomcat/trunk/


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]









Re: Tomcat connectors

2007-08-10 Thread Filip Hanik - Dev Lists

David Jencks wrote:


On Aug 9, 2007, at 3:58 PM, Filip Hanik - Dev Lists wrote:


David Jencks wrote:


On Aug 9, 2007, at 11:18 AM, threepointsomething wrote:



I am quite new to Geronimo, so I am not sure if the steps I
followed are right. Here goes:

I had to ensure that the NIO connector is picked up in place of the
basic HTTP connector, so I made the following change in
configs\tomcat6\src\plan\plan.xml:

<gbean name="TomcatWebConnector"
class="org.apache.geronimo.tomcat.connector.Http11NIOConnectorGBean">



I then rebuilt config\tomcat6\. When I started Geronimo, it picked
up the NIO connector as expected.

  Listening on Ports:
   1050 127.0.0.1 CORBA Naming Service
   1099 0.0.0.0   RMI Naming
   1527 0.0.0.0   Derby Connector
   2001 127.0.0.1 OpenEJB ORB Adapter
   4201 0.0.0.0   OpenEJB Daemon
   6882 127.0.0.1 OpenEJB ORB Adapter
   8009 0.0.0.0   Tomcat Connector AJP AJP
   8080 0.0.0.0   Tomcat Connector HTTP NIO HTTP
   8443 0.0.0.0   Tomcat Connector HTTPS BIO HTTPS
        0.0.0.0   JMX Remoting Connector
  61613 0.0.0.0   ActiveMQ Transport Connector
  61616 0.0.0.0   ActiveMQ Transport Connector

I then ran a sample comet application (WAR) that was executing properly
in Tomcat and tried it in this instance of Geronimo. Seemed to work fine.

I was wondering if there is a simpler way of configuring NIO without
having to rebuild config\tomcat6. If so, can you please suggest how I can
do that?


Well, I expect we actually want to ship with the NIO connectors used 
by default anyway, like we do for jetty.
I'd ship with the 6.0.14 code; there are tons of fixes since the last
stable release.
The code has been voted stable and ready to announce; we're just
waiting for the RM to pull his head out of his rear :)

http://people.apache.org/~remm/tomcat-6/v6.0.14/


That's a bit of a different point.  I was referring to which of the 8
or so tomcat connectors we turn on by default: I think we want to turn
on the NIO ones rather than the BIO ones.  The tomcat code base we are
shipping is based pretty much on near-6.0.14 code but with the
annotation processor changes applied, which we need for
certification.  I imagine as soon as the annotation processor changes
are in a released tomcat version we'll switch to that; until then we
are stuck building our own copies.


thanks
david jencks
Forgot about that. I'll probably volunteer as RM for the trunk project,
so that we can get some snapshots and alpha/beta(s) out the door.


Filip




Filip


However, until we get there you can either turn off the BIO connector
and add an NIO connector in var/config/config.xml, or turn off the BIO
connector in config.xml and add the appropriate connector to the
geronimo plan for your app.  You can add the NIO connector using the
admin console, but I think you need to turn off the BIO connector by
editing config.xml while geronimo is not running: add the attribute
load="false" to the gbean entry for the BIO connector (see the sketch below).
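A sketch of what that change might look like in var/config/config.xml,
assuming the gbean names from the plan.xml snippet above (a real
config.xml will differ in module and gbean names):

<!-- var/config/config.xml sketch: module and gbean names are assumed,
     not taken from a real install -->
<module name="geronimo/tomcat6/2.0/car">
    <!-- stop the default blocking (BIO) HTTP connector from loading -->
    <gbean name="TomcatWebConnector" load="false"/>
    <!-- NIO HTTP connector, e.g. as added via the admin console -->
    <gbean name="TomcatNIOWebConnector">
        <attribute name="port">8080</attribute>
    </gbean>
</module>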


Hope this helps
david jencks




Thanks,
Gautham.

--View this message in context: 
http://www.nabble.com/Tomcat-connectors-tf4132628s134.html#a12077742
Sent from the Apache Geronimo - Dev mailing list archive at 
Nabble.com.






















Re: Tomcat connectors

2007-08-09 Thread Filip Hanik - Dev Lists

David Jencks wrote:


On Aug 9, 2007, at 11:18 AM, threepointsomething wrote:



I am quite new to Geronimo, so I am not sure if the steps I followed are
right. Here goes:

I had to ensure that the NIO connector is picked up in place of the
basic HTTP connector, so I made the following change in
configs\tomcat6\src\plan\plan.xml:

<gbean name="TomcatWebConnector"
class="org.apache.geronimo.tomcat.connector.Http11NIOConnectorGBean">

I then rebuilt config\tomcat6\. When I started Geronimo, it picked
up the NIO connector as expected.

  Listening on Ports:
   1050 127.0.0.1 CORBA Naming Service
   1099 0.0.0.0   RMI Naming
   1527 0.0.0.0   Derby Connector
   2001 127.0.0.1 OpenEJB ORB Adapter
   4201 0.0.0.0   OpenEJB Daemon
   6882 127.0.0.1 OpenEJB ORB Adapter
   8009 0.0.0.0   Tomcat Connector AJP AJP
   8080 0.0.0.0   Tomcat Connector HTTP NIO HTTP
   8443 0.0.0.0   Tomcat Connector HTTPS BIO HTTPS
        0.0.0.0   JMX Remoting Connector
  61613 0.0.0.0   ActiveMQ Transport Connector
  61616 0.0.0.0   ActiveMQ Transport Connector

I then ran a sample comet application (WAR) that was executing properly
in Tomcat and tried it in this instance of Geronimo. Seemed to work fine.

I was wondering if there is a simpler way of configuring NIO without
having to rebuild config\tomcat6. If so, can you please suggest how I can
do that?


Well, I expect we actually want to ship with the NIO connectors used 
by default anyway, like we do for jetty.

I'd ship with the 6.0.14 code; there are tons of fixes since the last stable release.
The code has been voted stable and ready to announce; we're just waiting
for the RM to pull his head out of his rear :)

http://people.apache.org/~remm/tomcat-6/v6.0.14/

Filip


However, until we get there you can either turn off the BIO connector
and add an NIO connector in var/config/config.xml, or turn off the BIO
connector in config.xml and add the appropriate connector to the
geronimo plan for your app.  You can add the NIO connector using the
admin console, but I think you need to turn off the BIO connector by
editing config.xml while geronimo is not running: add the attribute
load="false" to the gbean entry for the BIO connector.


Hope this helps
david jencks




Thanks,
Gautham.

--View this message in context: 
http://www.nabble.com/Tomcat-connectors-tf4132628s134.html#a12077742

Sent from the Apache Geronimo - Dev mailing list archive at Nabble.com.












Re: Tomcat connectors

2007-07-26 Thread Filip Hanik - Dev Lists

Jeff Genender wrote:

Ok I added a whole bunch of new connectors in the o.a.g.t.connectors
package.

I am still working on APR - more notes to follow on this, as it's a little
squirrelly since the Tomcat Connector somewhat chooses this automatically
based on the existence of the native libraries.  For the console we may
wish to do a check on whether the native libs exist, and if so, present
the APR connector.  More on this in another email.
  
Not really - it works the same as the NIO connector selection: in
server.xml, if
protocol="HTTP/1.1" and the java.library.path contains the TC native
library (tcnative.dll or libtcnative.so),
then APR is selected.
However, the protocol attribute also takes a complete class name, like:

protocol="org.apache.coyote.http11.Http11Protocol" -- java blocking
connector
protocol="org.apache.coyote.http11.Http11NioProtocol" -- java non
blocking connector

protocol="org.apache.coyote.http11.Http11AprProtocol" -- APR connector

so there is no need to dabble with the auto-select. Personally I don't
think it's a very usable feature, since the APR SSL connector has different
attributes than the Java SSL connector, and the auto-select wouldn't work
in that scenario anyway.
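Spelled out as server.xml connector elements, those explicit choices look
like this (the ports and everything besides the protocol class names are
typical defaults, not taken from a specific config):

<!-- java blocking (BIO) connector -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Protocol"/>

<!-- java non-blocking (NIO) connector -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"/>

<!-- APR connector (needs tcnative on java.library.path) -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol"/>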


Filip


Here are the connectors we care about at the moment...

AJP13ConnectorGBean - Implements AJP
Http11ConnectorGBean - Implements blocking Http connector
Https11ConnectorGBean  - Implements blocking Https connector
Http11NIOConnectorGBean - Implements non-blocking Http connector
Https11NIOConnectorGBean - Implements non-blocking Https connector

I have not wired them into the container and other GBeans yet... I want
to clean them up and get any feedback before making the switch, since
this obviously will impact the console upon wiring them in.

As a side note...I am not using any references to the WebManager or
other interfaces we used that hooked into the console.  We can re-add
those if they are deemed necessary.

Jeff

Paul McMahan wrote:
  

I agree NIO support would be great to have in 2.0, especially since it's
required for comet.

Best wishes,
Paul

On Jul 23, 2007, at 2:42 PM, Jeff Genender wrote:



Hi,

I was going through some JIRAs and the Geronimo 2.0 source and noticed it
will be difficult at best to get the NIO connector working, or to set
attributes on the APR connector for Tomcat, due to its current
implementation.  I really think the ability to use these 2 connectors is
very important for the 2.0 release and I would like to put these in.  If
there are no objections, I would like this to be a part of the 2.0
release.

Jeff
  



  




Re: [VOTE] Release specs for El, J2EE Management, WS-Metadata - rc2

2007-06-13 Thread Filip Hanik - Dev Lists

Do none of the spec releases get MD5 sums or PGP signatures?

Filip

Prasad Kashyap wrote:

Please review the specifications located at
http://people.apache.org/~prasad/specs_rc2

The only change that was made to the binaries that passed a vote
over the past weekend was to add the scm section to the pom.xml.

I have dropped jsp specs from the vote now.

ws-metadata will have a 3-digit version number (1.1.1) because 1.1.0
was already released some 6 weeks ago. This is a minor update to the
released version.

Voting concludes on Saturday, June 16th at 1700 ET.

Cheers
Prasad






Re: Tomcat m2 repo?

2007-03-30 Thread Filip Hanik - Dev Lists

I'll give the antlibs another shot

Filip

Jason Dillon wrote:

FYI the issue + patch to the tasks is here:

http://jira.codehaus.org/browse/MANTTASKS-42

--jason


On Mar 29, 2007, at 6:39 AM, Filip Hanik - Dev Lists wrote:


Jason Dillon wrote:

On Mar 27, 2007, at 4:50 PM, Filip Hanik - Dev Lists wrote:
I don't expect that Tomcat will switch to m2, though if they are 
gonna be publishing m2 repos they should use the m2 antlib for 
that.  But, looks like the m2 antlib is not up to snuff wrt the 
new? apache requirements to publish .asc files for releases.  I 
think the antlib tasks probably need to be updated to allow extra 
files to be attached when install/deploying and then ant folks 
should be sorted... well, that and if they implement a task or 
macro to sign stuff.
we're not even using the antlibs; they were not really working
out. It was easier to just exec the mvn script directly. If Maven
has the command line option to do what we want, then we can do it.


Just curious, what wasn't working out with the antlibs?  They should 
prolly be fixed if they are not usable by ant projects.



So if you show me the $MAVEN_HOME/bin/mvn command to publish a
single JAR (with a POM) and make sure the signature
goes with it, then we are fine.

GPG signing is a no-brainer, we can do that any day.


Hrm... I'm not sure there exists such a command at the moment,
though it's probably easy enough to craft a simple goal to implement
what you need.
yeah, I might just implement this in Ant altogether, and skip
maven, if it is a simple SCP copy.


The reason it doesn't work as-is is that the gpg .asc stuff is
attached to the current project's artifact, and the install/deploy
will handle the primary artifact and then any attached artifacts
separately.  The install-file/deploy-file goals don't have a project
to work on, so there is nothing to attach to.


I suppose that either install-file/deploy-file need to take an
additional CSV list of other files to attach, or perhaps simply
craft a pom.xml which uses build-helper:attach-artifact
(http://mojo.codehaus.org/build-helper-maven-plugin/attach-artifact-mojo.html)
and dance around mvn a little to make `mvn deploy` work.
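For illustration, the crafted pom.xml could configure the build-helper
plugin roughly like this (file names and execution id are placeholders;
the plugin and goal are the ones linked above):

<!-- pom.xml sketch: attach an extra file (here an .asc signature) to the
     project so `mvn deploy` pushes it along with the primary artifact -->
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>attach-signature</id>
          <phase>package</phase>
          <goals>
            <goal>attach-artifact</goal>
          </goals>
          <configuration>
            <artifacts>
              <artifact>
                <file>target/my-library.jar.asc</file>
                <type>jar.asc</type>
              </artifact>
            </artifacts>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>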


But it would really be better, IMO, to use the deploy task and
update the task to have a nested set of attached-file elements,
which could effectively behave the same as mvn normally would:
deploying the primary artifact, and then any attached
artifacts.  That's *much* less of a hack.


Can you tell me why the antlib tasks aren't working for you?

there were a few things:
1. documentation, or my inability to work with it
2. learning curve - I'm trying to do something really simple
3. SCP with maven on windows simply didn't work; it turns out that it
still doesn't work when using the command line arguments, so I am
still running from linux.


since all I wanna do is SCP a .jar, .pom, .md5 and .asc, why does this
have to be so complicated :)
if I can reverse engineer what Maven is doing when publishing a
file to a repo, it will be easier for me to implement it in pure ant.


Filip


--jason


















Re: Tomcat m2 repo?

2007-03-30 Thread Filip Hanik - Dev Lists

eeeh, and you were asking why we haven't got around to this?
lack of expertise, if I remember correctly :)

Just messing with you Jason

Filip


Jason Dillon wrote:
Until Jason gets around to releasing the updated tasks, you will need to build 
a few bits by hand to use the new antlib attach stuff.


First build Maven 2.0.6 from its tag:

http://svn.apache.org/repos/asf/maven/components/tags/maven-2.0.6/

And then build the updated ant tasks from this feature branch:


http://svn.apache.org/repos/asf/maven/sandbox/trunk/ant-tasks/install-deploy-attached/ 



Both should build with no problems with Maven 2.0.5.

Then you should have artifact:install and artifact:deploy tasks
which support nested <attach file="" type=""/> elements, as
documented in the JIRA issue:


http://jira.codehaus.org/browse/MANTTASKS-42
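
A rough usage sketch with those patched tasks (repository URL and file
names are placeholders, and the nested attach element is the one the
issue adds, so treat this as illustrative until the tasks are released):

<!-- build.xml sketch: deploy a jar and attach its .asc in one step,
     using the attach support described in MANTTASKS-42 -->
<project name="publish" xmlns:artifact="antlib:org.apache.maven.artifact.ant">
  <target name="deploy">
    <artifact:pom id="mypom" file="pom.xml"/>
    <artifact:deploy file="target/my-library.jar">
      <pom refid="mypom"/>
      <remoteRepository url="scp://repo.example.org/m2-repository"/>
      <attach file="target/my-library.jar.asc" type="jar.asc"/>
    </artifact:deploy>
  </target>
</project>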

Let me know if you run into any issues and I will do what I can to 
help you resolve them.


Cheers,

--jason


On Mar 30, 2007, at 1:26 PM, Filip Hanik - Dev Lists wrote:


I'll give the antlibs another shot

Filip

Jason Dillon wrote:

FYI the issue + patch to the tasks is here:

http://jira.codehaus.org/browse/MANTTASKS-42

--jason


On Mar 29, 2007, at 6:39 AM, Filip Hanik - Dev Lists wrote:


Jason Dillon wrote:

On Mar 27, 2007, at 4:50 PM, Filip Hanik - Dev Lists wrote:
I don't expect that Tomcat will switch to m2, though if they are 
gonna be publishing m2 repos they should use the m2 antlib for 
that.  But, looks like the m2 antlib is not up to snuff wrt the 
new? apache requirements to publish .asc files for releases.  I 
think the antlib tasks probably need to be updated to allow 
extra files to be attached when install/deploying and then ant 
folks should be sorted... well, that and if they implement a 
task or macro to sign stuff.
we're not even using the antlibs, they were not really working 
out. It was easier to just exec the mvn script directly. If Maven 
has the command line option to do what we want, then we can do it.


Just curious, what wasn't working out with the antlibs?  They 
should prolly be fixed if they are not usable by ant projects.



So if you show me the $MAVEN_HOME/bin/mvn command to publish a
single JAR (with a POM) and make sure the signature
goes with it, then we are fine.

GPG signing is a no brainer, we can do that any day.


Hrm... I'm not sure there exists such a command at the moment, 
though it's probably easy enough to craft a simple goal to 
implement what you need.
yeah, I might just implement this in Ant altogether, and skip 
maven, if it is a simple SCP copy.


The reason it doesn't work as-is is that the gpg .asc stuff is 
attached to the current projects artifact and the install/deploy 
will handle the primary artifact and then any attached artifacts 
separately.  The install-file/deploy-file goals don't have a 
project to work on so there is nothing to attach to.


I suppose that either install-file/deploy-file need to take an 
additional csv list of other files to attach or perhaps simply 
craft a pom.xml which uses build-helper:attach-artifact ( 
http://mojo.codehaus.org/build-helper-maven-plugin/attach-artifact-mojo.html 
) and dance around mvn a little to make `mvn deploy` work.


But, it would really be better IMO, to use the deploy task and 
update the task to have a nested set of attached-file elements 
which can be used to effectively behave the same as mvn would 
normally by deploying the primary artifact, and then any attached 
artifacts.  That's *much* less of a hack.


Can you tell me why the antlib tasks aren't working for you?

there were a few things
1. documentation or my inability to work with it
2. learning curve, I'm trying to do something really simple
3. SCP with maven on windows simply didn't work, turns out that it 
still doesn't work when using the command line arguments, so I am 
still running from linux.


since all I wanna do is SCP a .jar .pom .md5 and .asc, why does 
this have to be so complicated :)
if I can reverse engineer what it is Maven is doing when publishing 
a file to a repo, it will be easier for me to implement it in pure 
ant.


Filip


--jason



























Re: Tomcat m2 repo?

2007-03-30 Thread Filip Hanik - Dev Lists

Jason Dillon wrote:

Mocking me?  Ha... I prolly deserve it a little :-P

But I'm here if you need more help.

sounds good
Filip


--jason


On Mar 30, 2007, at 4:58 PM, Filip Hanik - Dev Lists wrote:


eeeh, and you were asking why we haven't got around to this?
lack of expertise, if I remember correctly :)

Just messing with you Jason

Filip


Jason Dillon wrote:
Until Jason gets around to releasing the updated tasks, you will need to 
build a few bits by hand to use the new antlib attach stuff.


First build Maven 2.0.6 from its tag:

http://svn.apache.org/repos/asf/maven/components/tags/maven-2.0.6/

And then build the updated ant tasks from this feature branch:


http://svn.apache.org/repos/asf/maven/sandbox/trunk/ant-tasks/install-deploy-attached/ 



Both should build with no problems with Maven 2.0.5.

Then you should have artifact:install and artifact:deploy tasks
which support nested <attach file="" type=""/> elements, as
documented in the JIRA issue:


http://jira.codehaus.org/browse/MANTTASKS-42

Let me know if you run into any issues and I will do what I can to 
help you resolve them.


Cheers,

--jason


On Mar 30, 2007, at 1:26 PM, Filip Hanik - Dev Lists wrote:


I'll give the antlibs another shot

Filip

Jason Dillon wrote:

FYI the issue + patch to the tasks is here:

http://jira.codehaus.org/browse/MANTTASKS-42

--jason


On Mar 29, 2007, at 6:39 AM, Filip Hanik - Dev Lists wrote:


Jason Dillon wrote:

On Mar 27, 2007, at 4:50 PM, Filip Hanik - Dev Lists wrote:
I don't expect that Tomcat will switch to m2, though if they 
are gonna be publishing m2 repos they should use the m2 antlib 
for that.  But, looks like the m2 antlib is not up to snuff 
wrt the new? apache requirements to publish .asc files for 
releases.  I think the antlib tasks probably need to be 
updated to allow extra files to be attached when 
install/deploying and then ant folks should be sorted... well, 
that and if they implement a task or macro to sign stuff.
we're not even using the antlibs, they were not really working 
out. It was easier to just exec the mvn script directly. If 
Maven has the command line option to do what we want, then we 
can do it.


Just curious, what wasn't working out with the antlibs?  They 
should prolly be fixed if they are not usable by ant projects.



So if you show me the $MAVEN_HOME/bin/mvn command to publish
a single JAR (with a POM) and make sure the
signature goes with it, then we are fine.

GPG signing is a no brainer, we can do that any day.


Hrm... I'm not sure there exists such a command at the moment, 
though it's probably easy enough to craft a simple goal to 
implement what you need.
yeah, I might just implement this in Ant altogether, and skip 
maven, if it is a simple SCP copy.


The reason it doesn't work as-is is that the gpg .asc stuff is 
attached to the current projects artifact and the install/deploy 
will handle the primary artifact and then any attached artifacts 
separately.  The install-file/deploy-file goals don't have a 
project to work on so there is nothing to attach to.


I suppose that either install-file/deploy-file need to take an 
additional csv list of other files to attach or perhaps simply 
craft a pom.xml which uses build-helper:attach-artifact ( 
http://mojo.codehaus.org/build-helper-maven-plugin/attach-artifact-mojo.html 
) and dance around mvn a little to make `mvn deploy` work.


But, it would really be better IMO, to use the deploy task and 
update the task to have a nested set of attached-file elements 
which can be used to effectively behave the same as mvn would 
normally by deploying the primary artifact, and then any 
attached artifacts.  That's *much* less of a hack.


Can you tell me why the antlib tasks aren't working for you?

there were a few things
1. documentation or my inability to work with it
2. learning curve, I'm trying to do something really simple
3. SCP with maven on windows simply didn't work, turns out that 
it still doesn't work when using the command line arguments, so I 
am still running from linux.


since all I wanna do is SCP a .jar .pom .md5 and .asc, why does 
this have to be so complicated :)
if I can reverse engineer what it is Maven is doing when 
publishing a file to a repo, it will be easier for me to 
implement it in pure ant.


Filip


--jason




































Re: Tomcat m2 repo?

2007-03-29 Thread Filip Hanik - Dev Lists

Jason Dillon wrote:

On Mar 27, 2007, at 4:50 PM, Filip Hanik - Dev Lists wrote:
I don't expect that Tomcat will switch to m2, though if they are 
gonna be publishing m2 repos they should use the m2 antlib for 
that.  But, looks like the m2 antlib is not up to snuff wrt the new? 
apache requirements to publish .asc files for releases.  I think the 
antlib tasks probably need to be updated to allow extra files to be 
attached when install/deploying and then ant folks should be 
sorted... well, that and if they implement a task or macro to sign 
stuff.
we're not even using the antlibs; they were not really working out.
It was easier to just exec the mvn script directly. If Maven has the
command line option to do what we want, then we can do it.


Just curious, what wasn't working out with the antlibs?  They should 
prolly be fixed if they are not usable by ant projects.



So if you show me the $MAVEN_HOME/bin/mvn command to publish a
single JAR (with a POM) and make sure the signature goes
with it, then we are fine.

GPG signing is a no-brainer, we can do that any day.


Hrm... I'm not sure there exists such a command at the moment, though
it's probably easy enough to craft a simple goal to implement what you
need.
yeah, I might just implement this in Ant altogether, and skip maven,
if it is a simple SCP copy.


The reason it doesn't work as-is is that the gpg .asc stuff is
attached to the current project's artifact, and the install/deploy will
handle the primary artifact and then any attached artifacts
separately.  The install-file/deploy-file goals don't have a project
to work on, so there is nothing to attach to.


I suppose that either install-file/deploy-file need to take an
additional CSV list of other files to attach, or perhaps simply craft
a pom.xml which uses build-helper:attach-artifact
(http://mojo.codehaus.org/build-helper-maven-plugin/attach-artifact-mojo.html)
and dance around mvn a little to make `mvn deploy` work.


But it would really be better, IMO, to use the deploy task and
update the task to have a nested set of attached-file elements, which
could effectively behave the same as mvn normally would:
deploying the primary artifact, and then any attached artifacts.
That's *much* less of a hack.


Can you tell me why the antlib tasks aren't working for you?

there were a few things:
1. documentation, or my inability to work with it
2. learning curve - I'm trying to do something really simple
3. SCP with maven on windows simply didn't work; it turns out that it still
doesn't work when using the command line arguments, so I am still
running from linux.


since all I wanna do is SCP a .jar, .pom, .md5 and .asc, why does this
have to be so complicated :)
if I can reverse engineer what Maven is doing when publishing a
file to a repo, it will be easier for me to implement it in pure ant.


Filip


--jason









Re: Tomcat m2 repo?

2007-03-27 Thread Filip Hanik - Dev Lists
If PGP signatures in the form of .asc files are not required, then we can
switch to the new repo anytime.


Filip

Jeff Genender wrote:

Why do they need PGP signatures? That is new to me.

Do you mean SHA1 signatures?

For SHA1 sigs, they can use the sha1 program on minotaur.

They would need to do something like:

sha1 -q ./tomcat-whatever.jar > ./tomcat-whatever.jar.sha1
sha1 -q ./tomcat-whatever.pom > ./tomcat-whatever.pom.sha1

Jeff


Paul McMahan wrote:
  

A few months ago I opened a tomcat issue requesting them to publish
the tomcat jars to a maven repo.
http://issues.apache.org/bugzilla/show_bug.cgi?id=41093

Yesterday they marked the issue as resolved since the tomcat jars are
now available at http://tomcat.apache.org/dev/dist/m2-repository, but
commented that they would publish to central if there is a way to work
around a problem with PGP signatures:



 Doesn't seem like I can publish to the central repo
 because it requires PGP signatures, and maven
 doesn't let you do that with a single JAR if you're not
 building with Maven.

 If you have a way to get the PGP info in there in
 the way we are doing it now, let us know and
 open a new bug.
  

If anyone knows of a way to sign single JARs not built with maven, then
please enlighten us, or just go ahead and open a new tomcat bug with the
details.  This may be the only remaining impediment to making the
tomcat jars available on central.
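
One possible approach, sketched here on the assumption that a detached
gpg signature is all the repo needs (target name and file path are
illustrative): an ant build that never touches maven can shell out to
gpg directly.

<!-- build.xml sketch: sign a single jar outside maven; gpg writes
     tomcat-whatever.jar.asc next to the jar -->
<target name="sign-jar">
  <exec executable="gpg" failonerror="true">
    <arg value="--armor"/>
    <arg value="--detach-sign"/>
    <arg value="tomcat-whatever.jar"/>
  </exec>
</target>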

Best wishes,
Paul


On 2/13/07, Jason Dillon [EMAIL PROTECTED] wrote:


Okay, hopefully they will get the kinks out soon ;-)

--jason


On Feb 13, 2007, at 4:39 PM, Paul McMahan wrote:

  

Tomcat currently builds with ant and then manually publishes their jars
to a repo at tomcat.apache.org.  See
http://www.nabble.com/Tomcat-Jars---Maven2-repo-tf3023226.html#a8397986
They have only just started to make publishing to an m2 repo part of
their release process, and IIUC they want to work the kinks out of
their scripts before pointing them at central.

Best wishes,
Paul

On 2/13/07, Jason Dillon [EMAIL PROTECTED] wrote:


Why do we need this repo:

 http://tomcat.apache.org/dev/dist/m2-repository

Is Tomcat not publishing to central?  If not... anyone know why?

--jason

  
  



  




Re: Tomcat m2 repo?

2007-03-27 Thread Filip Hanik - Dev Lists

Jason Dillon wrote:

On Mar 27, 2007, at 4:04 PM, Dain Sundstrom wrote:

On Mar 27, 2007, at 2:54 PM, Jason Dillon wrote:

On Mar 27, 2007, at 1:58 PM, Filip Hanik - Dev Lists wrote:
Yesterday they marked the issue as resolved since the tomcat jars are
now available at http://tomcat.apache.org/dev/dist/m2-repository, but


Ug... why on earth is Tomcat not just using the standard Maven 2 
repos that the rest of us have to use?  This is so annoying.   If 
they just published to m2-ibiblio-rsync-repository then they would 
be done already.

wow, you're such a joy to be around ;)


Sorry, this repo stuff is a sore subject for me.  I've been trying 
to keep things orderly (and limited), and new repos just keep popping 
up and into our build.  This has been going on for a while now, 
though it has gotten better.


Anyways, nothing personal... I am just a little annoyed with all the 
repo related issues in whole.


If you remember back, Ant was originally developed to build Tomcat, 
so the odds of them dropping Ant are pretty slim.  Now they may be 
willing to add the maven ant tasks to publish to the maven repo, but 
I suggest you ask really nicely.


I don't expect that Tomcat will switch to m2, though if they are gonna 
be publishing m2 repos they should use the m2 antlib for that.  But, 
looks like the m2 antlib is not up to snuff wrt the new? apache 
requirements to publish .asc files for releases.  I think the antlib 
tasks probably need to be updated to allow extra files to be attached 
when install/deploying and then ant folks should be sorted... well, 
that and if they implement a task or macro to sign stuff.
we're not even using the antlibs; they were not really working out. It
was easier to just exec the mvn script directly. If Maven has the
command line option to do what we want, then we can do it.
So if you show me the $MAVEN_HOME/bin/mvn command to publish a single
JAR (with a POM) and make sure the signature goes with it,
then we are fine.

GPG signing is a no-brainer, we can do that any day.

Filip



Anyways, I didn't mean to ruffle any feathers, I was just frustrated...

--jason











Re: Annotation Injection in Geronimo + Tomcat + Jasper

2007-03-24 Thread Filip Hanik - Dev Lists
Looks good. I'm not really the annotations person, but I will put it
directly in his hands.
The word "lifecycle" is widely used in Tomcat, so we might have to come
up with a better name, to avoid confusion.


Filip

David Jencks wrote:
I've been working on connecting geronimo annotation processing and
injection support to tomcat and jasper, studying how the current
tomcat and jasper support this, and working on a proposal for
changing how tomcat deals with this area that I hope will be seen as
an improvement.


I've generally removed exceptions from the method signatures below in 
an attempt to make the situation a little clearer.


As far as I can tell (so far) there are four kinds of objects relevant 
here:

servlets
filters
listeners
tags

The first three are created more or less directly by tomcat whereas 
tags are created by some code in jasper and by code generated by jasper.


Currently tomcat and jasper use a very wide variety of techniques to 
create the objects and then use an instance of


public interface AnnotationProcessor {
public void postConstruct(Object instance);
public void preDestroy(Object instance);
public void processAnnotations(Object instance);
}

in a sequence like this:

Object o = <create a new instance through various kinds of magic>
annotationProcessor.processAnnotations(o);
annotationProcessor.postConstruct(o);

When it's time to toss the object they call
annotationProcessor.preDestroy(o);


What I would like to do is replace the AnnotationProcessor with a 
different simpler interface like this: (the name is not important...)


public interface LifecycleProvider {
    // create, inject, and post-construct a fully initialized instance
    Object newInstance(String fqcn, ClassLoader classLoader);
    // pre-destroy and release the instance
    void destroyInstance(Object o);
}

The idea is that the newInstance method does everything necessary to 
construct a fully injected and post-constructed instance, and the 
destroyInstance does everything necessary to stop it.  It's very easy 
to write an adapter between this proposed interface and the 
AnnotationProcessor interface, so tomcat and jasper would continue to 
support the AnnotationProcessor approach just as they do today.


The reason to use this interface, from geronimo's perspective, is that we 
have some very nice code that can do the object creation and injection 
in one step.  It's designed to support constructor injection as well 
as property injection, so the object instance construction and 
injection aren't really separable.


Aside from altruism, the reason I think the tomcat developers might be
interested in this is that there is such a wide variety of code in the
"create a new instance through various kinds of magic" step, and it 
looks to me as if this is most likely a consequence of less and less 
attention getting paid as new kinds of objects need to be created.  
This would put all the managed object creation code in one place so 
each object creation would get the same level of attention.


For instance, while listeners and tags are created with a simple 
clazz.newInstance(), the servlet construction code checks a lot of 
conditions before deciding how to construct the object: in particular 
security settings might cause it to be in a PrivilegedAction and if it 
is available in the same classloader as tomcat then special actions 
are taken.  While I don't entirely understand the point of some of 
this, it seems highly unlikely that it is appropriate only for servlets 
and not for filters, listeners, and tags.



I've been working on this approach for about a week now and think I 
have most everything related to tomcat changes working.  There are 
still some problems with things like tld schema upgrades which are not 
related to tomcat code.  I would be more comfortable proposing this to 
the tomcat community after we've ironed out more of the geronimo 
problems, but I'd like to start some discussion on this and also 
commit my code so everyone can see at least the geronimo side 
clearly.  Since my copy of geronimo doesn't build without my changes 
to tomcat/jasper, to proceed with this I need to get my version of 
tomcat out somehow.


Here are some possibilities I've thought of, maybe someone can think 
of something even better:


- attach my tomcat patch to a geronimo jira issue, maybe check the 
patch into svn somewhere, build tomcat + jasper locally, and put them 
into our repository module (maybe with source jars if I can figure 
out how to create them)


- svn copy tomcat to our tree and apply the patch, check it in, and 
push org.apache.geronimo.tomcat/jasper jars to the snapshot repo


In any case I'd prefer to check my geronimo changes into trunk in the 
optimistic expectation that the Tomcat community will accept something 
like this proposal and that if they don't it will still be easier to 
adapt to the AnnotationProcessor approach in trunk than to deal with a 
geronimo branch.


Anyway I started GERONIMO-3010 and I'll attach my tomcat patches there 
even though they aren't quite ready to propose 

Re: Annotation Injection in Geronimo + Tomcat + Jasper

2007-03-24 Thread Filip Hanik - Dev Lists
It looks like the patch might introduce some funky dependencies; it got
neglected:

http://marc.info/?l=tomcat-dev&m=117477476413027&w=2

You put the LifecycleProvider interface in o.a.catalina, thus creating
a funky relationship:
remember that other containers use jasper; now they'd have to embed
tomcat to do so.


The patch can probably be a lot simpler: if you want to inject your own
annotation processor, just do so via
StandardContext.setAnnotationProcessor, and the default one will not be
created.


for example, right now we are looking at code something like this:

 Tag instance = (Tag) handlerClass.newInstance();
 AnnotationHelper.postConstruct(annotationProcessor, instance);

the patch proposes to handle these two lines (plus the actual
classloading) in a single line, using your method. The feedback was
that the patch introduced some code complexity that is not really needed,
especially when you can already inject your own annotation processor
into the mix.


I'll try to draw some more constructive criticism out of the group,
but I need to better understand Geronimo's need here; based on
that, there might be some other ideas that come up.


If there simply is a patch to tomcat, with no added benefit to tomcat,
then I'm afraid it will be hard to justify. And I don't think you wanna
fork parts of the tomcat tree; it will not be popular.


Filip

David Jencks wrote:
I've been working on connecting geronimo annotation processing and
injection support to tomcat and jasper, studying how the current
tomcat and jasper support this, and working on a proposal for
changing how tomcat deals with this area that I hope will be seen as
an improvement.


I've generally removed exceptions from the method signatures below in 
an attempt to make the situation a little clearer.


As far as I can tell (so far) there are four kinds of objects relevant 
here:

servlets
filters
listeners
tags

The first three are created more or less directly by tomcat whereas 
tags are created by some code in jasper and by code generated by jasper.


Currently tomcat and jasper use a very wide variety of techniques to 
create the objects and then use an instance of


public interface AnnotationProcessor {
public void postConstruct(Object instance);
public void preDestroy(Object instance);
public void processAnnotations(Object instance);
}

in a sequence like this:

Object o = <create a new instance through various kinds of magic>
annotationProcessor.processAnnotations(o);
annotationProcessor.postConstruct(o);

When it's time to toss the object they call
annotationProcessor.preDestroy(o);


What I would like to do is replace the AnnotationProcessor with a 
different simpler interface like this: (the name is not important...)


public interface LifecycleProvider {
Object newInstance(String fqcn, ClassLoader classLoader);
void destroyInstance(Object o);
}

The idea is that the newInstance method does everything necessary to 
construct a fully injected and post-constructed instance, and the 
destroyInstance does everything necessary to stop it.  It's very easy 
to write an adapter between this proposed interface and the 
AnnotationProcessor interface, so tomcat and jasper would continue to 
support the AnnotationProcessor approach just as they do today.


The reason to use this interface, from geronimo's perspective, is that we 
have some very nice code that can do the object creation and injection 
in one step.  It's designed to support constructor injection as well 
as property injection, so the object instance construction and 
injection aren't really separable.


Aside from altruism, the reason I think the tomcat developers might be 
interested in this is that there is such a wide variety of code in the 
"create a new instance through various kinds of magic" step, and it 
looks to me as if this is most likely a consequence of less and less 
attention getting paid as new kinds of objects need to be created.  
This would put all the managed object creation code in one place so 
each object creation would get the same level of attention.


For instance, while listeners and tags are created with a simple 
clazz.newInstance(), the servlet construction code checks a lot of 
conditions before deciding how to construct the object: in particular 
security settings might cause it to be in a PrivilegedAction and if it 
is available in the same classloader as tomcat then special actions 
are taken.  While I don't entirely understand the point of some of 
this it seems highly unlikely that it is appropriate only for servlets 
and not filters, listeners and tags.



I've been working on this approach for about a week now and think I 
have most everything related to tomcat changes working.  There are 
still some problems with things like tld schema upgrades which are not 
related to tomcat code.  I would be more comfortable proposing this to 
the tomcat community after we've ironed out more of the geronimo 

Re: [RESULT] VOTE J2G Conversion tool acceptance

2007-03-02 Thread Filip Hanik - Dev Lists
Any thoughts on when the PMC will have time to look at, comment on, and
approve the remaining items on this donation?


Filip

Davanum Srinivas wrote:

Filip,

There are 6 items under "Copyright" and "Verify distribution rights".
Someone on the PMC needs to deal with them and send an updated patch.
At least the first 2 under "Verify distribution rights" can be marked
Not Applicable.

thanks,
dims

On 2/26/07, Filip Hanik - Dev Lists [EMAIL PROTECTED] wrote:

cool, sorry to bug, but is there anything more you need from us?

Filip

Davanum Srinivas wrote:
 Checked in.

 thanks,
 dims

 On 2/26/07, Filip Hanik - Dev Lists [EMAIL PROTECTED] wrote:
 dims,
 I've updated some info in the IP clearance form; attached is the
 patch file.
 The JIRA has also been updated with the codebase, which reflects the ASF
 license in the source headers and the IBM copyright in the COPYRIGHT.txt
 file.
 Both Covalent and IBM CCLA are also attached to the JIRA item.

 Is there anything left from us, or is the rest left up to the
 Geronimo PMC?

 Filip









Tomcat 6 and G Certification, WAS: Heads up re: Apache Geronimo and JavaOne

2007-03-01 Thread Filip Hanik - Dev Lists


Ladies and Gents,
Just wanted to extend a hand here. If there is any help needed to 
integrate TC6 and to make it pass the tests, I am more than willing to 
help to get the two platforms working correctly within the timeframe you 
are looking at.


I do have a hard time following the volume of posts on the G dev
lists, though, so if I miss one, ping me directly.


So if you have any outstanding issues, questions or problems let me 
know, point me in their direction, and I will jump on them.


Best regards,
Filip


Matt Hogstrom wrote:
Just a heads up on what's happening in Apache Geronimo. We're currently 
working to complete a server that passes all required tests before 
JavaOne.  It would be most excellent to be able to get visibility for 
Apache Geronimo at this industry event.   So why am I sending you this 
note?


We are using MyFaces to meet the requirements around JSF.  I just 
wanted to give you a heads up on where we're going.  We'll need to lock 
down our release for testing at the end of March (yup, an insane 
amount of work remains), so if y'all are interested in helping that 
would be awesome.  Also, I'll let you know in the March/April 
timeframe about a released or stable version of the code.  Hopefully that 
will be possible.


Just wanted to get on your radar.

Matt Hogstrom










Re: svn commit: r513561 - /geronimo/server/trunk/pom.xml

2007-03-01 Thread Filip Hanik - Dev Lists
We're about to switch; we're just verifying that everything works the way
we want it.

Give it a couple of weeks, and we will be publishing to the main one.
Filip

Jason Dillon wrote:
Do we still need the tomcat-m2-repo?  Or will the tomcat folks be 
using the normal repos like other projects?


--jason


On Mar 1, 2007, at 4:26 PM, [EMAIL PROTECTED] wrote:


Author: pmcmahan
Date: Thu Mar  1 16:26:47 2007
New Revision: 513561

URL: http://svn.apache.org/viewvc?view=rev&rev=513561
Log:
GERONIMO-2920 upgrade to tomcat 6.0.10 (stable)

Modified:
geronimo/server/trunk/pom.xml

Modified: geronimo/server/trunk/pom.xml
URL:
http://svn.apache.org/viewvc/geronimo/server/trunk/pom.xml?view=diff&rev=513561&r1=513560&r2=513561

==============================================================================


--- geronimo/server/trunk/pom.xml (original)
+++ geronimo/server/trunk/pom.xml Thu Mar  1 16:26:47 2007
@@ -75,7 +75,7 @@
     <!--
         HACK: Used by jsp and servlet example configs to point to the tomcat deployer
     -->
-    <tomcatVersion>6.0.8</tomcatVersion>
+    <tomcatVersion>6.0.10</tomcatVersion>

     <!--
         HACK: Used by uddi-jetty and uddi-tomcat config plans













Re: Context level clustering not supported in Tomcat it seems

2007-02-28 Thread Filip Hanik - Dev Lists

http://tomcat.apache.org/tomcat-6.0-doc/config/cluster.html

Filip

Shiva Kumar H R wrote:

Is it! Is there some document that you are referring to?

Meanwhile I will repost the query on the Tomcat mailing list, as well as
test context level clustering on Tomcat 6.


--
Thx,
Shiva

On 2/28/07, *Filip Hanik - Dev Lists* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Context level clustering is supported in TC 6

Shiva Kumar H R wrote:
 As part of https://issues.apache.org/jira/browse/GERONIMO-2577 I had
 opened the following bug in Tomcat:
 Context level clustering on 3 or more nodes fails in Tomcat 5.5.20
 http://issues.apache.org/bugzilla/show_bug.cgi?id=41620

 They have closed the bug as "Resolved Invalid" with the following
 comments:
 --- Additional Comment #7 from Mark Thomas, 2007-02-15 18:54
 (http://issues.apache.org/bugzilla/show_bug.cgi?id=41620#c7) ---
 It is not possible to configure clustering in context.xml. It
must be done at
 the Host level (with the jvmRoute defined at the Engine level)
within server.xml
 That makes our default clustering article
 http://cwiki.apache.org/GMOxDOC11/clustering-sample-application.html
 invalid. Should we now remove it saying that Geronimo (Tomcat
 version) supports clustering at the Host/Engine level only?

 Dave Colasurdo and I have already created a new article
 illustrating how to set up Geronimo (Tomcat version) clustering
 at the host/engine level:
 http://cwiki.apache.org/GMOxDOC11/clustering-sample-application-tomcat-host-level.html
 Please suggest if we should retain only this and delete the other
 article on context level clustering?

 -- Shiva










  




Re: Context level clustering not supported in Tomcat it seems

2007-02-28 Thread Filip Hanik - Dev Lists
Maybe I misunderstood you: you are asking if you can shove the Cluster
implementation into a Context, and the answer to that is no.

Tomcat 6 adds support for clustering context attributes.
Filip

Filip Hanik - Dev Lists wrote:

http://tomcat.apache.org/tomcat-6.0-doc/config/cluster.html

Filip

Shiva Kumar H R wrote:

Is it! Is there some document that you are referring to?

Meanwhile I will repost the query on the Tomcat mailing list, as well as
test context level clustering on Tomcat 6.


--
Thx,
Shiva

On 2/28/07, *Filip Hanik - Dev Lists* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Context level clustering is supported in TC 6

Shiva Kumar H R wrote:
 As part of https://issues.apache.org/jira/browse/GERONIMO-2577 
I had

 opened the following bug in Tomcat:
 Context level clustering on 3 or more nodes fails in Tomcat 
5.5.20

 http://issues.apache.org/bugzilla/show_bug.cgi?id=41620

 They have closed the bug as "Resolved Invalid" with the following
 comments:
 --- Additional Comment #7 from Mark Thomas, 2007-02-15 18:54
 (http://issues.apache.org/bugzilla/show_bug.cgi?id=41620#c7) ---
 It is not possible to configure clustering in context.xml. It
must be done at
 the Host level (with the jvmRoute defined at the Engine level)
within server.xml
 That makes our default clustering article
 
http://cwiki.apache.org/GMOxDOC11/clustering-sample-application.html

 invalid. Should we now remove it saying that Geronimo (Tomcat
 version) supports clustering at the Host/Engine level only?

 Dave Colasurdo and I have already created a new article
 illustrating how to set up Geronimo (Tomcat version) clustering
 at the host/engine level:
 http://cwiki.apache.org/GMOxDOC11/clustering-sample-application-tomcat-host-level.html
 Please suggest if we should retain only this and delete the other
 article on context level clustering?

 -- Shiva












  








Re: Tomcat m2 repo?

2007-02-27 Thread Filip Hanik - Dev Lists

Jason Dillon wrote:

Okay, hopefully they will get the kinks out soon ;-)

yes, still work in progress :)

Filip


--jason


On Feb 13, 2007, at 4:39 PM, Paul McMahan wrote:


Tomcat currently builds with ant and then manually publishes their jars
to a repo at tomcat.apache.org.  See
http://www.nabble.com/Tomcat-Jars---Maven2-repo-tf3023226.html#a8397986
They have only just started to make publishing to an m2 repo part of
their release process, and IIUC they want to work the kinks out of
their scripts before pointing them at central.

Best wishes,
Paul

On 2/13/07, Jason Dillon [EMAIL PROTECTED] wrote:

Why do we need this repo:

 http://tomcat.apache.org/dev/dist/m2-repository

Is Tomcat not publishing to central?  If not... anyone know why?

--jason












Re: Context level clustering not supported in Tomcat it seems

2007-02-27 Thread Filip Hanik - Dev Lists

Context level clustering is supported in TC 6

Shiva Kumar H R wrote:
As part of https://issues.apache.org/jira/browse/GERONIMO-2577 I had
opened the following bug in Tomcat:
"Context level clustering on 3 or more nodes fails in Tomcat 5.5.20"
http://issues.apache.org/bugzilla/show_bug.cgi?id=41620

They have closed the bug as "Resolved Invalid" with the following
comments:
--- Additional Comment #7 from Mark Thomas, 2007-02-15 18:54
(http://issues.apache.org/bugzilla/show_bug.cgi?id=41620#c7) ---

It is not possible to configure clustering in context.xml. It must be done at
the Host level (with the jvmRoute defined at the Engine level) within server.xml 
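
In server.xml terms, that arrangement looks roughly like this (the
Cluster class name is the Tomcat 6 default; the host name and jvmRoute
value are illustrative):

<!-- server.xml sketch: Cluster at the Host level, jvmRoute on the Engine -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
  <Host name="localhost" appBase="webapps">
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
  </Host>
</Engine>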
That makes our default clustering article 
http://cwiki.apache.org/GMOxDOC11/clustering-sample-application.html 
invalid. Should we now remove it, saying that Geronimo (Tomcat 
version) supports clustering at the Host/Engine level only?


Dave Colasurdo and I have already created a new article 
illustrating how to set up Geronimo (Tomcat version) clustering at the 
host/engine level: 
http://cwiki.apache.org/GMOxDOC11/clustering-sample-application-tomcat-host-level.html. 
Please suggest if we should retain only this and delete the other 
article on context level clustering?


-- Shiva


  




Re: [RESULT] VOTE J2G Conversion tool acceptance

2007-02-26 Thread Filip Hanik - Dev Lists

dims,
I've updated some info in the IP clearance form; attached is the patch file.
The JIRA has also been updated with the codebase, which reflects the ASF
license in the source headers and the IBM copyright in the COPYRIGHT.txt
file.

Both Covalent and IBM CCLA are also attached to the JIRA item.

Is there anything left from us, or is the rest left up to the Geronimo PMC?

Filip

Davanum Srinivas wrote:

Yes, i can help with the ip-clearance.

-- dims

On 2/19/07, Kevan Miller [EMAIL PROTECTED] wrote:


On Feb 19, 2007, at 9:47 AM, Matt Hogstrom wrote:

 So, we voted this monster in and accepted the code base.  At this
 point, per Geir's note in this thread (noted below):


  Here's the process :
 
  1) Contributor offers code  *Complete*
 
  2) Project decides to accept or reject code.  Formally, this is
 the PMC, but everyone should chime in.   *Complete* per vote below
 
  3) Contributor provides CCLA
 *Complete* CCLA located in JIRA

  Cleans up code to remove copyright statements
 *Outstanding* Need a volunteer here.  I think ideally the
 contributor would accomplish this step and post an updated file in
 the JIRA with this work completed.

  And puts the standard apache file header in place.
 *Outstanding* Really same as above.  Needs to have a committer
 review this so need a new volunteer here.

 
  4) Project accepts code contribution and registers the code
 contribution w/ the incubator with an ip_clearance form:
 http://svn.apache.org/viewvc/incubator/public/trunk/site-author/ip-clearance/
 *Outstanding*  We've already accepted the code through this vote.
 This is the mechanics of getting the code into Apache.  I think the
 committer noted above should do this as well.

Matt, committers can certainly help. However, either an officer (you)
or an ASF member (Dims or Geir) will need to fill out and submit the
IP Clearance form -- http://incubator.apache.org/ip-clearance/ip-clearance-template.html. 
Also, the software grant form needs to be acknowledged by the ASF 
secretary (or another ASF officer).

--kevan











Index: geronimo-2743-ibm-covalent-j2g.xml
===================================================================
--- geronimo-2743-ibm-covalent-j2g.xml  (revision 511944)
+++ geronimo-2743-ibm-covalent-j2g.xml  (working copy)
@@ -35,7 +35,7 @@
        <title>Project info</title>
        <ul>
-          <li>Which PMC will be responsible for the 
code  : Apache Geronimo
+          <li>Which PMC will be responsible for the code  : Apache Geronimo
            <br/>
          </li>
@@ -59,15 +59,28 @@
          </th>
        </tr>
        <tr>
-          <td>Not Applicable
-            <br/>
-          </td>
-          <td>If applicable, make sure that any associated name does not already
-exist and check www.nameprotect.com to be sure that the name is not
-already trademarked for an existing software product.
-</td>
+          <td>January-16-2007</td>
+          <td><p>JIRA <a href="https://issues.apache.org/jira/browse/GERONIMO-2743">GERONIMO-2743</a> created.<br/>
+             <a href="https://issues.apache.org/jira/secure/attachment/12349176/Covalent-J2G-Tool.pdf">Covalent CCLA</a> attached<br/>
+             <a href="https://issues.apache.org/jira/secure/attachment/12349047/CCLA.tif">IBM CCLA</a> attached<br/>
+             <a href="https://issues.apache.org/jira/secure/attachment/12349046/J2G-Migration-v2_src_1.0.0.zip">Original codebase</a> attached<br/></p>
+          </td>
+        </tr>
+        <tr>
+          <td>February-21-2007</td>
+          <td><p>Copyright adjusted and <a href="https://issues.apache.org/jira/secure/attachment/12351724/J2G-Migration_2.0.0_src_20070220-1501.zip">codebase</a> cleaned up, ready for import</p>
+          </td>
        </tr>
+        <tr>
+          <td>February-21-2007</td>
+          <td><p>
+            MD5 or SHA1 sum for donated software:<br/>
+            3cfbefd2424c3556fdcbf162a1129399 *J2G-Migration_2.0.0_src_20070220-1501.zip<br/>
+            e49e61df710dae15025b0126e4f8e672 *J2G-Migration-v2_src_1.0.0.zip<br/>
+            (<em>Note: versioned software used to calculate sum in parentheses</em>).
+          </p></td>
+        </tr>
      </table>
      <section id="Copyright">
        <title>Copyright</title>
@@ -80,7 +93,7 @@
          </th>
        </tr>
        <tr>
-          <td>Not Yet Done.
+          <td>Not yet done
          </td>
          <td>Check and make sure that the papers that transfer rights to the ASF
been received. It is only necessary to transfer rights for the
@@ -107,13 

[RESULT] VOTE J2G Conversion tool acceptance

2007-02-02 Thread Filip Hanik - Dev Lists

Here is the result:

+1;
Jeff Genender
Paul McMahan
Kevan Miller
Prasad Kashyap
Hernan Cunico
Dain Sundstrom
David Jencks
Cris Cardona
Aaron Mulder
David Blevins
Anita Kulshreshtha
Gianny Damour
Rick McGuire
Matt Hogstrom
Sachin Patel
Vamsi Reddy

No 0's and no -1's.
We will start filling out the IP clearance form, and attach it to the 
JIRA item along with the updated codebase.

We will use this template
http://incubator.apache.org/ip-clearance/ip-clearance-template.html

Once this has been done, we will bring the JIRA to the attention of the 
G committers for review.


Filip


Filip Hanik - Dev Lists wrote:
This is the formal vote to accept the J2G codebase and bring it 
through incubation (see 
http://marc.theaimsgroup.com/?l=geronimo-dev&m=116906208022256&w=2).

The final destination is to be part of the geronimo devtool subproject
(see http://marc.theaimsgroup.com/?l=geronimo-dev&m=116958894929809&w=2).

The code donation is located at:
https://issues.apache.org/jira/browse/GERONIMO-2743

[ ] +1 let's bring it in, this is great
[ ]  0 do whatever you want, not my cup of tea
[ ] -1 keep it out of our sight, I have a good reason

Optional
[ ] I'm willing to mentor this project while it is in incubation
[ ] I'm willing to champion the effort while it is in incubation

Committers' votes are binding, all other votes will be duly noted

Best regards
Filip






[VOTE] J2G Conversion tool acceptance

2007-01-31 Thread Filip Hanik - Dev Lists
This is the formal vote to accept the J2G codebase and bring it through 
incubation (see 
http://marc.theaimsgroup.com/?l=geronimo-dev&m=116906208022256&w=2).

The final destination is to be part of the geronimo devtool subproject
(see http://marc.theaimsgroup.com/?l=geronimo-dev&m=116958894929809&w=2).

The code donation is located at:
https://issues.apache.org/jira/browse/GERONIMO-2743

[ ] +1 let's bring it in, this is great
[ ]  0 do whatever you want, not my cup of tea
[ ] -1 keep it out of our sight, I have a good reason

Optional
[ ] I'm willing to mentor this project while it is in incubation
[ ] I'm willing to champion the effort while it is in incubation

Committers' votes are binding, all other votes will be duly noted

Best regards
Filip


Re: [VOTE] J2G Conversion tool acceptance

2007-01-31 Thread Filip Hanik - Dev Lists

Kevan Miller wrote:


On Jan 31, 2007, at 12:30 PM, Filip Hanik - Dev Lists wrote:


Kevan Miller wrote:


On Jan 31, 2007, at 10:10 AM, Filip Hanik - Dev Lists wrote:

This is the formal vote to accept the J2G codebase and bring it 
through incubation (see 
http://marc.theaimsgroup.com/?l=geronimo-dev&m=116906208022256&w=2).

The final destination is to be part of the geronimo devtool subproject
(see http://marc.theaimsgroup.com/?l=geronimo-dev&m=116958894929809&w=2).

The code donation is located at:
https://issues.apache.org/jira/browse/GERONIMO-2743

[ ] +1 let's bring it in, this is great
[ ]  0 do whatever you want, not my cup of tea
[ ] -1 keep it out of our sight, I have a good reason

Optional
[ ] I'm willing to mentor this project while it is in incubation
[ ] I'm willing to champion the effort while it is in incubation


As I mentioned on the previous thread, I'm in favor of accepting the 
donation. However, I think the IBM and Axmor copyrights should be removed from 
the src code donation (and replaced with Apache src license 
headers). The donation would be noted in a NOTICE. I'm withholding my 
vote until there's a response on this subject.
Yes, that will be done; as soon as it's accepted, the copyright notices 
will be corrected when first checked into the incubator SVN.


???

Filip,
OK. Now I'm confused.

Do you want Geronimo to accept a code donation? Or do you want to 
start a new project in incubator? I thought it was the former (and I'm 
pretty sure you do, too).


The process IIUC is roughly

1. Geronimo votes to accept the donation
2. The Geronimo project fills out some paperwork (update an html page 
and fill out the IP Clearance form -- 
http://incubator.apache.org/ip-clearance/ip-clearance-template.html)
3. The Incubator PMC is notified of the donation and given 48 hours to 
raise any objections.


That's it. There is no incubator SVN. If you want us to vote to accept 
the donation prior to posting a new version of the source to the JIRA, 
then I'd be ok with that. IMO that's all that the process requires.
That's all we want: vote on the donation, then everything else follows. 
No point in us changing copyrights and doing all the other stuff until the 
vote is there.
There is an incubator SVN; if we don't need that for this donation, then 
even better.


Filip


--kevan










Re: [VOTE] J2G Conversion tool acceptance

2007-01-31 Thread Filip Hanik - Dev Lists

Dain Sundstrom wrote:

+1 to accept the donation

I don't think we need to incubate the code as FWIU it is a pure 
donation, and therefore goes through the much simpler IP clearance path.

That sounds great, makes everything smoother. We'll work on getting that 
taken care of and will update anything that is needed on the JIRA item.


Filip


-dain

On Jan 31, 2007, at 7:10 AM, Filip Hanik - Dev Lists wrote:

This is the formal vote to accept the J2G codebase and bring it 
through incubation (see 
http://marc.theaimsgroup.com/?l=geronimo-dev&m=116906208022256&w=2).

The final destination is to be part of the geronimo devtool subproject
(see http://marc.theaimsgroup.com/?l=geronimo-dev&m=116958894929809&w=2).

The code donation is located at:
https://issues.apache.org/jira/browse/GERONIMO-2743

[ ] +1 let's bring it in, this is great
[ ]  0 do whatever you want, not my cup of tea
[ ] -1 keep it out of our sight, I have a good reason

Optional
[ ] I'm willing to mentor this project while it is in incubation
[ ] I'm willing to champion the effort while it is in incubation

Committers' votes are binding, all other votes will be duly noted

Best regards
Filip











Re: [Code donation] J2G Conversion tool

2007-01-23 Thread Filip Hanik - Dev Lists

So far we have received a few positive comments, no negatives and no vetoes.
So are we OK with this donation and ready to move forward, possibly into 
incubation?


Filip

Kevan Miller wrote:


On Jan 17, 2007, at 10:33 AM, Alex Karasulu wrote:

+1 on the CCLA's with a patch submission.  If it's a considerable 
piece of code perhaps a software grant may be in order.


There is no reason why something this small should incubate.


It seems to match the incubator guidelines for code donations pretty 
well. You may disagree with these guidelines, but the incubation 
process for code donations (which are being accepted by an existing 
project) seems lightweight enough... I don't see why we shouldn't 
follow it... It's 2 days of waiting + some extra paper work collecting 
info that we should probably have anyway...


--kevan











Re: [Code donation] J2G Conversion tool

2007-01-18 Thread Filip Hanik - Dev Lists

A small correction:
there is not a 1.1 in the making; however, there are numerous bug fixes 
that will be applied to the tool once it finds a home, either in the 
incubator or in G svn.
Those bug fixes will be applied already carrying an ASF license and will 
not have to be considered a donation.
Covalent's CCLA has been uploaded to JIRA 
https://issues.apache.org/jira/browse/GERONIMO-2743


Filip

Filip Hanik - Dev Lists wrote:


IBM, in a joint effort with Covalent, has developed a JBoss 
to Geronimo conversion tool. This tool is used when converting 
applications from JBoss to Geronimo, and automatically converts the 
configuration file from one app server to the other.


We feel that this piece of software adds value to Geronimo and users 
adopting Geronimo and would like to see this effort continue as part 
of the Geronimo project, a plugin or a sub project of Geronimo.


The initial donation is for version 1.0 of this tool, and while a 1.1 
is in the making to improve 1.0, 1.1 is not yet complete but will be 
donated as soon as the community feels that this tool belongs at the 
ASF, more specifically within the Geronimo project.


If you'd think this tool is valuable, but believe it should go through 
incubation, we would hope that a Geronimo committer would step up and 
champion this effort.


The tool, including IBM's CCLA, can be found at 
http://people.apache.org/~fhanik/j2g/j2g.html (Covalent will file the 
CCLA upon request)


thanks for your time,
Filip







Re: [Code donation] J2G Conversion tool

2007-01-17 Thread Filip Hanik - Dev Lists

Dain Sundstrom wrote:

On Jan 16, 2007, at 12:19 PM, Filip Hanik - Dev Lists wrote:

IBM, in a joint effort with Covalent, has developed a JBoss 
to Geronimo conversion tool. This tool is used when converting 
applications from JBoss to Geronimo, and automatically converts the 
configuration file from one app server to the other.


We feel that this piece of software adds value to Geronimo and users 
adopting Geronimo and would like to see this effort continue as part 
of the Geronimo project, a plugin or a sub project of Geronimo.


The initial donation is for version 1.0 of this tool, and while a 1.1 
is in the making to improve 1.0, 1.1 is not yet complete but will be 
donated as soon as the community feels that this tool belongs at the 
ASF, more specifically within the Geronimo project.


If you'd think this tool is valuable, but believe it should go 
through incubation, we would hope that a Geronimo committer would 
step up and champion this effort.


The tool, including IBM's CCLA, can be found at 
http://people.apache.org/~fhanik/j2g/j2g.html (Covalent will file the 
CCLA upon request)


I suggest you file one regardless.  Apache likes to see CCLAs for any 
non-trivial donation.


-dain

will do as soon as JIRA comes back up.
Filip


[Code donation] J2G Conversion tool

2007-01-16 Thread Filip Hanik - Dev Lists


IBM, in a joint effort with Covalent, has developed a JBoss 
to Geronimo conversion tool. This tool is used when converting 
applications from JBoss to Geronimo, and automatically converts the 
configuration file from one app server to the other.


We feel that this piece of software adds value to Geronimo and users 
adopting Geronimo and would like to see this effort continue as part of 
the Geronimo project, a plugin or a sub project of Geronimo.


The initial donation is for version 1.0 of this tool, and while a 1.1 is 
in the making to improve 1.0, 1.1 is not yet complete but will be 
donated as soon as the community feels that this tool belongs at the 
ASF, more specifically within the Geronimo project.


If you'd think this tool is valuable, but believe it should go through 
incubation, we would hope that a Geronimo committer would step up and 
champion this effort.


The tool, including IBM's CCLA, can be found at 
http://people.apache.org/~fhanik/j2g/j2g.html (Covalent will file the 
CCLA upon request)


thanks for your time,
Filip



Re: [Code donation] J2G Conversion tool

2007-01-16 Thread Filip Hanik - Dev Lists

Jacek Laskowski wrote:

On 1/16/07, Filip Hanik - Dev Lists [EMAIL PROTECTED] wrote:


IBM, in a joint effort with Covalent, has developed a JBoss
to Geronimo conversion tool. This tool is used when converting
applications from JBoss to Geronimo, and automatically converts the
configuration file from one app server to the other.


Wow! That's great! Could you raise a JIRA issue with the files
attached (and the ASF donation field checked)?

Jacek


JIRA Issue: 2743
https://issues.apache.org/jira/browse/GERONIMO-2743

Please note: I can't find the ASF Donation field, although I checked 
the inclusion GRANT radio button on the attachment page


Filip



Re: Jetty 6- Clustering - How it works

2007-01-10 Thread Filip Hanik - Dev Lists

really awesome work Gianny, I know you've worked very hard on it.

Filip

David Jencks wrote:

Wow this is great!!

Are there instructions somewhere on how to set up a demo system?  
Having an integration test would be even better :-)


thanks
david jencks

On Jan 6, 2007, at 7:30 PM, Gianny Damour wrote:


Hi,

I think that support for clustered Web-applications with Jetty is now 
working.



Here is a description of how this works; note that most of the 
described behavior is WADI specific.


Group Communication
Group communications are performed by Tribes, which is the Tomcat 6 
group communication engine. I know very little of Tribes; however, I 
am pretty sure that Filip Hanik can give us an in-depth description, 
if requested. At a very high level, Tribes provides membership 
discovery and failure detection. It also provides basic message 
exchange communication primitives, that WADI builds upon to provide 
additional message exchange operations (e.g. request-reply).
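
As a rough illustration of those primitives, here is a minimal, hedged sketch 
against the Tomcat 6 Tribes API (package org.apache.catalina.tribes); it shows 
membership discovery plus a simple broadcast, and is a sketch rather than 
anything WADI-specific:

import java.io.Serializable;
import org.apache.catalina.tribes.Channel;
import org.apache.catalina.tribes.ChannelListener;
import org.apache.catalina.tribes.Member;
import org.apache.catalina.tribes.MembershipListener;
import org.apache.catalina.tribes.group.GroupChannel;

public class TribesHello implements MembershipListener, ChannelListener {

    public static void main(String[] args) throws Exception {
        GroupChannel channel = new GroupChannel();
        TribesHello hello = new TribesHello();
        channel.addMembershipListener(hello); // membership discovery / failure detection
        channel.addChannelListener(hello);    // message exchange primitives
        channel.start(Channel.DEFAULT);       // starts membership, sender and receiver

        Thread.sleep(5000); // give multicast membership a moment to form

        // broadcast to every member this node currently knows about
        Member[] members = channel.getMembers();
        if (members.length > 0) {
            channel.send(members, "hello cluster", Channel.SEND_OPTIONS_DEFAULT);
        }
    }

    public void memberAdded(Member member) { System.out.println("added: " + member); }
    public void memberDisappeared(Member member) { System.out.println("gone: " + member); }

    public boolean accept(Serializable msg, Member sender) { return true; }
    public void messageReceived(Serializable msg, Member sender) {
        System.out.println("received: " + msg + " from " + sender);
    }
}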


Logical group communication engines are layered on top of the above 
(physical) group communication engine. A logical group communication 
engine, a ServiceSpace in WADI's terminology, provides the same 
features as a physical group communication engine and allows the 
definition of sub-groups. This means that at the physical level you 
could have three nodes interconnected, while at the logical level only 
two of them appear as existing.



Clustered Web-Application Discovery
A clustered Web-application is placed into a logical group 
communication engine, which is uniquely identified by a URI. This URI 
is the Artifact id of the Web-application configuration. When this 
Web-application starts, the logical group communication engine starts 
and joins the logical sub-group identified by its unique id. 
Conversely, when the application stops, the logical group 
communication engine leaves the logical sub-group.



Partitioned State
WADI implements a partitioned session state topology. In a cluster 
all the session states are distributed across the cluster nodes and 
only one node holds the state of a given session. This design choice 
was made to improve scalability with respect to the size of data to 
be managed by a single node. Session locations, information required 
when a node requests a session that it does not hold, are also 
managed by a single node. When a node fails, the session states and 
session locations managed by this node are lost and WADI is able to 
recreate them. Session states are lazily recreated from replicas. 
Session locations are recreated by querying the cluster and asking 
each member which sessions they are currently holding.



Session Creation
When an inbound request wants to create a new HttpSession, a WADI 
session is created. This session is hosted by the node receiving the 
inbound request. An HttpSession backed under the cover by the WADI 
session is then created and returned. Under the cover, WADI ensures 
that the session has an identifier that is unique cluster-wide.



Session Migration
When an inbound request wants to access an HttpSession and this 
session is not hosted by the node, the node hosting the requested 
session migrates it to the node receiving the request. Under the 
cover, WADI ensures correct handling of concurrent session migration 
requests via a locking approach and maintains its internal book 
keeping of session locations following migration events.



Session Replication
When a request completes, the WADI session used under the cover of the 
HttpSession is notified. The WADI session is then replicated 
synchronously to one or multiple nodes. The selection of the back-up 
nodes for each session is customizable via a plugin strategy.


When a node fails, replicas are re-organized based on the new list of 
existing members.



Fail-Over
When an inbound request wants to access an HttpSession, which was 
hosted by a node which has died, the cluster is queried for a replica 
of the requested session and an HttpSession is recreated if possible.



Session Evacuation
When a Web-application stops, all the sessions that it holds are 
evacuated to the other nodes hosting the same clustered Web-application.



It will take me 1 to 2 weeks to test specific error scenarios and 
ensure correctness; meanwhile, if anyone wants to know more of some 
specific areas, then please feel free to ask.


Thanks,
Gianny











Re: JSTL dependencies on JSP/EL

2006-10-26 Thread Filip Hanik - Dev Lists
Tomcat 6 is just around the corner, but is pretty easy to build from 
SVN, in four steps:

1. svn co https://svn.apache.org/repos/asf/tomcat/tc6.0.x/trunk/ trunk
2. cd trunk
3. ant download
4. ant

It generates all the libraries that you'll need, more answers inline

Joe Bohn wrote:


JSTL 1.2 is dependent upon a JSP 2.1 web container (this is no great 
surprise).  However, more specifically the Glassfish JSTL 
implementation is dependent upon a JSP 2.1 implementation and a 2.1 EL 
implementation as well.


So, I'm wondering where we will pick up the JSP support for the 
various containers.

it should come with the container itself. Tomcat 6 will ship with it.
We are also hoping to publish the individual JARs to the ASF maven repo, 
so that they can be utilized independently.


I've seen it mentioned someplace that Jetty 6 would pick up the 
Glassfish JSP 2.1 implementation.  Is this true?  I just downloaded 
Jetty 6.0.1 and it seems that the JSP2.1 jar in lib does include 
com.sun classes.  However, if look under modules/jsp-2.1 there are 
apache jasper items there (which I presume are from jakarta).  
Jan/Greg ... could one of you clarify this?  If it is Glassfish then I 
presume there should be no issue with using the Glassfish JSTL library 
with Jetty6 ... do you agree?


I'm having a more difficult time finding information for Tomcat 6.x. 
There's no download yet and the documentation for tomcat 6.x seems 
like it's still a 5.x version of the doc with just 6.x headers.  It 
still references servlet 2.4 and JSP 2.0 (via Jakarta) rather than 
servlet 2.5 and JSP 2.1.  Does anybody (Jeff?) know if Tomcat is 
planning to pick up the Glassfish JSP 2.1 impl as well or is there 
going to be a new Jakarta Jasper implementation?

Tomcat doesn't pick it up from Glassfish, it sits in the Tomcat source tree.

Filip


Re: gcache imlementation ideas[long]

2006-10-11 Thread Filip Hanik - Dev Lists


I addressed the discussion about which transport we use a long time 
ago, by creating an agnostic API to plug into.

http://marc.theaimsgroup.com/?l=geronimo-dev&m=115281186718399&w=2
http://people.apache.org/~fhanik/geronimo-cluster-api.zip

This way, we can continue the pluggability of G, and not push any 
specific protocols.

but writing a custom one just for G doesn't sound like a sound solution
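
To make the shape of such an agnostic API concrete, here is a purely 
hypothetical sketch (all names invented for illustration; the actual proposal 
is in the zip linked above):

// Hypothetical transport-agnostic cluster membership API (illustrative only).
public interface Member {
    String getName();
}

public interface MembershipListener {
    void memberAdded(Member member);        // a VM joined the group
    void memberDisappeared(Member member);  // a VM left, or crashed
}

public interface MembershipService {
    void start() throws Exception;  // join the group over whatever transport is plugged in
    void stop();                    // leave the group
    Member getLocalMember();
    Member[] getMembers();          // the current cluster view
    void addMembershipListener(MembershipListener listener);
}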

in addition to ehcache, I'd like to propose that we take a look at ASF's 
JCS(Java Cache System) which sits under the Jakarta umbrella.

http://jakarta.apache.org/jcs/index.html

and a performance report http://jakarta.apache.org/jcs/JCSvsEHCache.html 
(could be outdated)


Sorry about splitting up the gcache discussion; actually, it was already 
split when we started talking about the transport a while back in this 
thread.
However, my take on it is that we should use code already written. While 
this is all really cool stuff to work on, creating a distributed cache 
from scratch is hard and takes a very long time. I would think the main 
goal is to get to JEE 5 right now.



Filip


Jason Dillon wrote:

On Sep 14, 2006, at 7:56 AM, Jeff Genender wrote:

The JMS provider would be a pluggable comm strategy.  For performance
reasons, I want to start with TCP communication.


Why do you think that AMQ will not perform well?



I definitely want to
have a JMS strategy...maybe next.  But initially I don't want any
dependencies on other servers or brokers.

With that said, after looking at openwire, the comm marshaller for
ActiveMQ, there is a lot to leverage there, which will prevent a rewrite of
the comm layer.  So, there will be some use of that code base initially.


IMO, AMQ already provides a rich clustering environment, with 
failover, master-slave, dynamic discovery, firewall-happy transports, 
monitoring and a heck of a lot more.


Seems like it would be a waste to go and re-implement all of that.  It 
also seems that if you wanted to get something up sooner, it 
would be much easier to design an AMQ strategy first, which means that 
you only have to worry about the message passing to sync up and 
invalidate state, rather than all of the details of who is in what 
cluster, failing over, blah, blah...


And, I guess that if after that was implemented you still thought it 
was not fast enough, then it might be better to get AMQ fixed to 
perform better, though I don't think that the performance using AMQ 
will differ all that much from a custom socket protocol to pass messages.


I am a huge fan of AMQ and would really like to see G exploit its 
network communications facilities as much as possible.


IMO, this is the best way to get the most features for clustering up 
and running sooner, with less code to maintain.


--jason








Re: Cluster API proposal?

2006-07-14 Thread Filip Hanik - Dev Lists

Cool, thanks for posting this.
While I do believe everything in this API is very useful, I see it as an 
extension to the one I created.
My API is only about the cluster and its meta data, while the API below 
is very session oriented.


In a cluster without state replication, most of the methods below would 
return null or throw UnsupportedOperationException, which would make 
it harder to implement, and less useful.
The API below is essentially meta data about session state. I personally 
consider this an extension to the Cluster API, or a higher level 
component, and bada bim, we are back at SessionAPI :), one of our favorite topics.


Does this make sense? I was looking for the lowest common denominator 
for what a cluster is: essentially, nothing but a group of VMs.
So what I am trying to do is ensure the group won't be forced to expose session 
state, because if there is no state replication, you couldn't implement 
that API.


Because I haven't thought much about the session API, as I do consider that 
a higher level component, I haven't yet thought of a good way, if there 
is one, for it to sit on top of a cluster API. But I do believe 
they shouldn't be morphed together; instead of the SessionAPI having to know 
about nodes and clusters, it would get that from the cluster API. 
(Off topic: although I personally believe the session API should only know 
about sessions and nothing else, that is why I am staying out of that 
topic :))


Filip


Greg Wilkins wrote:

This is my idea of how we could morph the currently proposed session APIs
into a cluster API.

I have created a spot for Cluster meta data - but I have not filled it out much.


The key difference is that the state Map is now indexed by session ID and 
context ID.
This allows the state for different contexts within the same session to be 
on different nodes (this is a real requirement) and also means that locking 
is at context rather than meta session level.  Note that some implementations 
may not fully support this and may just do sessionId+contextId behind the 
scenes and colocate all context states for the same session (and move them 
as one).

I have also added an async aspect to the API for potentially long operations
such as moving state about the place - again this can be optionally supported.

Also I support the idea of multiple Nodes per server (really useful for testing
and heterogeneous clusters).



// The top level Cluster API - this was the Locator... but let's call a spade a spade.

interface Cluster
{
 // methods to get/set meta data about the cluster
 // these signatures here are just a guess... but you get the idea.
 int getMaxNodes();
 Set<Node> getKnownNodes();
 void setKnownNodes(Set<Node> nodes);
 Node getLocalNode();

 // Access sessions in cluster.
 MetaSession getMetaSession(String clientID);

 MetaSession createMetaSession(String sessionId);
}


// Node API
// was Server - but may have multiple Nodes per server
interface Node
{
String getName();
String[] getAddresses(String protocol);
void setAddresses(String string, String[] strings);
boolean isLocalServer();

boolean isActive();

int getPort(String protocol); // one way to handle the multi nodes per server
int getPortOffset();          // or this one (add to standard port)
}

// Meta Session - was SessionLocation
interface MetaSession
{
String getSessionId();
void invalidate();

void addEventListener(MetaSessionListener listener);
void removeEventListener(MetaSessionListener listener);

// State API has map per context ID , where a context
// ID might be web:/context or ejb: or random
boolean isStateLocal(String contextId);


Map getState(String contextId);  // implies a move local!
void getStateAsync(Object key, String contextId);  // async version 


Map createState(String contextId);
void releaseState(String contextId); // don't lock whole meta session!
void invalidate(String contextId);

// Locaton / Policy API.
Node getNode(String contextId); 
Node getExecutionNode(String contextId); 
void getExecutionNodeAsync(Object key, String contextId);



// Don't know if these are too HTTP specific... but we need them 
void setPassivationTimeout(long ms, String contextId);

void setInvalidationTimeout(long ms, String contextId);
}


interface MetaSessionListener
{
// callbacks to allow session manager to inspect contents for 
// tier specific handling (eg servlet listeners etc.)

void activateState(String sessionId, String contextId, Map state);
void passivateState(String sessionId, String contextId, Map state);
void invalidateState(String sessionId, String contextId, Map state);

// callbacks for async operations
void gotState(Object key, String sessionId, String contextId, Map state);
void executionNode(Object key, String sessionId, String contextId, Node location);

}
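
To make the intended flow concrete, here is a hypothetical caller of the 
sketched API above (names are taken from the interfaces above, java.util.Map 
is assumed, and error handling is omitted):

void handleRequest(Cluster cluster, String clientId, String contextId)
{
    MetaSession meta = cluster.getMetaSession(clientId);
    if (meta == null)
        meta = cluster.createMetaSession(clientId);

    // getState() implies a move-local: if another node currently holds this
    // context's state, it is migrated here before the map is returned.
    Map state = meta.getState(contextId);
    state.put("lastAccess", new Long(System.currentTimeMillis()));

    // release only this context's state - don't lock the whole meta session
    meta.releaseState(contextId);
}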


  




Re: Session API in 1.1 trunk?

2006-05-22 Thread Filip Hanik - Dev Lists

no.
You should wait at least a couple of days to let people review it, then 
vote, then summarize the votes :)

Otherwise you won't give people a chance.

Filip


Jeff Genender wrote:

That's 3! OK... as soon as the new trunk is cut, I will merge in the
session API.

Thanks!

Jeff

Alan D. Cabrera wrote:
  

+1 new trunk
-1 current 1.1 branch - I think we're trying to get this out but I'm not
intransigent on this


Regards,
Alan


Jeff Genender wrote:


The session API is in 1.2 and it was put togther by James and Dain with
input from many others on a pluggable API framework for clustering.  It
allows different clustering implementations to be used with Geronimo.

It can be found here:

http://svn.apache.org/repos/asf/geronimo/trunk/modules/session

Jeff

Alan D. Cabrera wrote:
 
  

Jeff Genender wrote:
   


We have an initial swipe at some clustering to put into the sandbox,
but
will have a need for the session api ;-)

Anyone have issue with putting the session API in 1.1 (the new trunk
version that is)? (Need 3 +1s)

Jeff

  

What is the session API?


Regards,
Alan






  




Re: Was: Clustering: Monitoring... - Now: Clustering: OpenEJB...

2006-05-04 Thread Filip Hanik - Dev Lists

Jules Gosnell wrote:

David Blevins wrote:



On May 4, 2006, at 12:57 AM, Jules Gosnell wrote:





Sort of.  Both your explanations involve smartening the java clients  
on the other end of WS or CORBA to play nice.


??

smart java stubs for RMI over OpenEJB-protocol (what is it called?) or 
IIOP.


for WS, the load-balancer will do it.

The goal of those  protocols is to interop in a language agnostic 
fashion.  WS are all  stateless for EJB, so there is nothing to 
cluster anyway.


stateless calls are still clustered - the load-balancing and failover 
considerations still exist - you just do not require session affinity 
(stickiness). If you are talking about server-side requirements, then 
I agree.


But for  IIOP, would we simply not offer clustering to people using 
CORBA to  interop with clients in other languages or on other platforms?


to tell the truth, these cases have escaped my radar - My CORBA 
knowledge is pretty thin and I have only really considered it in a 
java/java environment - I am sure that Kresten would be much better 
positioned to answer this question... I will CC this to him, in case 
he would like to pick it up...
CORBA is not far from RMI, and the CORBA implementation that you use 
creates its own stubs, and those stubs can do the same stuff
as smart RMI stubs. I'm sure that CORBA could even do dynamic proxies in 
some sort of sense; they weren't able to when I used it a long time ago, 
but if the technology has kept up, then yes, you should be able to build 
significant logic into the clients.


Filip


Re: Clustering: Monitoring...

2006-05-03 Thread Filip Hanik - Dev Lists

It's a neat solution, I like it.

One would still need to build the aggregate view, so that summaries 
etc. can be reported on;
otherwise you only view one server at a time, which can be achieved by 
just connecting to the server itself.


By aggregate I mean the sum across the nodes in the cluster; for example, 
the number of active sessions in the cluster would be an aggregate view.
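
As a minimal sketch of that kind of aggregate view, one could poll each node's 
JMX connector and sum a single attribute (the node URLs and the Manager 
ObjectName below are assumptions for illustration, not a fixed part of any 
product):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ClusterSessionAggregator {
    public static void main(String[] args) throws Exception {
        String[] nodes = { // hypothetical per-node JMX URLs
            "service:jmx:rmi:///jndi/rmi://node1:9999/jmxrmi",
            "service:jmx:rmi:///jndi/rmi://node2:9999/jmxrmi"
        };
        // assumed name of the web app's session manager MBean
        ObjectName manager = new ObjectName(
            "Catalina:type=Manager,path=/app,host=localhost");
        int total = 0;
        for (int i = 0; i < nodes.length; i++) {
            JMXConnector c = JMXConnectorFactory.connect(new JMXServiceURL(nodes[i]));
            try {
                MBeanServerConnection mbs = c.getMBeanServerConnection();
                total += ((Integer) mbs.getAttribute(manager, "activeSessions")).intValue();
            } finally {
                c.close();
            }
        }
        System.out.println("Cluster-wide active sessions: " + total);
    }
}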


Filip


James Strachan wrote:
BTW I'm not necessarily saying that the Lingo JMX connector is *the* 
solution, just *a* solution - but I think the general approach of 
using distributed JMX connectors seems like a good, standards based 
approach to monitoring and controlling clusters of POJOs. Then folks 
can choose the JMX connector that suits their needs.  Already today 
there are JMX connectors for HTTP, RMI and JMS and I'm sure folks 
could implement it on top of other things too like Jabber, 
WS-Notification or JBI etc.


On 5/3/06, Jules Gosnell [EMAIL PROTECTED] wrote:


James Strachan wrote:

 Distributed JMX sounds like a simple way to monitor a cluster of any
 Java stuff. Details on using clustered JMX over JMS here...

 http://lingo.codehaus.org/JMX+over+JMS

 which lets you monitor the cluster in a single MBeanServer

cool - sounds like exactly the sort of thing that I am looking for,

thanks,

Jules


 On 5/3/06, Jules Gosnell [EMAIL PROTECTED] wrote:

 I'd like to kick off a thread about the monitoring of clustered
 deployments...

 There is a section in the 1,000ft Clustering Overview

(http://opensource.atlassian.com/confluence/oss/display/GERONIMO/Clustering),

 but little content, due to the fact that there has been little
 discussion about this subject on the list...

 Obviously we can use standard tools etc to monitor individual
nodes in a
 cluster - but, if you want to aggregate all the stats together,
for a
 clusterwide view of what is going on in your deployment, life
becomes a
 little more complicated

 I have a few ideas, but I haven't done much reading around this
area, so
 it may be that there are already specs and standard ways of
achieving
 all of this. if there are and you know of them, please shout.
If not,
 lets throw it open to discussion

 thanks for your time,


 Jules

 --
 Open Source is a self-assembling organism. You dangle a piece of
 string into a super-saturated solution and a whole operating-system
 crystallises out around it.

 /**
  * Jules Gosnell
  * Partner
  * Core Developers Network (Europe)
  *
  *www.coredevelopers.net
  *
  * Open Source Training  Support.
  **/




 --

 James
 ---
 http://radio.weblogs.com/0112098/



--
Open Source is a self-assembling organism. You dangle a piece of
string into a super-saturated solution and a whole operating-system
crystallises out around it.

/**
* Jules Gosnell
* Partner
* Core Developers Network (Europe)
*
*www.coredevelopers.net
*
* Open Source Training  Support.
**/




--

James
---
http://radio.weblogs.com/0112098/ 




Re: Tomcat version in G1.1 for clustering

2006-04-19 Thread Filip Hanik - Dev Lists
5.5.15, .16 and .17 have some new features, like the JvmRouteBinderValve, which 
will rewrite the session id for a new node when a node crashes.
This is an important feature. As for the coordination error that you ran into, I 
am not yet sure why it is happening, hence I can't comment on it, and I 
don't know if it is a result of a code change or just a one-time fluke.


I would make the same recommendation, to use 5.5.9 for 1.1 since 1.1 is 
right around the corner.


And I will extend/commit my help to get 1.2/5.5.17 in a good shape, 
including documentation and testing for the clustering piece.


Filip

Dave Colasurdo wrote:



Jeff Genender wrote:

I would vote for not moving to 5.5.16 for 1.1.  IMHO, it's too close.  We
did some preliminary testing for 5.5.15 and it seems OK... and we will
know in the next several days if it's good to bake into 1.1.  


Filip,

How significant are the 5.5.15 bugs that you alluded to?  Or is this 
just a general request to use the latest level...


Are the problems unique to clustering?

Do you suspect the coordination error to be a code bug in 5.5.15? 
AFAICT, my setup is identical to 5.5.9..


Would like your input on 5.5.9 -vs- 5.5.15..

Thanks
-Dave-

5.5.9 is fine to stick with since it's pretty stable and it just 
works, and in the
event 5.5.15 causes any discomfort during testing, we are comfortable
that we can fall back on it.

IIRC, the 5.5.16 issues had to do with cross context stuff that David
Jencks and I worked pretty diligently on to fix.  So I would probably be
apt to push a -1 on 5.5.16 for 1.1.

Jeff

Dave Colasurdo wrote:
Hmmm..  What level of Tomcat does the community want to include in 
G1.1?


Background...

Tomcat 5.5.9 - current working level in G1.0 and G1.1.. Clustering
works.. TCK is testing with this level..

Tomcat 5.5.10-5.5.14 - clustering is broken

Tomcat 5.5.15 - Clustering seems to work somewhat. We've encountered at
least one bug. Filip (tomcat clustering developer) mentioned there are
still some significant bugs in this level and advises us to move to 
5.5.16.


Tomcat 5.5.16 - Jeff has mentioned that he and David J had previously
discovered some issues that required significant rework that he didn't
want to tackle until G1.2..

So...  Do we stick with 5.5.9 for G1.1 and move to 5.5.16+ in G1.2?

Thanks
-Dave-



Filip Hanik - Dev Lists wrote:

looks like you are right, there were some other fixes in .16 that
were important, so it may be better to use that one.
seems like you got a coordination error, i.e., node1 requested state
from node2, but node2 didn't know about node1, and that caused the
stack trace below.

Filip


Dave Colasurdo wrote:

Thanks Filip!!

http://mail-archives.apache.org/mod_mbox/tomcat-users/200512.mbox/[EMAIL PROTECTED] 




seems to indicate that it is fixed in 5.5.15..

Is it fixed in 5.5.15 or 5.5.16?

Thanks
-Dave-

Filip Hanik - Dev Lists wrote:

Clustering was broken in Tomcat 5.5.10-5.5.15 due to a protocol
change; this was corrected in 5.5.16.
I would run the tests against that version, and then I can help you
out with any problems you run into.

Filip


Dave Colasurdo wrote:

Jeff,

Upgraded tomcat, tomcat_ajp and jasper to 5.5.15 and ran the
clustering tests.

The *good* news...
 Load balancing, sticky session, session replication and session
failover seem to work using the same deployment plan that was
created for G1.1 w/ TC 5.5.9..

The *bad* news...

*Problem1*
When testing sticky sessions, my browser locks onto a particular
cluster member (e.g. node1) due to the nodeid in the cookie. If I
kill node1, the session fails over into node2 and all my session
data is still present. This is good.
The nodeid in the cookie continues to say node1 (this is also true
w/ TC 5.5.9 and mod-jk)..

Now, if I restart node1 and wait a minute or so and then hit my
browser, I am directed to node1 and all my session data is
gone. :(

BTW, an earlier run using TC 5.5.9 also resulted in being directed
back to node1 though the httpsession is retained.  I think this may
be related to problems replicating data whenever nodes are
added..   Which leads me to ...


*Problem2*
Whenever a cluster member is added to the cluster, the other nodes
receive the following exception.  This occurs both during the
initial addition of a node and after a stopped node is restarted...

 (Though later, when I access an httpsession (via a servlet
 request), it does result in session replication between members.)

 15:30:19,352 INFO  [SimpleTcpCluster] Replication member added: org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.14.160:4001,catalina,192.168.14.160,4001, alive=0]
 15:30:19,692 ERROR [SimpleTcpCluster] Unable to send message through cluster sender.
 java.io.IOException: Sender not available. Make sure sender information is available to the ReplicationTransmitter.
         at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessageData(ReplicationTransmitter.java:857
Re: Tomcat version in G1.1 for clustering

2006-04-18 Thread Filip Hanik - Dev Lists
Clustering was broken in Tomcat 5.5.10-5.5.15 due to a protocol change; 
this was corrected in 5.5.16.
I would run the tests against that version, and then I can help you out 
with any problems you run into.


Filip


Dave Colasurdo wrote:

Jeff,

Upgraded tomcat, tomcat_ajp and jasper to 5.5.15 and ran the 
clustering tests.


The *good* news...
 Load balancing, sticky session, session replication and session 
failover seem to work using the same deployment plan that was created 
for G1.1 w/ TC 5.5.9..


The *bad* news...

*Problem1*
When testing sticky sessions, my browser locks onto a particular 
cluster member (e.g. node1) due to the nodeid in the cookie. If I kill 
node1, the session fails over into node2 and all my session data is 
still present. This is good.
The nodeid in the cookie continues to say node1 (this is also true w/ 
TC 5.5.9 and mod-jk)..


Now, if I restart node1 and wait a minute or so and then hit my 
browser, I am directed to node1 and all my session data is gone. :(
BTW, an earlier run using TC 5.5.9 also resulted in being directed 
back to node1 though the httpsession is retained.  I think this may be 
related to problems replicating data whenever nodes are added..   
Which leads me to ...



*Problem2*
Whenever a cluster member is added to the cluster, the other nodes 
receive the following exception.  This occurs both during the initial 
addition of a node and after a stopped node is restarted...


(Though later, when I access an httpsession (via a servlet request), it 
does result in session replication between members.)


15:30:19,352 INFO  [SimpleTcpCluster] Replication member added: org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.14.160:4001,catalina,192.168.14.160,4001, alive=0]
15:30:19,692 ERROR [SimpleTcpCluster] Unable to send message through cluster sender.
java.io.IOException: Sender not available. Make sure sender information is available to the ReplicationTransmitter.
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessageData(ReplicationTransmitter.java:857)
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:430)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.send(SimpleTcpCluster.java:1074)
        at org.apache.catalina.cluster.session.DeltaManager.sendSessions(DeltaManager.java:1690)
        at org.apache.catalina.cluster.session.DeltaManager.handleGET_ALL_SESSIONS(DeltaManager.java:1629)
        at org.apache.catalina.cluster.session.DeltaManager.messageReceived(DeltaManager.java:1443)
        at org.apache.catalina.cluster.session.DeltaManager.messageDataReceived(DeltaManager.java:1225)
        at org.apache.catalina.cluster.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:85)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.receive(SimpleTcpCluster.java:1160)
        at org.apache.catalina.cluster.tcp.ClusterReceiverBase.messageDataReceived(ClusterReceiverBase.java:418)
        at org.apache.catalina.cluster.io.ObjectReader.execute(ObjectReader.java:107)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.drainChannel(TcpReplicationThread.java:131)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.run(TcpReplicationThread.java:69)
15:30:19,692 ERROR [SimpleTcpCluster] Unable to send message through cluster sender.
java.io.IOException: Sender not available. Make sure sender information is available to the ReplicationTransmitter.
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessageData(ReplicationTransmitter.java:857)
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:430)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.send(SimpleTcpCluster.java:1074)
        at org.apache.catalina.cluster.session.DeltaManager.handleGET_ALL_SESSIONS(DeltaManager.java:1660)
        at org.apache.catalina.cluster.session.DeltaManager.messageReceived(DeltaManager.java:1443)
        at org.apache.catalina.cluster.session.DeltaManager.messageDataReceived(DeltaManager.java:1225)
        at org.apache.catalina.cluster.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:85)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.receive(SimpleTcpCluster.java:1160)
        at org.apache.catalina.cluster.tcp.ClusterReceiverBase.messageDataReceived(ClusterReceiverBase.java:418)
        at org.apache.catalina.cluster.io.ObjectReader.execute(ObjectReader.java:107)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.drainChannel(TcpReplicationThread.java:131)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.run(TcpReplicationThread.java:69)

*Problem3*
Getting a bunch of exceptions relating to session invalidation

[snip]
java.lang.IllegalStateException: getId: Session already invalidated
[snip]


Re: Tomcat version in G1.1 for clustering

2006-04-18 Thread Filip Hanik - Dev Lists

Dave Colasurdo wrote:

Jeff,

Upgraded tomcat, tomcat_ajp and jasper to 5.5.15 and ran the 
clustering tests.


The *good* news...
 Load balancing, sticky session, session replication and session 
failover seem to work using the same deployment plan that was created 
for G1.1 w/ TC 5.5.9..


The *bad* news...

*Problem1*
When testing sticky sessions, my browser locks onto a particular 
cluster member (e.g. node1) due to the nodeid in the cookie. If I kill 
node1, the session fails over into node2 and all my session data is 
still present. This is good.
The nodeid in the cookie continues to say node1 (this is also true w/ 
TC 5.5.9 and mod-jk)..
OK, this is probably not desired behavior for a cluster with more than 2 
nodes.
For this to work correctly, you need to have the JvmRouteBinderValve 
configured in tomcat.
This valve will rewrite the sessionId to include the new jvmRoute 
(node2 in your scenario).
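
For reference, a minimal sketch of how the valve is typically declared inside 
the Cluster element in server.xml for Tomcat 5.5 (the surrounding cluster 
configuration is omitted here):

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
  ...
  <Valve className="org.apache.catalina.cluster.session.JvmRouteBinderValve"
         enabled="true"/>
</Cluster>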


Filip



Re: Tomcat version in G1.1 for clustering

2006-04-18 Thread Filip Hanik - Dev Lists
looks like you are right, there were some other fixes in .16 that were 
important, so it may be better to use that one.
seems like you got a coordination error, i.e., node1 requested state from 
node2, but node2 didn't know about node1, and that caused the stack 
trace below.


Filip


Dave Colasurdo wrote:

Thanks Filip!!

http://mail-archives.apache.org/mod_mbox/tomcat-users/200512.mbox/[EMAIL PROTECTED] 



seems to indicate that it is fixed in 5.5.15..

Is it fixed in 5.5.15 or 5.5.16?

Thanks
-Dave-

Filip Hanik - Dev Lists wrote:
Clustering was broken in Tomcat 5.5.10-5.5.15 due to a protocol 
change; this was corrected in 5.5.16.
I would run the tests against that version, and then I can help you out 
with any problems you run into.


Filip


Dave Colasurdo wrote:

Jeff,

Upgraded tomcat, tomcat_ajp and jasper to 5.5.15 and ran the 
clustering tests.


The *good* news...
 Load balancing, sticky session, session replication and session 
failover seem to work using the same deployment plan that was 
created for G1.1 w/ TC 5.5.9..


The *bad* news...

*Problem1*
When testing sticky sessions, my browser locks onto a particular 
cluster member (e.g. node1) due to the nodeid in the cookie. If I 
kill node1, the session fails over into node2 and all my session 
data is still present. This is good.
The nodeid in the cookie continues to say node1 (this is also true 
w/ TC 5.5.9 and mod-jk)..


Now, if I restart node1 and wait a minute or so and then hit my 
browser, I am directed to node1 and all my session data is gone. :(
BTW, an earlier run using TC 5.5.9 also resulted in being directed 
back to node1 though the httpsession is retained.  I think this may 
be related to problems replicating data whenever nodes are added..   
Which leads me to ...



*Problem2*
Whenever a cluster member is added to the cluster, the other nodes 
receive the following exception.  This occurs both during the 
initial addition of a node and after a stopped node is restarted...


(Though later, when I access an httpsession (via a servlet request), it 
does result in session replication between members.)


15:30:19,352 INFO  [SimpleTcpCluster] Replication member added: org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.14.160:4001,catalina,192.168.14.160,4001, alive=0]
15:30:19,692 ERROR [SimpleTcpCluster] Unable to send message through cluster sender.
java.io.IOException: Sender not available. Make sure sender information is available to the ReplicationTransmitter.
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessageData(ReplicationTransmitter.java:857)
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:430)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.send(SimpleTcpCluster.java:1074)
        at org.apache.catalina.cluster.session.DeltaManager.sendSessions(DeltaManager.java:1690)
        at org.apache.catalina.cluster.session.DeltaManager.handleGET_ALL_SESSIONS(DeltaManager.java:1629)
        at org.apache.catalina.cluster.session.DeltaManager.messageReceived(DeltaManager.java:1443)
        at org.apache.catalina.cluster.session.DeltaManager.messageDataReceived(DeltaManager.java:1225)
        at org.apache.catalina.cluster.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:85)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.receive(SimpleTcpCluster.java:1160)
        at org.apache.catalina.cluster.tcp.ClusterReceiverBase.messageDataReceived(ClusterReceiverBase.java:418)
        at org.apache.catalina.cluster.io.ObjectReader.execute(ObjectReader.java:107)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.drainChannel(TcpReplicationThread.java:131)
        at org.apache.catalina.cluster.tcp.TcpReplicationThread.run(TcpReplicationThread.java:69)
15:30:19,692 ERROR [SimpleTcpCluster] Unable to send message through cluster sender.
java.io.IOException: Sender not available. Make sure sender information is available to the ReplicationTransmitter.
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessageData(ReplicationTransmitter.java:857)
        at org.apache.catalina.cluster.tcp.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:430)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.send(SimpleTcpCluster.java:1074)
        at org.apache.catalina.cluster.session.DeltaManager.handleGET_ALL_SESSIONS(DeltaManager.java:1660)
        at org.apache.catalina.cluster.session.DeltaManager.messageReceived(DeltaManager.java:1443)
        at org.apache.catalina.cluster.session.DeltaManager.messageDataReceived(DeltaManager.java:1225)
        at org.apache.catalina.cluster.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:85)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.receive(SimpleTcpCluster.java:1160

Re: Session replication in Geronimo clustering

2006-04-03 Thread Filip Hanik - Dev Lists
The correct attr name is mcastBindAddress, if I remember correctly, 
but if it is working for you now, then that is great.


Phani Madgula wrote:

Hi,
 
Sorry for the delay in replying... I was away from work!
 
I did change the xx.yy.zz.aa to the proper value. I am testing these 
scenarios on Linux machines. I got information from a Google search 
about this error:
http://mail-archives.apache.org/mod_mbox/tomcat-users/200503.mbox/[EMAIL PROTECTED]
 
It said that, if the network is not multihomed, then we do not have to 
specify the attribute mcastBindAddress.
 
So, I just commented out
mcastBindAddr=192.168.11.3 in the 
geronimo-web.xml files and redeployed the applications on each node. 
Now, all the session replication and fail-over is working.
 
I do not know what a multihomed network is. I will try to find out and 
update you on this.
 
To my surprise, when I tested on only Windows machines, this problem 
was not there. It is experienced only on Linux machines.
 
Thanks

Phani
On 3/30/06, Filip Hanik - Dev Lists [EMAIL PROTECTED] wrote:


tcpListenAddress=xx.yy.zz.aa

Yup, this would cause a null pointer later on if not changed. It would
have to be a valid value, or "auto", which will decide the IP on
its own.

Filip

Jeff Genender wrote:
 Yep...those should be set if the example was followed...

 <gbean name="TomcatReceiver"
        class="org.apache.geronimo.tomcat.cluster.ReceiverGBean">
   <attribute name="className">org.apache.catalina.cluster.tcp.ReplicationListener</attribute>

   <attribute name="initParams">
       tcpListenAddress=xx.yy.zz.aa
       tcpListenPort=4001
       tcpSelectorTimeout=100
       tcpThreadCount=6
   </attribute>
 </gbean>

 Phani, did you change the tcpListenAddress initParams attribute to a
 real address?

 Jeff


 Filip Hanik - Dev Lists wrote:

 it would be one of these, they should all be set to a value.

 tcpListenAddress=auto
 tcpListenPort=9015
 tcpSelectorTimeout=100
 tcpThreadCount=6

 also, if tcpListenAddress says "auto" instead of an IP address, the
 following code gets executed:

 public java.net.InetAddress getBind() {
     if (bind == null) {
         try {
             // "auto" means: resolve the local host's address at startup
             if ("auto".equals(tcpListenAddress))
                 tcpListenAddress = java.net.InetAddress.getLocalHost().getHostAddress();
             bind = java.net.InetAddress.getByName(tcpListenAddress);
         } catch (IOException ioe) {
             log.error("Failed bind replication listener on address:" + tcpListenAddress, ioe);
         }
     }
     return bind;
 }

 so, if there is an error getting the correct address for the localhost
 machine, it will return null, and could cause your nullpointer exception.

 my guess is of course that the attribute is missing altogether.

 Filip




 Jeff Genender wrote:

 Filip,

 Thanks for the input...any idea on the missing attribute?

 Jeff

 Filip Hanik - Dev Lists wrote:


 gentlemen,
 looks like there is an attribute missing from the
 <Cluster>...<Receiver.../>...</Cluster> element.
 the ReplicationListener.listen() method just gets the listen
 address (or tries to resolve the name), then gets the port,
 then it starts up a server socket using NIO.

 the other error, "no active members in group", just means that
 the tomcat
 instances didn't discover each other using multicast heart beats.

 Let's get the ReplicationListener error fixed first, then we can
 move on to
 membership. Can you post your tomcat config file?
 PS. the error is not related to mod_jk, it's in the tomcat
 java code.
 thanks
 Filip

 Phani Madgula wrote:


 Hi,

 I have been trying to use tomcat clustering with Geronimo for a
 customer application. Sometimes, I face the following problem.


 I downloaded apache2.0.54 and mod_jk_1.2.15 and tested clustering. I
 have three machines on the same subnet, one Windows and the others
 Linux boxes. I have also enabled IP multicast and there are no
 firewalls between the systems.

 From my observation, session replication is not working. However,
 the loadbalancer is able to fail over successfully.

 When I shut down the instance which is serving the HttpRequests, it
 will throw an exception stating "not able to start cluster listener"
 and also "no active members in the cluster"

 11:09:10,572 DEBUG [WebappLoader] Stopping this Loader

 11:09:10,573 ERROR [ReplicationListener] Unable to start cluster

Re: Session replication in Geronimo clustering

2006-03-29 Thread Filip Hanik - Dev Lists

gentlemen,
looks like there is an attribute missing from the 
<Cluster>...<Receiver.../>...</Cluster> element.
the ReplicationListener.listen() method just gets the listen address (or 
tries to resolve the name), then gets the port,

then it starts up a server socket using NIO.

the other error, "no active members in group", just means that the tomcat 
instances didn't discover each other using multicast heart beats.
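
A quick way to check the multicast path itself, independently of tomcat, is a 
small test like the one below (the group address and port are illustrative; 
run it on two boxes at once and each should see the other's packet):

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class McastCheck {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("228.0.0.4"); // assumed mcastAddr
        int port = 45564;                                       // assumed mcastPort
        MulticastSocket socket = new MulticastSocket(port);
        socket.joinGroup(group);

        byte[] hello = "hello".getBytes();
        socket.send(new DatagramPacket(hello, hello.length, group, port));

        byte[] buf = new byte[256];
        DatagramPacket in = new DatagramPacket(buf, buf.length);
        socket.receive(in); // blocks until a packet arrives on the group
        System.out.println("got: " + new String(in.getData(), 0, in.getLength())
                + " from " + in.getAddress());
    }
}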


Let's get the ReplicationListener error fixed first, then we can move on to 
membership. Can you post your tomcat config file?

PS. the error is not related to mod_jk, it's in the tomcat java code.
thanks
Filip

Phani Madgula wrote:

Hi,
 
I have been trying to use tomcat clustering with Geronimo for a 
customer application. Sometimes, I face the following problem.
 

I downloaded Apache 2.0.54 and mod_jk 1.2.15 and tested clustering. I 
have three machines on the same subnet, one Windows box and the others are 
Linux boxes. I have also enabled IP multicast, and there are no firewalls 
between the systems.


From what I observe, session replication is not working. However, the 
load balancer is able to fail over successfully.


When I shut down the instance which is serving the HttpRequests, it 
throws an exception stating "not able to start cluster listener" 
and also "no active members in the cluster":


11:09:10,572 DEBUG [WebappLoader] Stopping this Loader

11:09:10,573 ERROR [ReplicationListener] Unable to start cluster listener.

java.lang.NullPointerException

at 
org.apache.catalina.cluster.tcp.ReplicationListener.listen(ReplicationListener.java(Compiled 
Code))


at 
org.apache.catalina.cluster.tcp.ReplicationListener.run(ReplicationListener.java:125)


at java.lang.Thread.run(Thread.java:570)

11:09:10,573 DEBUG [StandardContext] resetContext Geronimo 
:j2eeType=WebModule,name=//localhost/servlet-examples-cluster,J2EEApplication=none,J2EEServer=none 
null


11:09:10,575 DEBUG [StandardContext] Stopping complete

or

11:03:07,998 INFO [DeltaManager] Manager [/servlet-examples-cluster]: 
skipping state transfer. No members active in cluster group.


I have tested with both mod_jk 1.2.14 & mod_jk 1.2.15, but both failed.

Any ideas on why this error occurs?
 
Thx

phani




Re: Session replication in Geronimo clustering

2006-03-29 Thread Filip Hanik - Dev Lists

it would be one of these, they should all be set to a value.

tcpListenAddress=auto
tcpListenPort=9015
tcpSelectorTimeout=100
tcpThreadCount=6

also, if tcpListenAddress says "auto" instead of an IP address, then the 
following code gets executed:


public java.net.InetAddress getBind() {
    if (bind == null) {
        try {
            if ("auto".equals(tcpListenAddress))
                tcpListenAddress = 
                    java.net.InetAddress.getLocalHost().getHostAddress();

            bind = java.net.InetAddress.getByName(tcpListenAddress);
        } catch (IOException ioe) {
            log.error("Failed bind replication listener on address:" + 
                      tcpListenAddress, ioe);
        }
    }
    return bind;
}

so, if there is an error getting the correct address for the localhost 
machine, it will return null, which could cause your NullPointerException.


my guess is, of course, that the attribute is missing altogether.

Filip




Jeff Genender wrote:

Filip,

Thanks for the input...any idea on the missing attribute?

Jeff

Filip Hanik - Dev Lists wrote:
  

gentlemen,
looks like there is an attribute missing from the
<Cluster>...<Receiver/>...</Cluster> element.
the ReplicationListener.listen() method just gets the listen address (or
tries to resolve the name), then gets the port,
then it starts up a server socket using NIO.

the other error, "no active members in group", just means that the tomcat
instances didn't discover each other using multicast heartbeats.

Let's get the ReplicationListener error fixed first, then we can move on to
membership. Can you post your tomcat config file?
PS. the error is not related to mod_jk, it's in the tomcat java code.
thanks
Filip

Phani Madgula wrote:


Hi,
 
I have been trying to use tomcat clustering with Geronimo for a

customer application. Sometimes, I face the following problem.
 


I downloaded Apache 2.0.54 and mod_jk 1.2.15 and tested clustering. I
have three machines on the same subnet, one Windows box and the others are
Linux boxes. I have also enabled IP multicast, and there are no firewalls between the systems.

From what I observe, session replication is not working. However, the
load balancer is able to fail over successfully.

When I shut down the instance which is serving the HttpRequests, it
throws an exception stating "not able to start cluster listener"
and also "no active members in the cluster":

11:09:10,572 DEBUG [WebappLoader] Stopping this Loader

11:09:10,573 ERROR [ReplicationListener] Unable to start cluster
listener.

java.lang.NullPointerException

at
org.apache.catalina.cluster.tcp.ReplicationListener.listen(ReplicationListener.java(Compiled
Code))

at
org.apache.catalina.cluster.tcp.ReplicationListener.run(ReplicationListener.java:125)


at java.lang.Thread.run(Thread.java:570)

11:09:10,573 DEBUG [StandardContext] resetContext Geronimo
:j2eeType=WebModule,name=//localhost/servlet-examples-cluster,J2EEApplication=none,J2EEServer=none
null

11:09:10,575 DEBUG [StandardContext] Stopping complete

or

11:03:07,998 INFO [DeltaManager] Manager [/servlet-examples-cluster]:
skipping state transfer. No members active in cluster group.

I have tested with both mod_jk 1.2.14 & mod_jk 1.2.15, but both failed.

Any ideas on why this error occurs?
 
Thx

phani
  


  




Re: Session replication in Geronimo clustering

2006-03-29 Thread Filip Hanik - Dev Lists

tcpListenAddress=xx.yy.zz.aa

yup, this would cause a null pointer later on if not changed. it would 
have to be a valid value, or "auto", which will decide the IP on its own.
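
A quick way to sanity-check a candidate value before putting it in the 
config is to run the same resolve step by hand; this helper is hypothetical 
and simply mirrors the getBind() logic quoted earlier:

import java.net.InetAddress;

// hypothetical pre-flight check: pass the value you intend to use for
// tcpListenAddress and see what it resolves to
class AddressCheck {
    public static void main(String[] args) throws Exception {
        String tcpListenAddress = args.length > 0 ? args[0] : "auto";
        if ("auto".equals(tcpListenAddress))
            tcpListenAddress = InetAddress.getLocalHost().getHostAddress();
        // a placeholder like "xx.yy.zz.aa" throws UnknownHostException here,
        // the same condition that leaves bind null in the real code
        InetAddress bind = InetAddress.getByName(tcpListenAddress);
        System.out.println("would bind to: " + bind.getHostAddress());
    }
}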


Filip

Jeff Genender wrote:

Yep...those should be set if the example was followed...

<gbean name="TomcatReceiver"
       class="org.apache.geronimo.tomcat.cluster.ReceiverGBean">
  <attribute name="className">org.apache.catalina.cluster.tcp.ReplicationListener</attribute>

  <attribute name="initParams">
    tcpListenAddress=xx.yy.zz.aa
    tcpListenPort=4001
    tcpSelectorTimeout=100
    tcpThreadCount=6
  </attribute>
</gbean>

Phani, did you change the tcpListenAddress initParams attribute to a
real address?

Jeff


Filip Hanik - Dev Lists wrote:
  

it would be one of these, they should all be set to a value.

tcpListenAddress=auto
tcpListenPort=9015
tcpSelectorTimeout=100
tcpThreadCount=6

also, if tcpListenAddress says "auto" instead of an IP address, then the
following code gets executed:

public java.net.InetAddress getBind() {
    if (bind == null) {
        try {
            if ("auto".equals(tcpListenAddress))
                tcpListenAddress =
                    java.net.InetAddress.getLocalHost().getHostAddress();
            bind = java.net.InetAddress.getByName(tcpListenAddress);
        } catch (IOException ioe) {
            log.error("Failed bind replication listener on address:" +
                      tcpListenAddress, ioe);
        }
    }
    return bind;
}

so, if there is an error getting the correct address for the localhost
machine, it will return null, which could cause your NullPointerException.

my guess is, of course, that the attribute is missing altogether.

Filip




Jeff Genender wrote:


Filip,

Thanks for the input...any idea on the missing attribute?

Jeff

Filip Hanik - Dev Lists wrote:
 
  

gentlemen,
looks like there is an attribute missing from the
<Cluster>...<Receiver/>...</Cluster> element.
the ReplicationListener.listen() method just gets the listen address (or
tries to resolve the name), then gets the port,
then it starts up a server socket using NIO.

the other error, "no active members in group", just means that the tomcat
instances didn't discover each other using multicast heartbeats.

Let's get the ReplicationListener error fixed first, then we can move on to
membership. Can you post your tomcat config file?
PS. the error is not related to mod_jk, it's in the tomcat java code.
thanks
Filip

Phani Madgula wrote:
   


Hi,
 
I have been trying to use tomcat clustering with Geronimo for a

customer application. Sometimes, I face the following problem.
 


I downloaded Apache 2.0.54 and mod_jk 1.2.15 and tested clustering. I
have three machines on the same subnet, one Windows box and the others are
Linux boxes. I have also enabled IP multicast, and there are no firewalls
between the systems.

From what I observe, session replication is not working. However, the
load balancer is able to fail over successfully.

When I shut down the instance which is serving the HttpRequests, it
throws an exception stating "not able to start cluster listener"
and also "no active members in the cluster":

11:09:10,572 DEBUG [WebappLoader] Stopping this Loader

11:09:10,573 ERROR [ReplicationListener] Unable to start cluster
listener.

java.lang.NullPointerException

at
org.apache.catalina.cluster.tcp.ReplicationListener.listen(ReplicationListener.java(Compiled

Code))

at
org.apache.catalina.cluster.tcp.ReplicationListener.run(ReplicationListener.java:125)



at java.lang.Thread.run(Thread.java:570)

11:09:10,573 DEBUG [StandardContext] resetContext Geronimo
:j2eeType=WebModule,name=//localhost/servlet-examples-cluster,J2EEApplication=none,J2EEServer=none

null

11:09:10,575 DEBUG [StandardContext] Stopping complete

or

11:03:07,998 INFO [DeltaManager] Manager [/servlet-examples-cluster]:
skipping state transfer. No members active in cluster group.

I have tested with both mod_jk 1.2.14 & mod_jk 1.2.15, but both failed.

Any ideas on why this error occurs?
 
Thx

phani
  
  
  
  


  




Re: Session Policy was: heads up: initial contribution of a client API to session state management for OpenEJB, ServiceMix, Lingo and Tuscany

2006-03-16 Thread Filip Hanik - Dev Lists

Hey Jules,
thanks for commenting, I will pop in on the Codehaus dev lists.
The lazy replicated map supports more than one backup node; with a very 
small tweak in just one method, you can change it to support N backup 
nodes, N being configurable. It's just a matter of getting the conf 
param down to the impl level.


Apache Tribes, as I like to nickname the Tomcat group communication 
protocol, has an implementation at

http://svn.apache.org/viewcvs.cgi/tomcat/container/tc5.5.x/modules/groupcom/
including the LazyReplicatedMap and a MapDemo (you're gonna be awed by 
my Swing skills).


I am also in the process of implementing a regular ReplicatedMap, to use 
for context attribute replication, a much sought-after feature.


I will subscribe to the WADI list and we can continue over there re: 
session management.


Filip



Jules Gosnell wrote:

Filip Hanik - Dev Lists wrote:


gentlemen, not a committer here, but wanted to share some thoughts.

in my opinion, the Session API should not have to know about 
clustering or session replication, nor should it need to worry about 
location.

the clustering API should take care of all of that.


We are 100% in agreement here, Filip.

the solution that we plan to implement for Tomcat is fairly 
straightforward. Let me see if I can give an idea of how the API shouldn't 
need to worry. It's a little lengthy, but it shows that the Session 
and the SessionManager need to know zero about clustering or session 
locations. (this is only one solution, and other solutions should 
demonstrate the same point: the Session API needs to know nothing about 
clustering or session locations)


1. Requirements to be implemented by the Session.java API
  bool isDirty - (has the session changed in this request)
  bool isDiffable - is the session able to provide a diff
  byte[] getSessionData() - returns the whole session
  byte[] getSessionDiff() - optional, see isDiffable, resets the diff 
data
  void setSessionDiff(byte[] diff) - optional, see isDiffable, apply 
changes from another node


So, delta-ed sessions, at whole-session or attribute granularity? And 
when will you be sending the deltas - immediately, end of 
request[-group], pluggable strategies?



2. Requirements to be implemented by the SessionManager.java API
  void setSessionMap(HashMap map) - makes the map implementation 
pluggable


3. And the key to this is that we will have an implementation of a 
LazyReplicatedHashMap

  The key object in this map is the session Id.
  The map entry object is an object that looks like this
  ReplicatedEntry {
 string id;//sessionid
 bool isPrimary; //does this node hold the data
 bool isBackup; //does this node hold backup data
 Session session; //not null values for primary and backup nodes
 Member primary; //information about the primary node
 Member backup; //information about the backup node
  }

  The LazyReplicatedHashMap overrides get(key) and put(id,session)


interesting...

So all the nodes will have the <sessionId, ReplicatedEntry> 
combinations in their session map. But only two 


two is a fixed number or a deploy-time parameter?


nodes will have the actual data.
This solution is for sticky LB only, but when failover happens, the 
LB can pick any node, as each node knows where to get the data.
The newly selected node will keep the backup node or select a new 
one, and publish the locations to the entire cluster.


As you can see, all-to-all communication only happens when a Session 
is (created|destroyed|failover). Other than that it is 
primary-to-backup communication only, and this can be in terms of 
diffs or entire sessions using the isDirty or getDiff. This is 
triggered either by an interceptor at the end of each request or by a 
batch process, for less network jitter but less accuracy (still 
adequate) for failover.


I see - that answers my question about when replication occurs :-)



As you can see, access time is not relevant here, nor does the 
Session API even know about clustering.


yes !



In tomcat we have separated out group communication into a separate 
module; we are implementing the LazyReplicatedHashMap right now just 
for this purpose.


positive thoughts, criticism and bashing are all welcome :)


This approach has much more in common with WADI's - in fact there is a 
lot of synergy here. I think the WADI and TC clustering teams could 
learn a lot from each other. I would be very interested in sitting 
down with you Filip and having a long chat about session management. 
Do you have a Tomcat clustering-specific list that I could jump onto? 
You might be interested in popping in on [EMAIL PROTECTED]  and 
learning a little more about WADI?


regards,

Jules



Filip


 








Re: Session Policy was: heads up: initial contribution of a client API to session state management for OpenEJB, ServiceMix, Lingo and Tuscany

2006-03-03 Thread Filip Hanik - Dev Lists

gentlemen, not a committer here, but wanted to share some thoughts.

in my opinion, the Session API should not have to know about clustering 
or session replication, nor should it need to worry about location.

the clustering API should take care of all of that.

the solution that we plan to implement for Tomcat is fairly 
straightforward. Let me see if I can give an idea of how the API shouldn't need 
to worry. It's a little lengthy, but it shows that the Session and the 
SessionManager need to know zero about clustering or session locations. 
(this is only one solution, and other solutions should demonstrate the 
same point: the Session API needs to know nothing about clustering or session 
locations)


1. Requirements to be implemented by the Session.java API
  bool isDirty - (has the session changed in this request)
  bool isDiffable - is the session able to provide a diff
  byte[] getSessionData() - returns the whole session
  byte[] getSessionDiff() - optional, see isDiffable, resets the diff data
  void setSessionDiff(byte[] diff) - optional, see isDiffable, apply 
changes from another node


2. Requirements to be implemented by the SessionManager.java API
  void setSessionMap(HashMap map) - makes the map implementation pluggable

3. And the key to this is that we will have an implementation of a 
LazyReplicatedHashMap

  The key object in this map is the session Id.
  The map entry object is an object that looks like this
  ReplicatedEntry {
 string id;//sessionid
 bool isPrimary; //does this node hold the data
 bool isBackup; //does this node hold backup data
 Session session; //not null values for primary and backup nodes
 Member primary; //information about the primary node
 Member backup; //information about the backup node
  }

  The LazyReplicatedHashMap overrides get(key) and put(id,session); all three pieces are sketched in Java below.
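
Pulled together as a minimal Java sketch (only the member names come from 
the list above; the types, interface names and the Member placeholder are 
assumptions):

import java.util.HashMap;

// sketch of the proposed contracts, not an actual Tomcat API
interface ReplicableSession {
    boolean isDirty();                // has the session changed in this request
    boolean isDiffable();             // can the session provide a diff
    byte[] getSessionData();          // the whole serialized session
    byte[] getSessionDiff();          // optional: the changes; resets the diff
    void setSessionDiff(byte[] diff); // optional: apply changes from another node
}

interface PluggableSessionManager {
    void setSessionMap(HashMap<String, ReplicatedEntry> map); // pluggable map impl
}

class ReplicatedEntry {
    String id;                  // session id
    boolean isPrimary;          // does this node hold the data
    boolean isBackup;           // does this node hold backup data
    ReplicableSession session;  // non-null only on primary and backup nodes
    Member primary;             // information about the primary node
    Member backup;              // information about the backup node
}

class Member { String host; int port; } // placeholder for membership info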

So all the nodes will have the <sessionId, ReplicatedEntry> combinations 
in their session map. But only two nodes will have the actual data.
This solution is for sticky LB only, but when failover happens, the LB 
can pick any node, as each node knows where to get the data.
The newly selected node will keep the backup node or select a new one, 
and publish the locations to the entire cluster.


As you can see, all-to-all communication only happens when a Session is 
(created|destroyed|failover). Other than that it is primary-to-backup 
communication only, and this can be in terms of diffs or entire sessions 
using the isDirty or getDiff. This is triggered either by an interceptor 
at the end of each request or by a batch process, for less network jitter 
but less accuracy (still adequate) for failover.
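
As a sketch, the end-of-request trigger could look like this, reusing the 
ReplicableSession sketch above (sendToBackup stands in for the 
group-communication send and is invented):

// sketch of the per-request replication trigger described above
class ReplicationInterceptor {
    void afterRequest(ReplicableSession session) {
        if (!session.isDirty()) return;        // nothing changed, nothing to ship
        byte[] payload = session.isDiffable()
                ? session.getSessionDiff()     // small delta; also resets the diff
                : session.getSessionData();    // fall back to the whole session
        sendToBackup(payload);                 // primary-to-backup only
    }

    void sendToBackup(byte[] data) { /* group-communication send, elided */ }
}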


As you can see, access time is not relevant here, nor does the Session 
API even know about clustering.


In tomcat we have separated out group communication into a separate 
module; we are implementing the LazyReplicatedHashMap right now just for 
this purpose.


positive thoughts, criticism and bashing are all welcome :)

Filip


 



Re: Session Policy was: heads up: initial contribution of a client API to session state management for OpenEJB, ServiceMix, Lingo and Tuscany

2006-03-03 Thread Filip Hanik - Dev Lists

Hi Dain,
let me address the location, and show you how the location is completely 
transparent.


The way the LazyReplicatedMap works is as follows:
1. Backup node fails - the primary node chooses a new backup node
2. Primary node fails - since Tomcat doesn't know which node the user 
will go to for their next http request, nothing is done.

   When the user makes a request, and the session manager says 
LazyMap.getSession(id) and that session is not yet on the server,
   the lazymap will request the session from the backup server, load it 
up, and set this node as primary.
   that is why it is called lazy, because it won't load the session until 
it is actually needed, and because it doesn't know which node will become 
primary; this is decided by the load balancer. remember that each node 
knows where the session with a given Id is located. they all carry the same 
map, but only two carry the data (primary & backup).
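
A sketch of that lazy get() path (illustrative only; transferFromBackup 
and broadcastPrimaryChange are hypothetical stand-ins for the 
group-communication calls):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// illustrative sketch, not the real LazyReplicatedMap
class LazyMapSketch {
    static class Entry {
        boolean isPrimary;   // does this node hold the live data
        String backupNode;   // where the backup copy lives
        byte[] sessionData;  // non-null only on the primary and backup nodes
    }

    final Map<String, Entry> map = new ConcurrentHashMap<String, Entry>();

    byte[] getSession(String id) {
        Entry e = map.get(id);
        if (e == null) return null;        // no such session anywhere
        if (!e.isPrimary) {
            // the lazy part: only now, when the LB actually sent the user
            // here, do we pull the session from the backup and become primary
            e.sessionData = transferFromBackup(e.backupNode, id);
            e.isPrimary = true;
            broadcastPrimaryChange(id);    // cancels out a stale primary (false positive)
        }
        return e.sessionData;
    }

    byte[] transferFromBackup(String node, String id) { return new byte[0]; } // stub
    void broadcastPrimaryChange(String id) { }                                // stub
}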


on a false positive, the new primary node will cancel out the old one, 
so you can have as many false positives as you want, but the more you 
have, the worse your performance will get :). that is why a sticky LB is 
important, but a false positive is handled the same way as a crash, except 
that the old primary gets cancelled out.


the rest is inlined


1. Requirements to be implemented by the Session.java API
  bool isDirty - (has the session changed in this request)
  bool isDiffable - is the session able to provide a diff
  byte[] getSessionData() - returns the whole session
  byte[] getSessionDiff() - optional, see isDiffable, resets the diff 
data
  void setSessionDiff(byte[] diff) - optional, see isDiffable, apply 
changes from another node


To throw your arguments back at you, why should my code be exposed to 
this level of detail :)   From my perspective, I get a session and it 
is the Session API implementation's problem to figure out how to diff 
it, back it up, and migrate it.


exactly. the methods above are what is required from the servlet 
container, not the webapp developer.
so if you are a jetty developer, you would implement the above methods. 
This way, the jetty developer can optimize the serialization algorithm 
and locking (during diff creation), and your session will never be out 
of date. in tomcat, we are making the getSessionDiff() a pluggable 
algorithm, but it is implemented in the container; otherwise, plain 
serialization is too slow.



2. Requirements to be implemented by the SessionManager.java API
  void setSessionMap(HashMap map) - makes the map implementation 
pluggable


3. And the key to this is that we will have an implementation of a 
LazyReplicatedHashMap

  The key object in this map is the session Id.
  The map entry object is an object that looks like this
  ReplicatedEntry {
 string id;//sessionid
 bool isPrimary; //does this node hold the data
 bool isBackup; //does this node hold backup data
 Session session; //not null values for primary and backup nodes
 Member primary; //information about the primary node
 Member backup; //information about the backup node
  }

  The LazyReplicatedHashMap overrides get(key) and put(id,session)


Why would anyone need to know this level of detail?
you don't and you will not, I'm just giving you some architectural insight 
on how it works under the hood :)




So all the nodes will have the <sessionId, ReplicatedEntry> 
combinations in their session map. But only two nodes will have the 
actual data.
This solution is for sticky LB only, but when failover happens, the 
LB can pick any node, as each node knows where to get the data.
The newly selected node will keep the backup node or select a new 
one, and publish the locations to the entire cluster.


I don't see any way to deal with locking or the fact that servlet 
sessions are multi-threaded (overlapping requests).  How do you know 
when the session is not being used by anyone, so that you have a stable 
state for replication?
in tomcat we have an access counter; it gets incremented when a request 
comes in, and decremented when the request leaves. if the counter is 0, 
lock the session and suck out the diff. or just lock it at the end of 
each request on a periodic basis, regardless of what the counter is.
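
In Java, the counter idea looks something like this (a sketch; all names 
here are assumptions, not the actual tomcat session code):

import java.util.concurrent.atomic.AtomicInteger;

// sketch of the access-count guard described above
class CountedSession {
    private final AtomicInteger accessCount = new AtomicInteger();

    void requestEnter() { accessCount.incrementAndGet(); } // request comes in
    void requestExit()  { accessCount.decrementAndGet(); } // request leaves

    // called at the end of a request or by the batch replicator: only take
    // the diff when no request is currently inside the session
    synchronized byte[] extractDiffIfIdle() {
        if (accessCount.get() == 0) {
            return getSessionDiff(); // stable snapshot
        }
        return null;                 // busy; try again later
    }

    private byte[] getSessionDiff() { return new byte[0]; } // stub
}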




As you can see, all-to-all communication only happens when a Session 
is (created|destroyed|failover). Other than that it is 
primary-to-backup communication only, and this can be in terms of 
diffs or entire sessions using the isDirty or getDiff. This is 
triggered either by an interceptor at the end of each request or by a 
batch process, for less network jitter but less accuracy (still 
adequate) for failover.


As you can see, access time is not relevant here, nor does the 
Session API even know about clustering.


How do you deal with access-time?  I agree that your API doesn't know 
about clustering, but you also can't do a client-side or server-side 
redirect to the correct node; you must always migrate the session to 
your request.
it doesn't, 

Re: Session Policy was: heads up: initial contribution of a client API to session state management for OpenEJB, ServiceMix, Lingo and Tuscany

2006-03-03 Thread Filip Hanik - Dev Lists
btw, for very large clusters, you use the same mechanism, except that 
instead of distributing the entire session map, the backup node info is 
stored in a cookie.


and by doing this, you don't need to remember the backup location 
throughout the cluster. you still broadcast cancellations of the primary 
node to account for false positives.
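
A sketch of the cookie variant (the cookie name and helper are invented; 
a real implementation would also want the session id and some integrity 
protection on the value):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// sketch: stamp the backup location onto the client instead of making
// every node remember every session's backup node
class BackupLocationCookie {
    static void writeBackupLocation(HttpServletResponse response,
                                    String backupHost, int backupPort) {
        // on failover, whichever node the LB picks reads this cookie and
        // knows exactly which peer to fetch the session from
        Cookie c = new Cookie("SESSION_BACKUP", backupHost + ":" + backupPort);
        c.setPath("/");
        response.addCookie(c);
    }
}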


the only scenario that is not accounted for is when you have a wacky LB 
that sends two parallel requests to two different servers. this would 
require distributed locking, and that is a path with too much 
overhead to walk down.



Filip


Filip Hanik - Dev Lists wrote:

Hi Dain,
let me address the location, and show you how the location is 
completely transparent.


The way the LazyReplicatedMap works is as follows:
1. Backup node fails - the primary node chooses a new backup node
2. Primary node fails - since Tomcat doesn't know which node the user 
will go to for their next http request, nothing is done.

  When the user makes a request, and the session manager says 
LazyMap.getSession(id) and that session is not yet on the server,
  the lazymap will request the session from the backup server, load it 
up, and set this node as primary.
  that is why it is called lazy, because it won't load the session until 
it is actually needed, and because it doesn't know which node will 
become primary; this is decided by the load balancer. remember that 
each node knows where the session with a given Id is located. they all 
carry the same map, but only two carry the data (primary & backup).


on a false positive, the new primary node will cancel out the old one, 
so you can have as many false positives as you want, but the more you 
have, the worse your performance will get :). that is why a sticky LB is 
important, but a false positive is handled the same way as a crash, 
except that the old primary gets cancelled out.


the rest is inlined


1. Requirements to be implemented by the Session.java API
  bool isDirty - (has the session changed in this request)
  bool isDiffable - is the session able to provide a diff
  byte[] getSessionData() - returns the whole session
  byte[] getSessionDiff() - optional, see isDiffable, resets the 
diff data
  void setSessionDiff(byte[] diff) - optional, see isDiffable, apply 
changes from another node


To throw your arguments back at you, why should my code be exposed to 
this level of detail :)   From my perspective, I get a session and it 
is the Session API implementation's problem to figure out how to diff 
it, back it up, and migrate it.


exactly. the methods above are what is required from the servlet 
container, not the webapp developer.
so if you are a jetty developer, you would implement the above 
methods. This way, the jetty developer can optimize the serialization 
algorithm and locking (during diff creation), and your session will 
never be out of date. in tomcat, we are making the getSessionDiff() a 
pluggable algorithm, but it is implemented in the container; 
otherwise, plain serialization is too slow.



2. Requirements to be implemented by the SessionManager.java API
  void setSessionMap(HashMap map) - makes the map implementation 
pluggable


3. And the key to this is that we will have an implementation of a 
LazyReplicatedHashMap

  The key object in this map is the session Id.
  The map entry object is an object that looks like this
  ReplicatedEntry {
 string id;//sessionid
 bool isPrimary; //does this node hold the data
 bool isBackup; //does this node hold backup data
 Session session; //not null values for primary and backup nodes
 Member primary; //information about the primary node
 Member backup; //information about the backup node
  }

  The LazyReplicatedHashMap overrides get(key) and put(id,session)


Why would anyone need to know this level of detail?
you don't and you will not, I'm just giving you some architectural 
insight on how it works under the hood :)




So all the nodes will have the <sessionId, ReplicatedEntry> 
combinations in their session map. But only two nodes will have the 
actual data.
This solution is for sticky LB only, but when failover happens, the 
LB can pick any node, as each node knows where to get the data.
The newly selected node will keep the backup node or select a new 
one, and publish the locations to the entire cluster.


I don't see any way to deal with locking or the fact that servlet 
sessions are multi-threaded (overlapping requests).  How do you know 
when the session is not being used by anyone, so that you have a stable 
state for replication?
in tomcat we have an access counter; it gets incremented when a request 
comes in, and decremented when the request leaves. if the counter is 
0, lock the session and suck out the diff. or just lock it at the end 
of each request on a periodic basis, regardless of what the counter is.




As you can see, all-to-all communication only happens when a 
Session is (created|destroyed|failover). Other than that it is 
primary-to-backup