Re: Attacking M4 - Final stuff

2005-08-02 Thread David Jencks


On Aug 1, 2005, at 5:53 PM, Jeremy Boynes wrote:


David Jencks wrote:
I resuscitated the M4 branch and removed the snapshot tag from the 
deploy plugin and the numbered version from the packaging plugin.


I think we should remove all SNAPSHOT references including those in 
the sandbox and in plugins we are not using. This can be done by 
fixing them to use the POM current version, or by removing those 
modules entirely (e.g. sandbox probably does not want to be in the 
release tag at all).


I think we should do a build from the branch, test it, and if it 
works, then make the tag.


Agreed. Let's clean up the cruft and reduce to the minimum number of 
version definitions (just etc/project.properties and project.xml for 
the plugins?). We can do a build there using an explicit revision 
number, then if we are happy copy that rev to the tag.


I've removed all the 'SNAPSHOT' and other hard-coded versions I can 
find and successfully built Geronimo from rev 226951 after removing my 
maven geronimo and geronimo-spec repository entries.  I see dblevins 
has posted source and binary tar.gz archives etc. but I have no idea 
how to produce them from a plain maven build.  I consider his 
publish_build.sh script unacceptable for releases because it modifies 
what is checked out from svn.


I'm going to start running TCK tests on my build and hope we can 
straighten out how to package things up later.  If there are problems 
tomorrow I can always start over.


thanks
david jencks



IIUC Aaron is fixing the release notes, so I hope we can get them 
finalized by tonight as well.


Let's be careful this time and make sure everything is in order first.
--
Jeremy





working on bugs

2005-08-02 Thread Krishnakumar B
hi,

I would like to work on some of the open bugs posted in JIRA. How do I
go about working on them and closing them? I have a dev
environment created from the daily builds and am able to debug the
issues.

Regards
Krishnakumar B


Re: working on bugs

2005-08-02 Thread David Jencks

Welcome!

While it may seem easier at first to work from source snapshots, you 
will really need to check out the source code using Subversion so that 
you will be able to create diffs for your patches.  There is some 
information on getting the source code here: 
http://geronimo.apache.org/svn.html


When you find an issue you are interested in working on, you may want 
to discuss it on the dev list to make sure no one else is almost done 
fixing it and to find out if anyone has advice or opinions on the best 
solution.  Many JIRA issues are already assigned to a developer, but 
that usually does not mean that the issue is being actively worked on.  
Often issues are assigned to the person who knows the most about a 
topic in the hope that they will eventually fix them.


In any case, feel free to ask questions!

When you have a fix you want to contribute, run svn diff to get a patch 
file and attach it to the JIRA issue, being sure to check the licensing 
box so we can actually apply the patch.  One or more committers will 
review it and, if there are no objections, apply it.  If you make 
sustained and significant contributions we will eventually vote to give 
you commit status.


Many thanks,
david jencks

On Aug 2, 2005, at 12:08 AM, Krishnakumar B wrote:


hi,

I would like to work on some of the open bugs posted in JIRA. How do I
go about working on them and closing them? I have a dev
environment created from the daily builds and am able to debug the
issues.

Regards
Krishnakumar B





pruning proposal

2005-08-02 Thread David Jencks
I wonder if we could spend a little time thinking about stuff we might 
like to discard.


I think the geronimo itests modules are good candidates.  It would be 
great to have integration tests, but I don't think what is there is a 
good starting point.


I can't figure out what the xpom plugin is for, so I wonder if we need 
it.


We have some contributed demo apps in JIRA, and I wonder whether we 
should remove some of the existing demo apps and replace them with the 
new ones.


I haven't dared look in sandbox :-)

thanks
david jencks



[jira] Closed: (GERONIMO-810) Following default installation in izpack installer in M4 results in Error: Cannot read file C:\Program Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml

2005-08-02 Thread John Sisson (JIRA)
 [ http://issues.apache.org/jira/browse/GERONIMO-810?page=all ]
 
John Sisson closed GERONIMO-810:



 Following default installation in izpack installer in M4 results in Error: 
 Cannot read file C:\Program 
 Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml
 

  Key: GERONIMO-810
  URL: http://issues.apache.org/jira/browse/GERONIMO-810
  Project: Geronimo
 Type: Bug
   Components: installer
 Versions: 1.0-M4
 Reporter: John Sisson
 Assignee: John Sisson


 In step 15/15 of the Processing panel, you get the error:
 17:31:01,694 INFO  [SecurityServiceImpl] JACC factory registered
 Error: Cannot read file C:\Program
 Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml
 I am investigating...

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (GERONIMO-810) Following default installation in izpack installer in M4 results in Error: Cannot read file C:\Program Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml

2005-08-02 Thread John Sisson (JIRA)
 [ http://issues.apache.org/jira/browse/GERONIMO-810?page=all ]
 
John Sisson resolved GERONIMO-810:
--

Resolution: Invalid

This is no longer valid, as we ended up removing Tomcat from the izpack 
installer for M4.

 Following default installation in izpack installer in M4 results in Error: 
 Cannot read file C:\Program 
 Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml
 

  Key: GERONIMO-810
  URL: http://issues.apache.org/jira/browse/GERONIMO-810
  Project: Geronimo
 Type: Bug
   Components: installer
 Versions: 1.0-M4
 Reporter: John Sisson
 Assignee: John Sisson


 In step 15/15 of the Processing panel, you get the error:
 17:31:01,694 INFO  [SecurityServiceImpl] JACC factory registered
 Error: Cannot read file C:\Program
 Files\GeronimoM4QA\installer-temp\j2ee-server-tomcat-plan.xml
 I am investigating...




[jira] Created: (GERONIMO-842) Enhance DerbyNetworkGBean to allow secure Derby Network Client connections (once Derby is enhanced to allow secure connections).

2005-08-02 Thread John Sisson (JIRA)
Enhance DerbyNetworkGBean to allow secure Derby Network Client connections 
(once Derby is enhanced to allow secure connections).


 Key: GERONIMO-842
 URL: http://issues.apache.org/jira/browse/GERONIMO-842
 Project: Geronimo
Type: Task
  Components: core, installer  
Versions: 1.0-M4
Reporter: John Sisson
 Fix For: 1.0


I have created this issue to raise awareness of the security limitations of the 
Network Server currently embedded in Derby, and to flag that the Geronimo 
installer/configuration tools may need to be enhanced, when Derby's client 
security is enhanced, to allow the user to configure security for the Network 
Server.

Currently the DerbyNetworkGBean only accepts connections from the localhost.  

Although this could easily be changed, it would not be secure even if Derby's 
current (version 10.1 at the time of writing) client security features were 
utilised.  Rather than repeating the information, see the mails in the thread 
titled "DRDA Password Encryption (SECMEC_EUSRIDPWD and SECMEC_USRENCPWD)" at:

http://mail-archives.apache.org/mod_mbox/db-derby-dev/200506.mbox/[EMAIL 
PROTECTED]




[jira] Updated: (GERONIMO-842) Enhance DerbyNetworkGBean to allow secure Derby Network Client connections (once Derby is enhanced to allow secure connections).

2005-08-02 Thread John Sisson (JIRA)
 [ http://issues.apache.org/jira/browse/GERONIMO-842?page=all ]

John Sisson updated GERONIMO-842:
-

Version: 1.0-M4

 Enhance DerbyNetworkGBean to allow secure Derby Network Client connections 
 (once Derby is enhanced to allow secure connections).
 

  Key: GERONIMO-842
  URL: http://issues.apache.org/jira/browse/GERONIMO-842
  Project: Geronimo
 Type: Task
   Components: core, installer
 Versions: 1.0-M4
 Reporter: John Sisson
  Fix For: 1.0


 I have created this issue to raise awareness of the security limitations of 
 the Network Server currently embedded in Derby, and to flag that the Geronimo 
 installer/configuration tools may need to be enhanced, when Derby's client 
 security is enhanced, to allow the user to configure security for the Network 
 Server.
 Currently the DerbyNetworkGBean only accepts connections from the localhost.  
 Although this could be easily changed, it would not be secure even if Derby's 
 current (version 10.1 at the time of writing) client security features are 
 utilised.  Rather than repeating information see the mails in the thread 
 titled DRDA Password Encryption (SECMEC_EUSRIDPWD and SECMEC_USRENCPWD) at:
 http://mail-archives.apache.org/mod_mbox/db-derby-dev/200506.mbox/[EMAIL 
 PROTECTED]




Re: Clustering (long)

2005-08-02 Thread Andy Piper

Hi Jules

At 05:37 AM 7/27/2005, Jules Gosnell wrote:

I agree on the SPoF thing - but I think you misunderstand my 
Coordinator arch. I do not have a single static Coordinator node, 
but a dynamic Coordinator role, into which a node may be elected. 
Thus every node is a potential Coordinator. If the elected 
Coordinator dies, another is immediately elected. The election 
strategy is pluggable, although it will probably end up being 
hardwired to oldest-cluster-member. The reason behind this is that 
re-laying out your cluster is much simpler if it is done in a single 
VM. I originally tried to do it in multiple VMs, each taking 
responsibility for pieces of the cluster, but if the VMs' views are 
not completely in sync, things get very hairy, and completely in 
sync is an expensive thing to achieve - and would introduce a 
cluster-wide single point of contention. So I do it in a single VM, 
as fast as I can, with failover, in case that VM evaporates. Does 
that sound better than the scenario you had in mind?
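The oldest-cluster-member strategy described above can be sketched as follows. This is only an illustration, not WADI's actual code; the Node type, the joinOrder field, and the node names are invented for the example. The key property is that every live member, given the same membership view, deterministically agrees on the same coordinator.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class OldestMemberElection {

    // Hypothetical member record: joinOrder increases as nodes join.
    public record Node(String name, long joinOrder) {}

    // The longest-lived member (smallest join order) wins the election.
    public static Node electCoordinator(List<Node> liveMembers) {
        return Collections.min(liveMembers,
                Comparator.comparingLong(Node::joinOrder));
    }

    public static void main(String[] args) {
        List<Node> members = new ArrayList<>(List.of(
                new Node("nodeA", 3), new Node("nodeB", 1), new Node("nodeC", 2)));
        System.out.println(electCoordinator(members).name()); // nodeB

        // If the elected coordinator evaporates, the next-oldest node takes over.
        members.removeIf(n -> n.name().equals("nodeB"));
        System.out.println(electCoordinator(members).name()); // nodeC
    }
}
```

Because the rule is a pure function of the membership list, no extra messages are needed to agree on the winner - which is exactly why the hard part (agreeing on the membership list itself during partitions) dominates, as the reply below this quote points out.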


This is exactly the hard computer-science problem that you 
shouldn't be trying to solve if at all possible. It's hard because 
network partitions or hung processes (think GC) make it very easy for 
your colleagues to think you are dead when you do not share that 
view. The result is two processes that each think they are the 
coordinator, and anarchy can ensue (commonly called split-brain 
syndrome). I can point you at papers if you want, but I really 
suggest that you aim for an implementation that is independent of a 
central coordinator. Note that a central coordinator is necessary if 
you want to implement a strongly-consistent in-memory database, but 
this is not usually a requirement for, say, session replication.


http://research.microsoft.com/Lampson/58-Consensus/Abstract.html 
gives a good introduction to some of these things. I also presented 
at JavaOne on related issues; you should be able to download the 
presentation from dev2dev.bea.com at some point (not there yet - I 
just checked).


The Coordinator is not there to support session replication, but 
rather the management of the distributed map (a few of whose 
buckets live on each node) which is used by WADI to discover very 
efficiently whether a session exists and where it is located. This 
map must be rearranged, in the most efficient way possible, each 
time a node joins or leaves the cluster.
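The bucket-partitioned index described above can be sketched roughly like this. The names (SessionBucketMap, bucketFor, layout) and the round-robin placement are invented for the illustration and are not WADI's real API; the point is that locating a session costs one hash plus one map lookup rather than a cluster-wide broadcast.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SessionBucketMap {

    static final int NUM_BUCKETS = 8; // fixed; only bucket *ownership* moves

    // Every node computes the same bucket for a given session id.
    public static int bucketFor(String sessionId) {
        return Math.floorMod(sessionId.hashCode(), NUM_BUCKETS);
    }

    // bucket -> owning node; the coordinator recomputes this layout each
    // time a node joins or leaves (here: a naive round-robin assignment).
    public static Map<Integer, String> layout(List<String> nodes) {
        Map<Integer, String> owners = new HashMap<>();
        for (int b = 0; b < NUM_BUCKETS; b++) {
            owners.put(b, nodes.get(b % nodes.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        Map<Integer, String> owners = layout(List.of("nodeA", "nodeB", "nodeC"));
        String id = "JSESSIONID-1234";
        // Any node can answer "which peer do I ask about session X?" locally.
        System.out.println("bucket " + bucketFor(id)
                + " owned by " + owners.get(bucketFor(id)));
    }
}
```

A real implementation would move as few buckets as possible on a membership change instead of recomputing from scratch, which is the rearrangement cost the quoted paragraph is concerned with.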


Understood. Once you have a fault-tolerant singleton coordinator you 
can solve lots of interesting problems; it's just hard and often not 
worth the effort or the expense (typical implementations involve HA 
hardware, an HA database, or at least 3 server processes).


Replication is NYI - but I'm running a few mental background threads 
that suggest that an extension to the index will mean that it 
associates the session's id not just with its current location, but 
also with the locations of a number of replicants. I also have ideas 
on how a session might choose the nodes into which it will place its 
replicants and how I can avoid the primary session copy ever being 
colocated with a replicant (a potential SPoF - if you only have one 
replicant), etc...


Right, definitely something you want to avoid.

Yes, I can see that happening - I have an improvement (NYI) to 
WADI's evacuation strategy (how sessions are evacuated when a node 
wishes to leave). Each session will be evacuated to the node which 
owns the bucket into which its id hashes. This is because colocation 
of the session with the bucket allows many messages concerned with 
its future destruction and relocation to be optimised away. Future 
requests falling elsewhere but needing this session should, in the 
most efficient case, be relocated to this same node; otherwise the 
session may be relocated, but at a cost...


How do you relocate the request? Many HW load-balancers do not 
support this (or else it requires using proprietary APIs), so you 
probably have to count on moving sessions in the normal failover case.

I would be very grateful for any thoughts or feedback that you could 
give me. I hope to get much more information about WADI into the 
wiki over the next few weeks. That should help generate more 
discussion, although I would be more than happy for people to ask me 
questions here on geronimo-dev because this will give me an idea of 
what documentation I should write and how existing documentation may 
be lacking or misleading.


I guess my general comment would be that you might find it better to 
think specifically about the end-user problem you are trying to solve 
(say, session replication) and work towards a solution based on that. 
Most short-cuts/optimizations that vendors make are specific to the 
problem domain and do not generally apply to all clustering problems.


Hope this helps

andy 





Re: Issue - configuring the binary distribution

2005-08-02 Thread Joe Bohn






Aaron Mulder wrote:

  On Mon, 1 Aug 2005, Bruce Snyder wrote:
  
  
At any rate, what I still don't understand is the desire to use a GUI
for everything. Any type of UI can be developed if there's an API
behind it.

  
  
	Sure, one of the reasons I'm keen on a nice management API.
  

I agree

  
  
  
But what I don't agree with is the need to load the server into an
unstarted state of some kind just to reconfigure it. Aaron stated
previously that he has code to load the configs, change them and write
them out again, correct? If I'm missing something, please explain this
further.

  
  
	The GBean state is stored in a config store.  Without loading part 
of the server, you don't know where that config store is (file, database, 
which file, which database, etc.).  I mean, we currently only have one 
implementation (file) storing data in one place 
(GERONIMO_HOME/config-store) at the moment, but there's no guarantee that 
will always be true, and I think Jeremy strongly objected to hardcoding 
stuff to assume any single (config store implementation plus path).
  

Why is this so much different from most applications on the market
today?
The standard mechanism to modify configurations that I see is this:
- modify properties/XML files when the server is inactive
- when the server is active, use a command line or GUI to make necessary
modifications (presumably using some common management API in the
background)
- depending upon the nature of the configuration changes and user
requirements, the changes may either be temporary for this execution of
the server or permanently written to the configuration store
- configuration changes that make fundamental changes to the server are
typically only possible via modification of files that are processed by
the server at the next restart. The server may be active or inactive
when changing the files, but they are not effective until the next
server restart.
I think this is the behavior/paradigm that most users are familiar
with and expect.

  
	Anyway, once you have the config store loaded, you can access the
saved data for each Configuration.  

It sounds like one of the configuration items is the specification of
the config store itself. How would a user change the config store
from file to database, or from one database to another? I think for
this case you would have to do something different (possibly more along
the lines of a file/restart mechanism anyway).

  That essentially contains the state
information for a bunch of GBeans for each Configuration.  You have two
options to actually read and update the GBean state.  One is to manually
deal with a whole bunch of Serialized junk, depending on specific
implementations of certain internals to translate that to usable data and
back.  The other is to just go ahead and start a kernel and its config
store and then load the GBeans from the config store, read and update
their properties, and then store them back to the config store again.  
That handles all the nitty gritty automatically and can't be foiled by 
changing implementations of various core services.
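The shape of that second approach - load through the running services, edit typed attributes, store back - can be reduced to a toy round-trip. Every interface and name below (ConfigStore, GBeanState, MemoryStore, the config id) is a hypothetical stand-in invented for this sketch, NOT Geronimo's real kernel or config-store API; the point is only that no caller ever unpicks serialized state by hand.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigEditSketch {

    // Stand-in for a Configuration's saved GBean attribute state.
    static class GBeanState {
        final Map<String, Object> attributes = new HashMap<>();
    }

    // Stand-in for whatever store the kernel resolves (file, database, ...).
    interface ConfigStore {
        GBeanState load(String configId);
        void store(String configId, GBeanState state);
    }

    // In-memory stand-in for the file-based store implementation.
    static class MemoryStore implements ConfigStore {
        private final Map<String, GBeanState> data = new HashMap<>();
        public GBeanState load(String id) {
            return data.computeIfAbsent(id, k -> new GBeanState());
        }
        public void store(String id, GBeanState s) { data.put(id, s); }
    }

    // Round-trip edit: load -> mutate typed attribute -> store.
    public static void setAttribute(ConfigStore store, String configId,
                                    String attr, Object value) {
        GBeanState state = store.load(configId);
        state.attributes.put(attr, value);
        store.store(configId, state);
    }

    public static void main(String[] args) {
        ConfigStore store = new MemoryStore();
        setAttribute(store, "some/config/id", "httpPort", 8080);
        System.out.println(
                store.load("some/config/id").attributes.get("httpPort")); // 8080
    }
}
```

Because the caller only sees the store interface, swapping the implementation (file, database, or anything else) cannot break editing code - which is the "can't be foiled by changing implementations" property claimed above.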

	But there are some subtleties here that could use more attention
-- I think Jeremy and/or David J dealt with all this when working on the
deployer stuff.  Right now I believe the server.jar contains the config
store information, and the deployer.jar has to contain the same config
store information, or else they won't work properly together.  


  There was
some talk about separating dependencies and things out of server.jar (to
avoid manifest class path entries for reasons that I can't recall), which
might make the initial server plan more accessible to deployer.jar and
also to whatever the management tool would use.  But for the moment, I
think they all would need to be built with the same expectations to work
correctly together.  Grr, really wish we could have this talk around a
whiteboard!  :)

Aaron


  


-- 
Joe Bohn 

[EMAIL PROTECTED] 
"He is no fool who gives what he cannot keep, to gain what he cannot lose."   -- Jim Elliot





Re: Geronimo tooling donation

2005-08-02 Thread Geir Magnusson Jr.
Does anyone else have any comments or objections before we go down  
this path?


geir

On Aug 1, 2005, at 5:29 PM, Sachin Patel wrote:

Great.  I'm assuming no one else has any strong objections.  If not,  
I'll be glad to start driving this.  I'm not sure about the  
licensing issues... I'll shoot off a note to the Eclipse folks.


Thanks.

Geir Magnusson Jr. wrote:


I think it would be great to have it here.

Question - we would need to have it re-licensed under the Apache   
License v2.0.  Is that possible?


geir

On Jul 29, 2005, at 9:28 PM, Sachin Patel wrote:



Hello all,

It seems as if a decision needs to be made to determine the 
appropriate place to host the Eclipse tooling support for Geronimo, 
in particular the Geronimo Server Adapter that is currently in 
development in the Eclipse WTP project.


http://www-128.ibm.com/developerworks/library/os-gerplug/

As I've made enhancements to this Geronimo tooling support, I've 
been forced to pull in Geronimo runtime dependencies, for access 
not only to the Geronimo runtime itself but to the JEE spec jars.


Discussions have arisen as I've been working with the Eclipse WTP 
team to push these changes in.  It is their belief that if this 
server adapter is pulling in third-party jars then it falls 
under the category of being a full-fledged adapter that is too 
large to host and be a part of WTP.  The existing adapters for 
the other application servers that are included in WTP are 
lightweight and serve primarily as examples of basic support.  
There has been some discussion of plans to provide the JMX jars 
and a set of utilities as part of WTP that other server adapters 
can exploit.  However, this is not expected until WTP 1.5 in June 
'06 at the earliest.  This is not good for the Geronimo 
community.  We need an immediate place to host this source so 
the community can have the latest source to start using, opening 
bugs and feature requests, and contributing to it.  As everyone 
works hard toward the completion of Geronimo's first release, 
it would be great if we had a good start on tooling support to 
go along with 1.0.


So the first decision we need to come to is: where should this be 
hosted?  The first option is creating a SourceForge project and 
building a secondary community around it.  The other is to host it 
on Apache itself as part of the Geronimo project.  The latter, I 
feel, is a much better option, as simply including it as part 
of the Geronimo project itself provides a much stronger 
integration statement to the existing community.  So the Apache 
Geronimo project would include not only the development of the 
server itself but also the tooling that goes along with it.


So the proposal is that we create a separate branch for tooling 
in subversion and host the source there.  The next step would be 
to provide the build and packaging infrastructure to go around 
it, and to be able to pull down the dependencies needed to build, 
which include Eclipse, WTP, and of course the Geronimo runtime itself.


Thoughts, objections, comments???

Thank you,

Sachin.












--
Geir Magnusson Jr  +1-203-665-6437
[EMAIL PROTECTED]




Re: Geronimo tooling donation

2005-08-02 Thread Donald Woods
+1 on hosting the Eclipse tooling for Geronimo, but
wouldn't it be more useful for everyone if it were
integrated into the main trunk, so it can be built and
shipped with a given server level for runtime
application deployment and debugging?

-Donald




Re: Geronimo tooling donation

2005-08-02 Thread sissonj

I agree it provides a stronger integration statement having the 
tooling as part of the Geronimo project, e.g.:

* Tooling is released at the same time as Geronimo, so people can 
start playing with a new release immediately rather than waiting for 
the tooling project to catch up.
* Geronimo documentation can cover tool usage
* Single place to report problems
* Chance to bring more users to the Geronimo web site

I vote that it be hosted at geronimo.apache.org if possible.

John

Sachin Patel [EMAIL PROTECTED] wrote on 30/07/2005
11:28:56 AM:

 


Re: Geronimo tooling donation

2005-08-02 Thread Geir Magnusson Jr.


On Aug 2, 2005, at 8:50 AM, Donald Woods wrote:


+1 on hosting the Eclipse tooling for Geronimo, but
wouldn't it be more useful for everyone if it were
integrated into the main trunk, so it can be built and
shipped with a given server level for runtime
application deployment and debugging?


The place in the source tree isn't as important as making sure that 
we have it.  I'm sure that if we have it here, we'll certainly ensure 
that it's part of our distributions - we have a strong interest in 
ensuring that Geronimo has the best tooling we can provide.


(And maybe it will inspire IDEA-based tooling as well so I can use it :)

geir



-Donald


--- Geir Magnusson Jr. [EMAIL PROTECTED] wrote:

Does anyone else have any comments or objections before we go down this path?

geir

On Aug 1, 2005, at 5:29 PM, Sachin Patel wrote:

Great.  I'm assuming no one else has any strong objections.  If not, I'll be glad to start driving this.  I'm not sure about the licensing issues... I'll shoot off a note to the eclipse folks.

Thanks.

Geir Magnusson Jr. wrote:

I think it would be great to have it here.

Question - we would need to have it re-licensed under the Apache License v2.0.  Is that possible?

geir

On Jul 29, 2005, at 9:28 PM, Sachin Patel wrote:

Hello all,

It seems as if a decision needs to be made to determine the appropriate place to host the Eclipse tooling support for Geronimo, in particular the Geronimo Server Adapter that is currently in development in the Eclipse WTP project.

http://www-128.ibm.com/developerworks/library/os-gerplug/

As I've made enhancements to this Geronimo tooling support, I've been forced to pull in Geronimo runtime dependencies, for access not only to the Geronimo runtime itself but to JEE spec jars.

Discussions have arisen as I've been working with the Eclipse WTP team to push these changes in.  It is their belief that if this server adapter is pulling in third party jars then it falls under the category of being a full-fledged adapter that is too large to host and be a part of WTP.  The existing adapters for the other application servers that are included in WTP are lightweight and serve primarily as examples for basic support.  There has been some discussion planning to provide the JMX jars and a set of utilities as part of WTP that other server adapters can exploit.  However, this is not expected until WTP 1.5 in June 06 at the earliest.  This is not good for the Geronimo community.  We need an immediate place to host this source so the community can have the latest source to start using, opening bugs and feature requests, and contributing to it.  As everyone works hard toward the completion of Geronimo's first release it would be great if we had a good start for tooling support to go along with 1.0.

So the first decision we need to come to is: where should this be hosted?  The first option is creating a sourceforge project and building a secondary community around it.  The other is to host it on Apache itself as part of the Geronimo project.  The latter I feel is a much better option: by simply including it as part of the Geronimo project itself, it provides a much stronger integration statement to the existing community.  So the Apache Geronimo project would include not only the development of the server itself but also the tooling that goes along with it.

So the proposal is that we create a separate branch for tooling in subversion and host the source there.  The next step would be to provide the build and packaging infrastructure to go around it and to be able to pull down dependencies to build, which include Eclipse, WTP, and of course the Geronimo runtime itself.

Thoughts, objections, comments???

Thank you,

Sachin.

--
Geir Magnusson Jr
+1-203-665-6437
[EMAIL PROTECTED]











--
Geir Magnusson Jr  +1-203-665-6437
[EMAIL PROTECTED]




interop-server-plan.xml and izpack installer questions

2005-08-02 Thread sissonj

I noticed that the izpack installer
has an EJB/IIOP Configuration panel where the user can configure
things such as:

* Naming port
* EJB port
* IP addresses the server should accept
EJB Client connections from
* IIOP port
* ORB port
* CosNaming port

Even though I am prompted for Corba
config information, the org/apache/geronimo/InteropServer configuration
isn't started when Geronimo is started, which isn't intuitive. Should
we be starting the configurations that they configure in the installer?

Should the Interop config be optional
(have it as a pack you can select at the beginning) and the IIOP port,
ORB port and CosNaming port on a separate screen?

I also noticed that in the following
change, the interop server was removed from the assembly. Can anyone
give some more background on this?

Revision: 159233
Author: adc
Date: 10:58:39 PM, Monday, 28 March
2005
Message:
Temporarily turned off.

Modified : /geronimo/trunk/modules/assembly/maven.xml

Thanks,

John

Re: Clustering (long)

2005-08-02 Thread Jules Gosnell

Andy Piper wrote:


Hi Jules

At 05:37 AM 7/27/2005, Jules Gosnell wrote:

I agree on the SPoF thing - but I think you misunderstand my 
Coordinator arch. I do not have a single static Coordinator node, but 
a dynamic Coordinator role, into which a node may be elected. Thus 
every node is a potential Coordinator. If the elected Coordinator 
dies, another is immediately elected. The election strategy is 
pluggable, although it will probably end up being hardwired to 
oldest-cluster-member. The reason behind this is that relaying out 
your cluster is much simpler if it is done in a single vm. I 
originally tried to do it in multiple vms, each taking responsibility 
for pieces of the cluster, but if the vms views are not completely in 
sync, things get very hairy, and completely in sync is an expensive 
thing to achieve - and would introduce a cluster-wide single point of 
contention. So I do it in a single vm, as fast as I can, with fail 
over, in case that vm evaporates. Does that sound better than the 
scenario that you had in mind ?



This is exactly the hard computer science problem that you shouldn't 
be trying to solve if at all possible. It's hard because network 
partitions or hung processes (think GC) make it very easy for your 
colleagues to think you are dead when you do not share that view. The 
result is two processes who think they are the coordinator and anarchy 
can ensue (commonly called split-brain syndrome). I can point you at 
papers if you want, but I really suggest that you aim for an 
implementation that is independent of a central coordinator. Note that 
a central coordinator is necessary if you want to implement a 
strongly-consistent in-memory database, but this is not usually a 
requirement for session replication say.


http://research.microsoft.com/Lampson/58-Consensus/Abstract.html gives 
a good introduction to some of these things. I also presented at 
JavaOne on related issues, you should be able to download the 
presentation from dev2dev.bea.com at some point (not there yet - I 
just checked).


OK - I will have a look at these papers and reconsider... perhaps I can 
come up with some sort of fractal algorithm which recursively breaks 
down the cluster into subclusters each of which is capable of doing 
likewise to itself and then  layout the buckets recursively via this 
metaphor... - this would be much more robust, as you point out, but, I 
think, a more complicated architecture. I will give it some serious 
thought. Have you any suggestions/papers as to how you might do 
something like this in a distributed manner, bearing in mind that, as a 
node joins, some existing nodes will see it as having joined and some 
will not yet have noticed (and vice-versa on leaving)?




The Coordinator is not there to support session replication, but 
rather the management of the distributed map (map of which a few 
buckets live on each node) which is used by WADI to discover very 
efficiently whether a session exists and where it is located. This 
map must be rearranged, in the most efficient way possible, each time 
a node joins or leaves the cluster.



Understood. Once you have a fault-tolerant singleton coordinator you 
can solve lots of interesting problems; it's just hard and often not 
worth the effort or the expense (typical implementations involve HA HW 
or an HA DB or at least 3 server processes).


Since I am only currently using the singleton coordinator for bucket 
arrangement, I may just live with it for the moment, in order to move 
forward, but make a note to replace it and start background threads on 
how that might be achieved...




Replication is NYI - but I'm running a few mental background threads 
that suggest that an extension to the index will mean that it 
associates the session's id not just to its current location, but 
also to the location of a number of replicants. I also have ideas on 
how a session might choose nodes into which it will place its 
replicants and how I can avoid the primary session copy ever being 
colocated with a replicant (potential SPoF - if you only have one 
replicant), etc...



Right definitely something you want to avoid.

Yes, I can see that happening - I have an improvement (NYI) to WADI's 
evacuation strategy (how sessions are evacuated when a node wishes to 
leave). Each session will be evacuated to the node which owns the 
bucket into which its id hashes. This is because colocation of the 
session with the bucket allows many messages concerned with its future 
destruction and relocation to be optimised away. Future requests 
falling elsewhere but needing this session should, in the most 
efficient case, be relocated to this same node; otherwise the 
session may be relocated, but at a cost...
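The bucket-owner lookup described above can be sketched in Java. This is an illustrative assumption about the scheme (a stable hash from session id to bucket, each bucket assigned an owning node), not WADI's actual code; the class and method names are hypothetical:

```java
import java.nio.charset.StandardCharsets;

// Sketch only: locate a session's owner by hashing its id into a fixed
// set of buckets, where each bucket is owned by exactly one node.
public class BucketMap {
    private final int numBuckets;
    private final String[] bucketOwner; // bucketOwner[b] = node owning bucket b

    public BucketMap(int numBuckets) {
        this.numBuckets = numBuckets;
        this.bucketOwner = new String[numBuckets];
    }

    // Stable mapping: the same session id always hashes to the same bucket,
    // so any node can compute which bucket (and hence which owner) to ask.
    public int bucketFor(String sessionId) {
        int h = 0;
        for (byte b : sessionId.getBytes(StandardCharsets.UTF_8)) {
            h = 31 * h + (b & 0xff);
        }
        return Math.floorMod(h, numBuckets); // floorMod keeps the index non-negative
    }

    public void assign(int bucket, String node) {
        bucketOwner[bucket] = node;
    }

    public String ownerOf(String sessionId) {
        return bucketOwner[bucketFor(sessionId)];
    }
}
```

Because the mapping is deterministic, any node can compute a session's bucket locally and only needs the (small) bucket-to-owner table to find where the session lives.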



How do you relocate the request? Many HW load-balancers do not support 
this (or else it requires using proprietary APIs), so you probably 
have to count on

moving sessions in the normal failover case.


If I can squeeze the behaviour 

Re: interop-server-plan.xml and izpack installer questions

2005-08-02 Thread David Jencks


On Aug 2, 2005, at 6:35 AM, [EMAIL PROTECTED] wrote:



I noticed that the izpack installer has an EJB/IIOP Configuration 
panel where the user can configure things such as:


* Naming port
* EJB port
* IP addresses the server should accept EJB Client connections from
* IIOP port
* ORB port
* CosNaming port

Even though I am prompted for Corba config information, the 
org/apache/geronimo/InteropServer configuration isn't started when 
Geronimo is started, which isn't intuitive.  Should we be starting the 
configurations that they configure in the installer?


The org/apache/geronimo/InteropServer relates to the code in the 
geronimo interop module, which we aren't using at the moment.  The 
actual CORBA support is entirely in openejb and uses the Sun orb.


Should the Interop config be optional (have it as a pack you can 
select at the beginning) and the IIOP port, ORB port and CosNaming 
port on a separate screen?


I think it would probably be appropriate to put the openejb corba 
support in a separate corba module but it is most likely to be a 
separate openejb corba module.  At that time making it optional seems 
reasonable.  I doubt this will happen before 1.0


I also noticed that in the following change, the interop server was 
removed from the assembly.  Can anyone give some more background on 
this?


As noted above, we aren't using it for anything.  The generated code 
(using the IDL compiler) is now in a spec module.


thanks
david jencks


Revision: 159233
Author: adc
Date: 10:58:39 PM, Monday, 28 March 2005
Message:
Temporarily turned off.

Modified : /geronimo/trunk/modules/assembly/maven.xml

Thanks,

John


Re: Issue - configuring the binary distribution

2005-08-02 Thread Alan D. Cabrera

On 8/1/2005 4:15 PM, David Jencks wrote:



On Aug 1, 2005, at 4:08 PM, Jeff Genender wrote:




Aaron Mulder wrote:


I want to provide the necessary features in the web console to
handle the stuff that a user is likely to want to change.



Would this include the ability to add GBeans as well as configure 
existing ones?



So far I am really against adding gbeans to existing configurations.  
I don't have a problem with the web console generating entirely new 
configurations, although I doubt it is all that useful.  My opinions 
can always be argued against :-)


I feel the same way.


Regards,
Alan





Re: Issue - configuring the binary distribution

2005-08-02 Thread Alan D. Cabrera




On 8/1/2005 4:30 PM, Aaron Mulder wrote:

  On Mon, 1 Aug 2005, David Jencks wrote:
  
  
So far I am really against adding gbeans to existing configurations.  I 
don't have a problem with the web console generating entirely new 
configurations, although I doubt it is all that useful.  My opinions 
can always be argued against :-)

  
  
	It seems like Jeremy and I came to a mutually satisfactory
conclusion where (and I'm summarizing in haste) altering the configuration
(including adding/removing) would be possible, but it would also be
possible to flag specific configurations as not alterable and give them a
specific version number and all.  I think that's the best of both worlds
-- if you want it editable, you use the standard behavior.  If you want it
locked down so you can guarantee that a deployment performed on machine X
will work when exported and imported into machine Y, then you freeze the
configurations you're dealing with and build machines X and Y from those.

	I will apply a little elbow grease to getting David J on board 
this week.  :)
  

This sounds interesting to me.


Regards,
Alan





MissingDependencyException - tmporg-org-1.0-SNAPSHOT.jar not found in repo

2005-08-02 Thread Michael Malgeri

Hi,

I did a full check out
of Geronimo and I'm getting a failure which says, MissingDependencyException:
uri tmporb/jars/tmporg-orb-1.0-SNAPSHOT.jar not found in repository.
But the tmporb jar is definitely in my repo. Any ideas as to what might
be wrong? Another developer says he's getting the same error.

Michael Malgeri
Mgr Gluecode Client Technical Services
PHONE: 310-536-8355 x 14
FAX: 310-536-9062
CELLULAR: 310-704-6403

RE: MissingDependencyException - tmporg-org-1.0-SNAPSHOT.jar not found in repo

2005-08-02 Thread Whitlock, Jeremy x66075
Michael,
I've had that problem for a week now.  I'm not sure what the problem is
but some others think it may be openejb related.  I've not gotten to the
bottom of it but if I do, I'll let you know.  If you figure it out before
hearing from me, let me know.  :)  Good to see that I'm not alone with this
issue.  Good luck and take care, Jeremy

-Original Message-
From: Michael Malgeri [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 02, 2005 3:12 PM
To: dev@geronimo.apache.org
Subject: MissingDependencyException - tmporg-org-1.0-SNAPSHOT.jar not found
in repo



Hi, 

I did a full check out of Geronimo and I'm getting a failure which says,
MissingDependencyException: uri tmporb/jars/tmporg-orb-1.0-SNAPSHOT.jar not
found in repository. But the tmporb jar is definitely in my repo. Any ideas
as to what might be wrong? Another developer says he's getting the same
error. 

Michael Malgeri
Mgr Gluecode Client Technical Services
PHONE: 310-536-8355 x 14
FAX: 310-536-9062
CELLULAR: 310-704-6403

_
This message and any attachments are intended only for the use of the addressee 
and
may contain information that is privileged and confidential. If the reader of 
the 
message is not the intended recipient or an authorized representative of the
intended recipient, you are hereby notified that any dissemination of this
communication is strictly prohibited. If you have received this communication in
error, please notify us immediately by e-mail and delete the message and any
attachments from your system.

Re: possible bug/enhancement - Setting the log level

2005-08-02 Thread Joe Bohn


I'd like to go ahead and create a JIRA issue for this problem and then 
submit the fix to at least get this working as I believe it was 
originally intended to work: that being, the last update wins, whether from 
the Portlet or the file itself.  That way things at least make 
some sense to an end user until we decide upon the ultimate end-user 
experience that we would like to create for this and other config settings.


Details of the problem:
Each update action from the LogManagerPortlet invokes the appropriate 3 
methods on the SystemLog without checking for actual changes in the 
submitted values.   For the refresh interval this isn't a problem 
because Log4JService checks itself to ensure the period has changed 
before updating the value.  For the logging level this also isn't a 
problem because there is no ill effect to updating the level with the 
exact same level.   However, when setting the ConfigFileName the 
Log4JService doesn't check the value and assumes that there really is a 
new file and therefore sets the lastChanged value to -1 to ensure that 
the file values will take effect on the next timer refresh.  The result 
is that any change (including only a change to the logging level) from 
the console also guarantees that the file settings will be refreshed.


Before I create the issue and submit a patch I'd like to see if anybody 
has any strong opinions on how this should be fixed.  I see the 
following possibilities:
1) Change the LogManagerPortlet to ensure that the name or level has 
changed before updating the SystemLog (Log4JService) ... I'd also ensure 
that we check for changes in the refresh period as well just for good 
measure. 
2) Change the Log4JService to always check for an actual change to the 
level and/or the configPathName before taking any real action (just as 
it does for refresh interval).

3) Both of the above.

Of these I prefer #3 since it ensures that the same mistake won't happen 
again from something like a command line interface when interacting with 
the logging service and also cleans up the console code.  
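A minimal sketch of option #2, using a hypothetical holder class rather than the real Log4jService source: the setter ignores a no-op update instead of unconditionally resetting lastChanged to -1.

```java
// Sketch only (hypothetical class, not Geronimo's Log4jService): guard the
// config-file setter so an unchanged name does not force a reload.
public class LogConfigHolder {
    private String configFileName;
    private long lastChanged;

    public synchronized void setConfigFileName(String newName) {
        if (newName == null || newName.equals(configFileName)) {
            return; // no real change: don't force a reload on the next poll
        }
        configFileName = newName;
        lastChanged = -1; // force the poller to re-read the new file
    }

    // Called by the (hypothetical) refresh poller after applying the file.
    public synchronized void markRefreshed(long timestamp) {
        lastChanged = timestamp;
    }

    public synchronized String getConfigFileName() { return configFileName; }
    public synchronized long getLastChanged() { return lastChanged; }
}
```

With this guard, a console update that only touches the log level would no longer have the side effect of re-applying the whole config file.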


Any comments before I create the JIRA and submit a patch?


Dain Sundstrom wrote:


On Jul 29, 2005, at 8:21 PM, Aaron Mulder wrote:


On Fri, 29 Jul 2005, Dain Sundstrom wrote:


Before Aaron got his hands on it you used to be able to modify the
log4j configuration file via the management interface, but it looks
like he removed that feature.  Aaron is a lot more clever than I am,
so hopefully he can come up with something better than I did :)



Now now, no crazy accusations.  I don't believe I've removed
anything, even the despised applet that lets you drop tables in the  
system

database.  :)



:)


I assumed the config file poller would only apply the config file
if it had been updated.  So that you could change things in the  
console,

and if you then altered the config file it would overwrite your  console
change, but if you just wait for the poller timeout it wouldn't revert
back to the config file version.  Is that not correct?  I'm OK with  the
last change wins style.  I wouldn't be too happy if the file poller
automatically reverted anything you did in the console.



I actually was talking about the log service not the portlet.  The  
service used to have a setConfiguration(String configuration) method  
that would overwrite the file. I know you love dangerous stuff like  
that :)  The code is here:


http://svn.apache.org/viewcvs.cgi/geronimo/trunk/modules/system/src/java/org/apache/geronimo/system/logging/log4j/Log4jService.java?rev=57150&view=markup


I'm really not sure what the best thing to do here is.  Another thing to 
think about is whether we want these changes to be persistent.



But the truth is, I don't think changing the overall system log
level is all that useful -- I'd much rather see a feature that let you
change the threshold for an individual appender (for example, to  
turn up

or down the console output).  I'm not sure about whether that should
rewrite your config file or defaults for next time.  I guess maybe it
should update the default starting console log level, which as far  as I
know is coming from a static variable right now.  We'll have to  
think that

through -- we'd want to automatically disable the progress bar if the
level was below WARN, for example.



I agree that we really need more specific control than global.  I 
just have no idea what to do here; good thing it isn't my problem 
anymore :)


-dain





--
Joe Bohn 

[EMAIL PROTECTED] 
He is no fool who gives what he cannot keep, to gain what he cannot lose.   -- Jim Elliot




Re: MissingDependencyException - tmporg-org-1.0-SNAPSHOT.jar not found in repo

2005-08-02 Thread David Jencks
I suspect the problem is that you are not using an appropriate version 
of openejb.  M4 is now using tmporg-foo-1.0-DEAD and head is not 
using tmporb at all.


david jencks

On Aug 2, 2005, at 2:11 PM, Whitlock, Jeremy x66075 wrote:


Michael,
I've had that problem for a week now.  I'm not sure what the 
problem is
but some others think it may be openejb related.  I've not gotten to 
the
bottom of it but if I do, I'll let you know.  If you figure it out 
before
hearing from me, let me know.  :)  Good to see that I'm not alone with 
this

issue.  Good luck and take care, Jeremy

-Original Message-
From: Michael Malgeri [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 02, 2005 3:12 PM
To: dev@geronimo.apache.org
Subject: MissingDependencyException - tmporg-org-1.0-SNAPSHOT.jar not 
found

in repo



Hi,

I did a full check out of Geronimo and I'm getting a failure which 
says,
MissingDependencyException: uri 
tmporb/jars/tmporg-orb-1.0-SNAPSHOT.jar not
found in repository. But the tmporb jar is definitely in my repo. Any 
ideas

as to what might be wrong? Another developer says he's getting the same
error.

Michael Malgeri
Mgr Gluecode Client Technical Services
PHONE: 310-536-8355 x 14
FAX: 310-536-9062
CELLULAR: 310-704-6403





Re: Clustering (long)

2005-08-02 Thread Jules Gosnell
I've had a look at the Lampson paper, but didn't take it all in on the 
first pass - I think it will need some serious concentration. The Paxos 
algorithm looks interesting, I will definitely pursue this avenue.


I've also given a little thought to exactly why I need a Coordinator and 
how Paxos might be used to replace it. My use of a Coordinator and plans 
for its future do not actually seem that far from Paxos, on a 
preliminary reading.


Given that WADI currently uses a distributed map of 
sessionId:sessionLocation, that this distribution is achieved by sharing 
out responsibility for the set number of buckets that comprise the map 
roughly evenly between the cluster members and that this is currently my 
most satisfying design, I can break my problem space (for bucket 
arrangement) down into 3 basic cases :


1) Node joins
2) Node leaves in controlled fashion
3) Node dies

If the node under discussion is the only cluster member, then no bucket 
rearrangement is necessary - this node will either create or destroy the 
full set of buckets. I'll leave this set of subcases as trivial.


1)  The joining node will need to assume responsibility for a number of 
buckets. If buckets-per-node is to be kept roughly the same for every 
node, it is likely that the joining node will require transfer of a 
small number of buckets from every current cluster member i.e. we are 
starting a bucket rearrangement that will involve every cluster member 
and only need be done if the join is successful. So, although we wish to 
avoid an SPoF, if that SPoF turns out to be the joining node, then I 
don't see it as a problem. If the joining node dies, then we no longer 
have to worry about rearranging our buckets (unless we have lost some 
that had already been transferred - see (3)). Thus the joining node may 
be used as a single Coordinator/Leader for this negotiation without fear 
of the SPoF problem. Are we on the same page here ?
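Case (1) amounts to a simple fair-share computation. A hedged sketch (my own illustration, not WADI code) of how many buckets each current member would hand to the joiner so that buckets-per-node stays roughly even:

```java
// Sketch only: when a node joins a cluster of currentMembers nodes sharing
// totalBuckets buckets, the joiner should end up with roughly
// totalBuckets / (currentMembers + 1) buckets, taken in small numbers
// from every current member.
public class Rebalance {
    // Returns how many buckets each of the current members hands over.
    public static int[] transfersOnJoin(int totalBuckets, int currentMembers) {
        int newClusterSize = currentMembers + 1;
        int target = totalBuckets / newClusterSize; // joiner's fair share
        int[] give = new int[currentMembers];
        int base = target / currentMembers;         // even slice from each member
        int remainder = target % currentMembers;    // spread any leftover
        for (int i = 0; i < currentMembers; i++) {
            give[i] = base + (i < remainder ? 1 : 0);
        }
        return give;
    }
}
```

For example, with 120 buckets and 3 current members, the joiner's target is 30 buckets, taken as 10 from each member; every member then owns 30.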


2) The same argument may be applied in reverse to a node leaving in a 
controlled fashion. It will wish to evacuate its buckets roughly equally 
to all remaining cluster members. If it shuts down cleanly, this would 
form part of its shutdown protocol. If it dies before or during the 
execution of this protocol then we are back at (3), if not, then the 
SPoF issue may again be put to one side.


3) This is where things get tricky :-) Currently WADI has, for the sake 
of simplicity, one single algorithm / thread / point-of-failure which 
recalculates a complete bucket arrangement if it detects (1), (2) or 
(3). It would be simple enough to offload the work done for (1) and (2) 
to the node joining/leaving and this should reduce wadi's current 
vulnerability, but we still need to deal with catastrophic failure. 
Currently WADI rebuilds the missing buckets by querying the cluster for 
the locations of any sessions that fall within them, but it could 
equally carry a replicated backup and dust it off as part of this 
procedure. It's just a trade-off between work done up front and work 
done in exceptional circumstance... This is the place where the Paxos 
algorithm may come in handy - bucket recomposition and rearrangement. I 
need to give this further thought. For the immediate future, however, I 
think WADI will stay with a single Coordinator in this situation, which 
fails-over if http://activecluster.codehaus.org says it should - I'm 
delegating the really thorny problem to James :-). I agree with you that 
this is an SPoF and that WADI's ability to recover from failure here 
depends directly on how we decide if a node is alive or dead - a very 
tricky thing to do.
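The query-based rebuild described for case (3) could look roughly like this; the Peer interface and all names here are hypothetical, for illustration only:

```java
import java.util.*;

// Sketch only: when a node dies and its buckets are lost, the new owner
// rebuilds each bucket by asking every surviving node which of its local
// sessions hash into that bucket.
public class BucketRebuilder {
    interface Peer {
        // Session ids held locally by this peer that fall into the given bucket.
        Collection<String> sessionsInBucket(int bucket);
        String name();
    }

    // Rebuild one lost bucket's sessionId -> location index from peer answers.
    public static Map<String, String> rebuild(int bucket, List<Peer> survivors) {
        Map<String, String> index = new HashMap<>();
        for (Peer peer : survivors) {
            for (String id : peer.sessionsInBucket(bucket)) {
                index.put(id, peer.name());
            }
        }
        return index;
    }
}
```

This is the "work done in exceptional circumstance" side of the trade-off: nothing is replicated up front, but recovery costs one cluster-wide query per lost bucket.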


In conclusion then, I think that we have usefully identified a weakness 
that will become more relevant as the rest of WADI's features mature. 
The Lampson paper mentioned describes an algorithm for allowing nodes to 
reach a consensus on actions to be performed, in a redundant manner with 
no SPoF, and I shall consider how this might replace WADI's current 
single Coordinator, whilst also looking at performing other Coordination 
on joining/leaving nodes where its failure, coinciding with that of its 
host node, will be irrelevant, since the very condition that it was 
intended to resolve has ceased to exist.


How does that sound, Andy?  Do you agree with my thoughts on (1) and (2)? 
This is great input - thanks,



Jules


Jules Gosnell wrote:


Andy Piper wrote:


Hi Jules

At 05:37 AM 7/27/2005, Jules Gosnell wrote:

I agree on the SPoF thing - but I think you misunderstand my 
Coordinator arch. I do not have a single static Coordinator node, 
but a dynamic Coordinator role, into which a node may be elected. 
Thus every node is a potential Coordinator. If the elected 
Coordinator dies, another is immediately elected. The election 
strategy is pluggable, although it will probably end up being 
hardwired to oldest-cluster-member. The reason behind this is that 
relaying out your cluster is much simpler if it is done in a single 

Re: Clustering (long)

2005-08-02 Thread Jules Gosnell

hmmm...

now I'm wondering about my solutions to (1) and (2) - if more than one 
node tries to join or leave at the same time I may be in trouble - so it 
may be safer to go straight to (3) for all cases...


more thought needed :-)

Jules



Jules Gosnell wrote:

I've had a look at the Lampson paper, but didn't take it all in on the 
first pass - I think it will need some serious concentration. The 
Paxos algorithm looks interesting, I will definitely pursue this avenue.


I've also given a little thought to exactly why I need a Coordinator 
and how Paxos might be used to replace it. My use of a Coordinator and 
plans for its future do not actually seem that far from Paxos, on a 
preliminary reading.


Given that WADI currently uses a distributed map of 
sessionId:sessionLocation, that this distribution is achieved by 
sharing out responsibility for the set number of buckets that comprise 
the map roughly evenly between the cluster members and that this is 
currently my most satisfying design, I can break my problem space (for 
bucket arrangement) down into 3 basic cases :


1) Node joins
2) Node leaves in controlled fashion
3) Node dies

If the node under discussion is the only cluster member, then no 
bucket rearrangement is necessary - this node will either create or 
destroy the full set of buckets. I'll leave this set of subcases as 
trivial.


1)  The joining node will need to assume responsibility for a number 
of buckets. If buckets-per-node is to be kept roughly the same for 
every node, it is likely that the joining node will require transfer 
of a small number of buckets from every current cluster member i.e. we 
are starting a bucket rearrangement that will involve every cluster 
member and only need be done if the join is successful. So, although 
we wish to avoid an SPoF, if that SPoF turns out to be the joining 
node, then I don't see it as a problem. If the joining node dies, then 
we no longer have to worry about rearranging our buckets (unless we 
have lost some that had already been transferred - see (3)). Thus the 
joining node may be used as a single Coordinator/Leader for this 
negotiation without fear of the SPoF problem. Are we on the same page 
here ?


2) The same argument may be applied in reverse to a node leaving in a 
controlled fashion. It will wish to evacuate its buckets roughly 
equally to all remaining cluster members. If it shuts down cleanly, 
this would form part of its shutdown protocol. If it dies before or 
during the execution of this protocol then we are back at (3), if not, 
then the SPoF issue may again be put to one side.


3) This is where things get tricky :-) Currently WADI has, for the 
sake of simplicity, one single algorithm / thread / point-of-failure 
which recalculates a complete bucket arrangement if it detects (1), 
(2) or (3). It would be simple enough to offload the work done for (1) 
and (2) to the node joining/leaving and this should reduce wadi's 
current vulnerability, but we still need to deal with catastrophic 
failure. Currently WADI rebuilds the missing buckets by querying the 
cluster for the locations of any sessions that fall within them, but 
it could equally carry a replicated backup and dust it off as part of 
this procedure. It's just a trade-off between work done up front and 
work done in exceptional circumstance... This is the place where the 
Paxos algorithm may come in handy - bucket recomposition and 
rearrangement. I need to give this further thought. For the immediate 
future, however, I think WADI will stay with a single Coordinator in 
this situation, which fails-over if http://activecluster.codehaus.org 
says it should - I'm delegating the really thorny problem to James 
:-). I agree with you that this is an SPoF and that WADI's ability to 
recover from failure here depends directly on how we decide if a node 
is alive or dead - a very tricky thing to do.


In conclusion then, I think that we have usefully identified a 
weakness that will become more relevant as the rest of WADI's features 
mature. The Lampson paper mentioned describes an algorithm for 
allowing nodes to reach a consensus on actions to be performed, in a 
redundant manner with no SPoF and I shall consider how this might 
replace WADI's current single Coordinator, whilst also looking at 
performing other Coordination on joining/leaving nodes where its 
failure, coinciding with that of its host node, will be irrelevant, 
since the very condition that it was intended to resolve has ceased to 
exist.


How does that sound, Andy?  Do you agree with my thoughts on (1) and (2)? 
This is great input - thanks,



Jules


Jules Gosnell wrote:


Andy Piper wrote:


Hi Jules

At 05:37 AM 7/27/2005, Jules Gosnell wrote:

I agree on the SPoF thing - but I think you misunderstand my 
Coordinator arch. I do not have a single static Coordinator node, 
but a dynamic Coordinator role, into which a node may be elected. 
Thus every node is a potential Coordinator. If the 

How I can get involved in the admin web console?

2005-08-02 Thread Daniel Alejandro Rangel Gavia
Hello, I'm interested in collaborating with the web console, how I can help?
		  
Do You Yahoo!? 
La mejor conexión a Internet y 2GB extra a tu correo por $100 al mes. http://net.yahoo.com.mx 


Naming schema port elements

2005-08-02 Thread Aaron Mulder
So our naming schema, when dealing with web services, has an 
unfortunate overlap with two different port elements that are subtly 
different:

<service-ref>
  <service-ref-name/>
  <service-completion>
    <service-name/>
    <port>              (def #1, portCompletionType)
      (contents of port #2)
      <binding-name/>
    </port>
  </service-completion>
  <port>                (def #2)
    ...
  </port>
</service-ref>

In other words, the first port element contains all the same 
stuff as the second port element plus one extra element.

I think it would be preferable to have what's currently the first 
port look like this:

<port-completion>
  <port .../>
  <binding-name/>
</port-completion>

That way both port elements would be identical.

Any objections to making this change in M5?

Thanks,
Aaron


Re: How I can get involved in the admin web console?

2005-08-02 Thread Aaron Mulder
Start by checking out the code for Geronimo, building the server, 
building the web console, and getting the console running in the server.  
If you need help with this, let us know.

Next, think about which portlets you want to work on, and speak up 
on the list.  Some of them are implemented but need updating; many of them 
are not implemented yet.  There's plenty of work to go around.  :)

Aaron

On Tue, 2 Aug 2005, Daniel Alejandro Rangel Gavia wrote:
 Hello, I'm interested in collaborating with the web console, how I can
 help?


Re: How I can get involved in the admin web console?

2005-08-02 Thread Daniel Alejandro Rangel Gavia
I have geronimo-1.0-M3-src; is this the latest one? 

There is the folder geronimo-1.0-M3/modules/console-web... 
Do I need to build this with Maven, or what? 

daniel rangel

Aaron Mulder [EMAIL PROTECTED] wrote:
 Start by checking out the code for Geronimo, building the server,
 building the web console, and getting the console running in the server.
 If you need help with this, let us know.

 Next, think about which portlets you want to work on, and speak up
 on the list. Some of them are implemented but need updating; many of them
 are not implemented yet. There's plenty of work to go around. :)

 Aaron

 On Tue, 2 Aug 2005, Daniel Alejandro Rangel Gavia wrote:
  Hello, I'm interested in collaborating with the web console, how I can
  help?
		  


Re: How I can get involved in the admin web console?

2005-08-02 Thread Aaron Mulder
Well, you have two problems:

1) M4 is in the process of being released right now, and M3 is really old 
and should not be used.

2) That doesn't really matter because you can't use a milestone to work on 
the web console anyway.  You need to check out the source code from 
Subversion (see http://geronimo.apache.org/svn.html) and then build it 
from there.  (First, the console source is not included in M3 or M4 
because it's more recent than that, and second, you'll need to be able to 
run svn diff in order to contribute your changes.)

Aaron

On Tue, 2 Aug 2005, Daniel Alejandro Rangel Gavia wrote:

 I have this: geronimo-1.0-M3-src, is this the last one? 
  
 there are the folder: geronimo-1.0-M3/modules/console-web... 
 I need build this with maven, or what? 
  
 daniel rangel
  
 
 Aaron Mulder [EMAIL PROTECTED] wrote:
 Start by checking out the code for Geronimo, building the server, 
 building the web console, and getting the console running in the server. 
 If you need help with this, let us know.
 
 Next, think about which portlets you want to work on, and speak up 
 on the list. Some of them are implemented but need updating; many of them 
 are not implemented yet. There's plenty of work to go around. :)
 
 Aaron
 
 On Tue, 2 Aug 2005, Daniel Alejandro Rangel Gavia wrote:
  Hello, I'm interested in collaborating with the web console, how I can
  help?
 


Security Role Mapping Authentication

2005-08-02 Thread Aaron Mulder
So in web apps, the developer provides a list of roles in web.xml,
and then we let you map any principals from any Geronimo security realms
to the J2EE roles using the security element in geronimo-web.xml (it's 
quite possible to allow principals from multiple realms).

However, on top of that, there's a security-realm-name element 
in geronimo-web.xml, which appears to be used by Jetty and not Tomcat.  
This appears to be used to set the JettyJAASRealm on the 
JettyWebAppContext (see JettyWebAppContext.java:257).

I'm assuming that when you log in to Jetty, it authenticates you 
against the security realm named in the security-realm-name element, and 
then authorizes you against the mappings performed in the security 
element.  So logically, it wouldn't help you to include principals from 
any other realm in the security element, but we don't enforce that in 
the schemas.

If that's true, then what realm does Tomcat authenticate against?  
And what realm do EJBs authenticate against?  Both Tomcat and EJBs appear
to only use the security element (Tomcat ignores the
security-realm-name element AFAICT and openejb-jar.xml doesn't have
one).
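
For reference, a hedged sketch of how the two pieces sit together in geronimo-web.xml (the structure is abbreviated, the realm and principal names are made up, and the principal class is an assumption):

```xml
<!-- Used by Jetty to pick the realm to authenticate against -->
<security-realm-name>my-realm</security-realm-name>

<!-- Maps realm principals to the J2EE roles declared in web.xml -->
<security>
  <role-mappings>
    <role role-name="admin">
      <realm realm-name="my-realm">
        <principal name="administrators"
                   class="org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal"/>
      </realm>
    </role>
  </role-mappings>
</security>
```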

Thanks,
Aaron


Re: Attacking M4 - Final stuff

2005-08-02 Thread David Blevins
OK, brand new binaries have been published from the latest M4 tag:

  http://cvs.apache.org/dist/geronimo/unstable/1.0-M4/

If there are no more negative ones and the binaries pass the TCK, we are good 
to go.

If you feel the urge to negative one something, please just fix it if you have 
the time.

-David


On Mon, Aug 01, 2005 at 02:36:48PM -0500, David Blevins wrote:
 Alright, we have closed all the JIRA issues and successfully ran all
 the TCK tests.
 
 Time for the final stuff
 
 On Mon, Jul 25, 2005 at 04:59:47PM -0500, David Blevins wrote:
  THE FINAL STUFF
  
  We have to run the TCK on the final binary, which means we need to  
  allow for a couple days of just testing the actual geronimo-1.0- 
  M4.zip that people will download.  To even get that far, we need to:
  
1. [notes] Create release notes for M4 and check them into the QA  
  branch
2. [svn] Close the QA branch and copy it to tags/v1_0_M4
3. [dist] Cut and sign the binary and source distributions
  
 
 Aaron cleared up 1.  I'm going to take care of 2 and 3.  We should
 have binaries up shortly for voting and one final TCK sweep.
 
 -David


Re: svn commit: r226882 - in /geronimo: branches/v1_0_M4-QA/ tags/v1_0_M4/

2005-08-02 Thread David Blevins
On Mon, Aug 01, 2005 at 07:16:23PM -0400, Geir Magnusson Jr. wrote:
 no, you moved...
 
 we never want to move branches, but copy to make tags, and never  
 modify the tags.  That way, if we need to keep going on the branch,  
 we have it.

We agreed on this procedure a month ago.

 On Jul 4, 2005, at 6:38 PM, Jeremy Boynes wrote:
 
 So basically,
 * create a branch now, say 1.0-M4-prep
 * do the stuff we talking about now on that branch
 * cut the final M4 distro
 * drop the 1.0-M4-prep branch


-David



 
 geir
 
 On Aug 1, 2005, at 4:54 PM, [EMAIL PROTECTED] wrote:
 
 Author: dblevins
 Date: Mon Aug  1 13:54:20 2005
 New Revision: 226882
 
 URL: http://svn.apache.org/viewcvs?rev=226882view=rev
 Log:
 Making the M4 tag from the branch.
 
 Added:
 geronimo/tags/v1_0_M4/
   - copied from r226881, geronimo/branches/v1_0_M4-QA/
 Removed:
 geronimo/branches/v1_0_M4-QA/
 
 
 
 -- 
 Geir Magnusson Jr  +1-203-665-6437
 [EMAIL PROTECTED]
 


Re: svn commit: r226882 - in /geronimo: branches/v1_0_M4-QA/ tags/v1_0_M4/

2005-08-02 Thread Aaron Mulder
I think it makes a lot more sense to work in the branch and, when
we're ready, copy the current state of the branch as the tag.  I don't
think we should move the branch to make it the tag.  If I agreed to the
opposite a month ago, the mess the other day convinced me of the error
of my ways.
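
A minimal sketch of the copy-not-move flow (the repository URL, revision, and paths below are illustrative):

```shell
# Copy the branch at a known-good revision to create the tag;
# the branch itself is left in place for further work.
svn copy -r 226881 \
  https://svn.apache.org/repos/asf/geronimo/branches/v1_0_M4-QA \
  https://svn.apache.org/repos/asf/geronimo/tags/v1_0_M4 \
  -m "Tag 1.0-M4 from the QA branch at r226881"
```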

Aaron

On Tue, 2 Aug 2005, David Blevins wrote:
 On Mon, Aug 01, 2005 at 07:16:23PM -0400, Geir Magnusson Jr. wrote:
  no, you moved...
  
  we never want to move branches, but copy to make tags, and never  
  modify the tags.  That way, if we need to keep going on the branch,  
  we have it.
 
 We agreed on this procedure a month ago.
 
  On Jul 4, 2005, at 6:38 PM, Jeremy Boynes wrote:
  
  So basically,
  * create a branch now, say 1.0-M4-prep
  * do the stuff we talking about now on that branch
  * cut the final M4 distro
  * drop the 1.0-M4-prep branch
 
 
 -David
 
 
 
  
  geir
  
  On Aug 1, 2005, at 4:54 PM, [EMAIL PROTECTED] wrote:
  
  Author: dblevins
  Date: Mon Aug  1 13:54:20 2005
  New Revision: 226882
  
  URL: http://svn.apache.org/viewcvs?rev=226882view=rev
  Log:
  Making the M4 tag from the branch.
  
  Added:
  geronimo/tags/v1_0_M4/
- copied from r226881, geronimo/branches/v1_0_M4-QA/
  Removed:
  geronimo/branches/v1_0_M4-QA/
  
  
  
  -- 
  Geir Magnusson Jr  +1-203-665-6437
  [EMAIL PROTECTED]
  
 


Re: Attacking M4 - Final stuff

2005-08-02 Thread David Blevins
On Mon, Aug 01, 2005 at 11:49:20PM -0700, David Jencks wrote:

 I see dblevins has posted source and binary tar.gzs etc but I have
 no idea how to produce them from a plain maven build.  I consider
 his publish_build.sh script unacceptable for releases because it
 modifies what is checked out from svn.

I'm happy to create scripts so that people don't have to spend time
typing commands, but am also very happy to see someone else implement
their ideal release procedure in Jelly, bash, Java, or anything
really.

As far as the script I've been using, I've created a new one and
removed tons of the parts that don't apply to official releases.

Again, if people are really unhappy with the scripts I've made for us
or they are unmaintainable or anything, I'm happy to remove them to
avoid the conflict.

 I'm going to start running tck tests on my build and hope we can 
 straighten out how to package stuff up later.  If there are problems 
 tomorrow I can always start over.

IIRC, we need to run the tests on the binaries that we intend to ship.

-David


Re: Security Role Mapping Authentication

2005-08-02 Thread Jeff Genender
Correct, Tomcat does not use the security-realm-name element from the 
geronimo-web.xml.


How it works is...

A Tomcat realm takes the name of the object it is associated with. 
Tomcat objects inherit Realms from the top down.  If a Realm is associated 
with an Engine, then the Host(s) and Context(s) inherit that Realm.  The 
same goes for Hosts: if a Realm is associated with a Host, then all 
Contexts under that Host inherit it.  Here is an example...


There is typically a Geronimo realm GBean that is created... let's use the 
example of the one in tomcat-config.xml.  Notice the realmName 
attribute is "Geronimo".


Then a TomcatRealm is attached at either the Engine, Host, or Context 
level.  In this instance we have the TomcatRealm attached to the server 
(i.e. the Engine).  Notice the Engine object in tomcat-config.xml has a 
name parameter of "Geronimo".  All Contexts under that Engine will 
associate themselves with the "Geronimo" realm name.  So this is server-wide.


If I wish to have a Context use its own specific realm, the realm's 
name is the context root/path name.  So say I have created an 
application that has a context root of "testme"; I can then attach a 
Realm object to it, and this Realm object will expect to find a realm 
called "testme".
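
A hedged, server.xml-style sketch of the inheritance just described (the Realm class name is an assumption for illustration):

```xml
<Engine name="Geronimo" defaultHost="localhost">
  <!-- Engine-level Realm: inherited by every Host and Context below -->
  <Realm className="org.example.SomeRealm"/>
  <Host name="localhost">
    <!-- A Realm declared inside the Context instead would override
         the Engine's Realm for the /testme application only -->
    <Context path="/testme" docBase="testme"/>
  </Host>
</Engine>
```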


This is how standard Tomcat realms work, because normally 
J2EE/JAAS uses a login.config file, where we declare our realms with 
login modules like this:


<name used by application to refer to this entry> {
    <LoginModule> <flag> <LoginModule options>;
    <optional additional LoginModules, flags and options>;
};
(See http://tinyurl.com/dz6bz for more info)
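
For instance, a concrete (hypothetical) entry following that grammar, with a made-up login module class:

```
testme {
    com.example.security.SampleLoginModule required debug=true;
};
```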

In Geronimo, since we don't use a login.config, we instead wire these 
up via two GBeans: a realm and a login module.  The application name 
really becomes the realm name in our world.  So to keep in line with the 
login.config configuration, we use the realmName of the 
GenericSecurityRealm matched up with the application name (or path of 
our Context).


It would not be too difficult to use the security-realm-name as an 
override at this point, but Tomcat has stated that setName() on the 
Realms is deprecated and will thus disappear in the future.  This does 
not preclude us rewriting the Realms, but it would break compatibility 
with the slew of Realm objects offered by Tomcat in the future.


I would suggest we examine why we use security-realm-name and why we 
don't follow the application-name paradigm that appears to be a standard.


Jeff

Aaron Mulder wrote:

So in web apps, the developer provides a list of roles in web.xml,
and then we let you map any principals from any Geronimo security realms
to the J2EE roles using the security element in geronimo-web.xml (it's 
quite possible to allow principals from multiple realms).


	However, on top of that, there's a security-realm-name element 
in geronimo-web.xml, which appears to be used by Jetty and not Tomcat.  
This appears to be used to set the JettyJAASRealm on the 
JettyWebAppContext (see JettyWebAppContext.java:257).


	I'm assuming that when you log in to Jetty, it authenticates you 
against the security realm named in the security-realm-name element, and 
then authorizes you against the mappings performed in the security 
element.  So logically, it wouldn't help you to include principals from 
any other realm in the security element, but we don't enforce that in 
the schemas.


	If that's true, then what realm does Tomcat authenticate against?  
And what realm do EJBs authenticate against?  Both Tomcat and EJBs appear

to only use the security element (Tomcat ignores the
security-realm-name element AFAICT and openejb-jar.xml doesn't have
one).

Thanks,
Aaron


Re: interop-server-plan.xml and izpack installer questions

2005-08-02 Thread sissonj

David Jencks [EMAIL PROTECTED] wrote on
03/08/2005 01:17:40 AM:

 
 On Aug 2, 2005, at 6:35 AM, [EMAIL PROTECTED] wrote:
 
 
  I noticed that the izpack installer has an EJB/IIOP Configuration

  panel where the user can configure things such as:
 
  * Naming port
  * EJB port
  * IP addresses the server should accept EJB Client connections
from
  * IIOP port
  * ORB port
  * CosNaming port
 
  Even though I am prompted for Corba config information, the 
  org/apache/geronimo/InteropServer configuration isn't started
when 
  Geronimo is started, which isn't intuitive. Should we be
starting the 
  configurations that they configure in the installer?
 
 The org/apache/geronimo/InteropServer relates to the code in the
 geronimo interop module, which we aren't using at the moment.  The
 actual CORBA support is entirely in openejb and uses the Sun orb.

Should we delete this plan to avoid users being confused, since the
plans are available to users who used the izpack installer in the
geronimo/installer-temp directory?

 
  Should the Interop config be optional (have it as a pack you can
  select at the beginning) and the IIOP port, ORB port and CosNaming
  port on a separate screen?
 
 I think it would probably be appropriate to put the openejb corba
 support in a separate corba module, but it is most likely to be a
 separate openejb corba module.  At that time making it optional seems
 reasonable.  I doubt this will happen before 1.0.

I'll raise a JIRA issue for 1.1 to separate the openejb
corba module, so we don't forget.

 
  I also noticed that in the following change, the interop server was
  removed from the assembly.  Can anyone give some more background on
  this?
 
 As noted above, we aren't using it for anything.  The generated code
 (using the IDL compiler) is now in a spec module.
 
 thanks
 david jencks
 
  Revision: 159233
  Author: adc
  Date: 10:58:39 PM, Monday, 28 March 2005
  Message:
  Temporarily turned off.
  
  Modified : /geronimo/trunk/modules/assembly/maven.xml
 
  Thanks,
 
  John


CORBA and M4

2005-08-02 Thread sissonj

Considering the M4 izpack installer's CORBA configuration
seems to be outdated (designed to be used with the interop plan), should we:

* remove the CORBA configuration information from the M4 izpack installer?
* have this as a known issue for M4?
* fix it so it does configure CORBA (see below)?

I noticed that openejb\modules\assembly\src\plan\j2ee-server-plan.xml
has some CORBA-related config information, e.g. ORBPort, that I can't find
in any of the geronimo\modules\assembly\src\plan files. 

Can someone give an overview of how CORBA is configured,
started and utilised in the M4 release?

Thanks,

John

This e-mail message and any attachments may contain confidential, proprietary
or non-public information. This information is intended solely for
the designated recipient(s). If an addressing or transmission error
has misdirected this e-mail, please notify the sender immediately and destroy
this e-mail. Any review, dissemination, use or reliance upon this
information by unintended recipients is prohibited. Any opinions
expressed in this e-mail are those of the author personally.

Patch for GERONIMO-794

2005-08-02 Thread Dondi Imperial
Can one of the people working on the console please take
a look at, and comment on, the attached patch for
http://issues.apache.org/jira/browse/GERONIMO-794 

Thanks

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


Patch for GERONIMO-794

2005-08-02 Thread Dondi Imperial
Sorry for the crosspost.  I accidentally sent this to
the user mailing list. :( Can someone take a look and
comment on the attached patch for
http://issues.apache.org/jira/browse/GERONIMO-794

TIA



[jira] Created: (GERONIMO-843) Move Packaging Plugin from cglib 2.1 to cglib 2.1_2 to be consistent with cglib in Geronimo builds

2005-08-02 Thread John Sisson (JIRA)
Move Packaging Plugin from cglib 2.1 to cglib 2.1_2 to be consistent with cglib 
in Geronimo builds
--

 Key: GERONIMO-843
 URL: http://issues.apache.org/jira/browse/GERONIMO-843
 Project: Geronimo
Type: Task
Versions: 1.0-M4
Reporter: John Sisson
Priority: Minor
 Fix For: 1.0-M5


Need to update geronimo/plugins/geronimo-packaging-plugin/project.xml to use 
cglib version 2.1_2 (note the underscore).
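
The change would presumably amount to bumping the dependency version in that project.xml; the groupId/artifactId shown below are assumptions:

```xml
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib</artifactId>
  <version>2.1_2</version>
</dependency>
```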

Do we have anything using the packaging plugin at the moment?

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Naming schema port elements

2005-08-02 Thread Alan D. Cabrera

On 8/2/2005 5:19 PM, Aaron Mulder wrote:

	So our naming schema, when dealing with web services, has an 
unfortunate overlap between two different port elements that are subtly 
different:


<service-ref>
  <service-ref-name/>
  <service-completion>
    <service-name/>
    <port>                 (def #1, portCompletionType)
      (contents of port #2)
      <binding-name/>
    </port>
  </service-completion>
  <port>                   (def #2)
    ...
  </port>
</service-ref>

	In other words, the first port element contains all the same 
stuff as the second port element plus one extra element.


	I think it would be preferable to have what's currently the first 
port look like this:


<port-completion>
  <port ... />
  <binding-name/>
</port-completion>

That way both port elements would be identical.

Any objections to making this change in M5?

 


Nope.


Regards,
Alan