On Tuesday, November 4, 2003, at 02:30 PM, gianny DAMOUR wrote:

Dain Sundstrom wrote:
        I thought we were going to use the JMX ObjectNames to identify
deployments, rather than keeping an actual repository somewhere?

That is what I thought. We can easily find all deployments with an ObjectName pattern query.
Grrrrrrrr. And I really mean it: I sent a memo "JSR77 -> JSR88" on Thu, 09 Oct 2003 14:02:15 +0200 proposing this idea. An excerpt of that memo:

[...]
ApplicationDeployer needs to be notified in order to track the state
(running, stopped), the type (EJB, WAR, EAR et cetera) and various other
pieces of information.


Multiple solutions could be implemented in order to "sync" the JSR77 and
JSR88 models.


And here is a possible one, based only on naming conventions:
...
This way, ApplicationDeployer "just" has to query the MBeanServer to
retrieve the running ModuleType.WAR modules.
[...]
It is rather clear that I wanted to use naming conventions to identify deployments and their states.


An excerpt of Aaron's response to this idea:
[...]
While naming conventions are convenient, I don't want to rely on that for basic
operational requirements.
[...]
As a result, I have duplicated the deployment repository and I am implementing what is required to identify the running or available deployments without having to poll the MBeanServer.

Doh. I was out of town so I didn't see your email. We absolutely use naming conventions for services.


*:role=DeploymentUnit,type=<service type>,url=<the url to the deployment>

If something wants to be a deployment unit, it better use a name that matches that pattern or we will ignore it.
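For illustration, a minimal sketch (assuming a reference to the kernel's MBeanServer; the "war" type value is made up for the example) of how a deployer can find all running WAR deployment units with a single pattern query instead of keeping a separate repository:

    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    public class DeploymentUnitQuery {
        public static void main(String[] args) throws Exception {
            // In the server this would be the kernel's MBeanServer; one is
            // created here only so the example is self-contained.
            MBeanServer server = MBeanServerFactory.createMBeanServer();

            // Pattern following the naming convention above: any domain,
            // role=DeploymentUnit, type "war" (the type value is illustrative).
            ObjectName pattern = new ObjectName("*:role=DeploymentUnit,type=war,*");

            // Every registered MBean whose name matches the pattern is, by
            // convention, a running WAR deployment unit.
            Set<ObjectName> deploymentUnits = server.queryNames(pattern, null);
            System.out.println("WAR deployment units: " + deploymentUnits);
        }
    }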

I disagree with your assertion that the server should pull code on
a distribute -- among other things, that assumes that the server can
freely contact that client, which I don't think is necessarily true, or a
good idea. However I do agree that the distribute call shouldn't block --
I asked earlier if there was a way to stream data to the server over JMX
and got no answer, so I think we need to set up a standard servlet to
receive the data or something.

Agree. We need some sort of bulk file transfer system. I think we should really look at using WebDAV for this.
I also agree. So let me rephrase: I will write a task to bulk-transfer a file from a remote host. This task will use WebDAV under the covers. My point regarding the "server push" vs. "client push" idea is that it is up to the server to mount a WebDAV file system, not the client.

I doubt that a server will have access to mount a client's file system. Clients are normally protected from a server by a firewall. Even in a data center you sometimes have servers on a separate physical or virtual LAN from your admin machines. Having the client mount the server's WebDAV directory and push a deployment is far more likely to work.
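To illustrate the client-push model, a rough sketch of a deploy tool pushing an archive to the server's WebDAV directory with a plain HTTP PUT (the URL and file name are hypothetical; a real task would add authentication and error handling):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebDavPush {
        public static void main(String[] args) throws Exception {
            // Hypothetical WebDAV upload location exposed by the server.
            URL target = new URL("http://server:8080/webdav/deployments/myapp.ear");
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);

            // Stream the local archive to the server; the server then picks the
            // file up from its own WebDAV-backed directory and deploys it.
            InputStream in = new FileInputStream("myapp.ear");
            OutputStream out = conn.getOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.close();
            in.close();

            System.out.println("PUT status: " + conn.getResponseCode());
        }
    }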


At the end of the day, I believe that DeploymentController should not be the
entry point for triggering a start, stop, redeployment or undeployment. Why?
Because one knows which planner has mounted a deployment and hence one can
bypass DeploymentController.

You are assuming that only one planner was involved in the deployment. What about EARs and other meta deployments?
My idea is that an EAR containing, say, a RAR and two WARs will be represented by the following deployment units:
- one EAR deployment unit;
- one RAR deployment unit; and
- two WAR deployment units.


When I write deployment unit, I mean a deployment meta-data repository. The RAR and WAR units are children of the EAR unit. When an EAR is deployed, an EAR planner mounts the EAR unit. This planner then calls the relevant planners for the RAR and WAR modules. These planners create units which are children of the EAR unit. When a start action is triggered, the EAR unit asks the EAR planner to perform its job. Then all the child units are started. For instance, this means a start action is triggered on the RAR unit, which asks its planner to perform its job.

Agree, except you would call startRecursive.
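A rough sketch of that hierarchy (all class and method names are hypothetical, just to show how startRecursive would propagate from the EAR unit down to its RAR and WAR children):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical deployment unit: a node in the deployment meta-data tree.
    class DeploymentUnit {
        private final String name;
        private final List<DeploymentUnit> children = new ArrayList<DeploymentUnit>();

        DeploymentUnit(String name) {
            this.name = name;
        }

        void addChild(DeploymentUnit child) {
            children.add(child);
        }

        // Ask the planner that mounted this unit to do its job for this unit only.
        void start() {
            System.out.println("starting " + name);
        }

        // Start this unit, then every child unit, recursively.
        void startRecursive() {
            start();
            for (DeploymentUnit child : children) {
                child.startRecursive();
            }
        }
    }

    public class EarDeploymentExample {
        public static void main(String[] args) {
            DeploymentUnit ear = new DeploymentUnit("myapp.ear");
            ear.addChild(new DeploymentUnit("connector.rar"));
            ear.addChild(new DeploymentUnit("web1.war"));
            ear.addChild(new DeploymentUnit("web2.war"));

            // One call on the EAR unit starts the whole tree.
            ear.startRecursive();
        }
    }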

BTW, I tried to refactor ServiceDeploymentPlanner and it is not so simple:
ServiceDeploymentPlanner is a kernel service. If one wants to align it with
the proposed approach, then one must move a lot of classes from the
core sub-project into the kernel sub-project. For instance, as the proposed
approach defines a class which extends AbstractManagedContainer, one needs
to move the Container, Component, AbstractManagedComponent et cetera
classes.

Why is ServiceDeploymentPlanner a Container now? I dislike the idea of pulling this stuff into kernel unless it is absolutely necessary.
I agree. I assume that ServiceDeploymentPlanner is a container because it contains the mounted services.

If we make the change I suggested at the top, it no longer tracks mounted services and becomes just a planner again.


-dain


