Ralph Goers wrote:


Steve Loughran wrote:
Ralph Goers wrote:


Steve Loughran wrote:
Simone Gianni wrote:

The thing to remember about WAR files is that they are a packaging format intended to make it easy to deploy web apps. Not distribute, but deploy. The old WAR/EAR use cases always had the 'assembler' role: some person who would somehow assemble WARs and EJB beans into a working app, presumably through a GUI that required the same work to be repeated every release.

If you are doing lots of late-binding tuning of the WAR file, then perhaps build time is the wrong place to do it; it should really be done as part of the deployment process, where per-machine optimisations can go in. In this world what you want from the outset is the exploded WAR, which is then combined with server-specific options to create the target WAR for the target system.
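As a rough sketch of that last step in plain Java (the directory and WAR names are made up, and a real build would more likely use an Ant war task or the jar tool), walking an exploded tree after the per-server files have been dropped in and packing it into the target WAR:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;
import java.util.stream.Stream;

/**
 * Packs an exploded webapp directory into a WAR as the last step of
 * deployment, after server-specific files (overridden web.xml,
 * properties, drivers) have been copied into the exploded tree.
 */
public class WarPacker {

    public static void pack(Path explodedDir, Path warFile) throws IOException {
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(warFile));
             Stream<Path> files = Files.walk(explodedDir)) {
            for (Path p : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                // entry names must be relative to the exploded root and use forward slashes
                String entryName = explodedDir.relativize(p).toString().replace('\\', '/');
                out.putNextEntry(new JarEntry(entryName));
                Files.copy(p, out);
                out.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // hypothetical paths: the exploded webapp with per-server overrides already applied
        pack(Paths.get("build/exploded-webapp"), Paths.get("deploy/myapp.war"));
    }
}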
Pardon me for saying so, but this is just nuts.

:)


In my environment our CM folks do the build and then make it available for operations. No one after CM is allowed to modify the parts. Any modifiable configuration has to be placed outside the webapp. Furthermore, CM prefers that no exploded webapps be used, since it is much easier to distribute WARs.

OK. So where do the per-system modifications go in? Do your WAR files have hard-coded assumptions about LDAP bindings and JDBC URLs, or do they rely on the app server to set up the JNDI bindings for all the info kept in your directory server?
We have a system property that specifies the location of an XML file containing all these kinds of properties. Typically, it will go in the JBoss server's conf directory. We also have a more extensive configuration repository that is shared across servers and the XML config file has the information to connect to it.
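For illustration, a minimal sketch of that pattern in Java; the app.config property name and the use of the java.util.Properties XML format are assumptions here, not necessarily what any particular shop ships:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

/**
 * Loads deployment properties from an XML file whose location is given
 * by a system property, e.g.
 *   -Dapp.config=/opt/jboss/server/default/conf/app-config.xml
 */
public final class DeploymentConfig {

    public static Properties load() throws IOException {
        // "app.config" is a made-up property name; the real one would be site-specific
        String location = System.getProperty("app.config");
        if (location == null) {
            throw new IllegalStateException("system property app.config not set");
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get(location))) {
            // expects the java.util.Properties XML format; a custom schema would need its own parser
            props.loadFromXML(in);
        }
        return props;
    }
}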

OK. So you do have central configs, it's just outside the WAR. What I try and do at work is bring up everything coherently, so the DB gets created with the same username and password as the app server code is expecting. When we bring up the system we have to put stuff into JBoss too, like the MySQL driver. I do that with a bit of the deployment that fetches the library from the m2 cache and then copies it into the destination directory:



InstallDrivers extends Compound {

    destDir TBD;

    repo extends Maven2Library {
    }

    jdbcJAR extends JarArtifact {
        project "mysql";
        artifact "mysql-connector-java";
        version "5.0.4";
        sha1 "ce259b62d08cce86a68a8f17f5f9c8218371b235";
        //link to the parent repository
        library LAZY PARENT:repo;
    }

    copyJdbcDriver extends CopyFile {
        source LAZY jdbcJAR;
        destination LAZY destDir;
        copyOnDeploy false;
        overwrite false;
    }
}

That thing only deploys if destDir is set to point to the relevant configuration lib dir, before JBoss comes up.


Next question: how do ops automate the 'provisioning' of the app server? Do they have a piece of paper telling them what bits of the system you need ('RedHat EL with Java 1.5.0_06 and the patches needed in /etc/profile to get it set up'), or what services need to be running? Do you find that they decide to try 1.5.0_09 without telling you? Or that they forget to do something essential, like bringing up the DNS server before the database and app servers come up?

Ops never changes an OS version or Java version without it having been verified in our QA area first. Everyone is informed before that happens.

That's good. I wish I had worked with an ops team like that on this project. And they probably wish they didn't have troublemakers like me.

http://people.apache.org/~stevel/slides/when_web_services_go_bad.pdf

I must admit I've never had to tell our ops folks that DNS and the database must be up before starting the app server.

It's not so much telling ops what to do as telling the system what to do. The big nightmare problem was the hard reset, where some unexpected power event would reboot everything simultaneously. In an NT domain, if the PDC doesn't come up first, the other boxes don't authenticate and you are left wondering why pages that hit the database fail. As for DNS, if it starts failing with java.net.NoRouteToHostException, do ops know it's a DNS problem, or do they stop at the word 'java' and phone the dev team?
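One thing that helps is a startup 'preflight' check that turns those failures into messages ops can act on before the app server takes traffic. A minimal sketch in Java; the host names and ports below are invented:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NoRouteToHostException;
import java.net.Socket;
import java.net.UnknownHostException;

/**
 * Resolves and connects to a dependency at startup, so the failure
 * message says "DNS" or "routing" instead of leaving ops to guess
 * from a stack trace.
 */
public class Preflight {

    public static void checkReachable(String host, int port) {
        try {
            InetAddress addr = InetAddress.getByName(host);   // DNS lookup
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(addr, port), 5000);
            }
        } catch (UnknownHostException e) {
            throw new IllegalStateException("DNS cannot resolve " + host
                    + ": check the DNS server is up", e);
        } catch (NoRouteToHostException e) {
            throw new IllegalStateException("no route to " + host
                    + ": network/routing problem, not the application", e);
        } catch (IOException e) {
            throw new IllegalStateException("cannot connect to " + host + ":" + port, e);
        }
    }

    public static void main(String[] args) {
        // hypothetical dependencies: the database and the directory server
        checkReachable("db.example.org", 3306);
        checkReachable("ldap.example.org", 389);
    }
}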



Because that's the kind of thing we can automate and lock down under SCM. That lets us create a blank VMware or Xen disk image, have it run a PXE preboot to get the base image, and then after it comes up we can bring the system up to the state where the WAR file deploys.

In that world, you don't really need an ops team to deploy. You just have a server that manages the build and deploy of the latest bit of the SCM tree tagged as ready to go into production. It pushes it out every night and, as server load increases, brings up new machines. The ops team take on a different role: managing the Xen/VMware server farm, monitoring the health of the (much larger) VM cluster, setting up the approved system configurations. This is the stuff that Amazon EC2 presumably does behind the scenes: an outsourced ops team.

That would probably never happen in my environment.

As long as your ops team is in control, all is well. My metric for this is simple: are you scared of the phone ringing at weekends? We used to have this story of the "developer relocation program", where developers would get a new identity so they could avoid being paged. Sadly, it doesn't exist.

-steve

