--- a b <[EMAIL PROTECTED]> wrote:

> 
> >Funny, I got similar results for my mail servers
> >with anaconda kickstart, pxe, dhcp, tftp and grub,
> >save for certain stuff in /etc. They all run the
> >same distro base, run the same software packages
> >and scripts, and don't require someone babysitting
> >them during installation or upgrade. Almost fire
> >and forget.
> 
> And you...
> 
> ...managed tens of thousands of servers with it
> successfully, in parallel?

I think in this case it is probably easier just to
load disk images.

> 
> ...had a platform that spanned operating systems,
> and was able to run the 
> same software stacks in the same way,
> functionality-wise?

Sorry, never claimed this.

> 
> ...were able to deploy components and bundles, for
> example Oracle, within minutes on thousands of
> servers, without ever having to do any kind of
> manual configuration, just have the system start
> serving data?

Pretty much. Install, copy configs over, reboot.
Voilà. Minus the thousands-of-servers claim, and of
course no Oracle.
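For what it's worth, the "copy configs over, reboot" step is nothing exotic; roughly a loop like this (a hedged sketch: the hostnames and paths are made up, and `echo` stands in for the real commands so it dry-runs safely):

```shell
# Dry-run sketch of the "install, copy configs over, reboot" step.
# Hostnames and paths are illustrative; set RUN= (empty) to execute.
RUN=echo

hosts="mx1 mx2 mx3"
for h in $hosts; do
    # push the site-local /etc bits that kickstart doesn't cover
    $RUN scp -r /srv/site-configs/mail/etc/ "root@$h:/etc/"
    # bounce the box so everything comes up on the new config
    $RUN ssh "root@$h" reboot
done
```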

> 
> ...ran the whole kit and kaboodle through a modular,
> automated testing suite that found any potential
> problems and discrepancies?

It is rather hard to get approval to put new stuff
into production without a full run of tests to check
for compliance with expected behaviour.

> 
> ...never had to log into a system and do any ad-hoc
> work?

Heh, never got round to putting in full automatic
queue management. But some stuff was automated, so
partly yes: no logging on to do ad-hoc work,
depending on the problem.

> 
> PXE, Anaconda and JumpStart are just parts and
> pieces of the puzzle.
> My point is, in an environment like that, one would
> *never* run `apt-get` or 
> `yum update`. That would be ad-hoc. It would take
> all the stability and 
> reliability out of that environment.

I dunno, "never" is rather too broad. If all that got
updated was a single package, issuing a 'come and get
it' command to the servers won't take the system
out... unless of course you did not stage the update
process itself, did the real thing straight away, and
then got whacked by an unseen problem.
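Concretely, the staging I mean is just this sort of thing (a hedged sketch: the staging/fleet hostnames, the package, and the test-script path are invented, and `echo` makes it a dry run):

```shell
# Dry-run sketch of staging a single-package update before the
# "come and get it" fleet rollout. All names are illustrative.
RUN=echo                    # drop the echo to run for real
staging=mail-stage01
fleet="mx1 mx2 mx3"
pkg=postfix

# 1. apply the update on the staging box only
$RUN ssh "$staging" yum -y update "$pkg"

# 2. run the same compliance tests production has to pass
$RUN ssh "$staging" /usr/local/bin/mail-compliance-tests || exit 1

# 3. only then let the fleet come and get it
for h in $fleet; do
    $RUN ssh "$h" yum -y update "$pkg"
done
```

If the staging run or its tests blow up, production never sees the package; that is the whole point of the extra hop.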

> 
> Pity all the architecture, specification,
> development and testing that went into it with the
> engineering of the platform. It would defeat the
> whole purpose in an instant.

Yeah, it has probably been scrapped by now, and it's
back to install, compile, copy configs over, all over
again.

> 
> I do believe you though when you write that it
> worked for you. I'm just 
> curious, how many systems did you maintain, and how
> many people were 
> necessary to maintain it?
> 

Somewhere around 30 systems, and one person. This is
just my part, the mail delivery system; others handle
the other parts of the entire system, and they don't
make use of stuff like this. However, I was the one
who got to deal with queues (fires) on an almost
constant basis, and having to sit through an
installation to bring up an emergency stand-in box or
three doesn't cut it. But this is not entirely the
point of having apt/yum in place.

You may get security/bug fixes to core components
like the kernel or system libraries. If doing an apt
or a yum run on a staging box proves clean, I don't
see how that should be a problem. On OpenSolaris (and
this is about OpenSolaris, not Solaris 10 plus a Sun
support contract, because I am comparing it to Linux
distros such as Fedora and CentOS, which were what I
ran, and that is why this is on the OpenSolaris list,
thank you), how do you go about this? Download the
latest CD images, burn them, go through each box,
reinstall everything and restart?

Yes, people do use Fedora in production. It is stable
enough, and this is not some mission-critical bank
system.

_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
