Well, it depends.... It is (supposed) to start with full, detailed requirements, vetted by IT security, so that things like what VLAN it goes in, what firewall settings are needed, whether it needs to be behind the F5, what storage or NFS shares it needs, etc. are known up front. And whether a specific type of system is required.
We have a server known as cerberus, which does NAT so that we can patch systems behind it. It used to also answer to the outside, but I don't think we do that anymore (we had a Sun box jumpstarted on the public side that got compromised; the SA forgot about it, so it never got patched. We also drop a default firewall in jumpstart now.) The same server also supports kickstart for the Solaris x64 and RedHat Linux systems, and pxeboot installs for the FreeBSD systems (rough sketches of the NAT and DHCP/PXE pieces are at the bottom of this reply).

When I first started, it was just a small dumb switch and we had to put the server in the 'jumpstart' rack initially. Now it's a VLAN, and we have a jumpstart cable hanging in every rack, though we still stage some servers from that rack. For our ESX environment we have templates that we start from. And, not quite bare metal, we have zone servers (both SPARC and x64) and jail servers.

Then we set it up in cfengine and it's ready. The cfengine process varies. Personally, I figure out all the system-specific needs and have those ready before the first cfengine run, then make sure everything I laid down works and makes sense, and then turn it over. Everybody else usually lets it pick up the defaults from cfengine and then throws some specific configs down (namely firewall, since the default firewall cfengine lays down is the same restrictive one from jumpstart), and then they let the client start kicking the tires and fix anything they complain about. Almost always the first complaint is that they can't log in, because tcsh/bash aren't in the location that LDAP calls for (there's a sketch of one way to deal with that at the bottom, too).

Historically, when we had fewer systems, /usr/local was NFS-mounted from a single box, and we built our own tcsh/bash, ssh, openssl, etc., because they didn't trust the system ones. It was funny when you'd get messages of "NFS server somewhere is not responding"....

At one time we dreamed of having a network fabric and (self-)provisioning to MAC addresses, one of the new capabilities we could get as we're being forced to upgrade our SAN fabric. But after the Extreme incident, we're supposed to be Cisco only (though we have non-Cisco stuff out on the edges, including some Extreme switches, because they have lifetime warranties and we're going to make them pay.... ;) So the network has to stay Cisco and separate from our SAN fabric (Brocade). I thought I heard Cisco has something along those lines, though it's already been pointed out that Brocade can do 10G on every port, while our new Cisco does 10G on only 4 ports. The push for 10G is due to research grants, though one of the buildings that needs that 10G still has mostly 10Base2 and 10Base5, and so much deferred maintenance that they're afraid to touch the walls.

In the datacenter, only the jumpstart network has DHCP; everything else is static. The DHCP servers for this entire campus and our Olathe campus (including wireless, residence halls and cable modems) do, however, reside in the public VLAN that cerberus is in: a pair of V240s where most of the fans have failed. I still haven't gotten a response on what they want me to set up to replace them with. They were going to get replaced with an appliance, but it didn't make it into this fiscal year, and it looks like we're about to enter another cycle where we lose more fans in other V240s. I've been wanting to replace our MXes anyway, since they're the only SPARC systems left in our DNS/email fleet (and I seem to be upgrading BIND more and more frequently).
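If anyone wants to do the same NAT trick for a patching network, a minimal sketch in IPFilter's ipnat syntax would be something like the following. The interface name and subnet are made up for illustration, and the choice of IPFilter is itself an assumption, not a statement of what cerberus actually runs:

    # bge0 = hypothetical public-side NIC; 10.10.10.0/24 = hypothetical
    # patching/jumpstart VLAN sitting behind the NAT box
    map bge0 10.10.10.0/24 -> 0/32 portmap tcp/udp auto
    map bge0 10.10.10.0/24 -> 0/32

The portmap rule handles TCP/UDP with automatic port translation; the second rule picks up everything else (ICMP and so on) that the first one doesn't match.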
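The DHCP/PXE side of kickstart and FreeBSD pxeboot mostly boils down to pointing clients at a TFTP server and the right loader. A rough ISC dhcpd sketch (addresses, MAC and file paths all made up; SPARC jumpstart clients would typically go through RARP/bootparams rather than DHCP, so that part isn't shown):

    # Only the jumpstart VLAN gets dynamic addresses
    subnet 10.10.10.0 netmask 255.255.255.0 {
      range 10.10.10.100 10.10.10.199;
      next-server 10.10.10.1;        # the install server, also running TFTP
      filename "pxelinux.0";         # default: pxelinux menu -> kickstart
    }

    # A host that should get the FreeBSD loader instead of pxelinux
    host freebsd-staging {
      hardware ethernet 00:14:4f:aa:bb:cc;   # made-up MAC
      fixed-address 10.10.10.50;
      filename "FreeBSD/pxeboot";
    }

Per-host filename overrides like that are one way to pick between OSes by MAC address without touching the default entry.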
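As for the "can't log in" complaint, one way to deal with it is simply to drop symlinks wherever the LDAP loginShell attributes expect the shells to be, and let cfengine keep them there. A minimal sketch in CFEngine 3 syntax, with made-up paths (the ln_s body is just the stock standard-library one, copied in so the snippet stands alone):

    bundle agent ldap_shells
    {
    files:
        # Hypothetical locations -- use whatever path LDAP actually calls for
        "/usr/local/bin/bash"
          link_from => ln_s("/bin/bash");

        "/usr/local/bin/tcsh"
          link_from => ln_s("/bin/tcsh");
    }

    # Same as the ln_s body in the CFEngine standard library,
    # included here so the example is self-contained
    body link_from ln_s(x)
    {
      link_type      => "symlink";
      source         => "$(x)";
      when_no_source => "force";
    }

Putting it in the default bundles means the links come back even if somebody removes them, which is the whole point of running it under cfengine rather than doing it once by hand.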
----- Original Message -----
> Hi gents,
>
> I would like to ask for your experiences when it comes to bringing a
> server from bare metal to production-ready. From talking to people, it
> seems like there are many ways this is done.
>
> I assume you are using network-booting to kick things off?
>
> Are you just provisioning one operating system? If not, how do you make
> the selection between multiple OSes? Is this fully automated (e.g. using
> mac-addresses)? Is this important to you, or do you provision the same
> OS >90% of the time?
>
> Do you have a separate environment where you do your provisioning? How
> do you move between the provisioning environment and the production
> environment?
>
> One option I've seen is physically replugging the server to separate
> production from PXE environment. Is there a more automated way to do
> this? One problem I can see for automation is that the PXE booting
> relies on DHCP, and you don't want multiple DHCP servers (production vs.
> PXE environment). What about relying on static IPs for production
> servers, and only using DHCP for new servers?
>
> Finally, what are your biggest problems with your setup (if any)? How
> could you save more time and make it easier? Do you have thoughts or
> plans for the future?
>
> I really appreciate any experiences you would like to share on this
> topic.
>
> Thank you.
>
> --
> Eystein Stenberg
> CFEngine

--
Who: Lawrence K. Chen, P.Eng. - W0LKC - Senior Unix Systems Administrator
For: Enterprise Server Technologies (EST) -- & SafeZone Ally
Snail: Computing and Telecommunications Services (CTS)
Kansas State University, 109 East Stadium, Manhattan, KS 66506-3102
Phone: (785) 532-4916 - Fax: (785) 532-3515 - Email: [email protected]
Web: http://www-personal.ksu.edu/~lkchen - Where: 11 Hale Library

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
