On Jan 15, 2011, at 8:06 AM, Edward Ned Harvey wrote:

>> From: [email protected] [mailto:[email protected]]
>> On Behalf Of Brad Knowles
>> 
>> The Dell servers I've seen require the use of PCs running Windows and
>> Internet Explorer to perform certain systems administration tasks, like
>> updating firmware, some aspects of remote console access, configuring the
>> PERC RAID controllers, etc....  They do things like Active-X plug-ins that
>> download from the server to your browser, and if you're not running IE on
>> Windows, then you're screwed.
> 
> That must have been a long, long time ago.

That was when I was working at UT Austin, the Alma Mater of Mr. Dell himself.  
And the one place on the planet that probably gets the best discounts from Dell 
compared to any other buyer.  I was laid off by UT Austin in December of 2009.

Even our Windows admins didn't like some of these Windows-only features of the 
Dell hardware they had, because many of them liked running Windows on Intel 
Macintosh hardware, and apparently even though the OS was Windows, the fact 
that it was running on a Mac caused them problems when managing their servers.

>                                                  Because we use mostly Dell
> servers, and I've never seen anything like that in the last 8 years.

How bleeding-edge is/was your Dell hardware?  Because of the types of discounts 
that UT Austin got from Dell, and their willingness to bend over backwards to 
get more hardware shoved in our door, the Dell hardware we had tended to be the 
latest and greatest they had available.

Hell, they outright gave us at least one blade chassis, just to try to get us 
hooked on their blade servers.  And the management interface for the blade 
chassis was all pure Windows.


What they didn't account for is that the datacenters that UT Austin has were 
designed back in the mainframe days, and can't handle the 22kW per floor tile 
power & cooling consumption that these blade chassis required.  We put a single 
10U blade chassis into a rack, and that would consume all the power and cooling 
available for that floor tile (and some extra), and we'd have to leave the 
entire rest of the rack unpopulated.

Even the new datacenter that UT Austin was bringing online was limited to 7kW 
per floor tile for power & cooling, which was mostly a result of the way the 
old central receiving building was constructed, and the fact that the structure 
simply couldn't handle the weight of more power or cooling equipment.  They 
brought in the best in-row cooling equipment and hot/cold aisle designs they 
could, but there were some design limitations that they simply could not get 
around.
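Just to put those numbers side by side: a rough back-of-envelope sketch using 
the figures from this message (a blade chassis demanding ~22kW per floor tile, 
versus the new datacenter's 7kW-per-tile budget).  The function name and the 
idea of spreading the load across multiple tiles are mine, purely for 
illustration, not anything UT Austin actually did:

```python
import math

CHASSIS_DRAW_KW = 22.0   # what a loaded blade chassis demanded per tile (from above)
NEW_TILE_BUDGET_KW = 7.0  # per-tile power/cooling limit in the new datacenter

def chassis_per_tile(chassis_draw_kw: float, tile_budget_kw: float) -> int:
    """How many chassis a single floor tile's power/cooling budget can carry."""
    return int(tile_budget_kw // chassis_draw_kw)

# A chassis drawing ~22kW doesn't fit within a 7kW tile budget at all:
print(chassis_per_tile(CHASSIS_DRAW_KW, NEW_TILE_BUDGET_KW))  # -> 0

# Number of tiles' worth of budget needed to feed one such chassis:
print(math.ceil(CHASSIS_DRAW_KW / NEW_TILE_BUDGET_KW))  # -> 4
```

Which is the whole problem in two lines: one chassis eats the budget of four 
tiles, so the rest of the rack (and then some) has to stay empty.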

>                                                                          We
> have a mixture of windows, linux, and solaris running on these dells, and we
> have admins who use windows, osx, and ubuntu as their desktops, all with no
> problems.  I will only say that solaris isn't very stable on the dell, but
> the windows & linux are both fine.

So long as you run the officially blessed Dell packaged versions of Linux, 
you're probably at least mostly okay.

But in the Unix group, we liked to run the packaged versions of Linux that we 
liked, which meant that we didn't have much in the way of Dell hardware.  The 
only Dell hardware we had in our group during the time I was there were the 
eight-year-old servers that formed the FreeBSD-based "cluster" that ran the 
University Mail Box System (UMBS) mail service, which hadn't otherwise been 
touched in at least ten years.  I was one of two people responsible for 
keeping those machines running (among other duties), but they didn't require 
much in the way of administration or anything -- they just ran, and had been 
"just running" for the past eight years.

The Dell servers that were used for the ESX/ESXi virtual server farm were 
officially owned and operated by the Windows group, and all Windows tools were 
required to administer them.  We did have one guy who was responsible for 
building the several hundred Unix/Linux virtual machines that were included in 
that infrastructure, but once they were built and handed over to the project 
owners, he didn't have any responsibilities for doing any day-to-day 
administration of those virtual servers.

> Ironically, I have one Sun server, and I did see something like that on the
> Sun.  Specifically, to install the OS you connect to the remote console and
> map your local optical drive to the remote machine, which works in windows
> but not mac.

On the Sun servers we had where the remote DVD-ROM drive had to be used through 
their Java-based console, I got it working without too much difficulty on the 
Mac I had at my desk.  I think it even worked with the older PowerMac G5 Tower 
that I initially had, as well as the Intel-based Mac Pro Tower that I later 
received.

I don't know which specific Sun servers you had, but the ones we had worked 
just fine from any OS that any of us used.  And we had some of our Unix admins 
on Windows, some on Linux (various flavours), some on Solaris (and Solaris 
x86), and some on Mac OS X.

> And to be fair, the remote console over the iDRAC Enterprise is something we
> don't use.  That's where the Sun fell down, and for all I know, the dell
> could fall down there too.

Again, we didn't have any problems with the remote console facilities on the 
various Sun servers we had.  This includes ancient SPARC hardware (including 
equipment as old as Ultra 5s and Ultra 10s), right up through the X4140s, 
X4440s, and all the newer Sun hardware that we had bought a few months before I 
left.

--
Brad Knowles <[email protected]>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
