On Sun, 6 Jul 2003, James Nickerson said:
> > Sorry I mean CS124 finals, yes, that's right, anyone else in that class
> > remember the pain it was on the final exam (which was killer as well)?
> > -Kekoa
> 
>       Yeah, the exam under Hutchins (if that's who you had) was pretty darn 
> diabolical.  But I enjoyed the heck out of that class.  I learned so many 
> basic things I just wasn't aware of before.  That Hutchins really knows his 
> stuff.  I have an enormous respect for him.
>         I never had any problems with network overloading because I was in the 
> robust and well-maintained CAEDM lab, hacking away at the SPICE machines for 
> all my projects.  
>       
>       -James

Sorry that you don't feel that the CS machines are robust and
well-maintained.  We've had our problems, but other than the performance
issues on our file server, I'd say that everything has been quite reliable
and well-maintained (of course, I suppose I am biased).  The open lab
machines are almost all brand new, are made up of pretty nice hardware (P4
2.4 GHz, good video cards, etc.), and are hardly ever down (mostly when
professors schedule tests).  We've never had any of our servers or open
lab machines hacked (no, that's not an invitation or challenge).  The
servers are all about a year old, have all been up for months, and have
been very reliable.  Key services are running on redundant boxes, and we
update everything quite regularly.

The problem we've been experiencing isn't really a network overload
problem, but a server performance problem.  The network is full gigabit,
but hasn't even spiked over 100 Mbit/s (the highest network load we see is
when the guy on it.et.byu.edu downloads the ISOs off our ftp server).

We're currently running on a dual PIII 1 GHz.  The problem that we see is
related to high network activity, but only because it causes high local IO
activity.  Whenever disk IO happens on the box, the load jumps, and the
machine slows down.  Even heavy local IO alone just about pegs the box and
slows everything down.
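That kind of IO-bound load is easy to confirm with stock tools; here's a
minimal sketch, assuming a Linux box with procps installed (not necessarily
the exact commands we ran):

```shell
# Sample CPU and IO stats once a second, five times.  A large "wa"
# (iowait) percentage while "us"/"sy" stay low means the CPUs are
# waiting on the disks rather than computing -- which matches a load
# average that jumps whenever disk IO happens on the box.
vmstat 1 5

# The 1/5/15-minute load averages themselves:
cat /proc/loadavg
```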

Since there's not much we can do to troubleshoot the IO problem while the
server is in use (is it the RAID card, the RAID config, the PCI bus, the
SCSI bus, the filesystem, etc.?), we chose to move to a new server.  The new server has a
much faster IO subsystem (fast disks and SCSI bus, lots of cache and
memory, etc.), and shouldn't get as hammered, even if it exhibits the same
problems (which we doubt).
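If anyone wants to rule the disks in or out on their own box, a quick
sequential-throughput check with dd is a reasonable first step.  A sketch,
assuming GNU dd and a scratch path like /tmp/ddtest that's safe to create
and delete (not the procedure we actually used):

```shell
# Write 64 MB and force it out to the platters; GNU dd prints the
# elapsed time and throughput on stderr.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync

# Read it back.  If raw speeds here look fine but the box still crawls
# under NFS load, suspect the RAID card, bus, or filesystem layers
# rather than the disks themselves.
dd if=/tmp/ddtest of=/dev/null bs=1M

# Clean up the scratch file.
rm -f /tmp/ddtest
```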

A single multiprocessor machine works very well for us, partly because
it's easier to update and maintain a single machine, and partly because
the machine will also be serving samba and running an ldap replica.  It's
a lot easier to satisfy the bean counters with a single beefy machine, and
with a bigger machine, we can also get more memory (16 GB on ours), and
better disks.  Our initial results are very promising, though if people
criticize us too much, I suppose their data may be moved off the RAID to a
floppy drive :)
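For the curious, serving the same home directories over samba from that box
is mostly just a matter of a [homes] section in smb.conf.  A minimal sketch
with generic settings (not our actual config):

```
[homes]
   comment = Home Directories
   browseable = no
   writable = yes
```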

Frank
---------------------------------------------------------------------------
Frank Sorenson - KD7TZK
CSR Computer Science Department
Brigham Young University
[EMAIL PROTECTED]


____________________
BYU Unix Users Group 
http://uug.byu.edu/ 
___________________________________________________________________
List Info: http://uug.byu.edu/cgi-bin/mailman/listinfo/uug-list
