<snip>

> Very nice, almost makes me want to replace my desktop, but I rather get
> a new laptop first. Though the prices for those are all over the place.
> Then again there are also servers to buy and other things. Always seems
> there is more stuff to buy than money :)

Especially nowadays.  Times are tough.  Strangely, I am rarely an early adopter
of technology.  I still don't have Blu-ray.

> One distro I have been very impressed with is Arch. Its very up to date
> and polished. Anytime I am Googling stuff for Gentoo, if I don't come
> across Gentoo docs I tend to come across stuff for Arch. Occasionally
> stuff from Ubuntu, but thats normally users in forums, not official
> docs. Arch has some excellent documentation.

RHEL was a requirement due to the infrastructure software I am running.
No one distro had everything I needed, so I have to run a VMware image
with Ubuntu for the Ubigraph visualization server on this box.

It turns out I didn't really have a choice in distro.  The Red Hat repositories
are kind of sparse.  I'm having to do the old-school configure/make/make install
for a bunch of stuff I used to get from repos.
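The dance, for anyone who has forgotten it (package name and prefix below are just placeholders):

```
tar xzf package-1.0.tar.gz        # hypothetical tarball
cd package-1.0
./configure --prefix=/usr/local   # ./configure --help lists the knobs
make
make install                      # as root, or via sudo
```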

<snip>

> Usually fans on desktops tend to be the noisiest. But hard drives can be
> noisy just the same.

Which is why I typically make sure I go with a heat-sinked setup rather than
one with fans.  I learned my lesson years back when I "upgraded" a silent
system with a new video card whose fan sounded like a 747 on takeoff.

<snip>

> I have run swapless systems for years. All my virtual servers, including
> the one hosting the lug wiki and mailing list are swapless. That's
> because they are running on a nfsroot. No real way to have swap. Most
> any embedded system is swapless like phones :)
> 
> That includes some of my xen host nodes, which are also using nfsroot
> and diskless. Though I have had to enable swap in the kernel for domU's
> for live migration purposes. Other aspects of the kernel depend on that
> but I am not running any swap.

Your Linux-fu far exceeds mine.  Fun experiments, but no time to conduct
them.  Though I love it, Linux is a means to an end for me.

> Now that said on my desktops occasionally I run into a long-time known
> kernel bug/issue that I am not sure if its a design or bug. I know a
> Gentoo developer brought this up with kernel.org a while back[1]. But
> the problem still occurs at times. Adding more ram does not fix the
> problem either.
> 
> What happens is at some point after extended periods of the machine
> running. You use all available ram. Some operation requests memory, and
> the kernel goes to free cached stuff. At the same time something is
> requesting what was cached, and it causes a nasty I/O loop. It will max
> out usually one CPU core, and HD I/O. Hard drive light will stay on
> solid, and CPU at 100%.

I imagine that this is more frequent when large contiguous blocks of memory
are requested.  I bet you could track this down with SystemTap by tracing the
allocation calls.  kmalloc requires contiguous memory; I bet that's your trigger.

I used to track similar events with DTrace on Solaris.
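For the SystemTap side, a rough sketch of the kmalloc-tracing idea (untested; the probe point and debuginfo requirements vary by kernel version):

```
# kmalloc-hist.stp -- histogram of kernel allocation request sizes.
# Large buckets filling up under load would support the
# contiguous-allocation theory.  Run as: stap kmalloc-hist.stp
global sizes

probe kernel.function("__kmalloc") {
    sizes <<< $size            # record each requested size
}

probe timer.s(30) {
    print(@hist_log(sizes))    # log2 histogram every 30 seconds
    delete sizes
}
```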

> Sometimes the process will crash or you can kill a problematic one and
> the machine continue on. But it can take a while to log in just to do
> that. I usually end up hard power cycling the machine, which is not so
> nice. It doesn't happen very often, but probably at least once every
> month or two. Usually on my laptop, and might be amd64/64-bit specific.
> Not sure if something is leaking memory or what.

You could create a probe to dump a snapshot of processes periodically and
feed that into something like rrdtool to monitor memory growth and more.
Gradual growth would likely mean a memory leak.  Purify was always a great
tool for debugging such things.
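Something along these lines would do for the snapshot side (a Python sketch; the rrdtool plumbing is omitted, and the /proc parsing assumes Linux):

```python
#!/usr/bin/env python3
"""Periodically snapshot per-process memory; a leak shows up as steady
growth over hours.  On a real box, each sample would be piped into
`rrdtool update` instead of printed."""
import os
import time

def rss_kb(pid):
    """Resident set size of a process in kB, from /proc/<pid>/status.
    Returns 0 for kernel threads, which have no VmRSS line."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def snapshot():
    """Map pid -> RSS in kB for every process we are allowed to read."""
    sizes = {}
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                sizes[int(entry)] = rss_kb(int(entry))
            except (FileNotFoundError, PermissionError):
                pass  # process exited mid-scan, or we lack permission
    return sizes

if __name__ == "__main__":
    # Two samples a second apart; stretch the interval for real monitoring.
    for _ in range(2):
        sample = snapshot()
        print("%d processes, %d kB total RSS"
              % (len(sample), sum(sample.values())))
        time.sleep(1)
```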

<snip>

> Not sure, but one would think the same would hold true for a  regular
> hard drive. Not in the same regard, but still additional wear and tear.
> Not to mention using the slowest part of the machine to make up for one
> of the fastest. That has never sat well with me.

I think I was making much ado about nothing.

> > Should I favor one filesystem versus another for a drive of this
> > type? 
> 
> I would go with ext4 for any new systems. I can't say I have seen much
> difference in performance over ext3, but have seen some metrics favoring
> ext4. There is also reiserfs, though not sure on its popularity these
> days.

I didn't see ext4 as an option in the RHEL-Client 5.6 install.  Besides, I
hadn't done my homework on that filesystem so I stuck with the familiar ext3.
I like to make educated choices, but sometimes don't feel like educating
myself.  Filesystem specifications are fairly dry reading.

> At best you might want no more than maybe 1GB of swap. I tend to
> recommend less, 256MB-512MB. The only nice thing about swap is you can
> turn it on and off. Thus if swap is being used and you don't want that.
> Turn it off, and then turn it back on. 
> 
> That said these days Windows seems to want large page files. The old
> tricks of fixing that or disabling. Has caused problems for just about
> anyone I have done that for. Thus it seems on Windows the old rule of
> thumb is still valid and the OS really wants you to have swap/page file,
> or what ever term they are calling it these days :)
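(For reference, the on/off trick above is just two commands, run as root; the first forces swapped pages back into RAM before swap comes back:)

```
swapoff -a    # disable all swap, migrating its pages back to RAM
swapon -a     # re-enable every swap device/file in /etc/fstab
```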

Don't get me started about Windows.  Too late.  The other day I typed "ls" in
Cygwin and waited five minutes.  Three minutes in, pagefile.sys (or whatever
it's called) was busy.

All I was running was a browser on a dual-core machine with 4GB RAM.  But
my company crams those boxes with whatever invasive surveillance, virus
prevention, and encryption software they can find.  Performance suffers
accordingly.

                                          
