On Sat, 2011-04-02 at 14:34 -0400, Patrick Martin wrote:
>
> Especially nowadays.  Times are tough.  Strangely, I am rarely an early
> adopter of technology.  I still don't have blue-ray.

Blue what? Is that some form of fish? :)

I used to go with more mid-level gear. Though these days, when you step
back and look at what others are tossing out, you can not only have
machines, but spare parts for days, all on the cheap ;)

> RHEL was a requirement due to the infrastructure software I am
> running. No one distro had everything I needed so I have to run a
> vmware image with Ubuntu for the Ubigraph visualization server on this
> box.

Yes, I can understand RHEL being a requirement. Though CentOS might
pass for it, depending on how they determine the platform. Still, if
you're running software like that, you can likely afford a RHEL license.
Which is not bad at $50, provided you don't need to buy licenses for a
bunch of machines.

Though me being somewhat of a hacker (not a cracker, in the computer
sense...), I would try to run the software on other things and see what
I need to do/hack to make it work :)

> It turns out I didn't really have a choice in distro.  The RedHat
> repositories are kind of sparse. 

I think you can mix in some Fedora repositories? Not really familiar
with RHEL these days, so others would have to comment further. Kyle? :)

>  I'm having to do the old school configure/make/make install for a
> bunch of stuff I used to get from repos.

Doing stuff like that on binary distros is something I hate. It's one
of the reasons I ended up on Gentoo, which automates that process and
makes life, from that point of view, much easier to deal with and
tolerate.

It gets worse when you have to start updating dependencies and doing
the same. Then when it comes time to update, you repeat the process all
over again :(


> Which is why I typically make sure I go with a heat-sinked setup
> rather than one with fans.  I learned my lesson years back when I
> "upgraded" a silent system with a new video card whose fan sounded
> like a 747 on takeoff.

I try to avoid fans, but even on some servers where I could have had
none, or very few, I elected to have fans, and man are they loud and
noisy. When it comes to video cards, I always go without fans, having
had a few die without an easy way to replace them, and others fry
because of the fans shorting out :(

> Your Linux-fu far exceeds mine.  Fun experiments, but no time to
> conduct them.  Though I love it, Linux is a means to an end for me.

Well, I wouldn't go that far; I try to be humble about my skills. There
is always someone more skillful out there. Not to mention I am always
amazed at how much I don't know, no matter what I know. Though I think
at some point I need to stop with that and get back to reaching the
end ;)

> I imagine that this is more frequent when large contiguous blocks of
> memory are requested.  I bet you could track this down with systemtap
> via tracing the malloc calls.  kmalloc requires contiguous, I bet
> that's your trigger.
> 
> I used to track similar events with dtrace in Solaris.

Problem is I never know when it will occur, and I cannot replicate it.
That makes any sort of debugging or troubleshooting very difficult.
Though I should take a look at some point to see what is really going
on. Either way, I think the kernel should realize it's in a loop.

It's trying to keep more stuff in cache than is necessary. I thought it
was due to limited RAM, but having doubled that, the problem still
occurs. It's just not being handled as it should be, IMHO.


> You could create a probe to dump a snapshot of processes periodically
> and feed that into something like rrdtool to monitor memory growth and
> more. Gradual growth would likely mean a memory leak.  Purify was
> always a great tool for debugging such things.

Great ideas, but I have to find a way to replicate it reliably, or at
all, before I can work on finding what it is, etc.
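The periodic-snapshot idea could be sketched even without systemtap or
rrdtool, once there is a suspect process to watch. A minimal sketch,
assuming a Linux /proc filesystem; the growth threshold, sample count,
and interval are made-up values for illustration, not anyone's actual
tooling:

```python
import re
import time

def read_rss_kb(pid):
    """Return a process's resident set size in kB from /proc/<pid>/status."""
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(re.search(r"(\d+)", line).group(1))
    return None

def looks_like_leak(samples, min_growth_kb=1024):
    """Crude heuristic: RSS never shrinks and total growth passes a threshold."""
    if len(samples) < 2:
        return False
    never_shrinks = all(b >= a for a, b in zip(samples, samples[1:]))
    return never_shrinks and (samples[-1] - samples[0]) >= min_growth_kb

def watch(pid, count=10, interval=60):
    """Collect periodic RSS snapshots; these could just as well feed rrdtool."""
    samples = []
    for _ in range(count):
        samples.append(read_rss_kb(pid))
        time.sleep(interval)
    return looks_like_leak(samples)
```

Real leaks rarely grow perfectly monotonically, so over a longer window
a trend fit would be less noisy; this just shows the snapshot idea.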

> I think I was making much to-do about nothing.

The thought has crossed my mind with solid state storage devices in
general. Typically they had limited write/rewrite ability, until they
became like old-school cassette or video tapes, with stuff bleeding
through :)

> 
> I didn't see ext4 as an option in the RHEL-Client 5.6 install. 

They probably did not back-port that, depending on what kernel you are
running. Any bug fixes or newer stuff gets back-ported, because of the
life cycles they support, which span many years. I think 5, might be
7-10. Kyle would know for sure, and I could google, but too lazy :)

>  Besides, I hadn't done my homework on that filesystem so I stuck with
> the familiar ext3.  I like to make educated choices, but sometimes
> don't feel like educating myself.  Filesystem specifications are
> fairly dry reading.

ext4 is rock solid, just as any ext file system has been in my
experience. I am switching to it across the board as fast as I can.
Though it will take a while for production systems, for obvious
reasons :)


> Don't get me started about Windows.  Too late.  The other day I typed
> "ls" in cygwin and waited for 5 minutes.  3 minutes into it, pagefile.sys
> (or whatever it's called) was busy.

Funny :)

> All I was running was a browser on a dual core machine with 4G RAM.
> But my company crams those boxes with whatever invasive surveillance,
> virus prevention and encryption software they can find.  Performance
> suffers accordingly.

Oh yes, you almost need cores and RAM dedicated to just that stuff. But
it amazes me even without all that: if you set the pagefile to a fixed
size and/or make it small, you get all sorts of memory-related pop-ups.
These days I just leave it as is, because any modification seems to make
things worse. Whereas before, fixing it to a certain size or making it
small would increase performance.

But that does not sell new beefier hardware :)

-- 
William L. Thomson Jr.
Obsidian-Studios, Inc.
http://www.obsidian-studios.com



---------------------------------------------------------------------
Archive      http://marc.info/?l=jaxlug-list&r=1&w=2
RSS Feed     http://www.mail-archive.com/[email protected]/maillist.xml
Unsubscribe  [email protected]