Re: SSD and RAID question

2012-09-08 Thread jdow

On 2012/09/08 18:34, Todd And Margo Chester wrote:

On 09/05/2012 03:34 PM, jdow wrote:

But if the real limit is related to read/write cycles on the memory
locations, you may find that temperature has little real effect on the
system lifetime.



I did some reliability analysis for the military about 25 years
ago.  It was pretty much following general guidelines and most
of it was baloney.  What I do remember is that failure rate
versus temperature was not a linear curve but an exponential
one.  I will strongly concur with you that heat is your enemy.

What I would love to see, but have never seen, is a Peltier
heat pump to mount hard drives on.

-T


Um, yes, temperature is a REAL problem when it gets high. But so is a
limited number of usable rewrites for memory locations. At "reasonable"
temperatures it should last a long time if write cycle limits are not
a problem. (Relays have a cycle-related lifetime limit in addition to
the usual temperature limit.)

And thank Ghu that we're not dealing with radiation here. That gets
ugly, fast. I had to use two generations old TTL logic without gold
doping for the Frequency Synthesizer and Distribution Unit when
creating the basic Phase 2 GPS satellite design for that reason. Ick!

{^_-}


Re: SSD and RAID question

2012-09-08 Thread Todd And Margo Chester

On 09/05/2012 03:34 PM, jdow wrote:

But if the real limit is related to read/write cycles on the memory
locations, you may find that temperature has little real effect on the
system lifetime.



I did some reliability analysis for the military about 25 years
ago.  It was pretty much following general guidelines and most
of it was baloney.  What I do remember is that failure rate
versus temperature was not a linear curve but an exponential
one.  I will strongly concur with you that heat is your enemy.
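
(For what it's worth, the model usually behind that exponential behavior
is the Arrhenius acceleration factor -- roughly, and from memory:

AF = exp[ (Ea/k) * (1/T_low - 1/T_high) ]

with Ea the activation energy, k Boltzmann's constant, and the
temperatures in kelvin. A modest rise in operating temperature therefore
multiplies the failure rate rather than adding to it.)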

What I would love to see, but have never seen, is a Peltier
heat pump to mount hard drives on.

-T


Re: boot problems

2012-09-08 Thread zxq9
That fstab looked pretty normal; I think the next bit you pasted is more
relevant...


On 09/09/2012 08:00 AM, Dirk Brandherm wrote:

Filesystem                      1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_gapqub01-lv_root  51606140 51604044         0 100% /


Wow! That's pretty full, and it looks like your problem has been found.

The disk free (df) command told you that, and the disk usage (du) command
can tell you where all that space is being used. An easy way to get a
full summary is:


du -shx /*
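
(-s prints a single summary line per argument, -h gives human-readable
sizes, and -x keeps du from wandering onto other mounted filesystems.)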

This may take some time to finish, since it has to walk the whole
filesystem. My guess is that your recent software installation was a lot
larger than you expected and /usr or /var is probably huge now.


/var could be huge because of other software as well, though, since
variable state data is saved there -- database storage files, logs, and
other things that tend to grow over time.
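
If /var turns out to be the big one, the same trick one level down will
point at the culprit (assuming a sort that understands -h, which GNU
coreutils has had for a while):

du -shx /var/* | sort -h | tail -n 10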


Since you are using a logical volume you can add another physical drive
and extend the volume to include [part of] it. Or even better, you can
move /var to its own partition if it's the culprit (my usual approach).
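
The extend path looks roughly like this, assuming the new disk shows up
as /dev/sdb and picking +20G purely as an example size (the volume group
and logical volume names are the ones from your df output):

pvcreate /dev/sdb                                 # tag the new disk as an LVM physical volume
vgextend vg_gapqub01 /dev/sdb                     # add it to the existing volume group
lvextend -L +20G /dev/mapper/vg_gapqub01-lv_root  # grow the root logical volume
resize2fs /dev/mapper/vg_gapqub01-lv_root         # grow the ext4 filesystem to match

The resize2fs step grows the ext4 filesystem to fill the enlarged logical
volume, and ext4 can do that while mounted.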


Re: boot problems

2012-09-08 Thread Dirk Brandherm
Thanks for your advice folks, much appreciated. I believe this has helped
to identify the probable source of the problem. I would still need some
further help in order to solve it though. More specifically:

@zxq9:

fstab seems to have been tampered with. It reads as follows; I can't see
anything unusual, but maybe I am overlooking something.

#
# /etc/fstab
# Created by anaconda on Fri Sep 23 08:55:24 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_gapqub01-lv_root /                       ext4    defaults        1 1
UUID=56f4f74f-1327-4ef5--8acf863cdfbf /boot             ext4    defaults        1 2
/dev/mapper/vg_gapqub01-lv_home /home                   ext4    defaults        1 2
/dev/mapper/vg_gapqub01-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

@Steven Yellin:

You are right, this seems to be where the problem lies. When I boot
in “single” mode and type “df”, this is what I get:

Filesystem                      1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_gapqub01-lv_root  51606140 51604044         0 100% /

Deleting non-essential data from /tmp or /var/log certainly sounds like a
good idea. I deleted the retroclient.log from /var/log, but that didn't do
the trick. I am unsure now what other data could be deleted without
jeopardizing system integrity.
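
I suppose I could hunt for individual large files on the root filesystem
first, something along the lines of:

find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null

(-xdev keeps find on the root filesystem, and the size test lists anything
over roughly 100 MB, so at least I would know what I am about to delete.)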