Re: Security ERRATA Critical: openafs on SL5.x, SL6.x, SL7.x i386/x86_64

2015-10-29 Thread Sean Murray

Hi Pat

The openafs mailing list says the openafs release fixing
those CVEs is v1.6.15.
The packages listed in the errata for SL6/7 state 1.6.14.

Do the SL packages not track the openafs release numbers?

Thanks
Sean


Re: Autofs segfaults on 6.3 - and solution

2012-10-04 Thread Sean Murray




RAID0: LOL. If I suggested using RAID0, even on a simple dev box, I'd
either be asked to clear my desk on the spot or my name would rise
immediately to #1 on the headcount-reduction list...


That is supposed to be RAID1; I think Konstantin has a buggy keyboard
as well ;-)

Cheers
Sean





Re: SSD and RAID question

2012-09-05 Thread Sean Murray

On 09/04/2012 09:21 PM, Konstantin Olchanski wrote:

On Sun, Sep 02, 2012 at 05:33:24PM -0700, Todd And Margo Chester wrote:


Cherryville drives have a 1.2 million hour MTBF (mean time
between failure) and a 5 year warranty.



Note that MTBF of 1.2 Mhrs (137 years?!?) is the *vendor's estimate*.

It's worse than that: if you read their docs, that number is based on an
average write/read rate (Intel) of 20 GB per day, which is painfully little
for a server.

I looked at these for caching data, but for our use case, and of course
assuming the vendor's figure to be accurate, the effective MTBF would be about 6 months.
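The back-of-envelope arithmetic behind those two numbers can be sketched as below. The 5 TB/day server write rate is a made-up illustration (the original post does not state the actual caching workload); only the 1.2 million hour MTBF and the ~20 GB/day rating come from the thread.

```shell
#!/bin/sh
# Vendor figure from the thread: 1.2 million hours MTBF.
MTBF_HOURS=1200000

# Convert to years: 24 h/day * 365 days/year = 8760 h/year.
awk -v h="$MTBF_HOURS" 'BEGIN { printf "MTBF: %.0f years\n", h / 8760 }'
# prints: MTBF: 137 years

# The rating assumes ~20 GB written per day. For a hypothetical server
# writing 5 TB/day, scale the lifetime down proportionally:
awk 'BEGIN { printf "Scaled lifetime: %.1f months\n", (1200000/8760) * (20/5000) * 12 }'
# prints: Scaled lifetime: 6.6 months
```

The proportional scaling is a simplification (real SSD wear depends on write amplification and spare area), but it shows how a 137-year headline figure collapses to roughly half a year under a heavy write load.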

Sean



Actual failure rates observed in production are unknown; the devices have
not been around long enough.

However, if you read product feedback on newegg, you may note that many SSDs
seem to suffer from the sudden death syndrome - a problem we happily
no longer see on spinning disks.

I guess the 5 year warranty is real enough, but it does not cover your
costs in labour for replacing dead disks, costs of downtime, and costs of lost
data.



... risk of dropping RAID in favor of just one of these drives?



To help you make an informed decision, here is my data.

I have about 9 SSDs in production use (most are in RAID1 pairs); the oldest has been
running since last October:
- 1 has 3 bad blocks (not in a RAID1 pair),
- 1 has a SATA comm problem (vanishes from the system; the system survives
   because it's in a RAID1 pair),
- 0 dead so far.

I have about 20 USB and CF flash drives in production used as SL4/5/6 system
disks, some in RAID1, some as singles; the oldest has been in use for 3 (or more?) years.
There are zero failures, apart from 1 USB flash drive with a few bad blocks and
from infant mortality (every USB3 drive, and all but 1 brand of USB2 flash
drives, fail within a few weeks).

All drives used as singles are backed up nightly (rsync).

All spinning disks are installed in RAID1 pairs.

Would *I* use single drives (any technology - SSD, USB flash, spinning)?

Only for a system that does not require 100% uptime (is not used
by any users) and when I can do daily backups (it cannot be in a room
without a GigE network).









Re: 389 Directory Server for SL6

2011-04-05 Thread Sean Murray

On 04/05/2011 12:02 PM, Rupert Kolb wrote:

Hi,

where is the 389 Directory Server for Scientific Linux 6?




"389-ds-base will be going into RHEL 6 at some point. We cannot put it in EPEL6
because it would conflict. We are interested in suggestions about how to provide
binary packages on EL6."
(quoted from Rich Megginson, 389 mailing list archives, 26/1/2011)

Cheers
Sean


thanks in advance

Rupert




