Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Wade Hampton
On Jul 19, 2013 10:04 PM, "Darr247"  wrote:
>
> On 2013-07-19 1:01 PM, John R Pierce wrote:
> > On 7/19/2013 5:51 AM, Darr247 wrote:
> >> On 2013-07-19 3:54 AM, Gordon Messmer wrote:
> >>> Regardless of your storage, your system should be powered by a
> >>> monitored UPS. Verify that it works, and the drive's cache shouldn't
> >>> be a major concern.
> >> It should also be a 'true sine wave' output when running on battery.
> >> Many UPS units output a 'stepped approximation' (typically pulse width
> >> modulation), which some computer power supplies may not like.
> > virtually all PC and server power supplies nowadays are 'switchers', and
> > couldn't care less what the input wave form looks like.   they full-wave
> > rectify the input voltage to DC, then chop it at 200 kHz or so and run it
> > through a toroidal transformer to generate the various DC voltages.
> >
> >
>
> Heh...  go ahead and use stepped approximation UPS's then.
> What do I know; I'm just a dumb electrician.

I just trust Florida Flicker n Flash - never had outages more than once
a day!

Sorry could not resist...
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Darr247
On 2013-07-19 1:01 PM, John R Pierce wrote:
> On 7/19/2013 5:51 AM, Darr247 wrote:
>> On 2013-07-19 3:54 AM, Gordon Messmer wrote:
>>> Regardless of your storage, your system should be powered by a
>>> monitored UPS. Verify that it works, and the drive's cache shouldn't
>>> be a major concern.
>> It should also be a 'true sine wave' output when running on battery.
>> Many UPS units output a 'stepped approximation' (typically pulse width
>> modulation), which some computer power supplies may not like.
> virtually all PC and server power supplies nowadays are 'switchers', and
> couldn't care less what the input wave form looks like.   they full-wave
> rectify the input voltage to DC, then chop it at 200 kHz or so and run it
> through a toroidal transformer to generate the various DC voltages.
>
>

Heh...  go ahead and use stepped-approximation UPSes then.
What do I know; I'm just a dumb electrician.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Wade Hampton
Thanks for the feedback.

It sounds like all of this needs to be merged into a wiki page.

A couple of take-aways:
- options will depend on the drive
  -- for cheap drives, be more conservative with options, including
     turning the write cache off
  -- provisioning depends on how much the manufacturer reserves
- better options are available for CentOS 6
- kernel scheduler, swap, and /tmp changes might help
  for some use cases -- test and determine if they will help
  (e.g., if your system processes data and creates a lot of files
   in /tmp for processing, putting /tmp in RAM might help)


1)  Determine your use case
2)  Determine the type of drive you need and any items
    specific to the drive (reserved space, TRIM, big caps)
3)  Use newer Linux systems (CentOS 6, later Ubuntu, RHEL, Fedora)
    if you can -- and use ext4 with TRIM enabled, if the drive
    supports it (see the example fstab sketch after this list)
4)  Test
5)  Deploy
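
For step 3, the mount options I have in mind would look something like the
following in /etc/fstab on CentOS 6 (device names, sizes, and the /tmp idea
are just placeholders, and 'discard' assumes both the drive and the kernel
support TRIM -- treat it as a sketch to test, not a recommendation):

    # SSD root: ext4 with online TRIM and no atime updates
    /dev/sda1   /      ext4    defaults,noatime,discard   1 1
    # optional: scratch-heavy /tmp in RAM instead of on the SSD
    tmpfs       /tmp   tmpfs   defaults,size=2g           0 0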

Cheers,
--
Wade Hampton


On Fri, Jul 19, 2013 at 4:07 PM, Gordon Messmer wrote:

> On 07/19/2013 11:21 AM, John R Pierce wrote:
> > On 7/19/2013 11:07 AM, Gordon Messmer wrote:
> >>> - under provision (only use 60-75% of drive, leave unallocated space)
> >> That only applies to some drives, probably not current generation
> >> hardware.
> >>
> > it applies to all SSDs.  they NEED to do write block remapping; if they
> > don't have free space, it's much, much less efficient.
>
> Well, maybe.
>
> The important factor is how much the manufacturer has over-provisioned
> the storage.  Performance targeted drives are going to have a large
> chunk of storage hidden from the OS in order to support block remapping
> functions.  Drives that are sold at a lower cost are often going to
> provide less reserved storage for that purpose.
>
> So, my point is that if you're buying good drives, you probably don't
> need to leave unpartitioned space because there's already a big chunk of
> space that's not even visible to the OS.
>
> Here are a couple of articles on the topic:
>
>
> http://www.edn.com/design/systems-design/4404566/Understanding-SSD-over-provisioning
> http://www.anandtech.com/show/6489/playing-with-op
>
> Anand's tests indicate that there's not really a difference between
> cells reserved by the manufacturer and cells in unpartitioned space on
> the drive.  If your manufacturer left less space reserved, you can
> probably boost performance by reserving space yourself by leaving it
> unpartitioned.
>
> There are diminishing returns, so if the manufacturer did reserve
> sufficient space, you won't get much performance benefit from leaving
> additional space unallocated.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread John R Pierce
On 7/19/2013 2:04 PM, Wade Hampton wrote:
>-- cheap drives, be more conservative with options including
>   turning write-cache off

you can't turn off the write cache on SSDs... if they did let you do 
that, they would grind to a halt, as each n-sector write operation would 
require read-modify-writing 1 MB or so blocks of flash.



-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Gordon Messmer
On 07/19/2013 11:21 AM, John R Pierce wrote:
> On 7/19/2013 11:07 AM, Gordon Messmer wrote:
>>> - under provision (only use 60-75% of drive, leave unallocated space)
>> That only applies to some drives, probably not current generation hardware.
>>
> it applies to all SSDs.  they NEED to do write block remapping; if they
> don't have free space, it's much, much less efficient.

Well, maybe.

The important factor is how much the manufacturer has over-provisioned 
the storage.  Performance targeted drives are going to have a large 
chunk of storage hidden from the OS in order to support block remapping 
functions.  Drives that are sold at a lower cost are often going to 
provide less reserved storage for that purpose.

So, my point is that if you're buying good drives, you probably don't 
need to leave unpartitioned space because there's already a big chunk of 
space that's not even visible to the OS.

Here are a couple of articles on the topic:

http://www.edn.com/design/systems-design/4404566/Understanding-SSD-over-provisioning
http://www.anandtech.com/show/6489/playing-with-op

Anand's tests indicate that there's not really a difference between 
cells reserved by the manufacturer and cells in unpartitioned space on 
the drive.  If your manufacturer left less space reserved, you can 
probably boost performance by reserving space yourself by leaving it 
unpartitioned.
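
If you do want to reserve space yourself, it's just a matter of not
allocating it when you partition -- something like this with parted (the
device name and the percentage are placeholders; a sketch, not a tested
recipe):

    parted -s /dev/sdb mklabel gpt
    # allocate only the first 80%, leave the tail unpartitioned for the drive
    parted -s -a optimal /dev/sdb mkpart primary ext4 1MiB 80%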

There are diminishing returns, so if the manufacturer did reserve 
sufficient space, you won't get much performance benefit from leaving 
additional space unallocated.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Wade Hampton
From what I have read, TRIM can also be done on demand
for older systems or file systems that are not TRIM aware.
For CentOS 5.x, a modified hdparm could be used to send
the TRIM command to the drive.  Anyone have experience
with this?
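
(If we end up on CentOS 6 instead, the batch route seems to be fstrim from
util-linux rather than hdparm -- roughly the following, assuming the kernel
and the drive both support TRIM; path and availability may vary, and I have
not tested it myself:

    fstrim -v /                 # trim free space on a mounted filesystem
    # e.g. a weekly cron.d entry:
    # 0 3 * * 0  root  /sbin/fstrim /
)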
--
Wade Hampton


On Fri, Jul 19, 2013 at 1:05 PM, John R Pierce  wrote:

> On 7/19/2013 8:48 AM, Wade Hampton wrote:
> > I found lots of references to TRIM, but it is not included
> > with CentOS 5.  However, I found that TRIM is in the
> > newer hdparm, which could be built from source,
> > but AFAIK is not included with the CentOS 5 RPMs.  That way,
> > one could trim via a cron job?
>
>
> trim is done at the file system kernel level.  essentially, it's an
> extra command to the disk telling it this block is complete and the rest
> of it 'doesn't matter' so the drive doesn't need to actually store it.
>
>
> On 7/19/2013 7:10 AM, Alexander Arlt wrote:
> > Hm. I'm not sure I'd go with that. In my understanding, I'd just buy
> > something like a Samsung SSD 840 Pro (to avoid TLC) and over-provision
> > about 60% of the capacity. With the 512 GiB variant, I'd end up with
> > about 200 GiB net. That way, I have no issues with TRIM or GC (there
> > are always enough empty cells) and wear leveling is also a non-issue
> > (at least right now...).
>
> those drives do NOT have 'supercaps', so they will lose any recently
> written data on power failures.   This WILL result in corrupted file
> systems, much the same as using a RAID controller with write-back cache
> that doesn't have an internal RAID battery.
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Gordon Messmer
On 07/19/2013 08:48 AM, Wade Hampton wrote:
> I found lots of references to TRIM, but it is not included
> with CentOS 5.  However, I found that TRIM is in the
> newer hdparm, which could be built from source,
> but AFAIK is not included with the CentOS 5 RPMs.  That way,
> one could trim via a cron job?

NO!

 From the man page:
--trim-sectors
   For  Solid  State  Drives  (SSDs).  EXCEPTIONALLY DANGEROUS.
   DO NOT USE THIS FLAG!!

That command can be used to trim sectors if you know which sector to 
start at and how many to TRIM.  The only thing it's likely to be useful 
for is deleting all of the data on a drive.

> - use file system supporting TRIM (e.g., EXT4 or BTRFS).

Yes, on release 6 or newer.

> - update hdparm to get TRIM support on CentOS 5

No.

> - align on block erase boundaries for drive, or use 1M boundaries
> - use native, non LVM partitions

LVM is fine.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/newmds-ssdtuning.html
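
If LVM is in the stack and you want LVM's own operations (lvremove,
lvreduce) to send discards down to the SSD, there's a switch for that in
/etc/lvm/lvm.conf on 6 -- I believe it appeared around 6.2, so check your
lvm2 version; a minimal sketch:

    devices {
        # let LVM discard freed extents when LVs are removed or shrunk
        issue_discards = 1
    }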

> - under provision (only use 60-75% of drive, leave unallocated space)

That only applies to some drives, probably not current generation hardware.

> - set noatime in /etc/fstab
>  (or relatime w/ newer to keep atime data sane)

Don't bother.  The current default is relatime.

> - move some tmp files to tmpfs
> (e.g., periodic status files and things that change often)
> - move /tmp to RAM (per some suggestions)

Same thing.  Most SSDs should have write capacity far in excess of a 
spinning disk, so the decision to do this shouldn't be driven by the use 
of SSD.

> - use secure erase before re-use of drive
> - make sure drive has the latest firmware

Not always.  Look at the changelog for your drive's firmware if you're 
concerned and decide whether you need to update it based on whether any 
of the named fixes affect your system.  For instance, one of my 
co-workers was using a Crucial brand drive in his laptop, and it 
frequently wasn't seen by the system on a cold boot.  This caused 
hibernate to always fail.  Firmware upgrades made the problem worse, as 
I recall.

> - add “elevator=noop” to the kernel boot options
>or use deadline, can change on a drive-by-drive basis
>(e.g., if HD + SSD in a system)
> - reduce swappiness of kernel via /etc/sysctl.conf:
> vm.swappiness=1
> vm.vfs_cache_pressure=50
> -- or swap to HD, not SSD

None of those should be driven by SSD use.  Evaluate their performance 
effects on your specific workload and decide whether they help.  I 
wouldn't use them in most cases.
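
If you do want to measure them, both knobs are easy to flip at runtime
before committing anything to grub.conf or sysctl.conf (sda is a
placeholder; the values are just the ones you listed):

    cat /sys/block/sda/queue/scheduler               # show available/current
    echo deadline > /sys/block/sda/queue/scheduler   # or noop, per drive
    sysctl -w vm.swappiness=1
    sysctl -w vm.vfs_cache_pressure=50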

> - BIOS tuning to set drives to “write back” and using hdparm:
> hdparm -W1 /dev/sda

That's not write-back, that's write-cache.  It's probably enabled by 
default.  When it's on, the drive will be faster and less safe (this is 
why John keeps advising you to look for a drive with a capacitor-backed 
write cache).  When it's off, the drive will be slower and safer 
(and you don't need a capacitor-backed write cache).
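
You can see and toggle the setting with hdparm (sda is a placeholder, and
whether a given SSD actually honors -W0 is another question):

    hdparm -W /dev/sda      # report current write-cache state
    hdparm -W1 /dev/sda     # enable (faster, relies on the cache being safe)
    hdparm -W0 /dev/sda     # disable (slower, safer on drives without caps)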

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread John R Pierce
On 7/19/2013 11:07 AM, Gordon Messmer wrote:
>> - under provision (only use 60-75% of drive, leave unallocated space)
> That only applies to some drives, probably not current generation hardware.
>

it applies to all SSDs.  they NEED to do write block remapping; if they 
don't have free space, it's much, much less efficient.


-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread John R Pierce
On 7/19/2013 5:51 AM, Darr247 wrote:
> On 2013-07-19 3:54 AM, Gordon Messmer wrote:
>> Regardless of your storage, your system should be powered by a
>> monitored UPS. Verify that it works, and the drive's cache shouldn't
>> be a major concern.
> It should also be a 'true sine wave' output when running on battery.
> Many UPS units output a 'stepped approximation' (typically pulse width
> modulation), which some computer power supplies may not like.

virtually all PC and server power supplies nowadays are 'switchers', and 
couldn't care less what the input wave form looks like.   they full-wave 
rectify the input voltage to DC, then chop it at 200 kHz or so and run it 
through a toroidal transformer to generate the various DC voltages.


-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread John R Pierce
On 7/19/2013 8:48 AM, Wade Hampton wrote:
> I found lots of references to TRIM, but it is not included
> with CentOS 5.  However, I found that TRIM is in the
> newer hdparm, which could be built from source,
> but AFAIK is not included with the CentOS 5 RPMs.  That way,
> one could trim via a cron job?


trim is done at the file system kernel level.  essentially, it's an 
extra command to the disk telling it this block is complete and the rest 
of it 'doesn't matter' so the drive doesn't need to actually store it.


On 7/19/2013 7:10 AM, Alexander Arlt wrote:
> Hm. I'm not sure I'd go with that. In my understanding, I'd just buy
> something like a Samsung SSD 840 Pro (to avoid TLC) and over-provision
> about 60% of the capacity. With the 512 GiB variant, I'd end up with
> about 200 GiB net. That way, I have no issues with TRIM or GC (there are
> always enough empty cells) and wear leveling is also a non-issue (at
> least right now...).

those drives do NOT have 'supercaps', so they will lose any recently 
written data on power failures.   This WILL result in corrupted file 
systems, much the same as using a RAID controller with write-back cache 
that doesn't have an internal RAID battery.



-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread Drew Weaver
John Doe wrote:
> From: Drew Weaver 
>
>> If these drives do not have TLER do not use them with LSI controllers.
>
> Not sure about TLER on those Plextors...

TLER would only show up on something that looks at a *very* low level on the 
physical drive. What I know is that you can see it with smartctl - from the man 
page:

   scterc[,READTIME,WRITETIME] - [ATA only] prints values and
   descriptions of the SCT Error Recovery Control settings. These
   are equivalent to TLER (as used by Western Digital), CCTL (as
   used by Samsung and Hitachi) and ERC (as used by Seagate).
   READTIME and WRITETIME arguments (deciseconds) set the specified
   values. Values of 0 disable the feature, other values less than
   65 are probably not supported. For RAID configurations, this is
   typically set to 70,70 deciseconds.

Note that knowing this was the result of a *lot* of research a couple-or so 
years ago. One *good* thing *seems* to be WD's new Red line, which is targeted 
toward NAS, they say... because they've put TLER back to something appropriate, 
like 7 sec or so, where it was 2 *minutes* for their "desktop" drives, and they 
disallowed changing it in firmware around '09, and the other OEMs followed 
suit. What makes Red good, if they work, is that they're only about one-third 
more than the low-cost drives, where the "server-grade" drives are 2-3 *times* 
the cost (look at the price of Seagate Constellations, for example).



I would also like to note that up until Red was released you had to use RE to 
get TLER, and now apparently RE, SE, and Red (cost in that order) all support 
TLER.

The thing that worries me about Red is that they're listed as only supporting 
up to 5 drives in an array -- how are they limiting that?

I think they probably could've just merged Red and SE into one line of drives, 
but I guess they limited Red to 3TB, so if you want a 4TB part you have to get 
the SE.

Something in the back of my mind tells me that RE, SE, and Red are the exact 
same hardware with different FW.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Wade Hampton
I have been following this and have some notes.  Can
you folks comment on them?  I am considering migrating
some systems to SSD but have not had time to set up
a test system yet to verify it.

I found lots of references to TRIM, but it is not included
with CentOS 5.  However, I found that TRIM is in the
newer hdparm, which could be built from source,
but AFAIK is not included with the CentOS 5 RPMs.  That way,
one could trim via a cron job?
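
Before doing any of that I was planning to at least confirm the drive
reports TRIM support, along these lines (sda is a placeholder):

    # look for "Data Set Management TRIM supported" in the identify data
    hdparm -I /dev/sda | grep -i trim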

Could you folks please comment on the notes below, which I collected
from multiple sites online?  These are what I was planning on doing
for my systems.  Notes include:

- use file system supporting TRIM (e.g., EXT4 or BTRFS).
- update hdparm to get TRIM support on CentOS 5
- align on block erase boundaries for drive, or use 1M boundaries
- use native, non LVM partitions
- under provision (only use 60-75% of drive, leave unallocated space)
- set noatime in /etc/fstab
  (or relatime w/ newer to keep atime data sane)
- move some tmp files to tmpfs
  (e.g., periodic status files and things that change often)
- move /tmp to RAM (per some suggestions)
- use secure erase before re-use of drive
- make sure drive has the latest firmware
- add “elevator=noop” to the kernel boot options
  or use deadline, can change on a drive-by-drive basis
  (e.g., if HD + SSD in a system)
- reduce swappiness of kernel via /etc/sysctl.conf:
  vm.swappiness=1
  vm.vfs_cache_pressure=50
  -- or swap to HD, not SSD
- BIOS tuning to set drives to “write back” and using hdparm:
  hdparm -W1 /dev/sda



Any comments?

--

Wade Hampton




On Fri, Jul 19, 2013 at 10:10 AM, Alexander Arlt  wrote:

> On 07/19/2013 03:17 AM, Lists wrote:
> > Main thing is DO NOT EVEN THINK OF USING CONSUMER GRADE SSDs. SSDs are a
> > bit like a salt shaker, they have only a certain number of shakes and
> > when it runs out of writes, well, the salt shaker is empty. Spend the
> > money and get a decent Enterprise SSD. We've been conservatively using
> > the (spendy) Intel drives with good results.
>
> Hm. I'm not sure I'd go with that. In my understanding, I'd just buy
> something like a Samsung SSD 840 Pro (to avoid TLC) and over-provision
> about 60% of the capacity. With the 512 GiB variant, I'd end up with
> about 200 GiB net. That way, I have no issues with TRIM or GC (there are
> always enough empty cells) and wear leveling is also a non-issue (at
> least right now...).
>
> It's a lot cheaper than the "Enterprise Grade SSDs", which are still
> basically MLC SSDs and are doing just the same thing as we are. And for
> the price of those golden SSDs I get about 7 or 8 of the "Consumer SSDs",
> so I just swap those out whenever I feel like it -- or whenever SMART
> tells me to.
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] if /else in expect script

2013-07-19 Thread Brian Mathis
Hi Tim,

You seem pretty determined to make this as convoluted as possible.  Adding
'expect' into the mix?  Using 'tee -a' to simply append a line to a file?
chmod 777?

If you take a look at my previous reply, you can see this is relatively
simple, and I basically wrote it for you, and even improved it to add some
checking before making the changes.

There is no need to include a password in the script, as it can be read
from the user like:
echo "Enter password"
read PASSWD

What are the issues you see with that?
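
(Two small notes.  First, bash's read takes -s so the password isn't echoed
back as it's typed:

    read -s -p "Enter password: " PASSWD

Second, if you do stick with expect, $? is a shell variable and won't work
inside the script; expect's own 'wait' command returns the spawned process's
exit status after 'expect eof'.  A rough, untested sketch around the visudo
check in your script:

    spawn ssh -t $host {/usr/sbin/visudo -cf /tmp/sudoers-template}
    expect eof
    # wait returns: pid, spawn id, 0 or -1, then the exit status (or errno)
    foreach {pid spawnid oserr status} [wait] break
    if { $oserr == 0 && $status == 0 } {
        # visudo was happy -- safe to move the template into place
    } else {
        puts "Verification of sudo template failed. Aborting."
    }

Since ssh passes back the remote command's exit status, that should reflect
the visudo result -- but again, untested.)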


❧ Brian Mathis



On Thu, Jul 18, 2013 at 5:37 PM, Tim Dunphy  wrote:

>  I took your suggestion and turned my (ill-advised) sudoers bash script
> into an expect script! It works a lot better this way and is more secure,
> because I'm not trying to store a password in a script (which I recognize
> as a bad idea anyway; I think I've learned my lesson here).
>
>
> It really works well. But the only thing I'm still trying to figure out is
> how to put an if statement in there based on the success of the last command
> ($?) before it'll move the new sudoers file into place. I'm verifying it with
> visudo before attempting to make the move. I'd like to make the final move
> based on the success/failure of that.
>
> Anyway, here's the script:
>
> stty -echo
> send_user -- "Please enter the host: "
> expect_user -re "(.*)\n"
> send_user "\n"
> set host $expect_out(1,string)
>
> stty -echo
> send_user -- "Please enter your username: "
> expect_user -re "(.*)\n"
> send_user "\n"
> set username $expect_out(1,string)
>
> stty -echo
> send_user -- "Please enter your passwd: "
> expect_user -re "(.*)\n"
> send_user "\n"
> set passwd $expect_out(1,string)
>
>
> set timeout -1
> spawn ssh -t $host {sudo -S cp /etc/sudoers /tmp/sudoers-template}
> match_max 10
> expect -exact "\[sudo\] password for $username: "
> send -- "$passwd\r"
> expect eof
>
> set timeout -1
> spawn ssh -t $host {sudo -S rm -f /tmp/sudoers.tmp}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {sudo -S echo '%tekmark_t1  ALL=(root) NOPASSWD:
> /sbin/service, /bin/rm, /usr/bin/du, /bin/df, /bin/ls, /usr/bin/find,
> /usr/sbin/tcpdump' > /tmp/sudoers.tmp}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {sudo -S chmod 777 /tmp/sudoers-template}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {cat /tmp/sudoers.tmp | tee  -a /tmp/sudoers-template}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {/usr/sbin/visudo -cf /tmp/sudoers-template}
> match_max 10
> expect eof
>
> if { "$?"  == 0 } {
>
> set timeout -1
> spawn ssh -t $host {sudo -S cp /etc/sudoers /tmp/sudoers.bak}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {sudo -S cp /tmp/sudoers-template /etc/sudoers}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {sudo -S /usr/sbin/visudo -cf /etc/sudoers}
> match_max 10
> expect eof
>
> set timeout -1
> spawn ssh -t $host {rm -f /tmp/sudoers-template}
> match_max 10
> expect eof
> } else {
>
>  puts "Verification of sudo template failed. Aborting. Process failed"
>
> }
>
>
> Pretty simple! Got a suggestion to make this work? If I get that part
> right, it'll be done.
>
>
> Thanks!
>
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Alexander Arlt
On 07/19/2013 03:17 AM, Lists wrote:
> Main thing is DO NOT EVEN THINK OF USING CONSUMER GRADE SSDs. SSDs are a 
> bit like a salt shaker, they have only a certain number of shakes and 
> when it runs out of writes, well, the salt shaker is empty. Spend the 
> money and get a decent Enterprise SSD. We've been conservatively using 
> the (spendy) Intel drives with good results.

Hm. I'm not sure I'd go with that. In my understanding, I'd just buy
something like a Samsung SSD 840 Pro (to avoid TLC) and over-provision
about 60% of the capacity. With the 512 GiB variant, I'd end up with
about 200 GiB net. That way, I have no issues with TRIM or GC (there are
always enough empty cells) and wear leveling is also a non-issue (at
least right now...).

It's a lot cheaper than the "Enterprise Grade SSDs", which are still
basically MLC SSDs and are doing just the same thing as we are. And for
the price of those golden SSDs I get about 7 or 8 of the "Consumer SSDs",
so I just swap those out whenever I feel like it -- or whenever SMART
tells me to.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread m . roth
John Doe wrote:
> From: Drew Weaver 
>
>> If these drives do not have TLER do not use them with LSI controllers.
>
> Not sure about TLER on those Plextors...

TLER would only show up on something that looks at a *very* low level on
the physical drive. What I know is that you can see it with smartctl -
from the man page:

   scterc[,READTIME,WRITETIME] - [ATA only] prints values and
   descriptions of the SCT Error Recovery Control settings. These
   are equivalent to TLER (as used by Western Digital), CCTL (as
   used by Samsung and Hitachi) and ERC (as used by Seagate).
   READTIME and WRITETIME arguments (deciseconds) set the specified
   values. Values of 0 disable the feature, other values less than
   65 are probably not supported. For RAID configurations, this is
   typically set to 70,70 deciseconds.
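
Querying and setting it looks something like this (sda is a placeholder, and
plenty of drives simply refuse the set):

   smartctl -l scterc /dev/sda          # show current ERC settings
   smartctl -l scterc,70,70 /dev/sda    # set read/write timeout to 7.0 sec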

Note that knowing this was the result of a *lot* of research a couple-or
so years ago. One *good* thing *seems* to be WD's new Red line, which is
targeted toward NAS, they say... because they've put TLER back to
something appropriate, like 7 sec or so, where it was 2 *minutes* for
their "desktop" drives, and they disallowed changing it in firmware around
'09, and the other OEMs followed suit. What makes Red good, if they work,
is that they're only about one-third more than the low-cost drives, where
the "server-grade" drives are 2-3 *times* the cost (look at the price of
Seagate Constellations, for example).

  mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread Drew Weaver
Not sure about TLER on those Plextors...
This is what megacli says:

Enclosure Device ID: 252
Slot Number: 0
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: N/A
Device Id: 0
WWN: 4154412020202020
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 119.242 GB [0xee7c2b0 Sectors]
Non Coerced Size: 118.742 GB [0xed7c2b0 Sectors]
Coerced Size: 118.277 GB [0xec8e000 Sectors]
Sector Size:  0
Logical Sector Size:  0
Physical Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 1.02
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x44332211
Connected Port Number: 0(path0)
Inquiry Data: P02302103634    PLEXTOR PX-128M5Pro 1.02
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive:  Not Certified
Drive Temperature : N/A
PI Eligibility:  No
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No



Apart from that, I found the lsi events logs...
  Command timeout on PD 00(e0xfc/s0)
  . . .
  PD 00(e0xfc/s0) Path ... reset
  Error on PD 00(e0xfc/s0)
  State change on PD 00(e0xfc/s0) from ONLINE(18) to FAILED
  State change on VD 00/0 from OPTIMAL(3) to DEGRADED(2)
  Command timeout on PD 00(e0xfc/s0)
  PD 00(e0xfc/s0) Path ... reset
  State change on PD 00(e0xfc/s0) from FAILED(11) to UNCONFIGURED_BAD(1)
  . . .

Exact same behavior for the 2 servers and 3 SSDs...
So it seems the ctrl changes them first to failed and then to unconfigured...
---
We have experienced similar behavior with (to be blunt, non-Intel) SSDs and 
with spinning rust (without TLER) on Dell PERC controllers (which are the same 
as LSI controllers): the drives simply "fall out" of the RAID arrays they are 
in after a random period of time.

This seems to "just happen" with certain SSDs, in the beginning we pushed very 
hard to try and understand why; now we just use different SSDs.

The ones we've had problems with are: OCZ Vertex, Samsung 840/840 pro, etc
Ones we've never had issues with are: Intel 520, Intel S3700

I know this doesn't really help you, but you could see if using a different SSD 
makes the problem go away.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread John Doe
From: Drew Weaver 

> 
> If these drives do not have TLER do not use them with LSI controllers.
>

Not sure about TLER on those Plextors...
This is what megacli says:

Enclosure Device ID: 252
Slot Number: 0
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: N/A
Device Id: 0
WWN: 4154412020202020
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 119.242 GB [0xee7c2b0 Sectors]
Non Coerced Size: 118.742 GB [0xed7c2b0 Sectors]
Coerced Size: 118.277 GB [0xec8e000 Sectors]
Sector Size:  0
Logical Sector Size:  0
Physical Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 1.02
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x44332211
Connected Port Number: 0(path0) 
Inquiry Data: P02302103634    PLEXTOR PX-128M5Pro 1.02  
  
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Solid State Device
Drive:  Not Certified
Drive Temperature : N/A
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Drive has flagged a S.M.A.R.T alert : No



Apart from that, I found the lsi events logs...
  Command timeout on PD 00(e0xfc/s0)
  . . .
  PD 00(e0xfc/s0) Path ... reset
  Error on PD 00(e0xfc/s0)
  State change on PD 00(e0xfc/s0) from ONLINE(18) to FAILED
  State change on VD 00/0 from OPTIMAL(3) to DEGRADED(2)
  Command timeout on PD 00(e0xfc/s0)
  PD 00(e0xfc/s0) Path ... reset
  State change on PD 00(e0xfc/s0) from FAILED(11) to UNCONFIGURED_BAD(1)
  . . .

Exact same behavior for the 2 servers and 3 SSDs...
So it seems the ctrl changes them first to failed and then to unconfigured...
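
For reference, I pulled those events with something along the lines of the
commands below -- exact flags vary a bit between MegaCli versions, so treat
this as approximate:

  MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL
  MegaCli64 -PDList -aALL    # only lists the drives it still knows about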

Thx,
JD
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Darr247
On 2013-07-19 3:54 AM, Gordon Messmer wrote:
> Regardless of your storage, your system should be powered by a 
> monitored UPS. Verify that it works, and the drive's cache shouldn't 
> be a major concern.

It should also be a 'true sine wave' output when running on battery. 
Many UPS units output a 'stepped approximation' (typically pulse width 
modulation), which some computer power supplies may not like.

p.s. not really CentOS-related /per se/, but I have set 
centos@centos.org's entry in the address book to receive Plain Text... 
still, this looks like HTML, so far.
What other setting might I need to check in Thunderbird 17?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread Drew Weaver
If these drives do not have TLER do not use them with LSI controllers.

-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of 
John Doe
Sent: Friday, July 19, 2013 5:13 AM
To: CentOS mailing list
Subject: Re: [CentOS] LSI MegaRAID experience...

The thing that bothers me is that the ctrl sees all the drives at first, but 
later does not see some anymore, and it just "forgets" about them, like they 
never existed.
I would have expected to still see them, but in a failed state...
Here, megacli just lists info for the remaining drive(s).
So I miss all the "post mortem" info, like the SMART status or the error 
counts if they had any...
Am I missing an option to add to megacli to show the failed ones too, maybe?
Having used HP RAID ctrls, I am used to seeing all drives, even failed ones.

Anyway, I'll have to check the drives, backplane and cabling...

Thx for all the answers,
JD
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS-announce Digest, Vol 101, Issue 13

2013-07-19 Thread centos-announce-request
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-requ...@centos.org

You can reach the person managing the list at
centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-announce digest..."


Today's Topics:

   1. CEBA-2013:1096  CentOS 6 bash Update (Johnny Hughes)
   2. CESA-2013:X003 Important Xen4CentOS xen Update (Johnny Hughes)


--

Message: 1
Date: Thu, 18 Jul 2013 13:49:18 +
From: Johnny Hughes 
Subject: [CentOS-announce] CEBA-2013:1096  CentOS 6 bash Update
To: centos-annou...@centos.org
Message-ID: <20130718134918.ga21...@n04.lon1.karan.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Bugfix Advisory 2013:1096 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2013-1096.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

i386:
81bc62e6d2396a462ea898f2c91c97578ad2d744af4588686602ffc3bec47420  
bash-4.1.2-15.el6_4.i686.rpm
4f9b9564df98ff7090a19af4a79b3a3ce0a7555bc745aded7354acad2f1ce613  
bash-doc-4.1.2-15.el6_4.i686.rpm

x86_64:
8c70e59d474eb6d8b8d64e4881fb27c853f102416a6d3c97fdf35f7b5a111d96  
bash-4.1.2-15.el6_4.x86_64.rpm
968e6c3b47ee3a617d614f7012fc90a1cdf25ffd8f9a7c709a70e440774d04c1  
bash-doc-4.1.2-15.el6_4.x86_64.rpm

Source:
17e92fbaf55ef5fbaccc7e28761edaaa1d18ede8e330fb20a40a27d27605003c  
bash-4.1.2-15.el6_4.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net



--

Message: 2
Date: Thu, 18 Jul 2013 09:57:37 -0500
From: Johnny Hughes 
Subject: [CentOS-announce] CESA-2013:X003 Important Xen4CentOS xen
Update
To: CentOS-Announce 
Message-ID: <51e80261.8030...@centos.org>
Content-Type: text/plain; charset="iso-8859-1"

CentOS Errata and Security Advisory 2013:X003 Important (Xen4CentOS)

The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )

-
X86_64
-
xen-4.2.2-23.el6.centos.alt.x86_64.rpm:
5e4118518434950ae600618884f97a0f959f39f772cce1e5c540e25ddadaef51

xen-devel-4.2.2-23.el6.centos.alt.x86_64.rpm:
3c7393b96702124c71a932451660bc2bd1278854a15d584e64dfa78dff48ece0

xen-doc-4.2.2-23.el6.centos.alt.x86_64.rpm:
79ce2d7d320dad6355588eec974d62f31675250685a0d23dd67731baf0b9caa1

xen-hypervisor-4.2.2-23.el6.centos.alt.x86_64.rpm:
7a0e0e42e48d8b887d8626c52815b3bf998ea64819f91d5a20a7f5f18a4586fa

xen-libs-4.2.2-23.el6.centos.alt.x86_64.rpm:
7cc9f2c7d36b3607a47463815217fe1fa63ec21ed8b8b4340cb0db7b98fcb79b

xen-licenses-4.2.2-23.el6.centos.alt.x86_64.rpm:
044f9947cb6f56f47ebababe38afce8168e2e42632cec6b6fcb90169c322d65a

xen-ocaml-4.2.2-23.el6.centos.alt.x86_64.rpm:
d0cb1cb336706b630649d30696a617f354270941787dcf8b7439679dae323218

xen-ocaml-devel-4.2.2-23.el6.centos.alt.x86_64.rpm:
6bd78b4c9ba0c2d714fdf1da6ecf537823f0119dec0ff10710db3144920dd02e

xen-runtime-4.2.2-23.el6.centos.alt.x86_64.rpm:
a047a4692f967b374cd9b6f9926510ca547f0d729c2360d219a285d0306e


-
Source:
-
xen-4.2.2-23.el6.centos.alt.src.rpm:
a0e7460f5c9e5c8f0bbd9057af9b946dd0ca3a3cbbc59083baf43fecbec2d53a

==
xen Changelog info from the SPEC file:

* Thu Jul 18 2013 Johnny Hughes joh...@centos.org - 4.2.2-23.el6.centos=

- added Patch131 for XSA-57 (CVE-2013-2211)
- added Patch132 for XSA-58 (CVE-2013-1432)

==

The following Security issues have been addressed in this kernel:

CVE-2013-2211 (XSA-57, Important):
http://lists.xen.org/archives/html/xen-announce/2013-06/msg00011.html

CVE-2013-1432 (XSA-58, Important):
http://lists.xen.org/archives/html/xen-announce/2013-06/msg00012.html

--
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #centos at irc.freenode.net

-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: OpenPGP digital signature
Url : 
http://lists.centos.org/pipermail/centos-announce/attachments/20130718/1973e3c5/attachment-0001.bin
 

--

___
CentOS-announce mailing list
centos-annou...@centos.org
http://lists.centos.org/mailman/listinfo/centos-announce


End of CentOS-announce Digest, Vol 101, Issue 13

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos

Re: [CentOS] LSI MegaRAID experience...

2013-07-19 Thread John Doe
The thing that bothers me is that the ctrl sees all the drives at first, but 
later does not see some anymore, and it just "forgets" about them, like they 
never existed.
I would have expected to still see them, but in a failed state...
Here, megacli just lists info for the remaining drive(s).
So I miss all the "post mortem" info, like the SMART status or the error 
counts if they had any...
Am I missing an option to add to megacli to show the failed ones too, maybe?
Having used HP RAID ctrls, I am used to seeing all drives, even failed ones.

Anyway, I'll have to check the drives, backplane and cabling...

Thx for all the answers,
JD
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread John R Pierce
On 7/19/2013 12:54 AM, Gordon Messmer wrote:
> Regardless of your storage, your system should be powered by a monitored
> UPS.  Verify that it works, and the drive's cache shouldn't be a major
> concern.

done right, there should be two UPSes, each hooked up to alternate 
redundant power supplies in each chassis.

even so, things happen. a PDU gets tripped and shuts off a whole rack 
unexpectedly.

-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSD support in C5 and C6

2013-07-19 Thread Gordon Messmer
On 07/18/2013 06:55 PM, John R Pierce wrote:
> and not all Intel drives have the key features of supercap-backed cache
> and reliable write-acknowledgement behavior you want from a server.

Regardless of your storage, your system should be powered by a monitored 
UPS.  Verify that it works, and the drive's cache shouldn't be a major 
concern.

> that 95% (20:1) only applies to an SSD compared with a single
> desktop-grade (7200 rpm) disk.
>
> do note, you can easily build proper SAS RAIDs that are just about as
> fast as a single SSD when used for write-intensive database OLTP

Yes, but an array can be built with SSDs as well.  Its performance will 
have the same advantage over the SAS array that an SSD has over a single 
drive.

> one funny thing I've noted about various SSDs.   when they are new,
> they benchmark much faster than after they've been in production use.
> expect a several-times slowdown in write performance once you've written
> approximately the size of the disk worth of blocks.   NEVER let them
> get above about 75% full.

Again, yes, but that's what TRIM is for.  The slowdown you noticed is 
the result of using a filesystem or array that didn't support TRIM.

My understanding is that some of the current generation of drives no 
longer need TRIM.  The wear-leveling and block remapping features 
already present were combined with a percentage of reserved blocks to 
automatically reset blocks as they're re-written.  I couldn't name those 
drives, though.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos