Re: Do I need SAS drives?..

2017-08-09 Thread Frank Leonhardt (m)
Simple answer is to use either. You're running FreeBSD with ZFS, right? BSD 
will hot-plug anything. I suspect the 'hot plug' caveat relates to workarounds 
for hardware RAID under Microsoft systems.

Hot plug enclosures will also let the host know a drive has been pulled. 
Otherwise ZFS won't know whether it was pulled or is unresponsive due to it 
being on fire or something. With 8 drives in your array you can probably figure 
this out yourself.

SAS drives use SCSI commands, which are supposedly better than SATA commands. 
Electrically they are the same. SAS drives are more expensive and tend to be 
higher spec mechanically, but not always so. Incidentally, nearline SAS is a 
cheaper SATA drive that understands SAS protocol and has dual ports. Marketing.

Basically, if you really want speed at all costs go for SAS. If you want best 
capacity for your money, go SATA. If in doubt, go for SATA. If you don't know 
you need SAS for some reason, you probably don't.
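In practice the hot-swap dance under ZFS is short. A sketch, with pool name "tank" and device "da3" as placeholders for your own:

```shell
#!/bin/sh
# Sketch of a ZFS drive hot-swap on FreeBSD.
# Pool "tank" and device "da3" are placeholders -- substitute your own.
zpool status tank         # identify the failing drive
zpool offline tank da3    # tell ZFS to stop using it
# ...physically swap the drive in its hot-plug bay...
camcontrol rescan all     # ask CAM to rediscover devices (usually automatic)
zpool replace tank da3    # resilver onto the replacement
zpool status tank         # watch resilver progress
```

camcontrol(8) can also spin a drive down first (camcontrol standby/stop) if the enclosure doesn't handle that for you.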

Regards, Frank.


On 9 August 2017 15:27:37 BST, "Mikhail T."  wrote:
>My server has 8 "hot-plug" slots, that can accept both SATA and SAS
>drives. SATA ones tend to be cheaper for the same features (like
>cache-sizes), what am I getting for the extra money spent on SAS?
>
>Asking specifically about the protocol differences... It would seem,
>for example, SATA can not be as easily hot-plugged, but with
>camcontrol(8) that should not be a problem, right? What else? Thank
>you!
>-- 
>Sent from mobile device, please, pardon shorthand.
>
>
>___
>freebsd-hardware@freebsd.org mailing list
>https://lists.freebsd.org/mailman/listinfo/freebsd-hardware
>To unsubscribe, send any mail to
>"freebsd-hardware-unsubscr...@freebsd.org"

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: Do I need SAS drives?..

2017-08-09 Thread Josh Paetzel


On Wed, Aug 9, 2017, at 09:55 AM, Frank Leonhardt (m) wrote:
> Simple answer is to use either. You're running FreeBSD with ZFS, right?
> BSD will hot plug anything. I suspect 'hot plug' relates to Microsoft
> workaround hardware RAID.
> 
> Hot plug enclosures will also let the host know a drive has been pulled.
> Otherwise ZFS won't know whether it was pulled or is unresponsive due to
> it being on fire or something. With 8 drives in your array you can
> probably figure this out yourself.
> 
> SAS drives use SCSI commands, which are supposedly better than SATA
> commands. Electrically they are the same. SAS drives are more expensive
> and tend to be higher spec mechanically, but not always so. Incidentally,
> nearline SAS is a cheaper SATA drive that understands SAS protocol and
> has dual ports. Marketing.
> 
> Basically, if you really want speed at all costs go for SAS. If you want
> best capacity for your money, go SATA. If in doubt, go for SATA. If you
> don't know you need SAS for some reason, you probably don't.
> 
> Regards, Frank.
> 
> 
> On 9 August 2017 15:27:37 BST, "Mikhail T." 
> wrote:
> >My server has 8 "hot-plug" slots, that can accept both SATA and SAS
> >drives. SATA ones tend to be cheaper for the same features (like
> >cache-sizes), what am I getting for the extra money spent on SAS?
> >
> >Asking specifically about the protocol differences... It would seem,
> >for example, SATA can not be as easily hot-plugged, but with
> >camcontrol(8) that should not be a problem, right? What else? Thank
> >you!
> >-- 

I have a different take on this.  For starters, SAS and SATA aren't
electrically compatible.  There's a reason SAS drives are keyed so you
can't plug them into a SATA controller: it keeps the magic smoke
inside the drive.  SAS controllers can tunnel SATA; they confusingly
call this STP (not Spanning Tree Protocol, but SATA Tunneling Protocol).
It's imperfect but good enough for 8 drives.  You really do not want to
put 60 SATA drives in a SAS JBOD.

SAS can be a shared fabric, which means a group of drives are like a
room full of people having a conversation.  If someone starts screaming
and spurting blood it can disrupt the conversations of everyone in the
room.  Modern RAID controllers are pretty good at disconnecting drives
that are not working properly but not completely dead.  Modern HBAs, not
so much.  If your controller is an HBA, keeping a SAS fabric
stable with SATA drives can be more problematic than with SAS
drives... and as Frank pointed out, nearline SAS drives are essentially
SATA drives with a SAS interface (and typically carry under a $20 premium).

If performance were an issue we'd be talking about SSDs.  While SAS
drives do have a performance advantage over SATA in
multiuser/multiapplication environments (they have a superior queuing
implementation), it's not worth considering when the real solution is
SSDs.

My recommendation: if you have SAS expanders and an HBA, use SAS
drives.  If you have direct-wired SAS or a RAID controller, you can use
either SAS or SATA.  If your application demands performance or
concurrency, get a couple of SSDs.  They'll smoke anything a spinning
drive can do.

-- 

Thanks,

Josh Paetzel


Re: Do I need SAS drives?..

2017-08-09 Thread Frank Leonhardt (m)


On 9 August 2017 16:29:52 BST, Josh Paetzel  wrote:
>[...]


There are differences, but none relevant to an 8-drive system IME. Electrically, 
SAS works at a higher voltage on the differential pair, which means the cables 
can be a lot longer.

Most (all?) SAS expanders can handle STP and so can talk to SATA drives, but in an 
eight-way config I doubt a SAS expander comes into it - they're not cheap!

Incidentally, SATA allows for expanders now.

Okay, SAS has tagged command queueing, but SATA has native command queuing.

Incidentally, the slightly different notched drive connector is simply to stop 
you plugging a SAS drive onto a SATA HBA, because the HBA wouldn't know how to talk 
to it. It won't go bang if you do it by mistake. OTOH a SAS HBA can talk both, 
so it has a notch to match the raised key on a SATA connector.

Could go on about drives for ever (and have done in the past) but this is just 
an array of eight drives.

Have you thought about Fibre Channel? :-)

Regards, Frank.

 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: Do I need SAS drives?..

2017-08-09 Thread Alan Somers
On Wed, Aug 9, 2017 at 8:27 AM, Mikhail T.  wrote:
> My server has 8 "hot-plug" slots, that can accept both SATA and SAS drives. 
> SATA ones tend to be cheaper for the same features (like cache-sizes), what 
> am I getting for the extra money spent on SAS?
>
> Asking specifically about the protocol differences... It would seem, for 
> example, SATA can not be as easily hot-plugged, but with camcontrol(8) that 
> should not be a problem, right? What else? Thank you!
> --
> Sent from mobile device, please, pardon shorthand.

Good question.  First of all, hot-pluggability has more to do with the
controller than the protocol.  Since you have a SAS controller, you
should have no problem hot plugging SATA drives.  But SAS drives still
have a few advantages:

1) When a SATA drive goes into error recovery, it can lock up the bus
indefinitely.  This won't matter if your drives are directly connected
to a SAS HBA.  But if you have an expander with say, 4 SAS lanes going
to the HBA, then a flaky SATA drive can reduce the bandwidth available
to the good drives.

2) Even with NCQ, the SATA protocol is limited to queueing one or more
write commands OR one or more read commands.  You can't queue a
mixture of reads and writes at the same time.  SAS does not have that
limitation.  In this sense, SAS is theoretically more performant.
However, I've never heard of anybody observing a performance problem
that can be definitively blamed on this effect.

3) SAS drives have a lot of fancy features that you may not need or
care about.  For example, they often have features that are useful in
multipath setups (dual ports, persistent reservations), their error
reporting capabilities are more sophisticated than SMART, their self
encrypting command set is more sophisticated, etc etc.

4) The SAS activity LED is the opposite of SATA's.  With SATA, the LED
is off for an idle drive or blinking for a busy drive.  With SAS, it's
on for an idle drive or blinking for a busy drive.  This makes it
easier to see at a glance how many SAS drives you have installed.  I
think some SATA drives have a way to change the LED's behavior, though.

5) Desktop class SATA drives can spend an indefinite amount of time in
error recovery mode.  If your RAID stack doesn't timeout a command,
that can cause your array to hang.  But SAS drives and RAID-class
SATA drives will fail any command that spends too much time in error
recovery mode.

6) But the most important difference isn't something you'll find on
any datasheet or protocol manual.  SAS drives are built to a higher
standard of quality than SATA drives, and have accordingly lower
failure rates.

I'm guessing that you don't have an expander (since you only have 8
slots), so item 1 doesn't matter to you.  I'll guess that item 3
doesn't matter either, or you wouldn't have asked this question.  Item
5 can be dealt with simply by buying the higher end SATA drives.  So
item 6 is really the most important.  If this system needs to have
very high uptime and consistent bandwidth, or if it will be difficult
to access for maintenance, then you probably want to use SAS drives.
If not, then you can save some money by using SATA.  Hope that helps.

-Alan


Re: Do I need SAS drives?..

2017-08-09 Thread Frank Leonhardt

On 09/08/2017 16:59, Alan Somers wrote:

[...]


Alan makes a good point about SAS expanders and their tendency to stick 
when some SATA drives go off on a trip. I'm also assuming Mikhail(?)'s 
setup doesn't use one.


On BSD with ZFS, a SATA drive chucking a shoe doesn't make any 
difference if it's directly connected to the HBA (the same applies to 
GEOM RAID/MIRROR). Drive silent? Detach it.


I'm not at all convinced that SAS is any more reliable than SATA per se. 
This is based on 30+ years' experience with Winchesters, starting with the 
ST506. In the UK I used to write most of the storage articles for a 
couple of major tech publishers, and I spent a lot of time talking to 
and visiting the manufacturers and looking around the factories. Some of 
this may now be out of date (Conner went bust, for a start).


The thing is that if you opened a XXX-brand SCSI disk and the IDE 
version, guess what? They were the same inside. I spoke to the makers, 
and apparently the electronics on the SCSI version were a lot more 
expensive. Why? Well, we don't sell as many, er, um.


Okay, they don't make cheap and nasty SCSI (or SAS) drives, but they do 
make low-end IDE/SATA. They also make some very nice drives that are 
only available as SAS. An equivalent quality SAS/SATA drive will be just 
as reliable - there's no mechanical reason for them not to be. They come 
off the same line.


Then there's the MTBF and the unrecoverable error rates. On high-end 
drives the latter is normally claimed to be 10x better than on the cheap 
ones. Pretty much always, and exactly 10x. This is utter bilge. What 
they're saying is that the unrecoverable error rate is this figure or 
better, and any study into this has shown that it's usually a lot 
better than both figures. So both figures are technically correct; it 
just ma

Re: Do I need SAS drives?..

2017-08-09 Thread Lanny Baron
Not sure what kind of server you are referring to, but our servers can 
take SAS and SATA at the same time. We build plenty of servers running 
FreeBSD which in some cases have SATA SSDs as boot drives (in a RAID-1) 
and then X number of either SATA or SAS drives, or both, in a different RAID 
configuration, all connected to the same high-quality RAID controller.


I have yet to see any complaint with the configurations we've done for 
our clients.


SAS drives can be much faster: 15K RPM vs. SATA's 7.2K. Your choice would 
depend on how busy the server is.


Regards,
Lanny

On 8/9/2017 11:29 AM, Josh Paetzel wrote:



[...]




Re: Do I need SAS drives?..

2017-08-10 Thread Ben RUBSON
> On 09 Aug 2017, at 17:59, Alan Somers  wrote:
> 
> 3) SAS drives have a lot of fancy features that you may not need or
> care about.  For example, (...) their error
> reporting capabilities are more sophisticated than SMART

Really interesting answer Alan, thank you very much!
Slightly off-topic, but I'll take this opportunity:
how do you check SAS drive health?
I personally cron a background long test every 2 weeks (using smartmontools).
I have not experienced a SAS drive error yet, so I'm not sure how this behaves.
Does the drive report to FreeBSD when its read or write error rate crosses
a threshold (so that we can replace it before it fails)?
Or perhaps smartd will do?

As an example below a SAS error counter log returned by smartctl :
Errors Corrected by  Total   CorrectionGigabytesTotal
ECC rereads/errors   algorithm processeduncorrected
fast | delayed  rewrites  corrected  invocations  [10^9 bytes]  errors
read:   0   49049 233662 73743.588   0
write:  030 3  83996  9118.895   0
verify: 000 0  28712 0.000   0
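Concretely, my routine boils down to commands like these (the device name is a placeholder):

```shell
#!/bin/sh
# Sketch of routine SAS health checks with smartmontools.
# /dev/da0 is a placeholder device name -- substitute your own.
smartctl -H /dev/da0           # drive's own health self-assessment
smartctl -l error /dev/da0     # the SCSI error counter log shown above
smartctl -t long /dev/da0      # start a background long self-test
smartctl -l selftest /dev/da0  # review self-test results later
# Or let smartd(8) watch continuously; a /usr/local/etc/smartd.conf line:
#   /dev/da0 -d scsi -H -l error -s L/../../7/03   # long test Sundays 03:00
```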

Thank you !

Ben



Re: Do I need SAS drives?..

2017-08-10 Thread Alan Somers
On Thu, Aug 10, 2017 at 7:44 AM, Ben RUBSON  wrote:
> [...]

smartmontools is probably the best way to read SAS error logs.
Interpreting them can be hard, though.  The Backblaze blog is probably
the best place to get current advice.  But the easiest thing to do is
certainly to wait until something fails hard.  With ZFS, you can have
up to 3 drives' worth of redundancy, and hotspares too.

-Alan


Re: Do I need SAS drives?..

2017-08-10 Thread Frank Leonhardt

On 10/08/2017 15:01, Alan Somers wrote:

[...]

smartmontools is probably the best way to read SAS error logs.
Interpreting them can be hard, though.  The Backblaze blog is probably
the best place to get current advice.  But the easiest thing to do is
certainly to wait until something fails hard.  With ZFS, you can have
up to 3 drives' worth of redundancy, and hotspares too.


I concur with Alan. Trying to predict drive failure is a mug's game. 
Very thorough research (e.g. Google's, 2007) has shown it's a waste of time 
trying.


With ZFS (or geom mirror) a drive will be "failed" as soon as there's a 
problem, and you can get notification using a cron job that sends an 
email if the output of zpool status (or gmirror status) contains 
"DEGRADED".
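A minimal sketch of such a cron job (the mail recipient is a placeholder; I also match FAULTED/UNAVAIL for good measure):

```shell
#!/bin/sh
# Sketch of a cron-able ZFS health check: mails the full status when
# `zpool status` reports an unhealthy pool. Recipient is a placeholder.
pool_unhealthy() {
    # expects the output of `zpool status` on stdin
    grep -Eq 'DEGRADED|FAULTED|UNAVAIL'
}

status=$(zpool status 2>/dev/null)
if printf '%s\n' "$status" | pool_unhealthy; then
    printf '%s\n' "$status" | mail -s "zpool problem on $(hostname)" root
fi
```

Drop it into /etc/periodic/daily/ or run it from root's crontab.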


That said, I've found it useful to use smartctl to pick up when a drive 
is overheating, usually due to fan failure. You might also find the new 
(11.0+?) sesutil handy to monitor components on a SAS expander IF YOU 
HAVE ONE. Things like fans and temperature sensors are readable this way.
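If you do have an SES-capable backplane, the sesutil side looks roughly like this (da3 is a placeholder device):

```shell
#!/bin/sh
# sesutil(8) (FreeBSD 11.0+) queries SES enclosure processors --
# only useful if your backplane/expander actually has one.
sesutil map             # list enclosure elements: slots, fans, temp sensors
sesutil locate da3 on   # light the locate LED on the slot holding da3
sesutil locate da3 off
```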


Regards, Frank.



Re: Do I need SAS drives?..

2017-11-06 Thread Zane C. B-H.

On 2017-08-09 10:59, Alan Somers wrote:
On Wed, Aug 9, 2017 at 8:27 AM, Mikhail T.  
wrote:

1) When a SATA drive goes into error recovery, it can lock up the bus
indefinitely.  This won't matter if your drives are directly connected
to a SAS HBA.  But if you have an expander with say, 4 SAS lanes going
to the HBA, then a flaky SATA drive can reduce the bandwidth available
to the good drives.


In my decade-plus of DC work, I've seen both SAS and SATA
drives flake and render systems inoperable until the offending drive is
removed.


4) The SAS activity LED is the opposite of SATA's.  With SATA, the LED
is off for an idle drive or blinking for a busy drive.  With SAS, it's
on for an idle drive or blinking for a busy drive.  This makes it
easier to see at a glance how many SAS drives you have installed.  I
think some SATA drives have a way to change the LEDs behavior, though.


HPs and Dells will show on by default, regardless of whether it is SATA or 
SAS.


For Supermicro it will vary between backplanes.


I'm guessing that you don't have an expander (since you only have 8
slots), so item 1 doesn't matter to you.  I'll guess that item 3
doesn't matter either, or you wouldn't have asked this question.  Item
5 can be dealt with simply by buying the higher end SATA drives.  So
item 6 is really the most important.  If this system needs to have
very high uptime and consistent bandwidth, or if it will be difficult
to access for maintenance, then you probably want to use SAS drives.
If not, then you can save some money by using SATA.  Hope that helps.


Actually most boxes with more than 4 slots tend to use multipliers.

As to uptime, that is trivial to achieve with both.

With both, what matters is drive monitoring and regular self 
tests.



Re: Do I need SAS drives?..

2017-11-07 Thread Frank Leonhardt



On 06/11/2017 10:09, Zane C. B-H. wrote:

In my decade-plus of DC work, I've seen both SAS and SATA
drives flake and render systems inoperable until the offending drive is
removed.



My experience too.

For Supermicro it will vary between backplanes.

Very true indeed. If they go on or off from time to time, that's good 
enough.



[...]


Actually most boxes with more than 4 slots tend to use multipliers.

I'm more mixed on that. There are quite a few Dells with eight- or 
twelve-slot backplanes, even if it means two HBAs. Apart from better 
performance, the cost of 2xHBA+backplane is bizarrely less than 
1xHBA+expander. All the Supermicros I've seen have had expanders, though.



As to uptime, that is trivial to achieve with both.

With both, what matters is drive monitoring and regular self tests.


WHS! The biggest cause of problems is discovering a flaky drive or two AFTER 
the redundant one has failed. I don't know what anyone else thinks, but 
I'm inclined to do a straightforward read of the block device rather than 
a ZFS scrub because (a) I think it's quicker, especially when there's 
not much workload; and (b) it also reads unused blocks, which are 
probably the majority. "Best practice" says you should do a scrub every 
three months - that seems way too long a gap for my liking.
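The straightforward read amounts to something like this per drive (the device name is a placeholder; dd exits non-zero on a hard read error):

```shell
#!/bin/sh
# Whole-device sequential read: touches every block, allocated or not,
# which a scrub does not. /dev/da2 is a placeholder device name.
read_check() {
    if dd if="$1" of=/dev/null bs=65536 2>/dev/null; then
        echo "read OK: $1"
    else
        echo "READ ERROR on $1"
    fi
}

read_check "${1:-/dev/da2}"
```

A scrub still has the advantage of verifying checksums rather than mere readability, so the two are complementary rather than interchangeable.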


