>>>>> "ml" == Mark Little <marklit...@koallo.com> writes:

    ml> Just to clarify - do you mean TLER should be off or on?

It should be set to ``do not have asvc_t 11 seconds and <1 io/s''.

...which is not one of the settings of the TLER knob.

This isn't a problem with the TLER *setting*.  TLER does not even
apply unless the drive has a latent sector error.

TLER does not even apply unless the drive has a latent sector error.

TLER does not even apply unless the drive has a latent sector error.

GOT IT?  So if the drive is not defective but is erratically showing
huge latency when it isn't even busy, this isn't a TLER problem.  It's
a drive-is-an-unpredictable-piece-of-junk problem.  Will the problem
go away if you change the TLER setting to the opposite of whatever it
is now?  Who knows?!  It shouldn't, based on the claimed purpose of
TLER, but in reality, maybe, maybe not, because the drive shouldn't
(``shouldn't'', haha) act like that to begin with.  The problem is
more likely to go away if you replace the drive with a different
model, though.
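
If you want to catch that pathology in the act, here is a minimal
sketch (Python; the thresholds and the Solaris ``iostat -xn'' column
layout are my assumptions, not anything ZFS gives you) that flags
drives reporting huge asvc_t while doing almost no io/s:

    #!/usr/bin/env python
    # Minimal sketch: flag drives showing huge average service time at
    # near-zero IOPS by parsing Solaris `iostat -xn` output.  Thresholds
    # and column layout are illustrative assumptions.
    import subprocess

    ASVC_T_MS_LIMIT = 1000.0   # flag over 1s average service time...
    IOPS_FLOOR = 1.0           # ...while doing under 1 io/s

    # two 5-second samples; the first is the since-boot average
    out = subprocess.check_output(["iostat", "-xn", "5", "2"]).decode()
    for line in out.splitlines():
        fields = line.split()
        # device lines have 11 columns; skip headers and banners
        if len(fields) != 11 or fields[0] == "r/s":
            continue
        try:
            r_s, w_s = float(fields[0]), float(fields[1])
            asvc_t = float(fields[7])   # avg service time, milliseconds
        except ValueError:
            continue
        if asvc_t > ASVC_T_MS_LIMIT and (r_s + w_s) < IOPS_FLOOR:
            print("suspect: %s asvc_t=%.0fms io/s=%.2f"
                  % (fields[10], asvc_t, r_s + w_s))

A drive that trips something like this while otherwise idle is exactly
the unpredictable-piece-of-junk case, and no TLER bit will fix it.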

    ml> Storage forum on hardforum.com, the experts there seem to
    ml> recommend NOT having TLER enabled when using ZFS as ZFS can be
    ml> configured for its timeouts, etc, 

I don't believe there are any configurable timeouts in ZFS.  The ZFS
developers take the position that timeouts are not ZFS's problem and
push all that work down the stack to the controller driver and the
disk driver, which cooperate (that's two drivers now, plus perhaps a
third ``SCSI mid-layer'' for some controllers but not others) to
implement a variety of inconsistent, silly, undocumented cargo-cult
timeout regimes that we all have to put up with.  They are always
quite long, however: the ATA max timeout is 30sec, and AIUI the
timeouts up the stack are all much longer than that.
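
To make ``pushed down the stack'' concrete, here is a sketch of
reading the per-device driver timeout that ZFS itself never sees
(Python, with Linux sysfs purely for illustration; AIUI the rough
Solaris equivalent is the sd driver's sd_io_time tunable in
/etc/system):

    #!/usr/bin/env python
    # Sketch: the I/O timeout lives in the disk driver, not in ZFS.
    # Linux sysfs path layout assumed here, for illustration only.
    import glob

    for path in glob.glob("/sys/block/sd*/device/timeout"):
        dev = path.split("/")[3]               # e.g. "sda"
        with open(path) as f:
            print("%s: driver timeout = %ss" % (dev, f.read().strip()))

    # shortening it (as root) is likewise a driver-level act:
    #   echo 10 > /sys/block/sda/device/timeout

Defaults in the 30-60 second range are why a flaky drive can stall a
pool for ages before anything upstream hears about it.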

My new favorite thing, though, is the reference counting.  OS: ``This
disk/iSCSI disk is `busy' so you can't detach it.''  Me: ``bullshit.
YOINK, detached, now deal with it.''  IMO this area is in need of some
serious bar-raising.

    ml> and the main reason to use TLER is when using those drives
    ml> with hardware RAID cards which will kick a drive out of the
    ml> array if it takes longer than 10 seconds.

yup.

which is something the drive will not do unless it encounters an
ERROR.  That is the E in TLER.  In other words, the feature as
described prevents you from noticing and invoking warranty replacement
on your about-to-fail drive.  For this you pay double.  Have I got
that right?

In any case the obvious place to fix this is in the RAID-on-a-card
firmware, not the disk firmware, if it even needs fixing, which is
unclear to me.  Unless the disk manufacturers are going to offer a
feature like ``do not spend more than 1 second out of every 2 seconds
`trying harder' to read marginal data, just return errors'', which
would actually have real value, the only thing TLER is good for is
convincing all you gamers to pay twice as much for a drive because
they've flipped a single bit in the firmware and then shovelled a big
pile of bullshit into your heads.
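
For what it's worth, a standardized flavor of that knob does exist:
SCT Error Recovery Control, which smartctl can query and set on drives
that support it.  A sketch (Python; the device path is an example, and
some drives ignore the setting or forget it across a power cycle):

    #!/usr/bin/env python
    # Sketch: query and set SCT Error Recovery Control via smartctl,
    # on drives that support it.  Values are in tenths of a second,
    # so 70 means 7.0s.  The device path is an example.
    import subprocess

    DEV = "/dev/sda"   # example device

    # show the current read/write recovery limits, if supported
    subprocess.call(["smartctl", "-l", "scterc", DEV])

    # cap error recovery at 7 seconds for both reads and writes
    subprocess.call(["smartctl", "-l", "scterc,70,70", DEV])

Which rather supports the flipped-bit theory: the capability is a
couple of ATA commands, not a twice-the-price drive.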

    ml> Can anyone else here comment if they have had experience with
    ml> the WD drives and ZFS and if they have TLER enabled or
    ml> disabled?

I do not have any problems with drives dropping out of ZFS using the
normal TLER setting.

I do have problems with slowly-failing drives fucking up the whole
system.  ZFS doesn't deal with them gracefully, and I have to find the
bad drive and remove it by hand.  All this stuff about hot spares
automatically replacing bad drives so that users never notice is
largely a fantasy.
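
For the record, the by-hand removal itself is short once something
like the iostat sketch above has fingered the drive; it's the finding
that ZFS doesn't help with.  Pool and device names here are examples:

    #!/usr/bin/env python
    # Sketch of the by-hand dance, once something like the iostat
    # sketch above has fingered the drive.  Names are examples.
    import subprocess

    subprocess.call(["zpool", "status", "-x"])    # anything complaining?
    subprocess.call(["zpool", "offline", "tank", "c1t3d0"])  # bench it
    # ...swap the hardware, then put the replacement in service:
    #   zpool replace tank c1t3d0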

Neither observation leads me to want TLER.

However, observations like this ``why did my disks suddenly slow
down?'' lead me to avoid WD drives entirely, ZFS or no ZFS.  Whipping
up all this marketing silliness around TLER also leads me to avoid
them, because I know they will shovel bullshit and FUD to justify
jacked-up prices.
