On Tuesday, 16 June 2020 12:26:01 BST Dale wrote:
> Wols Lists wrote:
> > On 16/06/20 10:04, Dale wrote:
> >> I might add, I don't have LVM on that drive.  I read it doesn't work
> >> well with LVM, RAID, etc., as you say.  Most likely, that drive will
> >> always be an external drive for backups or something.  If it ever ends
> >> up holding the OS or /home, it'll be a last resort.
> > 
> > LVM it's probably fine with. RAID, MUCH less so. What you need to make
> > sure does NOT happen is a lot of random writes. That might make deleting
> > an LVM snapshot slightly painful ...
> > 
> > But adding an SMR drive to an existing ZFS RAID is a guarantee of pain.
> > I don't know why, but "resilvering" causes a lot of random writes. I
> > don't think md-raid behaves this way.
> > 
> > But it's the very nature of RAID that, as soon as something goes wrong
> > and a drive needs replacing, everything is going to get hammered. And
> > SMR drives don't take kindly to being hammered ... :-)
> > 
> > Even in normal use, an SMR drive is going to cause grief if it's not
> > handled carefully.
> > 
> > Cheers,
> > Wol
> 
> From what I've read, I agree.  Basically, as some have posted in
> different places, SMR drives are good for data you write once and leave
> alone, about like a DVD-R.  From what I've read, let's say I moved a lot
> of videos around, maybe rearranged the directory structure, which means
> a lot of data to move.  I think I'd risk just putting a new file system
> on the drive and then backing up everything from scratch.  It may take a
> little longer given the amount of data, but it would be easier on the
> drive.  It would keep that drive from being hammered to death, as you
> put it.
> 
> I've also read about the resilvering problems too.  I think LVM
> snapshots and something about btrfs have problems.  I've also read that
> on windoze, it can cause the system to freeze while the drive is
> rewriting the moved data.  It gets so slow, it actually makes the OS
> stop responding.  I suspect it could happen on Linux too if the
> conditions are right.
> 
> I guess this is about saving money for the drive makers.  The part that
> seems to really get under people's skin, tho, is that they put those
> drives out there without telling anyone they made changes that affect
> performance.  It's bad enough for people who use them where they work
> well, but for people who use RAID and such, it seems to bring their
> systems to their knees at times.  I can't count the number of times I've
> read that people would support a class action lawsuit over shipping SMR
> drives without telling anyone.  It could happen, and I'm not sure it
> shouldn't.  People using RAID and such, especially in some systems, need
> performance, not drives that beat themselves to death.
> 
> My plan: avoid SMR if at all possible.  Right now, I just don't need the
> headaches.  The one I got, I'm lucky it works OK, even if it does bump
> around for quite a while after backups are done.
> 
> My new-to-me hard drive is still testing.  Got a few more hours left
> yet.  Then I'll run some more tests.  It seems to be OK tho.
> 
> Dale
> 
> :-)  :-) 

Just to add my 2c before you throw that SMR drive away: the use case for 
these drives is to act as disk archives, rather than regular backups.  You 
write data you want to keep, once.  SMR disks would work well for your use 
case of old videos/music/photos that you want to keep and won't be 
overwriting every other day/week/month.  Using rsync with '-c' to compare 
checksums will also make sure what you've copied matches the original 
source filesystem, as in the sketch below.
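
A minimal sketch of that, assuming hypothetical paths (adjust for your own 
layout):

  # initial one-time copy onto the SMR archive disk
  rsync -a --progress /home/dale/videos/ /mnt/smr-archive/videos/

  # verify: force a full checksum comparison of source and copy,
  # without changing anything on either side
  rsync -a -c --dry-run --itemize-changes /home/dale/videos/ /mnt/smr-archive/videos/

If the second command prints nothing, every file on the archive matches its 
source byte for byte; any line it does print names a file that differs and 
would be re-copied on a real run.  Reads are cheap on SMR; it's the rewrites 
that hurt, so a read-only verify pass like this is safe to run as often as 
you like.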
