I meant to send this to the list.

-------- Forwarded Message --------
Subject:        Re: LVM for vtapes
Date:   Tue, 21 Sep 2021 16:10:09 -0400
From:   Chris Hoogendyk <hoogen...@bio.umass.edu>
To:     Olivier <olivier.nic...@cs.ait.ac.th>



Just an offhand comment on disk drives. We've been getting 10TB Western Digital Ultrastar Data Center hard drives with a 5-year warranty (I think these are actually HGST, which was bought by Western Digital). Typically I'm configuring things into RAID using mdadm and LVM on Ubuntu. This is for primary data storage, not for Amanda; I'm just commenting on the drive sizes.

Most recently we've had trouble finding the 10TB drives because of the demand from the latest cryptocurrency strategies, so we ended up finding 12TB drives of the same make. Our 4TB and 6TB drives are old and gradually being replaced.

If you're interested in drive statistics, you can look up the Backblaze drive reports. They have huge storage farms and keep detailed statistics on drive failures that they report periodically. Their latest is https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/. The HUH model numbers correspond to what we have been getting.
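
A minimal sketch of that layering, in case it helps anyone; the device names and RAID level below are placeholders, not a description of the actual layout:

   # create a RAID-6 array from four drives (device names are hypothetical)
   mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
   cat /proc/mdstat                                  # watch the initial sync
   mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array definition on Ubuntu/Debian

LVM then goes on top of /dev/md0 with the usual pvcreate/vgcreate/lvcreate sequence.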

On 9/20/21 11:17 PM, Olivier wrote:
Jon,

Interesting discussion in other threads got me wondering whether I should
have made some other choices when setting up my vtape environment,
particularly whether I should have used LVM (Logical Volume Management)
to create one large filesystem covering my multiple dedicated disks.
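
For concreteness, the LVM variant I have in mind would look roughly like this on Linux (device and volume names are made up); each disk becomes a physical volume in a single volume group, and one logical volume spans them all:

   pvcreate /dev/sdb /dev/sdc /dev/sdd          # one physical volume per dedicated disk
   vgcreate vtapevg /dev/sdb /dev/sdc /dev/sdd  # one volume group over all of them
   lvcreate -l 100%FREE -n vtapes vtapevg       # one logical volume covering the whole group
   mkfs.ext4 /dev/vtapevg/vtapes                # single large filesystem for the vtapes

Without RAID underneath, losing any one of those disks would take the whole filesystem with it, which is part of what I am weighing below.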

It's a topic I do not recall being discussed, pros & cons.
I am using 7 disks of 3 TB (or is that 4?) and 6 TB (I should upgrade them
all to 6 TB soon), almost all dedicated to vtapes (the last disk also holds
a copy of the deleted accounts). I have them configured as individual disks.
My vtapes are about 100 GB each and I am using a small chunk size, so
my disks end up being at least 80% full.
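
For reference, the sizing side of that in a current amanda.conf would look something like the excerpt below. I am assuming that what I call the chunk size corresponds to the part_size splitting parameter of recent Amanda releases; the tapetype name is invented.

   define tapetype VTAPE {
       comment "100 GB virtual tapes"
       length 100 gbytes    # size of each vtape
       part_size 1G         # split dumps into small parts so each vtape can fill close to capacity
   }
   tapetype "VTAPE"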

When I designed my vtape architecture, I decided to keep each disk
individual so that it can be put offline after use. My idea was to
have a system that could prompt an operator to "mount a disk" before the
backup, and the disk could be manually unmounted and safely stored each
day. It takes advantage of the automount service on FreeBSD.
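
On the FreeBSD side that only needs the stock autofs pieces, roughly the following (options trimmed; check the defaults on your release):

   # /etc/rc.conf
   autofs_enable="YES"          # starts automountd and autounmountd

   # /etc/auto_master -- the -media special map mounts removable disks under /media on first access
   /media  -media  -nosuid

autounmountd then unmounts idle filesystems again after a timeout.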

Mounting could be a USB disk or a hot-swap bay. I never went very far with
the implementation. I wrote all that many years ago, when vtapes were new
and limited to a single directory; that is why I wrote my own tape changer.
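
For comparison, the built-in equivalent of what I scripted by hand is now chg-disk, where every slot is just a subdirectory; the paths, config name and labels below are invented:

   # in amanda.conf
   tpchanger "chg-disk:/vtapes/disk1"

   # one directory per slot, labelled like physical tapes
   mkdir -p /vtapes/disk1/slot1 /vtapes/disk1/slot2 /vtapes/disk1/slot3
   amlabel MyConfig DISK1-001 slot 1
   amlabel MyConfig DISK1-002 slot 2
   amlabel MyConfig DISK1-003 slot 3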

I knew about the risk of losing a disk, and with it a good portion of
consecutive backups. But what I had in mind was:

- have the system as simple and as portable as possible, so that I can shove
a disk into another machine and extract its contents manually (during the
great flood of Bangkok in 2011, I moved all the servers and also took
all the hard disks from the Amanda backup, but I did not need to move the
rack-mounted server itself);

- a side advantage of my own tape changer is that I can keep the older
disks (each disk has an individual label, just as any vtape has a label;
I have upgraded them from 500GB to 1TB to 3TB and soon to 6TB) and their
vtapes are still known in the tapelist (they are marked no-reuse). If the
need arises, I can still remount that old disk; the commands for that are
sketched below.
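
The bookkeeping for that is just the standard mechanism; the config name and labels below are invented:

   amadmin MyConfig no-reuse DISK1-001   # keep the label in the tapelist but never overwrite it
   amadmin MyConfig reuse DISK1-001      # put it back into rotation if the old disk is remounted

In the tapelist itself such a vtape simply shows up with the no-reuse flag, e.g. "20210920010203 DISK1-001 no-reuse" (the exact line format varies a little between Amanda versions).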

So far (10+ years) the only disk I have had fail was the one with the
holding partition; I guess that was because of excessive usage.

I understand that vtapes have evolved since I started using them, but my
system works for me, so I never took the time to look any further.

Best regards,

Olivier

--
---------------

Chris Hoogendyk

-
   O__  ---- Systems Administrator, Retired
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center III
~~~~~~~~~~ - University of Massachusetts, Amherst

<hoogen...@bio.umass.edu>

---------------

Erdös 4
