Re: Disks renamed after update to 'testing'...?

2020-08-21 Thread David Christensen

On 2020-08-20 14:58, Andy Smith wrote:


... dm-integrity can now be used with LUKS (with or
without encryption) to add checksums that force a read error when
they don't match. When there is redundancy (e.g. LVM or MD) a read
can then come from a good copy and the bad copy will be repaired.


So, LVM and md RAID do not checksum data blocks -- they need a disk to 
report a failed read before they realize something is wrong?




Here is a practical example:

 https://gist.github.com/MawKKe/caa2bbf7edcc072129d73b61ae7815fb



Thanks for the link.  I recall reading about dm-integrity sometime in 
the past, but my rotten old brain must have screwed it up.  At least I 
wasn't hallucinating when I guessed that LUKS had checksums.  8-O



My Debian 9 system and its cryptsetup are now looking even older and more limited.


Does the Debian 10 installer make use of, or allow the use of, 
dm-integrity and/or cryptsetup with the --integrity option?  If not, 
does the necessary software exist in the d-i kernel, BusyBox, etc., so 
that I can do it by hand from the rescue console / alternate virtual console?



David


p.s.  Similar statements can be made for OpenZFS native encryption:

https://openzfs.org/wiki/ZFS-Native_Encryption

https://openzfs.org/wiki/OpenZFS_Developer_Summit_2016#Presentations



Re: Disks renamed after update to 'testing'...?

2020-08-21 Thread tomas
On Thu, Aug 20, 2020 at 01:34:58PM -0700, David Christensen wrote:
> On 2020-08-20 08:32, rhkra...@gmail.com wrote:
> >On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:
> >>Contrary to the other (very valid) points, my backups are always on
> >>a LUKS drive, no partition table. Rationale is, should I lose it, the
> >>less visible information the better. Best if it looks like a broken
> >>USB stick. No partition table looks (nearly) broken :-)
> 
> I always use a partition table, to reduce the chance of confusing
> myself.  ;-)
> 
> 
> >I have two questions:
> >
> >* I suppose that means you create the LUKS drive on, e.g., /dev/sdc rather
> >than, for example, /dev/sdc1?  (I suppose that should be easy to do.)

Exactly.

> >* But, I'm wondering, how much bit rot would it take to make the entire
> >backup unusable, and what kind of precautions do you take (or could be taken)
> >to avoid that?

I have no current strategy for silent [1] bit rot. For file system
consistency, I do run an fsck from time to time after opening the
LUKS container and before mounting (we are talking about roughly 60..70 GB;
were we talking about 100..1000 times as much, active bit-rot
mitigation might sound more compelling).
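
In shell terms the routine is roughly this (device name and mount point
are just placeholders):

# cryptsetup open /dev/sdX backup
# fsck.ext4 -f /dev/mapper/backup
# mount /dev/mapper/backup /mnt/backup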

> I have been pondering bit-rot mitigation on non-checksumming filesystems.

The big ones have that; and for really huge amounts of data (where
some corners of your data might rest unseen for years, say hundreds
of TB or so), it does make sense.

In my case, I consider the backup as something which I expect
to "fail early" and "fail loudly". In the "normal" case it is perfectly
disposable :-)

> Some people have mentioned md RAID.  tomas has mentioned LUKS.  I
> believe both of them add checksums to the contained contents.  So,
> bit-rot within a container should be caught by the container driver.

Don't know about that, to be honest: I count on the ext4 inside the
LUKS container to catch any nasties (and to issue an early warning when
the USB stick starts degrading -- I'm still a bit queasy about how cheap
a 128GB USB stick can be).

> In the case of md RAID, the driver should respond by fetching the
> data from another drive and then dealing with the bad block(s); the
> application should not see any error (?).  I assume LVM RAID would
> respond like md RAID (?).

Yes. That's why I reserve RAID for the "high availability" case: you
want to keep running after a failure, and your customer doesn't
notice (it would make sense to think about whether this is the
best level to introduce redundancy, but I digress).

>   In the case of LUKS, the driver has no
> redundant data (?) and will have no choice but to report an error to
> the application (?).  I would guess LVM non-RAID would behave
> similarly (?).

Exactly. For the backup scenario, the whole backup /is/ the redundant
data. If the probability of failure of your main system in some
given time interval T is, say, 10^-7, and that of your backup in
the same time interval is, say, 10^-5 (cheaper hardware, and that),
you're looking at a catastrophe with a probability of 10^-12. If you
want to improve on that, use two separate backup media; then you are
into 10^-17 [2].

Cheers
[1] "silent" meaning some bit flips in file content, without the
   file system noticing. Which on ext4 is quite possible, and which
   btrfs, e.g., can (reasonably) guard against.

[2] This is, of course, "economist maths", the kind which led
   to the 2008-2009 crash: assume all those bad events are
   independent. If my house burns down, my computer is in there,
   and my only backup on a stick is in my pocket...

 - t
> 
> 
> For all three -- md, LUKS, LVM -- I don't know what happens for bit
> rot outside the container (e.g. in the container metadata).
> 
> 
> David
> 


signature.asc
Description: Digital signature


Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Andy Smith
Hello,

On Thu, Aug 20, 2020 at 05:30:20PM -0400, Dan Ritter wrote:
> David Christensen wrote: 
> > Some people have mentioned md RAID.  tomas has mentioned LUKS.  I believe
> > both of them add checksums to the contained contents.  So, bit-rot within a
> > container should be caught by the container driver.
> 
> This is incorrect. The systems that checksum every write and
> recalculate and match on every read are BTRFS and ZFS.
> 
> LVM, LUKS, and mdadm do not.

Indeed, although dm-integrity can now be used with LUKS (with or
without encryption) to add checksums that force a read error when
they don't match. When there is redundancy (e.g. LVM or MD) a read
can then come from a good copy and the bad copy will be repaired.
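
Roughly, from memory (so do check integritysetup(8) and cryptsetup(8)
before relying on the exact flags), the standalone and the LUKS2 variants
look like:

# integritysetup format /dev/sdX
# integritysetup open /dev/sdX ints

# cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX
# cryptsetup open /dev/sdX crypt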

Here is a practical example:

https://gist.github.com/MawKKe/caa2bbf7edcc072129d73b61ae7815fb

I haven't yet used it in production but if anyone has I would be
really interested to see a with and without comparison of
performance.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Dan Ritter
David Christensen wrote: 
> On 2020-08-20 08:32, rhkra...@gmail.com wrote:
> I have been pondering bit-rot mitigation on non-checksumming filesystems.
> 
> 
> Some people have mentioned md RAID.  tomas has mentioned LUKS.  I believe
> both of them add checksums to the contained contents.  So, bit-rot within a
> container should be caught by the container driver.

This is incorrect. The systems that checksum every write and
recalculate and match on every read are BTRFS and ZFS.

LVM, LUKS, and mdadm do not.

-dsr-



Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread David Christensen

On 2020-08-20 08:32, rhkra...@gmail.com wrote:

On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:

Contrary to the other (very valid) points, my backups are always on
a LUKS drive, no partition table. Rationale is, should I lose it, the
less visible information the better. Best if it looks like a broken
USB stick. No partition table looks (nearly) broken :-)


I always use a partition table, to reduce the chance of confusing 
myself.  ;-)




I have two questions:

* I suppose that means you create the LUKS drive on, e.g., /dev/sdc rather
than, for example, /dev/sdc1?  (I suppose that should be easy to do.)

* But, I'm wondering, how much bit rot would it take to make the entire
backup unusable, and what kind of precautions do you take (or could be taken)
to avoid that?


I have been pondering bit-rot mitigation on non-checksumming filesystems.


Some people have mentioned md RAID.  tomas has mentioned LUKS.  I 
believe both of them add checksums to the contained contents.  So, 
bit-rot within a container should be caught by the container driver.  In 
the case of md RAID, the driver should respond by fetching the data from 
another drive and then dealing with the bad block(s); the application 
should not see any error (?).  I assume LVM RAID would respond like md 
RAID (?).  In the case of LUKS, the driver has no redundant data (?) and 
will have no choice but to report an error to the application (?).  I 
would guess LVM non-RAID would behave similarly (?).



For all three -- md, LUKS, LVM -- I don't know what happens for bit rot 
outside the container (e.g. in the container metadata).



David



Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread rhkramer
On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:
> Contrary to the other (very valid) points, my backups are always on
> a LUKS drive, no partition table. Rationale is, should I lose it, the
> less visible information the better. Best if it looks like a broken
> USB stick. No partition table looks (nearly) broken :-)

I have two questions:

   * I suppose that means you create the LUKS drive on, e.g., /dev/sdc rather 
than, for example, /dev/sdc1?  (I suppose that should be easy to do.)

   * But, I'm wondering, how much bit rot would it take to make the entire 
backup unusable, and what kind of precautions do you take (or could be taken) 
to avoid that?



Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Joe
On Thu, 20 Aug 2020 09:43:55 +0200
 wrote:

> On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote:
> > On 2020-08-19 03:03, Urs Thuermann wrote:  
> > >David Christensen  writes:  
> >   
> > >>When using a drive as backup media, are there likely use-cases
> > >>that benefit from configuring the drive with no partition, a
> > >>single PV, single VG, single LV, and single filesystem vs.
> > >>configuring the drive with a single partition, single UUID fstab
> > >>entry, and single filesystem?  
> 
> Contrary to the other (very valid) points, my backups are always on
> a LUKS drive, no partition table. Rationale is, should I lose it, the
> less visible information the better. Best if it looks like a broken
> USB stick. No partition table looks (nearly) broken :-)
> 

A number of them come formatted that way. FAT32, no partitions.

-- 
Joe



Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread tomas
On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote:
> On 2020-08-19 03:03, Urs Thuermann wrote:
> >David Christensen  writes:
> 
> >>When using a drive as backup media, are there likely use-cases that
> >>benefit from configuring the drive with no partition, a single PV,
> >>single VG, single LV, and single filesystem vs. configuring the drive
> >>with a single partition, single UUID fstab entry, and single
> >>filesystem?

Contrary to the other (very valid) points, my backups are always on
a LUKS drive, no partition table. Rationale is, should I lose it, the
less visible information the better. Best if it looks like a broken
USB stick. No partition table looks (nearly) broken :-)

Cheers
-- t


signature.asc
Description: Digital signature


Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread David Christensen

On 2020-08-19 03:03, Urs Thuermann wrote:

David Christensen  writes:



When using a drive as backup media, are there likely use-cases that
benefit from configuring the drive with no partition, a single PV,
single VG, single LV, and single filesystem vs. configuring the drive
with a single partition, single UUID fstab entry, and single
filesystem?


You can use a partition or the whole disk for a physical volume


Yes.



... I prefer having a partition table with only one partition
covering the whole disk.  The partition table entry includes a type so
that there is less guessing about what the disk contains:


This is especially true if you access the drive with foreign operating 
systems.




If you then put a single LV into the VG which covers the whole VG you
don't benefit much from LVM's functionality, except that you can
easily change allocations later if you decide so. 


Some backup tools, such as macOS Time Machine and Windows File History, 
automatically delete old backups if and when the destination filesystem 
becomes full.  So, one use-case would be if the drive were the 
destination for several such backup tools -- use LVM to subdivide the 
available space among them.
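
Roughly -- the VG name, the per-tool LV names, and the sizes are made up
for illustration:

# pvcreate /dev/sdX1
# vgcreate vgbkup /dev/sdX1
# lvcreate -n timemachine -L 1T vgbkup
# lvcreate -n filehistory -L 500G vgbkup

Each tool then gets its own filesystem on its own LV and can only fill its
share of the drive.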



Another use-case is enlarging the backup filesystem by adding another drive.
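
For example, roughly (assuming a VG named vgbkup with a single backup LV
lv1, as in Urs' example, and an ext4 filesystem on the LV):

# pvcreate /dev/sdY1
# vgextend vgbkup /dev/sdY1
# lvextend -l +100%FREE /dev/vgbkup/lv1
# resize2fs /dev/vgbkup/lv1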


Another use-case is mirroring the backup filesystem.


A more complex mirroring use-case -- add, re-silver, and remove drives, 
and rotate them on-line, on-site, near-site, off-site, etc..



Re-partitioning is more complicated. 


For a drive used as a PV for backups, I cannot think of a use-case for 
re-partitioning.



David



Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread Urs Thuermann
David Christensen  writes:

> Thanks for the explanation.  It seems that pvcreate(8) places an LVM
> disk label and an LVM metadata area onto disks or partitions when
> creating a PV; including a unique UUID:
> 
> https://www.man7.org/linux/man-pages/man8/pvcreate.8.html

Yes, correct.  You can see the UUID with pvdisplay(8) or blkid(8):

# pvdisplay /dev/md0
  --- Physical volume ---
  PV Name   /dev/md0
  VG Name   vg0
  PV Size   1.82 TiB / not usable 3.00 MiB
  Allocatable   yes 
  PE Size   4.00 MiB
  Total PE  476899
  Free PE   96653
  Allocated PE  380246
  PV UUID   uFHSzs-QpCa-GVIX-LKRZ-rIRV-KgfE-taQXQV
   
# blkid /dev/md0
/dev/md0: UUID="uFHSzs-QpCa-GVIX-LKRZ-rIRV-KgfE-taQXQV" TYPE="LVM2_member"

> When using a drive as backup media, are there likely use-cases that
> benefit from configuring the drive with no partition, a single PV,
> single VG, single LV, and single filesystem vs. configuring the drive
> with a single partition, single UUID fstab entry, and single
> filesystem?

You can use a partition or the whole disk for a physical volume, as
you can for a file system.  That is, you can

mkfs /dev/sda        or        mkfs /dev/sda1

and likewise with LVM you can

pvcreate /dev/sda        or        pvcreate /dev/sda1

Long ago I actually created PVs on the whole disk, with no partition
table and therefore no partitions, on many of my drives.
Today, I prefer having a partition table with only one partition
covering the whole disk.  The partition table entry includes a type, so
that there is less guessing about what the disk contains:

# fdisk -l /dev/sda | grep /dev
Disk /dev/sda: 1.8 TiB, 2000397852160 bytes, 3907027055 sectors
/dev/sda1       2048 3907026943 3907024896  1.8T fd Linux raid autodetect
# fdisk -l /dev/sdf | grep /dev
Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 976754646 sectors
/dev/sdf1 256 976754645 976754390  3.7T 8e Linux LVM

If you then put a single LV into the VG which covers the whole VG, you
don't benefit much from LVM's functionality, except that you can
easily change allocations later if you decide to.  Re-partitioning is
more complicated.  But even then you have nice and stable device
names.  You could even add or remove drives in the volume group to
extend it, spread logical volumes across the drives, and still no LV
name would change.

I like having nice device names like /dev/vg0/root, /dev/vg0/usr,
/dev/vg0/var, /dev/vg0/home, /dev/vg0/swap, and a /dev/vg0/<vm-name> for
each of my (currently 4) virtual machines.  And I use it a lot, because it
is so easy to add/delete/change:

# ls -l /dev/mapper | wc -l
27

For example if I want to test something with btrfs, I can run

lvcreate -n btrfs-test -L 4G vg0

and I have a /dev/vg0/btrfs-test to work with.  No re-partitioning, no
problem with re-reading partition tables which are in use, etc.

urs



Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread David Christensen

On 2020-08-18 23:00, Urs Thuermann wrote:

David Christensen  writes:


AIUI the OP was mounting an (external?) drive partition for use as a
destination for backups.  Prior to upgrading to Testing, the root
partition was /dev/sda1 (no LVM?) and the backup partition was
/dev/sdb1 (no LVM?).  After upgrading to Testing, the root partition
is /dev/sdb1 and the backup partition device node is unknown.  The OP
was confused by the changed root partition device node.


Please describe how LVM would help in this situation.


Instead of using /dev/sdb1 directly for the backup file system, the OP
could put LVM on /dev/sdb1 (or now /dev/sda1).  I.e. he would create a
physical volume on /dev/sdb1, create a volume group, e.g. named vgbkup,
and would then create a logical volume, e.g. named lv1.  The device
name for the backup file system would then always be /dev/vgbkup/lv1
regardless of how the kernel names the underlying device (/dev/sda1 or
/dev/sdb1 or whatever).

In addition, you get the flexibility of LVM of adding, deleting, and
resizing volumes without re-partitioning.


Thanks for the explanation.  It seems that pvcreate(8) places an LVM 
disk label and an LVM metadata area onto disks or partitions when 
creating a PV; including a unique UUID:


https://www.man7.org/linux/man-pages/man8/pvcreate.8.html


When using a drive as backup media, are there likely use-cases that 
benefit from configuring the drive with no partition, a single PV, 
single VG, single LV, and single filesystem vs. configuring the drive 
with a single partition, single UUID fstab entry, and single filesystem?



David



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Urs Thuermann
Urs Thuermann  writes:

> IMO the best solution is to use LVM.  I have used it since 2001 on most
> drives, and I don't have partitions.  And I prefer to use device names
> over the *UUID or *LABEL prefixes.  With LVM, device names are
> predictable -- /dev/mapper/<vgname>-<lvname>, with symlinks
> /dev/<vgname>/<lvname>.

Following up myself: The reason I prefer stable device names instead
of UUIDs or LABELs is that device names show up in some places even if
you use UUID or LABEL in /etc/fstab or in your command line:

On my laptop I have UUID in /etc/fstab but df still shows the device
name:

$ grep -w / /etc/fstab
# / was on /dev/nvme0n1p2 during installation
UUID=c73ff331-0ff5-44fb-8aef-228e64a96175 /   ext4    errors=remount-ro 0   1
$ df | grep -w /
/dev/nvme0n1p2  237470384 107725172 117659352  48% /

On my server with LVM I get:

$ grep -w / /etc/fstab
/dev/mapper/vg0-root   /   ext4    errors=remount-ro 0   1
$ df | grep -w /
/dev/mapper/vg0-root  2031440 659132   1261556  35% /


urs



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Urs Thuermann
David Christensen  writes:

> AIUI the OP was mounting an (external?) drive partition for use as a
> destination for backups.  Prior to upgrading to Testing, the root
> partition was /dev/sda1 (no LVM?) and the backup partition was
> /dev/sdb1 (no LVM?).  After upgrading to Testing, the root partition
> is /dev/sdb1 and the backup partition device node is unknown.  The OP
> was confused by the changed root partition device node.
> 
> 
> Please describe how LVM would help in this situation.

Instead of using /dev/sdb1 directly for the backup file system, the OP
could put LVM on /dev/sdb1 (or now /dev/sda1).  I.e. he would create a
physical volume on /dev/sdb1, create a volume group, e.g. named vgbkup,
and would then create a logical volume, e.g. named lv1.  The device
name for the backup file system would then always be /dev/vgbkup/lv1
regardless of how the kernel names the underlying device (/dev/sda1 or
/dev/sdb1 or whatever).

In addition, you get the flexibility of LVM of adding, deleting, and
resizing volumes without re-partitioning.
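
In commands, roughly (using the example names from above, and ext4 just as
an example file system):

# pvcreate /dev/sdb1
# vgcreate vgbkup /dev/sdb1
# lvcreate -n lv1 -l 100%FREE vgbkup
# mkfs.ext4 /dev/vgbkup/lv1
# mount /dev/vgbkup/lv1 /mnt/backup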

If the OP had done that before, he wouldn't have noticed the change from
/dev/sdb1 to /dev/sda1, as he wouldn't have been using that name.  He
could now change to LVM and never have to deal with changing physical
device names again.

Whether it's an internal or external drive doesn't matter.

My backup drive is an external USB-3 hard drive with one partition
covering the whole disk space of 4 TB.  That partition contains the
volume group "vg2" with currently one logical volume "snap" of 2 TB,
and I mount /dev/vg2/snap on /var/snapshots for backups.  Currently I
don't need more than these 2 TB, but I could easily extend the logical
volume or create new ones for backups of my other machines or virtual
machines.

And I actually forget the physical device name of the vg2 volume group
(I have just looked it up using vgdisplay(8), and it's currently
/dev/sdf1).

urs



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread David Christensen

On 2020-08-18 11:27, Urs Thuermann wrote:

"Rick Thomas"  writes:


The /dev/sdx names for devices have been unpredictable for quite a
while.  Which one is sda and which sdb will depend on things like
timing -- which one gets recognized by the kernel first.

The best solution is to either use UUID or LABEL when you fsck
and/or mount the device.


IMO the best solution is to use LVM.  I have used it since 2001 on most
drives, and I don't have partitions.  And I prefer to use device names
over the *UUID or *LABEL prefixes.  With LVM, device names are
predictable -- /dev/mapper/<vgname>-<lvname>, with symlinks
/dev/<vgname>/<lvname>.


AIUI the OP was mounting an (external?) drive partition for use as a 
destination for backups.  Prior to upgrading to Testing, the root 
partition was /dev/sda1 (no LVM?) and the backup partition was /dev/sdb1 
(no LVM?).  After upgrading to Testing, the root partition is /dev/sdb1 
and the backup partition device node is unknown.  The OP was confused by 
the changed root partition device node.



Please describe how LVM would help in this situation.


David



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Urs Thuermann
"Rick Thomas"  writes:

> The /dev/sdx names for devices have been unpredictable for quite a
> while.  Which one is sda and which sdb will depend on things like
> timing -- which one gets recognized by the kernel first.
> 
> The best solution is to either use UUID or LABEL when you fsck
> and/or mount the device.

IMO the best solution is to use LVM.  I have used it since 2001 on most
drives, and I don't have partitions.  And I prefer to use device names
over the *UUID or *LABEL prefixes.  With LVM, device names are
predictable -- /dev/mapper/<vgname>-<lvname>, with symlinks
/dev/<vgname>/<lvname>.

urs



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Thomas Schmitt
Hi,

I wrote:
> > I only deem *UUID as safe,

Nicolas George wrote:
> UUID can get duplicated too. Just have somebody copy the whole block
> device with "good ol' dd".

Yes, sure. A HDD of mine got 128 GPT partition slots of 128 bytes each from
the Debian installation. So the primary GPT, including the "protective MBR"
and the GPT header block, has a size of 2 + 128 * 128 / 512 = 34 blocks of
512 bytes. Transplanting this to a disk of sufficient size, plus a run of a
partition editor to recreate the backup GPT, will suffice to clone all GPT
identification opportunities of the partitions.

But I rather meant "safe against unintentional duplication".


Have a nice day :)

Thomas



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread David Christensen

On 8/17/20 9:01 PM, hobie of RMN wrote:

On 2020-08-17 16:42, hobie of RMN wrote:

Hi, All -

My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk.  He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
partition.

Thanks to the UUID= mechanism, his system still boots properly, but
'mount
/dev/sdb1' is inappropriate now, could even be the path to madness. :)

Two questions, then: (1) What caused this shift of device naming? And
(2)
How do we fix it?  Is this something that can be changed in the BIOS?
But, if so, what caused it to change in the first place?

Thanks for your time and attention.


Please run the following commands as root and post the complete console
session -- prompt, command issued, and output obtained:

# cat /etc/fstab

# mount


Please post the complete console session demonstrating the issue with
mount(8).



root@shelby:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda1 during installation
UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=ca191c62-2f38-4eae-b4e9-e21337edc198 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/sr1        /media/cdrom1   udf,iso9660 user,noauto     0       0



root@shelby:~# mount



/dev/sdb1 on / type ext4 (rw,relatime,errors=remount-ro)


AIUI device node assignment is something the kernel does during boot. 
Drive nodes can change due to changing software, changing hardware, or 
both.  Upgrading to testing is probably what caused the change.  It's 
not something I would try to "fix".  Do not mess with the BIOS/UEFI or 
CMOS settings.



The best answer is to use a stable identifier, such as a UUID or label, 
for the backup drive partition.  Human-readable labels can be easier to 
type, but require a supported partitioning scheme (e.g. GPT) or 
filesystem (e.g. ext*).  I would connect the backup drive, use blkid to 
find the UUID, create a mountpoint (e.g. /mnt/backup), and add an entry 
to /etc/fstab.  You would then mount the drive via 'mount /mnt/backup'.
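
For example -- the UUID below is only a placeholder, and the mount options
are just a suggestion:

# blkid /dev/sdX1
# mkdir /mnt/backup

/etc/fstab entry:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/backup  ext4  noauto  0  2

# mount /mnt/backup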



David



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Nicolas George
Thomas Schmitt (12020-08-18):
> I only deem *UUID as safe, unless the same names on different devices
> are intended and always only one of those devices will be connected.

UUID can get duplicated too. Just have somebody copy the whole block
device with "good ol' dd".

Regards,

-- 
  Nicolas George


signature.asc
Description: PGP signature


Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Thomas Schmitt
Hi,

didier gaumet wrote:
> give a name to the underlyning [GPT] partition

Let me add the hint that a GPT partition "name" is a user defined string
(in fstab and lsblk: PARTLABEL=) whereas the partition UUIDs in GPT
get handed out by partition editors automatically as random data
(human readable and fstab usable form: PARTUUID=).
The corresponding properties of filesystems are referred to as LABEL= and
UUID=.
So if the mount point shall be bound to a partition's key text, then one
has to prepend "PART" to the more widely used filesystem field names.

I only deem *UUID as safe, unless the same names on different devices
are intended and always only one of those devices will be connected.

Whether to use partition or filesystem UUID depends on whether it is the
goal to identify the hardware or its content. Both have their plausible
use cases.


Have a nice day :)

Thomas



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Rick Thomas
On Mon, Aug 17, 2020, at 4:42 PM, hobie of RMN wrote:
> Hi, All -
> 
> My brother has been issuing "mount /dev/sdb1" prior to backing up some
> files to a second hard disk.  He lately upgraded to 'testing', and it
> appears (from result of running df) that what the system now calls
> /dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
> partition.
> 
> Thanks to the UUID= mechanism, his system still boots properly, but 'mount
> /dev/sdb1' is inappropriate now, could even be the path to madness. :)
> 
> Two questions, then: (1) What caused this shift of device naming? And (2)
> How do we fix it?  Is this something that can be changed in the BIOS? 
> But, if so, what caused it to change in the first place?
> 
> Thanks for your time and attention.

The /dev/sdx names for devices have been unpredictable for quite a while.  
Which one is sda and which sdb will depend on things like timing -- which one 
gets recognized by the kernel first.

The best solution is to either use UUID or LABEL when you fsck and/or mount the 
device.  So:

1) Use "df" to find out the device name that the kernel decided to use for your 
backup disk this time.  Let's assume it's /dev/sda1.

2) label that device with the "tune2fs" command (assuming your device contains 
an ext[234] filesystem; if not, check the man pages for the filesystem you are 
using), e.g. "tune2fs -L BACKUP /dev/sda1".

3) then when you want to mount or fsck the device (you do fsck it before 
mounting it, right?) use "LABEL=BACKUP" instead of "/dev/sdb1":
fsck LABEL=BACKUP
mount LABEL=BACKUP
4) If you're into typing long strings of random characters, you can instead 
skip the label step and do
fsck UUID=..
mount UUID=..
But that's only for masochists, IMHO.

In any case, read the man pages before you try anything, so you'll know what 
you're doing.

Enjoy!
Rick



Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread didier gaumet
Hello,

Apparently, it is also possible to either:
-  give a name to the filesystem (use e2label to do so, the filesystem being 
ext4) and mount the filesystem by using this name as a parameter of the mount 
command instead of /dev/sd* or a UUID
-  give a name to the underlying partition (use parted or a similar tool to do 
so) and mount the filesystem by using this name as a parameter of the mount 
command instead of /dev/sd* or a UUID. On a PC this is only possible with a GPT 
disk, not an MBR/DOS one, and that probably implies there is no other 
filesystem on that specific partition.
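
A rough sketch of both variants (device and label names are made up; check 
e2label(8), parted(8), and mount(8) for the exact syntax):

# e2label /dev/sdX1 backup
# mount LABEL=backup /mnt/backup

# parted /dev/sdX name 1 backup
# mount PARTLABEL=backup /mnt/backup

The parted "name" command only works on GPT disks, as noted above.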



Re: Disks renamed after update to 'testing'...?

2020-08-17 Thread john doe

On 8/18/2020 6:01 AM, hobie of RMN wrote:

On 2020-08-17 16:42, hobie of RMN wrote:

Hi, All -

My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk.  He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
partition.

Thanks to the UUID= mechanism, his system still boots properly, but
'mount
/dev/sdb1' is inappropriate now, could even be the path to madness. :)

Two questions, then: (1) What caused this shift of device naming? And
(2)
How do we fix it?  Is this something that can be changed in the BIOS?
But, if so, what caused it to change in the first place?

Thanks for your time and attention.


Please run the following commands as root and post the complete console
session -- prompt, command issued, and output obtained:

# cat /etc/fstab

# mount


Please post the complete console session demonstrating the issue with
mount(8).


David



Thanks. :)

cat /etc/fstab output includes:
# / was on /dev/sda1 during installation
UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b /   ext4
errors=remount-ro 0

'mount' output includes:
/dev/sdb1 on / type ext4 (rw,relatime,errors=remount-ro)



Instead of '/dev/sdb1' you should use the 'UUID' or a 'label' to refer
to that partition.

In the case of your brother, I would do:

$ blkid -s UUID -o export /dev/sdb1

From the output of the above cmd, substitute '/dev/sdb1' with the whole
UUID= line.
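
The output will look something like (placeholder value):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

so instead of 'mount /dev/sdb1' your brother would use something like
(the mountpoint is just an example):

$ mount UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/backup

or put that UUID= string in the /etc/fstab line for the backup partition.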


You also might consider changing your mountpoint from using '/'..


To answer your other question:

'# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#'


--
John Doe



Re: Disks renamed after update to 'testing'...?

2020-08-17 Thread hobie of RMN
> On 2020-08-17 16:42, hobie of RMN wrote:
>> Hi, All -
>>
>> My brother has been issuing "mount /dev/sdb1" prior to backing up some
>> files to a second hard disk.  He lately upgraded to 'testing', and it
>> appears (from result of running df) that what the system now calls
>> /dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
>> partition.
>>
>> Thanks to the UUID= mechanism, his system still boots properly, but
>> 'mount
>> /dev/sdb1' is inappropriate now, could even be the path to madness. :)
>>
>> Two questions, then: (1) What caused this shift of device naming? And
>> (2)
>> How do we fix it?  Is this something that can be changed in the BIOS?
>> But, if so, what caused it to change in the first place?
>>
>> Thanks for your time and attention.
>
> Please run the following commands as root and post the complete console
> session -- prompt, command issued, and output obtained:
>
>   # cat /etc/fstab
>
>   # mount
>
>
> Please post the complete console session demonstrating the issue with
> mount(8).
>
>
> David
>

Thanks. :)

cat /etc/fstab output includes:
# / was on /dev/sda1 during installation
UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b /   ext4   
errors=remount-ro 0

'mount' output includes:
/dev/sdb1 on / type ext4 (rw,relatime,errors=remount-ro)

Here's the full output:

root@shelby:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda1 during installation
UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=ca191c62-2f38-4eae-b4e9-e21337edc198 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/sr1        /media/cdrom1   udf,iso9660 user,noauto     0       0
root@shelby:~#
root@shelby:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs
(rw,nosuid,noexec,relatime,size=4023876k,nr_inodes=1005969,mode=755)
devpts on /dev/pts type devpts
(rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs
(rw,nosuid,nodev,noexec,relatime,size=814860k,mode=755)
/dev/sdb1 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs
(rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2
(rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup
(rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/pids type cgroup
(rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup
(rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup
(rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup
(rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup
(rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup
(rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/rdma type cgroup
(rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
(rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup
(rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs
(rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=3406)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
vmware-vmblock on /run/vmblock-fuse type fuse.vmware-vmblock
(rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs
(rw,nosuid,nodev,relatime,size=814860k,mode=700)
tmpfs on /run/user/1000 type tmpfs
(rw,nosuid,nodev,relatime,size=814860k,mode=700,uid=1000,gid=1000)




Re: Disks renamed after update to 'testing'...?

2020-08-17 Thread David Christensen

On 2020-08-17 16:42, hobie of RMN wrote:

Hi, All -

My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk.  He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
partition.

Thanks to the UUID= mechanism, his system still boots properly, but 'mount
/dev/sdb1' is inappropriate now, could even be the path to madness. :)

Two questions, then: (1) What caused this shift of device naming? And (2)
How do we fix it?  Is this something that can be changed in the BIOS?
But, if so, what caused it to change in the first place?

Thanks for your time and attention.


Please run the following commands as root and post the complete console 
session -- prompt, command issued, and output obtained:


# cat /etc/fstab

# mount


Please post the complete console session demonstrating the issue with 
mount(8).



David



Disks renamed after update to 'testing'...?

2020-08-17 Thread hobie of RMN
Hi, All -

My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk.  He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
partition.

Thanks to the UUID= mechanism, his system still boots properly, but 'mount
/dev/sdb1' is inappropriate now, could even be the path to madness. :)

Two questions, then: (1) What caused this shift of device naming? And (2)
How do we fix it?  Is this something that can be changed in the BIOS? 
But, if so, what caused it to change in the first place?

Thanks for your time and attention.