[Touch-packages] [Bug 2063381] Re: NFS Client can't mount via NFS version 4.0

2024-04-24 Thread Ryan S
** Description changed:

  When specifying vers=4.0 or nfsvers=4.0 in NFSv4 mount options, the
  client ends up using version 4.2 instead. This option seems to be
  ignored.
  
  This happens with pam_mount and manual mount. We have been using
  pam_mount for years with the option vers=4.0, but now it appears to be
  broken. Just recently noticed this is happening while troubleshooting
  root causes of some other issues on our servers.
  
- Test: mounting manually with option: minorversion=0, the client does not mount, no errors are seen with -vvv, no errors in logs either, it just fails silently.
- With pam_mount this option fails as well but is logged as mount failed into auth.log
+ Test: mounting manually with option: minorversion=0, the client does not mount, no errors are seen with -vvv, no errors in logs either, it just fails silently.
+ With pam_mount this option fails as well but is logged as mount failed into auth.log
  ex: Apr 24 17:15:22 servername sshd[666278]: (pam_mount.c:522): mount of /mnt/nfs/home/admin_username failed
  
- I see that something similar was fixed in autofs, but this is not autofs related. https://bugs.launchpad.net/bugs/1818121
+ other nfs4 mount options we are using:
+ sec=krb5p,noatime,nodiratime,async
+ 
+ I see that something similar was fixed in autofs, but this is not autofs related, as we do not use it. https://bugs.launchpad.net/bugs/1818121
  
  Found a similar [Oracle] issue but cannot read it:
  
https://support.oracle.com/knowledge/Oracle%20Linux%20and%20Virtualization/2528012_1.html
  
  ---
  
  Description: Ubuntu 20.04.6 LTS
  Release: 20.04
  
  ---
  util-linux:
-   Installed: 2.34-0.1ubuntu9.6
-   Candidate: 2.34-0.1ubuntu9.6
-   Version table:
-  *** 2.34-0.1ubuntu9.6 500
- 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
- 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
- 100 /var/lib/dpkg/status
-  2.34-0.1ubuntu9 500
- 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages
- 
+   Installed: 2.34-0.1ubuntu9.6
+   Candidate: 2.34-0.1ubuntu9.6
+   Version table:
+  *** 2.34-0.1ubuntu9.6 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
+ 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
+ 100 /var/lib/dpkg/status
+  2.34-0.1ubuntu9 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages
  
  mount:
-   Installed: 2.34-0.1ubuntu9.6
-   Candidate: 2.34-0.1ubuntu9.6
-   Version table:
-  *** 2.34-0.1ubuntu9.6 500
- 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
- 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
- 100 /var/lib/dpkg/status
-  2.34-0.1ubuntu9 500
- 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages
- 
+   Installed: 2.34-0.1ubuntu9.6
+   Candidate: 2.34-0.1ubuntu9.6
+   Version table:
+  *** 2.34-0.1ubuntu9.6 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
+ 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
+ 100 /var/lib/dpkg/status
+  2.34-0.1ubuntu9 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages
  
  nfs-common:
-   Installed: 1:1.3.4-2.5ubuntu3.6
-   Candidate: 1:1.3.4-2.5ubuntu3.6
-   Version table:
-  *** 1:1.3.4-2.5ubuntu3.6 500
- 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
- 100 /var/lib/dpkg/status
-  1:1.3.4-2.5ubuntu3.3 500
- 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
-  1:1.3.4-2.5ubuntu3 500
- 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages
+   Installed: 1:1.3.4-2.5ubuntu3.6
+   Candidate: 1:1.3.4-2.5ubuntu3.6
+   Version table:
+  *** 1:1.3.4-2.5ubuntu3.6 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
+ 100 /var/lib/dpkg/status
+  1:1.3.4-2.5ubuntu3.3 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
+  1:1.3.4-2.5ubuntu3 500
+ 500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2063381

Title:
  NFS Client can't mount via NFS version 4.0

Status in util-linux package in Ubuntu:
  New


[Touch-packages] [Bug 2063381] [NEW] NFS Client can't mount via NFS version 4.0

2024-04-24 Thread Ryan S
Public bug reported:

When specifying vers=4.0 or nfsvers=4.0 in NFSv4 mount options, the
client ends up using version 4.2 instead. This option seems to be
ignored.

This happens with pam_mount and manual mount. We have been using
pam_mount for years with the option vers=4.0, but now it appears to be
broken. Just recently noticed this is happening while troubleshooting
root causes of some other issues on our servers.

Test: mounting manually with option: minorversion=0, the client does not mount, no errors are seen with -vvv, no errors in logs either, it just fails silently.
With pam_mount this option fails as well but is logged as mount failed into auth.log
ex: Apr 24 17:15:22 servername sshd[666278]: (pam_mount.c:522): mount of /mnt/nfs/home/admin_username failed
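The version the kernel actually negotiated can be confirmed from the client side: it appears as a `vers=` option in /proc/mounts (and in `nfsstat -m` output). A minimal sketch, run against a sample mount line rather than a live mount, since the server name, export path and address below are made up:

```shell
# A line as it might appear in /proc/mounts after "mount -o vers=4.0 ...";
# on an affected client the kernel reports vers=4.2 despite the request.
line='server:/export /mnt/nfs/home nfs4 rw,noatime,vers=4.2,sec=krb5p,addr=192.0.2.1 0 0'

# Pull out the negotiated version to compare against what was requested.
vers=$(printf '%s\n' "$line" | grep -o 'vers=[0-9.]*' | head -n1 | cut -d= -f2)
echo "negotiated NFS version: $vers"
```

On a live system the equivalent check is `grep ' nfs4 ' /proc/mounts` or `nfsstat -m` after mounting.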

I see that something similar was fixed in autofs, but this is not autofs
related. https://bugs.launchpad.net/bugs/1818121

Found a similar [Oracle] issue but cannot read it:
https://support.oracle.com/knowledge/Oracle%20Linux%20and%20Virtualization/2528012_1.html

---

Description: Ubuntu 20.04.6 LTS
Release: 20.04

---
util-linux:
  Installed: 2.34-0.1ubuntu9.6
  Candidate: 2.34-0.1ubuntu9.6
  Version table:
 *** 2.34-0.1ubuntu9.6 500
500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
100 /var/lib/dpkg/status
 2.34-0.1ubuntu9 500
500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages


mount:
  Installed: 2.34-0.1ubuntu9.6
  Candidate: 2.34-0.1ubuntu9.6
  Version table:
 *** 2.34-0.1ubuntu9.6 500
500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
100 /var/lib/dpkg/status
 2.34-0.1ubuntu9 500
500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages


nfs-common:
  Installed: 1:1.3.4-2.5ubuntu3.6
  Candidate: 1:1.3.4-2.5ubuntu3.6
  Version table:
 *** 1:1.3.4-2.5ubuntu3.6 500
500 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
100 /var/lib/dpkg/status
 1:1.3.4-2.5ubuntu3.3 500
500 http://us.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages
 1:1.3.4-2.5ubuntu3 500
500 http://us.archive.ubuntu.com/ubuntu focal/main amd64 Packages

** Affects: util-linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: mount nfs

** Package changed: autofs (Ubuntu) => util-linux (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2063381

Title:
  NFS Client can't mount via NFS version 4.0

Status in util-linux package in Ubuntu:
  New


[Touch-packages] [Bug 1426782] Re: Boot fails with encrypted BTRFS RAID1 array - open ctree failed

2015-03-28 Thread Ryan S Kingsbury
*** This bug is a duplicate of bug 1362884 ***
https://bugs.launchpad.net/bugs/1362884

Here is the dmesg output at the time of the error message:

[8.515396] BTRFS: device fsid e91f611f-524a-43f5-bde5-8ebb9672f146 devid 2 transid 100593 /dev/dm-0
[8.525319] BTRFS info (device dm-0): enabling auto defrag
[8.525322] BTRFS info (device dm-0): disk space caching is enabled
[8.526281] BTRFS: failed to read the system array on dm-0
[8.547852] BTRFS: open_ctree failed
[   10.106655] BTRFS: device fsid e91f611f-524a-43f5-bde5-8ebb9672f146 devid 3 transid 100593 /dev/dm-1
[   10.943981] init: mountall main process (334) terminated with status 2
[   10.972131] init: plymouth-upstart-bridge main process ended, respawning
[   10.976545] init: plymouth-upstart-bridge main process (1253) terminated with status 1
[   10.976550] init: plymouth-upstart-bridge main process ended, respawning
[   19.846926] BTRFS info (device dm-1): enabling auto defrag
[   19.846929] BTRFS info (device dm-1): disk space caching is enabled
[   19.901994] BTRFS: bdev /dev/dm-0 errs: wr 0, rd 79, flush 0, corrupt 0, gen 0
[   23.125160] init: failsafe main process (1465) killed by TERM signal

** This bug has been marked a duplicate of bug 1362884
   Only one device of multidevice btrfs is decrypted in cryptroot

** No longer affects: upstart (Ubuntu)

** Also affects: cryptsetup (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upstart in Ubuntu.
https://bugs.launchpad.net/bugs/1426782

Title:
  Boot fails with encrypted BTRFS RAID1 array - open ctree failed

Status in The Linux Mint Distribution:
  New
Status in cryptsetup package in Ubuntu:
  New


[Touch-packages] [Bug 1426782] Re: Boot fails with encrypted BTRFS RAID1 array - open ctree failed

2015-03-17 Thread Ryan S Kingsbury
** Package changed: ubuntu => upstart (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upstart in Ubuntu.
https://bugs.launchpad.net/bugs/1426782

Title:
  Boot fails with encrypted BTRFS RAID1 array - open ctree failed

Status in The Linux Mint Distribution:
  New
Status in upstart package in Ubuntu:
  New

Bug description:
  I have recently installed Linux Mint 17.1 into a system with three
  drives - 1 SSD containing the / partition (btrfs format), and 2
  identical HDDs containing /home in an encrypted BTRFS RAID1 array.
  The system will not boot without manual intervention, using either the
  recommended 3.13.0-37 kernel or the most up-to-date 3.16.0-31 kernel.

  At about the same time that I am prompted to enter my password to
  unlock the drive, I am prompted that a mount has failed. If I press M
  to enter manual recovery I see the emergency shell. Then all I have to
  do is exit (Ctrl+D) to continue booting and everything works normally.

  It appears that a similar (maybe the same) bug was reported on ArchWiki:
  https://wiki.archlinux.org/index.php/Btrfs#BTRFS:_open_ctree_failed

  I also experienced this bug on OpenSUSE 13.2 and filed a bug report there:
  https://bugzilla.opensuse.org/show_bug.cgi?id=912170

  On a possibly related note, I am getting TWO password prompts even
  though both encrypted volumes use the same password. On OpenSUSE only
  one password was required. My understanding is that as soon as you
  supply one password it is supposed to be tried on each volume in
  /etc/crypttab.
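  As an aside on the double prompt: on Debian/Ubuntu-family initramfs
  setups, one documented way to have a single passphrase retried across
  crypttab entries is the decrypt_keyctl keyscript, where entries sharing
  the same third-field identifier reuse the cached passphrase. A hedged
  sketch only (UUIDs from this report; the shared label "cryptkey" is
  made up, and I have not verified this on Mint 17.1):

```
# /etc/crypttab - both entries share the third-field identifier, so the
# passphrase entered at the first prompt is retried on the second volume
encrypted-home-sdb UUID=9c8fb7d0-74e2-4e38-b7c7-6211bbb6d2b1 cryptkey luks,keyscript=decrypt_keyctl
encrypted-home-sdc UUID=1ca9d3ba-c409-4127-91f5-e3d9c21242bd cryptkey luks,keyscript=decrypt_keyctl
```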

  I will now try the fix that worked for me on OpenSUSE (see bug report)

  copy /usr/lib/udev/rules.d/64-btrfs.rules into
  /etc/udev/rules.d/64-btrfs.rules and replace the following two lines

  ENV{DM_NAME}=="", IMPORT{builtin}="btrfs ready $devnode"
  ENV{DM_NAME}=="?*", IMPORT{builtin}="btrfs ready /dev/mapper/$env{DM_NAME}"

  with a single one

  IMPORT{builtin}="btrfs ready $devnode"

  and report back what happens.
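  The copy-and-edit steps above can be scripted. This sketch applies the
  substitution to a scratch copy of the rule file so the transformation
  can be checked before touching /etc/udev/rules.d (the two-line sample
  content follows this report; GNU sed is assumed):

```shell
# Work on a scratch copy rather than the real /usr/lib/udev/rules.d file.
tmp=$(mktemp -d)
cat > "$tmp/64-btrfs.rules" <<'EOF'
ENV{DM_NAME}=="", IMPORT{builtin}="btrfs ready $devnode"
ENV{DM_NAME}=="?*", IMPORT{builtin}="btrfs ready /dev/mapper/$env{DM_NAME}"
EOF

# Drop the DM_NAME=="?*" line and strip the DM_NAME=="" condition from the
# other, leaving the single unconditional "btrfs ready" import.
sed -i -e '/DM_NAME}=="?\*"/d' -e 's/^ENV{DM_NAME}=="", //' "$tmp/64-btrfs.rules"

cat "$tmp/64-btrfs.rules"
```

  After verifying the result, the edited file would be installed as
  /etc/udev/rules.d/64-btrfs.rules (and the initramfs regenerated).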

  Some system information:
  =

  relevant excerpts from /etc/fstab:

  # SSD containing the main system
  # / was on /dev/sda3 during installation
  UUID=00a0b482-a861-400f-b29c-b003cbc697f9 /   btrfs   defaults,autodefrag,discard,space_cache 0 0
  # /boot/efi was on /dev/sda2 during installation
  UUID=26CC-DC02  /boot/efi   vfat    defaults    0   1

  # encrypted RAID1 btrfs array containing /home
  # the btrfs devid's are 2 and 3
  UUID=e91f611f-524a-43f5-bde5-8ebb9672f146 /home   btrfs   defaults,discard,autodefrag,space_cache 0 0

  /etc/crypttab:
  encrypted-home-sdb UUID=9c8fb7d0-74e2-4e38-b7c7-6211bbb6d2b1 none luks, retry=1
  encrypted-home-sdc UUID=1ca9d3ba-c409-4127-91f5-e3d9c21242bd none luks, retry=1

  relevant excerpts from dmesg:

  [9.487085] BTRFS: device fsid e91f611f-524a-43f5-bde5-8ebb9672f146 devid 3 transid 81360 /dev/dm-0
  [9.497773] BTRFS info (device dm-0): turning on discard
  [9.497776] BTRFS info (device dm-0): enabling auto defrag
  [9.49] BTRFS info (device dm-0): disk space caching is enabled
  [9.498565] BTRFS: failed to read the system array on dm-0
  [9.522859] BTRFS: open_ctree failed
  [   10.246821] init: mountall main process (322) terminated with status 2
  [   10.275253] init: plymouth-upstart-bridge main process ended, respawning
  [   11.110362] BTRFS info (device dm-0): turning on discard
  [   11.110365] BTRFS info (device dm-0): enabling auto defrag
  [   11.110366] BTRFS info (device dm-0): disk space caching is enabled
  [   11.113918] BTRFS: failed to read the system array on dm-0
  [   11.129428] BTRFS: open_ctree failed
  [   11.714810] BTRFS: device fsid e91f611f-524a-43f5-bde5-8ebb9672f146 devid 2 transid 81360 /dev/dm-1
  [   12.628797] init: mountall main process (1033) terminated with status 2
  [   13.088968] BTRFS info (device dm-1): turning on discard
  [   13.088971] BTRFS info (device dm-1): enabling auto defrag
  [   13.088973] BTRFS info (device dm-1): disk space caching is enabled
  [   13.145755] BTRFS: bdev /dev/mapper/encrypted-home-sdb errs: wr 0, rd 79, flush 0, corrupt 0, gen 0
  [   16.111916] init: failsafe main process (1436) killed by TERM signal

To manage notifications about this bug go to:
https://bugs.launchpad.net/linuxmint/+bug/1426782/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp