Re: [Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-04 Thread Joni-Pekka Kurronen
hi,


Does the new ZFS allow just removing the FAULTED device, so that I have the
old, clean disk alone,

scrubbing that, ... then REPARTITIONING the FAULTED device (I had an incorrect
size; there is also a boot area),

and then attaching the FAULTED DEVICE AS A NEW MIRROR DISK, as was intended???


zpool remove old-rpool faulted

zpool scrub old-rpool

zpool attach old-rpool old new

???

Then I do not have to copy anything???

joni


Richard Laager kirjoitti 3.12.2020 klo 23.29:
> device_removal only works if you can import the pool normally. That is
> what you should have used after you accidentally added the second disk
> as another top-level vdev. Whatever you have done in the interim,
> though, has resulted in the second device showing as FAULTED. Unless you
> can fix that, device_removal is not an option. I had hoped that you just
> had the second drive unplugged or something. But since the import is
> showing "corrupted data" for the second drive, that's probably not what
> happened.
>
> This works for me on Ubuntu 20.04:
> echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
>
> That setting does not exist on Ubuntu 18.04 (which you are running), so
> I get the same "Permission denied" error (because bash is trying to
> create that file, which you cannot do).
>
> I now see this is an rpool. Is your plan to reinstall? With 18.04 or
> 20.04?
>
> If 18.04, then:
> 1. Download the 20.04.1 live image. Write it to a USB disk and boot into that.
> 2. In the live environment, install the ZFS tools: sudo apt install 
> zfsutils-linux
> 3. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
> 4. mkdir /old
> 5. Import the old pool, renaming it to rpool-old, and mount the filesystems:
> zpool import -o readonly=on -N -R /old rpool rpool-old
> zfs mount rpool-old/ROOT/ubuntu
> zfs mount -a
> 6. Confirm you can access your data. Take another backup, if desired. If you 
> don't have space to back it up besides the new/second disk, then read on...
> 7. Follow the 18.04 Root-on-ZFS HOWTO using (only) the second disk. Be very 
> careful not to partition or zpool create the disk with your data!!! For 
> example, partition the second disk for the mirror scenario. But obviously you 
> can't do zpool create with "mirror" because you have only one disk.
> 8. Once the new system is installed (i.e. after step 6.2), but before 
> rebooting, copy data from /old to /mnt as needed.
> 9. Shut down. Disconnect the old disk. Boot up again.
> 10. Continue the install as normal.
> 11. When you are certain that everything is good and that the new disk is working 
> properly (maybe do a scrub) and you have all your data, then you can connect 
> the old disk and do the zpool attach (ATTACH, not add) to attach the old disk 
> to the new pool as a mirror.
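For reference, a minimal sketch of that final attach step, with hypothetical
/dev/disk/by-id names (the same shape applies to the last step of the 20.04
procedure below): attach turns the existing single-disk vdev into a mirror,
whereas add would again create a second top-level vdev.

  # Placeholders: substitute the real by-id partition names of the device
  # already in the new pool and of the old disk being added back.
  zpool attach rpool new-disk-part4 old-disk-part1
  zpool status rpool   # wait for the resilver to complete, then scrub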
>
> If 20.04, then I'd do this instead:
> 1. Unplug the disk with your data.
> 2. Follow the 20.04 Root-on-ZFS HOWTO using only the second disk. Follow the 
> steps as if you were mirroring (since that is the ultimate goal) where 
> possible. For example, partition the second disk for the mirror scenario. But 
> obviously you can't do zpool create with "mirror" because you have only one 
> disk.
> 3. Once the new, 20.04 system is working on the second disk and booting 
> normally, connect the other, old drive. (This assumes you can connect it 
> while the system is running.)
> 4. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
> 5. Import the old pool using its GUID, renaming it to rpool-old, and mount 
> filesystems:
> zpool import -o readonly=on -N -R /mnt 5077426391014001687 rpool-old
> zfs mount rpool-old/ROOT/ubuntu
> zfs mount -a
> 6. Copy over data.
> 7. zpool export rpool-old
> 8. When you are certain that everything is good and that new disk is working 
> properly (maybe do a scrub) and you have all your data, then you can do the 
> zpool attach (ATTACH, not add) to attach the old disk to the new pool as a 
> mirror.
>
-- 
joni


[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
root@jonipekka-desktop:~# zpool import
   pool: rpool
 id: 5077426391014001687
  state: UNAVAIL
 status: One or more devices are faulted.
 action: The pool cannot be imported due to damaged devices or data.
 config:

        rpool                                        UNAVAIL  insufficient replicas
          ata-WDC_WD4005FZBX-00K5WB0_V6GAE1PR-part1  ONLINE
          ata-WDC_WD4005FZBX-00K5WB0_VBGDM25F-part4  FAULTED  corrupted data
root@jonipekka-desktop:~#



[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
Do you mean this feature, which is coming... when?
https://github.com/openzfs/openzfs/pull/251



[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
The new device_removal feature... where is it? It might work.



Re: [Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
I tried the remove command before taking the disk out;

I believed the system would then correct the problem.


So basically the second disk has no data and is not corrupted...


I need that option because a read-only import with -f and -d does not fix
the problem,

so that I can then copy the disk. I have only the essentials in backup... so
there are many files

I really need...


joni


Richard Laager kirjoitti 3.12.2020 klo 18.38:
> Why is the second disk missing? If you accidentally added it and ended
> up with a striped pool, as long as both disks are connected, you can
> import the pool normally. Then use the new device_removal feature to
> remove the new disk from the pool.
>
> If you've done something crazy like pulled the disk and wiped it, then
> yeah, you're going to need to figure out how to import the pool read-
> only. I don't have any advice on that piece.
>
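For reference, when a pool like this can still be imported normally, removing
an accidentally added top-level vdev is a single command. It needs the
device_removal feature (ZFS 0.8 or newer, so not the 0.7.5 shipped with Ubuntu
18.04); the device name below is taken from the zpool import output elsewhere
in this thread:

  zpool remove rpool ata-WDC_WD4005FZBX-00K5WB0_VBGDM25F-part4
  zpool status rpool   # shows the evacuation/removal progress until it finishes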
-- 
joni



[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
Is there anyone who could help me get past this bug so I can rescue my ZFS pool
data? As I understand it, the pool will otherwise be lost. I accidentally added
a disk to the pool instead of attaching it as a mirror, which was the
intention... and it cannot be removed even though there is no data on it!



[Kernel-packages] [Bug 1906542] [NEW] echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-02 Thread Joni-Pekka Kurronen
Public bug reported:

root@jonipekka-desktop:~# echo 1 >> 
/sys/module/zfs/parameters/zfs_max_missing_tvds
-bash: /sys/module/zfs/parameters/zfs_max_missing_tvds: Permission denied
root@jonipekka-desktop:~#


https://www.delphix.com/blog/openzfs-pool-import-recovery

Import with missing top level vdevs

The changes to the pool configuration logic have enabled another great
improvement: the ability to import a pool with missing or faulted top-
level vdevs. Since some data will almost certainly be missing, a pool
with missing top-level vdevs can only be imported read-only, and the
failmode is set to “continue” (failmode=continue means that when
encountering errors the pool will continue running, as opposed to being
suspended or panicking).

To enable this feature, we’ve added a new global variable:
zfs_max_missing_tvds, which defines how many missing top level vdevs we
can tolerate before marking a pool as unopenable. It is set to 0 by
default, and should be changed to other values only temporarily, while
performing an extreme pool recovery.

Here as an example we create a pool with two vdevs and write some data
to a first dataset; we then add a third vdev and write some data to a
second dataset. Finally we physically remove the new vdev (simulating,
for instance, a device failure) and try to import the pool using the new
feature.
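As a rough sketch of how that tunable is used on a ZFS release that ships it
(the 0.8-series module in Ubuntu 20.04, not the 0.7.5 in 18.04), with the pool
name taken from this report:

  # The parameter only exists in /sys once the module supports it; redirecting
  # into a missing sysfs path is what produces the "Permission denied" above.
  echo 1 | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds
  sudo zpool import -o readonly=on -f rpool
  # or set it when the module is loaded:
  sudo modprobe zfs zfs_max_missing_tvds=1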

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: zfsutils-linux 0.7.5-1ubuntu16.10
ProcVersionSignature: Ubuntu 4.15.0-126.129-generic 4.15.18
Uname: Linux 4.15.0-126-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.9-0ubuntu7.20
Architecture: amd64
Date: Wed Dec  2 18:39:58 2020
InstallationDate: Installed on 2020-12-02 (0 days ago)
InstallationMedia: Ubuntu 18.04.1 LTS "Bionic Beaver" - Release amd64 (20180725)
SourcePackage: zfs-linux
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug bionic



[Kernel-packages] [Bug 1317811] Re: Dropped packets on EC2, xen_netfront: xennet: skb rides the rocket: x slots

2015-02-18 Thread Joni-Pekka Kurronen
I can confirm that ethtool -K eth0 sg off did correct the bacula backup problem:

- the bacula-sd to bacula-fd communication error that stops the backup
process, saying "Error: bsock.c:427 Write error sending reset by peer"

- so far no IPv6 traffic jams with aiccu, but a single missing packet
should not stop aiccu?

This is Ubuntu 14.04 LTS, 3.13.0-39-generic, in a twin-server configuration
(suricata, logstash, NFQUEUE, shoreline, keepalived, haproxy,
mariadb-galera-cluster, aiccu IPv6, ...); one of the twin servers has a 4G
(Huawei E398) dongle and a wifi (hostapd) dongle and does the firewalling,
routing, ... It has 4 cores, so it is never busy.

So could these missing packets really stop a lengthy process like a backup,
where 0.1 T to 0.7 T is transferred, with the other mechanisms unable to
correct or hide the problem?
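
A small verification sketch for that workaround, assuming the interface is
eth0 as above:

  ethtool -k eth0 | grep scatter-gather   # show the current offload state
  sudo ethtool -K eth0 sg off             # not persistent across reboots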

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1317811

Title:
  Dropped packets on EC2, xen_netfront: xennet: skb rides the rocket: x
  slots

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Trusty:
  Fix Committed
Status in linux source package in Utopic:
  Fix Committed

Bug description:
  Running Ubuntu 14.04 LTS on EC2, we see a lot of the following in the
  kernel log:

  xen_netfront: xennet: skb rides the rocket: 19 slots

  Each of these messages corresponds to a dropped TX packet, and
  eventually causes our application's connections to break and timeout.

  The problem appears when network load increases. We have Node.js
  processes doing pubsub with a Redis server, and these are most visibly
  affected, showing frequent connection loss. The processes talk to each
  other using the private addresses EC2 allocates to the machines.

  Notably, the default MTU on the network interface seems to have gone
  up from 1500 on 13.10, to 9000 in 14.04 LTS. Reducing the MTU back to
  1500 seems to drastically reduce dropped packets. (Can't say for
  certain if it completely eliminates the problem.)

  The machines we run are started from ami-896c96fe.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-image-3.13.0-24-generic 3.13.0-24.46
  ProcVersionSignature: User Name 3.13.0-24.46-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 May  9 09:01 seq
   crw-rw 1 root audio 116, 33 May  9 09:01 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.14.1-0ubuntu3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory: 'iw'
  Date: Fri May  9 09:11:18 2014
  Ec2AMI: ami-896c96fe
  Ec2AMIManifest: (unknown)
  Ec2AvailabilityZone: eu-west-1c
  Ec2InstanceType: c3.large
  Ec2Kernel: aki-52a34525
  Ec2Ramdisk: unavailable
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lspci:

  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  PciMultimedia:

  ProcFB:

  ProcKernelCmdLine: root=LABEL=cloudimg-rootfs ro console=hvc0
  RelatedPackageVersions:
   linux-restricted-modules-3.13.0-24-generic N/A
   linux-backports-modules-3.13.0-24-generic  N/A
   linux-firmware N/A
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  ---
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 May  9 09:54 seq
   crw-rw 1 root audio 116, 33 May  9 09:54 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  CurrentDmesg: [   24.724129] init: plymouth-upstart-bridge main process 
ended, respawning
  DistroRelease: Ubuntu 14.04
  Ec2AMI: ami-896c96fe
  Ec2AMIManifest: (unknown)
  Ec2AvailabilityZone: eu-west-1c
  Ec2InstanceType: c3.large
  Ec2Kernel: aki-52a34525
  Ec2Ramdisk: unavailable
  IwConfig: Error: [Errno 2] No such file or directory
  Lspci:

  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  Package: linux (not installed)
  PciMultimedia:

  ProcFB:

  ProcKernelCmdLine: root=LABEL=cloudimg-rootfs ro console=hvc0
  ProcVersionSignature: User Name 3.13.0-24.46-generic 3.13.9
  RelatedPackageVersions:
   linux-restricted-modules-3.13.0-24-generic N/A
   linux-backports-modules-3.13.0-24-generic  N/A
   linux-firmware N/A
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  trusty ec2-images
  Uname: Linux 3.13.0-24-generic x86_64
  

[Kernel-packages] [Bug 1317811] Re: Dropped packets on EC2, xen_netfront: xennet: skb rides the rocket: x slots

2015-02-18 Thread Joni-Pekka Kurronen
Crazy idea: could there be situations where the sender increases the MTU
beyond the receiver's side?

A more fundamental question: I tried to understand what ethtool -K eth0 sg off
does at the protocol level. Can anyone explain? It looks like the medicine at
the moment.


[Kernel-packages] [Bug 1317811] Re: Dropped packets on EC2, xen_netfront: xennet: skb rides the rocket: x slots

2015-02-17 Thread Joni-Pekka Kurronen
hi,

I have had two mysterious problems: bacula stops and gives as the reason a
lost connection due to too-big packets (I tried to change the MTU, no
success); the second has been aiccu, which from time to time does not
recover after the connection comes back up and needs a service stop and
start...

I just gave sudo ethtool -K eth0 sg off and bacula seems to work now...
If all backups are done by tomorrow morning, this may have been the
corrective action. The backup causes around 17-40 mb/s of continuous
traffic on the gigabit network, plus internet usage at the
server/router/4G-dongle connection point.

Any ideas how to test? Could this explain my problem?


[Kernel-packages] [Bug 1365869] Re: After upgrade to 3.13.0-35.62, rpc.gssd complains about missing /run/rpc_pipefs/gssd/clntXX/info

2014-09-30 Thread Joni-Pekka Kurronen
hi,

The real bug is that NFS logging does not give clear information about
what is happening, so diagnosis is hard to do!

Problem solved:
1) I finally found the upgraded kernel; after installing it the gssapi error
message disappeared and there are NO ERROR MESSAGES in the logs.

2) Using wireshark I found _kerberos._udp DNS requests, and because DNS had
no answers for them I configured BIND9 to publish all Kerberos service
addresses.

I have NFS + an MIT Kerberos client, and an ApacheDS LDAP + Kerberos server;
there is no administration address in Kerberos.
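
A minimal sketch of the kind of SRV records this means, assuming the realm is
KURROLA.FI (as seen in the named log later in this thread) and that the KDC
runs on mpi1.kurrola.dy.fi (the hostname is an assumption; adjust to the real
KDC):

  _kerberos._udp.KURROLA.FI.        IN SRV 0 0 88 mpi1.kurrola.dy.fi.
  _kerberos._tcp.KURROLA.FI.        IN SRV 0 0 88 mpi1.kurrola.dy.fi.
  _kerberos-master._udp.KURROLA.FI. IN SRV 0 0 88 mpi1.kurrola.dy.fi.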


joni

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1365869

Title:
  After upgrade to 3.13.0-35.62, rpc.gssd complains about missing
  /run/rpc_pipefs/gssd/clntXX/info

Status in “linux” package in Ubuntu:
  Fix Released
Status in “linux” source package in Trusty:
  Fix Committed

Bug description:
  The following changes in 3.13.0-35.62:
   * sunrpc: create a new dummy pipe for gssd to hold open
     - LP: #1327563
   * sunrpc: replace sunrpc_net-gssd_running flag with a more reliable check
     - LP: #1327563
   * nfs: check if gssd is running before attempting to use krb5i auth in 
SETCLIENTID call
     - LP: #1327563
  are causing rpc.gssd to fill syslog with messages of the form
  ERROR: can't open /run/rpc_pipefs/gssd/clntXX/info: No such file or directory

  The problem was discussed last December in 
https://bugzilla.redhat.com/show_bug.cgi?id=1037793
  where the resolution was to include the following three patches:

  http://marc.info/?l=linux-nfs&m=138624689302466&w=2
  http://marc.info/?l=linux-nfs&m=138624684502447&w=2
  http://marc.info/?l=linux-nfs&m=138624684502447&w=2

  These patches are already in the upstream kernel (since 3.14). I suggest 
cherry-picking them for 3.13. Commit hashes from the 3.14 branch:
   3396f92f8be606ea485b0a82d4e7749a448b013b
   e2f0c83a9de331d9352185ca3642616c13127539
   23e66ba97127ff3b064d4c6c5138aa34eafc492f
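
  A sketch of applying those three commits to an Ubuntu 3.13 kernel tree,
  assuming the upstream stable tree has already been fetched so the objects
  are reachable:

   git cherry-pick 3396f92f8be606ea485b0a82d4e7749a448b013b \
                   e2f0c83a9de331d9352185ca3642616c13127539 \
                   23e66ba97127ff3b064d4c6c5138aa34eafc492f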

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1365869/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1365869] Re: After upgrade to 3.13.0-35.62, rpc.gssd complains about missing /run/rpc_pipefs/gssd/clntXX/info

2014-09-29 Thread Joni-Pekka Kurronen
I tried to use -proposed, but kernel 13.63 was the latest; there is no 64!!!



[Kernel-packages] [Bug 1365869] Re: After upgrade to 3.13.0-35.62, rpc.gssd complains about missing /run/rpc_pipefs/gssd/clntXX/info

2014-09-24 Thread Joni-Pekka Kurronen
NFS4 is not working for me, and I am guessing it could be the same problem...
It actually stopped working after the 12.04 to 14.04 upgrade, and I believed it
was a pam/ldap-related problem... Could you kindly take a look and confirm that
this is the reason, so I can wait for the kernel update (or upgrade the
kernel?) to get NFS4 working.


Server:
  ApacheDS: ldap and kerberos
  MIT: kerberos client
  nfs-kernel

The log says the following while mounting from the client:

Sep 24 18:33:19 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 named[2378]: client 
2001:14b8:100:8363:d5bc:33c:1c2c:6bc2#23423 (_kerberos-master._udp.KURROLA.FI): 
query (cache) '_kerberos-master._udp.KURROLA.FI/SRV/IN' denied
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: can't open 
/run/rpc_pipefs/gssd/clntXX/info: No such file or directory
Sep 24 18:33:23 mpi1 rpc.gssd[1176]: ERROR: failed to read service info

This is OK with the server, so the principal should work:
sudo kinit -k -t /etc/krb5.keytab nfs/mpi1.kurrola.dy...@kurrola.fi

Client: 
  MIT: kerberos client

joni@kaak:~$ sudo mount -a
mount.nfs4: access denied by server while mounting mpi1.kurrola.dy.fi:/

and the log says at the client:

Sep 24 18:37:53 kaak sudo: joni : problem with defaults entries ; TTY=pts/2 
; PWD=/home/joni ; 
Sep 24 18:37:53 kaak sudo: joni : TTY=pts/2 ; PWD=/home/joni ; USER=root ; 
COMMAND=/bin/mount -a
Sep 24 18:37:53 kaak sudo: pam_unix(sudo:session): session opened for user root 
by joni(uid=0)
Sep 24 18:37:54 kaak sudo: pam_unix(sudo:session): session closed for user root

This works at the client, so the principals should be OK:
 sudo kinit -k -t /etc/krb5.keytab nfs/kaak.kurrola.dy...@kurrola.fi
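
Related: a quick way to list which principals each keytab actually contains
(same keytab path as above):

 sudo klist -k -t /etc/krb5.keytab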



[Kernel-packages] [Bug 1355146] [NEW] does not boot EFI-signed kernel

2014-08-11 Thread Joni-Pekka Kurronen
Public bug reported:


Hardware: AsRock 990 FX

fstab:
/boot/efi

root@mpi1:/home/joni# efibootmgr --create --part 1 --label Ubuntu Trusty EFI 
--loader '\boot\efi\EFI\ubuntu\vmlinuz-3.13.0-32-generic.efi.signed'
BootCurrent: 
Timeout: 2 seconds
BootOrder: 0002,,0006,0003,0001
Boot* ubuntu
Boot0001  Hard Drive 
Boot0003  UEFI: Built-in EFI Shell 
Boot0006* CD/DVD Drive 
Boot0002* Ubuntu Trusty EFI
root@mpi1:/home/joni# ls 
/boot/efi/EFI/ubuntu/vmlinuz-3.13.0-32-generic.efi.signed
/boot/efi/EFI/ubuntu/vmlinuz-3.13.0-32-generic.efi.signed
root@mpi1:/home/joni# sudo efibootmgr -v
BootCurrent: 
Timeout: 2 seconds
BootOrder: 0002,,0006,0003,0001
Boot* ubuntu
HD(1,22,ee6b3,20ed17b3-c133-4c27-8f5f-1c420b10d018)File(\EFI\ubuntu\grubx64.efi)
Boot0001  Hard DriveBIOS(2,0,00)AMGOAMNO|.W.D.C. 
.W.D.1.0.0.2.F.A.E.X.-.0.0.7.B.A.0A..1.N.;..@..Gd-.;.A..MQ..L.W.D.C.
 .W.D.1.0.0.2.F.A.E.X.-.0.0.7.B.A.0......AMBOAMNO|.W.D.C. 
.W.D.1.0.0.2.F.A.E.X.-.0.0.7.B.A.0A..1.N.;..@..Gd-.;.A..MQ..L.W.D.C.
 .W.D.1.0.0.2.F.A.E.X.-.0.0.7.B.A.0......AMBOAMNOv.S.a.n.D.i.s.k. 
.S.D.S.S.D.P.0.6.4.GA..1.N.;..:..Gd-.;.A..MQ..L.S.a.n.D.i.s.k.
 .S.D.S.S.D.P.0.6.4.G......AMBO
Boot0002* Ubuntu Trusty EFI 
HD(1,22,ee6b3,20ed17b3-c133-4c27-8f5f-1c420b10d018)File(\boot\efi\EFI\ubuntu\vmlinuz-3.13.0-32-)
Boot0003  UEFI: Built-in EFI Shell  
Vendor(5023b95c-db26-429b-a648-bd47664c8012,)AMBO
Boot0006* CD/DVD Drive  BIOS(3,0,00)AMGOAMNOm.H.L.-.D.T.-.S.T. 
.B.D.-.R.E. . 
.B.H.1.0.L.S.3.8A...Gd-.;.A..MQ..L.8.K.C.4.5.2.3.9.2.8.
 .4. . . . . . . . ......AMBO
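
For comparison: efibootmgr interprets the --loader path relative to the root of
the EFI System Partition (mounted at /boot/efi here), and a label containing
spaces needs quoting, so a sketch of the intended entry would look more like
the following (the kernel arguments and the initrd path are assumptions based
on the ProcKernelCmdLine further down):

  sudo efibootmgr --create --part 1 --label 'Ubuntu Trusty EFI' \
       --loader '\EFI\ubuntu\vmlinuz-3.13.0-32-generic.efi.signed' \
       --unicode 'root=UUID=717426c9-bb8b-455c-b1c8-85f1e67cd245 ro initrd=\EFI\ubuntu\initrd.img'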

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: linux-signed-image-generic 3.13.0.32.38
ProcVersionSignature: Ubuntu 3.13.0-32.57-generic 3.13.11.4
Uname: Linux 3.13.0-32-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.14.1-0ubuntu3
Architecture: amd64
AudioDevicesInUse:
 USERPID ACCESS COMMAND
 /dev/snd/controlC1:  joni   3667 F pulseaudio
 /dev/snd/controlC3:  joni   3667 F pulseaudio
 /dev/snd/controlC2:  joni   3667 F pulseaudio
 /dev/snd/controlC0:  joni   3667 F pulseaudio
Date: Mon Aug 11 14:52:28 2014
HibernationDevice: RESUME=UUID=c5de965b-1adc-40e4-ab8b-cddb8bd153c6
MachineType: To Be Filled By O.E.M. To Be Filled By O.E.M.
ProcFB: 0 EFI VGA
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-32-generic.efi.signed 
root=UUID=717426c9-bb8b-455c-b1c8-85f1e67cd245 ro nomdmonddf nomdmonisw 
nomdmonddf nomdmonisw nomdmonddf nomdmonisw nomdmonddf nomdmonisw nomdmonddf 
nomdmonisw
PulseList:
 Error: command ['pacmd', 'list'] failed with exit code 1: Home directory not 
accessible: Permission denied
 No PulseAudio daemon running, or not running as session daemon.
RelatedPackageVersions:
 linux-restricted-modules-3.13.0-32-generic N/A
 linux-backports-modules-3.13.0-32-generic  N/A
 linux-firmware 1.127.4
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 10/12/2012
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: P1.50
dmi.board.name: 990FX Extreme3
dmi.board.vendor: ASRock
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 3
dmi.chassis.vendor: To Be Filled By O.E.M.
dmi.chassis.version: To Be Filled By O.E.M.
dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrP1.50:bd10/12/2012:svnToBeFilledByO.E.M.:pnToBeFilledByO.E.M.:pvrToBeFilledByO.E.M.:rvnASRock:rn990FXExtreme3:rvr:cvnToBeFilledByO.E.M.:ct3:cvrToBeFilledByO.E.M.:
dmi.product.name: To Be Filled By O.E.M.
dmi.product.version: To Be Filled By O.E.M.
dmi.sys.vendor: To Be Filled By O.E.M.

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug trusty
