Any chance this test needs a re-run of “grub-install”, not just “update-grub” (as you would get from a reconfigure)?
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2051999
Title:
Gr
My experience so far lines up with the above: 148 good, 150 bad, 152
good.
--
https://bugs.launchpad.net/bugs/2018960
Title:
linux-image-5.4.0-149-generic (regression)
listsnaps is an alias of listsnapshots, but you're right that it's on
the pool.
Can you file this upstream:
https://github.com/openzfs/zfs/issues/new/choose
If you want, you could take a stab at submitting a pull request. It's a
pretty simple sounding change. The repo is here:
https://github.com/
The limit in the code does seem to be 64 MiB. I'm not sure why this
isn't working. I am not even close to an expert on this part of OpenZFS,
so all I can suggest is to file a bug report upstream:
https://github.com/openzfs/zfs/issues/new
--
device_removal only works if you can import the pool normally. That is
what you should have used after you accidentally added the second disk
as another top-level vdev. Whatever you have done in the interim,
though, has resulted in the second device showing as FAULTED. Unless you
can fix that, devi
Why is the second disk missing? If you accidentally added it and ended
up with a striped pool, as long as both disks are connected, you can
import the pool normally. Then use the new device_removal feature to
remove the new disk from the pool.
If you've done something crazy like pulled the disk an
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.
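A minimal sketch of that copy-in-place idea, assuming a hypothetical helper and dataset name (not tested against a deduped pool):

```shell
# rewrite_in_place is a hypothetical helper: copy a file, then swap the
# copy over the original. With dedup now off, the new blocks are written
# without DDT entries, and freeing the old blocks shrinks the DDT.
# Needs free space for one file at a time; assumes no snapshots pin
# the old blocks.
rewrite_in_place() {
    f="$1"
    cp -p -- "$f" "$f.tmp" && mv -- "$f.tmp" "$f"
}

# Hypothetical usage, after `zfs set dedup=off tank/data`:
# for f in /tank/data/*; do rewrite_in_place "$f"; done
```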
If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.
--
That sounds like a missing dependency on python3-distutils.
But unless you're running a custom kernel, Ubuntu is shipping the ZFS module
now:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1884110
--
I've posted this upstream (as a draft PR, pending testing) at:
https://github.com/openzfs/zfs/pull/10662
--
https://bugs.launchpad.net/bugs/1888405
Title:
zfsutils-linux:
Here is a completely untested patch that takes a different approach to
the same issue. If this works, it seems more suitable for upstreaming,
as the existing list_zvols seems to be the place where properties are
checked. Can either of you test this? If this looks good, I'll submit it
upstream.
**
See also this upstream PR: https://github.com/openzfs/zfs/pull/9414
and the one before it: https://github.com/openzfs/zfs/pull/8667
--
https://bugs.launchpad.net/bugs/171876
I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388
--
https://bugs.launchpad.net/bugs/1875577
Title:
Encrypted swap won't load on 20.04 with zfs ro
seth-arnold, the ZFS default is acltype=off, which means that ACLs are
disabled. (I don't think the NFSv4 ACL support in ZFS is wired up on
Linux.) It's not clear to me why this is breaking with ACLs off.
--
There is another AES-GCM performance acceleration commit for systems
without MOVBE.
--
Richard
--
https://bugs.launchpad.net/bugs/1881107
Title:
zfs: backport AES-GCM p
I have confirmed that the fix in -proposed fixes the issue for me.
--
https://bugs.launchpad.net/bugs/1872863
Title:
QEMU/KVM display is garbled when booting from kernel EFI
Can you share a bit more detail about how you have yours set up? What
does your partition table look like, what does the MD config look like,
what do you have in /etc/fstab for swap, etc.? I'm running into weird
issues with this configuration, separate from this bug.
@didrocks: I'll try to get thi
I think it used to be the case that zfsutils-linux depended on zfs-dkms
which was then provided by the kernel packages. That seems like a way to
solve this. Given that dkms is for dynamic kernel modules, it was always
a bit weird to see the kernel providing that. It should probably be that
zfsutils
I didn't get a chance to test the patch. I'm running into unrelated
issues.
--
https://bugs.launchpad.net/bugs/1875577
Title:
Encrypted swap won't load on 20.04 with zfs
This is a tricky one because all of the dependencies make sense in
isolation. Even if we remove the dependency added by that upstream
OpenZFS commit, given that modern systems use zfs-mount-generator,
systemd-random-seed.service is going to Require= and After= var-lib.mount because of its Requires
John Gray: Everything else aside, you should mirror your swap instead of
striping it (which I think is what you're doing). With your current
setup, if a disk dies, your system will crash.
--
brian-willoughby (and pranav.bhattarai):
The original report text confirms that "The exit code is 0, so update-grub does not fail as a result." That matches my understanding (as
someone who has done a lot of ZFS installs maintaining the upstream
Root-on-ZFS HOWTO) that this is purely cosmetic.
I
The AES-GCM performance improvements patch has been merged to master. This also
included the changes to make encryption=on mean aes-256-gcm:
https://github.com/zfsonlinux/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393
--
What was the expected result? Are you expecting to be able to just
install ZFS in a container (but not use it)? Or are you expecting it to
actually work? The user space tools can’t do much of anything without
talking to the kernel.
--
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
https://bugs.launchpad.net/bugs/1862661
Title:
zfs-mount.service and others fail inside unpriv conta
** Bug watch added: Github Issue Tracker for ZFS #9443
https://github.com/zfsonlinux/zfs/issues/9443
** Also affects: zfs via
https://github.com/zfsonlinux/zfs/issues/9443
Importance: Unknown
Status: Unknown
--
There does seem to be a real bug here. The problem is that we don’t know
if it is on the ZoL side or the FreeBSD side. The immediate failure is
that “zfs recv” on the FreeBSD side is failing to receive the stream. So
that is the best place to start figuring out why. If it turns out that
ZoL is gene
The FreeBSD bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730
Like I said, boiling this down to a test case would likely help a lot.
Refusing to do so and blaming the people giving you free software and
free support isn’t helpful.
** Bug watch added: bugs.freebsd.org/bugzilla/
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
https://bugs.launchpad.net/bugs/1854982
Title:
Lost compatibilty for backup between Ubuntu 19.10 and
In terms of a compact reproducer, does this work:
# Create a temp pool with large_dnode enabled:
truncate -s 1G lp1854982.img
sudo zpool create -d -o feature@large_dnode=enabled lp1854982 $(pwd)/lp1854982.img
# Create a dataset with dnodesize=auto:
sudo zfs create -o dnodesize=auto lp1854982/ldn
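If that much works, a hypothetical continuation (untested; the receive-side dataset name is a placeholder) would be to snapshot and write out a send stream for the FreeBSD side to try receiving:

```shell
# Continue the reproducer above: put some data in the dataset so it
# actually uses large dnodes, then generate a stream to test.
sudo cp /etc/hostname /lp1854982/ldn/
sudo zfs snapshot lp1854982/ldn@send-test
sudo zfs send lp1854982/ldn@send-test > ldn.stream
# On the FreeBSD box: zfs recv -v testpool/recv < ldn.stream
# and capture the exact error from the failing receive.
```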
So, one of two things is true:
A) ZFS on Linux is generating the stream incorrectly.
B) FreeBSD is receiving the stream incorrectly.
I don't have a good answer as to how we might differentiate those two.
Filing a bug report with FreeBSD might be a good next step. But like I
said, a compact reprodu
The last we heard on this, FreeBSD was apparently not receiving the send
stream, even though it supports large_dnode:
https://zfsonlinux.topicbox.com/groups/zfs-discuss/T187d60c7257e2eb6-M14bb2d52d4d5c230320a4f56/feature-incompatibility-between-ubuntu-19-10-and-freebsd-12-0
That's really bizarr
I think there are multiple issues here. If it's just multipath, that
issue should be resolved by adding After=multipathd.service to zfs-import-{cache,scan}.service.
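As a sketch (untested), that ordering could be added with a systemd drop-in rather than editing the shipped unit; repeat for zfs-import-scan.service if you use scan-based import:

```shell
# Create a drop-in that makes pool import wait for multipathd.
sudo mkdir -p /etc/systemd/system/zfs-import-cache.service.d
sudo tee /etc/systemd/system/zfs-import-cache.service.d/multipath.conf <<'EOF'
[Unit]
After=multipathd.service
EOF
sudo systemctl daemon-reload
```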
For other issues, I wonder if this is cache file related. I'd suggest
checking that the cache file exists (I expect it would), and t
@gustypants: Sorry, the other one is scan, not pool. Are you using a
multipath setup? Does the pool import fine if you do it manually once
booted?
--
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
* Scrub all healthy pools monthly from Richard Laager
So Debian stretch, but not Ubuntu 16.04.
Deleting the file should be safe, as dpkg should retain that. It sounds
like you never deleted it, as you didn’t have it before this upgrade. So
it
This was added a LONG time ago. The interesting question here is: if you
previously deleted it, why did it come back? Had you deleted it though?
It sounds like you weren’t aware of this file.
You might want to edit it in place, even just to comment out the job.
That would force dpkg to give you a
Your original scrub took just under 4.5 hours. Have you let the second
scrub run anywhere near that long? If not, start there.
The new scrub code uses a two-phase approach. First it works through
metadata determining what (on-disk) blocks to scrub. Second, it does the
actual scrub. This allows ZFS
We discussed this at the January 7th OpenZFS Leadership meeting. The
notes and video recording are now available.
The meeting notes are in the running document here (see page 2 right now, or
search for this Launchpad bug number):
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoL
> It is not appropriate to require the user to type a password on every
> boot by default; this must be opt-in.
Agreed.
The installer should prompt (with a checkbox) for whether the user wants
encryption. It should default to off. If the user selects the checkbox,
prompt them for a passphrase. Se
I've given this a lot of thought. For what it's worth, if it were my
decision, I would first put your time into making a small change to the
installer to get the "encryption on" case perfect, rather than the
proposal in this bug.
The installer currently has:
O Erase disk and install Ubuntu
War
Try adding "After=multipathd.service" to zfs-import-cache.service and
zfs-import-pool.service. If that fixes it, then we should probably add
that upstream.
--
I put these questions to Tom Caputi, who wrote the ZFS encryption. The
quoted text below is what I asked him, and the unquoted text is his
response:
> 1. Does ZFS rewrite the wrapped/encrypted master key in place? If
>not, the old master key could be retrieved off disk, decrypted
>with the
I have come up with a potential security flaw with this design:
The user installs Ubuntu with this fixed passphrase. This is used to
derive the "user key", which is used to encrypt the "master key", which
is used to encrypt their data. The encrypted version of the master key
is obviously written t
Here are some quick performance comparisons:
https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997
In summary, "the GCM run is approximately 1.15 times faster than the CCM
run. Please also note that this PR doesn't improve AES-CCM performance,
so if this gets merged, the speed differe
This is an interesting approach. I figured the installer should prompt
for encryption, and it probably still should, but if the performance
impact is minimal, this does have the nice property of allowing for
enabling encryption post-install.
It might be worthwhile (after merging the SIMD fixes) to
Should it set KEYMAP=y too, like cryptsetup does?
I've created a PR upstream and done some light testing:
https://github.com/zfsonlinux/zfs/pull/9723
Are you able to confirm that this fixes the issue wherever you were
seeing it?
--
I received the email of your latest comment, but oddly I’m not seeing it
here.
Before you go to all the work to rebuild the system, I think you should
do some testing to determine exactly what thing is breaking the send
stream compatibility. From your comment about your laptop, it sounds
like you
I'm not sure if userobj_accounting and/or project_quota have
implications for send stream compatibility, but my hunch is that they do
not. large_dnode is documented as being an issue, but since your
receiver supports that, that's not it.
I'm not sure what the issue is, nor what a good next step wo
This is probably an issue of incompatible pool features. Check what you
have active on the Ubuntu side:
zpool get all | grep feature | grep active
Then compare that to the chart here:
http://open-zfs.org/wiki/Feature_Flags
There is an as-yet-unimplemented proposal upstream to create a features
If the pool has an _active_ (and not "read-only compatible") feature
that GRUB does not understand, then GRUB will (correctly) refuse to load
the pool. Accordingly, you will be unable to boot.
Some features go active immediately, and others need you to enable some
filesystem-level feature or take
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
https://bugs.launchpad.net/bugs/1852854
Title:
Update of zfs-linux fails
Status in zfs-linux packag
Which specific filesystems are failing to mount?
Typically, this situation occurs because something is misconfigured, so
the mount fails, so files end up inside what should otherwise be empty
mountpoint directories. Then, even once the original problem is fixed,
the non-empty directories prevent Z
> I think "zfs mount -a" should NOT try to mount datasets with
> mountpoint "/"
There is no need for this to be (confusingly, IMHO) special-cased in zfs mount.
You should set canmount=noauto on your root filesystems (the ones with
mountpoint=/). The initramfs handles mounting the selected root
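For example (hedged; rpool/ROOT/ubuntu is a hypothetical root dataset name):

```shell
# Mark the root filesystem so `zfs mount -a` skips it; the initramfs
# mounts it explicitly instead.
sudo zfs set canmount=noauto rpool/ROOT/ubuntu
zfs get canmount,mountpoint rpool/ROOT/ubuntu   # verify
```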
The fix here seems fine, given that you're going for minimal impact in
an SRU. I agree that the character restrictions are such that the pool
names shouldn't actually need to be escaped. That's not to say that I
would remove the _proper_ quoting of variables that currently exists
upstream, as it's
> "com.sun:auto-snapshot=false" do we need to add that or does our zfs
not support it?
You do not need that. That is used by some snapshot tools, but Ubuntu is
doing its own zsys thing.
--
** Also affects: ubiquity (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1847628
Title:
When using swap in ZFS, system stops when
*** This bug is a duplicate of bug 1847628 ***
https://bugs.launchpad.net/bugs/1847628
** This bug has been marked a duplicate of bug 1847628
When using swap in ZFS, system stops when you start using swap
--
The osprober part is a duplicate of #1847632.
--
https://bugs.launchpad.net/bugs/1847927
Title:
Upgrading of 20191010 installed on ZFS will lead to "device-mapper:
relo
osprober complaining about ZFS is a known issue. I don’t know if I
bothered to file a bug report, so this will probably be the report for
that.
Side question: where did you find an installer image with ZFS support? I
tried the daily yesterday but I had no ZFS option.
** Changed in: zfs-linux (Ubu
This is not a bug as far as I can see. This looks like the snapshot has
no unique data so its USED is 0. Note that REFER is non-zero.
** Changed in: zfs-linux (Ubuntu)
Status: New => Invalid
--
You had a setup with multiple root filesystems which each had
canmount=on and mountpoint=/. So they both tried to automatically mount
at /. (When booting in the root-on-ZFS config, one was already mounted
as your root filesystem.) ZFS, unlike other Linux filesystems, refuses
to mount over non-empty
The error is again related to something trying to mount at /. That means
you have something setup wrong. If it was setup properly, nothing should
be trying to _automatically_ (i.e. canmount=on) mount at /. (In a root-on-ZFS setup, the root filesystem is canmount=noauto and mounted by the
initramfs
I've commented upstream (with ZFS) that we should fake the pre-allocation (i.e. return success from fallocate() when mode == 0) because
with ZFS it's worthless at best and counterproductive at worst:
https://github.com/zfsonlinux/zfs/issues/326#issuecomment-540162402
Replies (agreeing or disagre
What is the installer doing for swap? The upstream HOWTO uses a zvol and
then this is necessary: “The RESUME=none is necessary to disable
resuming from hibernation. This does not work, as the zvol is not
present (because the pool has not yet been imported) at the time the
resume script runs. If it
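The HOWTO step being quoted looks roughly like this (hedged; the conf.d file name is just the conventional initramfs-tools choice):

```shell
# Disable resume-from-hibernation in the initramfs, since the zvol is
# not available when the resume script runs.
echo RESUME=none | sudo tee /etc/initramfs-tools/conf.d/resume
sudo update-initramfs -u -k all
```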
Do NOT upgrade your bpool.
The dangerous warning is a known issue. There has been talk of an
upstream feature that would allow a nice fix for this, but nobody has
taken up implementing it yet. I wonder how hard it would be to
temporarily patch zpool status / zpool upgrade to not warn about /
upgra
That has the same error so you are using the same two pools. Please
follow the instructions I’ve given and fix this once so you are in a
fully working state. Once things are working, then you can retry
whatever upgrade steps you think break it.
--
The size of the pool is not particularly relevant. It sounds like you
think I'm asking you to backup and restore your pool, which I definitely
am not. A pool "import" is somewhat like "mounting" a pool (though it's
not literally mounting, because mounting is something that happens with
filesystems)
As the error message indicates, /vms and /hp-data are not empty. ZFS, by
default, will not mount over non-empty directories.
There are many ways to fix this, but here's something that is probably
the safest:
Boot up in rescue mode. If it is imported, export the hp-data pool with
`zpool export hp-
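A hedged sketch of how the rest of that procedure might look (untested; pool and path names are the ones from the error above):

```shell
sudo zpool export hp-data        # ensure nothing has the pool mounted
sudo mv /hp-data /hp-data.old    # move the stray contents aside
sudo mkdir /hp-data              # recreate an empty mountpoint
sudo zpool import hp-data        # the mount should now succeed
# Inspect /hp-data.old, then merge or delete its contents as appropriate.
```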
You have two datasets with mountpoint=/ (and canmount=on) which is going
to cause problems like this.
vms/roots/mate-1804 mountpoint / local
vms/roots/mate-1804 canmount on default
vms/roots/xubuntu-1804 mountpoint / local
vm
Can you provide the following details on your datasets' mountpoints?
zfs get mountpoint,canmount -t filesystem
--
https://bugs.launchpad.net/bugs/1846424
Title:
19.10 ZFS
I’m not aware of anything new starting scrubs. Scrubs are throttled and
usually the complaint is that they are throttled too much, not too
little. Having two pools on the same disk is likely the issue. That
should be avoided, with the exception of a small boot pool on the same
disk as the root pool
This is a known issue which will hopefully be improved by 20.04 or so.
--
https://bugs.launchpad.net/bugs/1843298
Title:
Upgrade of datapool to ZFS 0.8 creates a problem
I closed this as requested, but I'm actually going to reopen it to see
what people think about the following...
Is there a "default" kernel in Ubuntu? I think there is, probably linux-generic.
So perhaps this dependency should be changed:
OLD: zfs-modules | zfs-dkms
NEW: linux-generic | zfs-modu
What was the expected behavior from your perspective?
The ZFS utilities are useless without a ZFS kernel module. It seems to
me that this is working fine, and installing the ZFS utilities in this
environment doesn’t make sense.
--
Your upgrade is done, but for the record, installing the HWE kernel
doesn't remove the old kernel. So you still have the option to go back
to that in the GRUB menu.
Also, once you're sure the HWE kernel is working, you'll probably want
to remove the linux-image-generic package so you're not contin
ZFS 0.7.9 was released in Cosmic (18.10). You could update to Cosmic.
Alternatively, on 18.04, you can install the HWE kernel package: linux-image-generic-hwe-18.04
--
I really don’t know what to suggest here. As you mentioned, this used to
work. If you are only using LUKS for swap, maybe you could just remove
it from crypttab and run the appropriate commands manually in rc.local
or a custom systemd unit.
--
If the pool is on top of LUKS (a relatively common configuration when
ZFS and cryptsetup are both being used), then you'd need cryptsetup
first. My advice is that you should either stop encrypting swap or start
encrypting the whole pool. Hopefully in another (Ubuntu) release or two,
we'll have nati
Try adding initramfs as an option in /etc/crypttab. That's the approach
I use when putting the whole pool on a LUKS device, and is necessary due
to: https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906
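For illustration, a hypothetical crypttab entry with that option (the device name and option list are placeholders, not from the original report):

```shell
# /etc/crypttab — note the trailing "initramfs" option:
#   luks_pool  UUID=xxxxxxxx-...  none  luks,initramfs
# Rebuild the initramfs so the change takes effect:
sudo update-initramfs -u -k all
```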
--
Is there something inherent in snaps that makes this easier or better
than debs? For example, do snaps support multiple installable versions
of the same package name?
If snaps aren’t inherently better, the same thing could be done with
debs using the usual convention for having multiple versions i
I don't have permissions to change this, but my recommendation would be
to set this as "Won't Fix". It's my understanding that zfs-auto-snapshot
is more-or-less unmaintained upstream. I know I've seen recommendations
to switch to something else (e.g. sanoid) on issues there.
--
@sdeziel, I agree 100%.
--
https://bugs.launchpad.net/bugs/1738259
Title:
need to ensure microcode updates are available to all bare-metal
installs of Ubuntu
Status i
This is particularly annoying for me too.
All of my virtual machines use linux-image-generic because I need linux-image-extra to get the i6300esb watchdog driver for the KVM watchdog.
This change forces the amd64-microcode and intel-microcode packages to
be installed on all of my VMs.
--
I haven't had a chance to write and test the zpool.cache copying. I keep
meaning to get to it every day, but pushing it back for lack of time.
The zfs-initramfs script in 16.04 (always) and in 18.04 (by default)
runs a plain `zpool import`.
ZoL 0.7.5 has a default search order for imports that pr
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/1600060
Title:
ZFS "partially filled holes lose birth time"
I fixed this upstream, which was released in 0.7.4. Bionic has 0.7.5.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Fix Committed
--
I need to do some testing, but we might want to consider using the cache
file. An approach (suggested to me by ryao, I think) was that we first
import the root pool read-only, copy the cache file out of it, export
the pool, and then import the pool read-write using the cache file.
--
I updated to the version from -proposed and rebooted. I verified that no
units failed on startup.
** Tags added: verification-done-artful
--
zfs-load-module.service seems to have a Requires on itself? That has to
be wrong.
Also, zfs-import-cache.service and zfs-import-scan.service need an After=zfs-load-module.service. They're not getting one automatically because
of DefaultDependencies=no (which seems appropriate here, so leave that
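A sketch of the missing ordering as drop-ins (untested; only After= is added, matching the analysis above):

```shell
# Add After=zfs-load-module.service to both import units via drop-ins.
for svc in zfs-import-cache zfs-import-scan; do
    sudo mkdir -p "/etc/systemd/system/${svc}.service.d"
    printf '[Unit]\nAfter=zfs-load-module.service\n' |
        sudo tee "/etc/systemd/system/${svc}.service.d/ordering.conf"
done
sudo systemctl daemon-reload
```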
Native encryption was merged to master but has not been released in a
tagged version. There are actually a couple of issues that will result
in on-disk format changes. It should be the major feature for the 0.8.0
release.
** Changed in: zfs-linux (Ubuntu)
Status: New => Invalid
--
16.04's HWE kernel updates will top out at the kernel version shipped in
18.04. I assume this is because you can then just use 18.04.
See:
https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack
as linked from:
https://wiki.ubuntu.com/Kernel/LTSEnablementStack
--
Public bug reported:
I just noticed on my test VM of artful that zfs-import-cache.service
does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
that, it fails on startup, since the cache file does not exist.
This line is being deleted by
debian/patches/ubuntu-load-zfs-unconditiona
I have a related question... as far as I'm aware, the ZoL
kernel<->userspace interface is still not versioned:
https://github.com/zfsonlinux/zfs/issues/1290
Effectively, this means that the version of zfsutils-linux must always
match the version of the kernel modules. What is the plan to handle th
ZFS already limits the amount of IO that a scrub can do. Putting
multiple pools on the same disk defeats ZFS's IO scheduler.* Scrubs are
just one example of the performance problems that will cause. I don't
think we should complicate the scrub script to accommodate this
scenario.
My suggestion is
Why do you have multiple pools on the same disks? That's very much not a best practice or even a typical ZFS installation.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Incomplete
--
I submitted a fix upstream:
https://github.com/zfsonlinux/zfs/pull/6807/
--
https://bugs.launchpad.net/bugs/1550301
Title:
ZFS: Set elevator=noop on disks in the root poo
samvde, can you provide your `zfs list` output? The script seems
designed to only import filesystems *below* the filesystem that is the
root filesystem. In the typical case, the root filesystem is something
like rpool/ROOT/ubuntu. There typically shouldn't be children of
rpool/ROOT/ubuntu.
--
Copying the script is probably fine for now. I still intend to look at
this, hopefully in the next month or so. It's been relatively low on my
list, since LTS releases are my main priority.
--
The output only shows the results, not the order of mount attempts. It
may be the case that there is an ordering bug here. But we need to rule
out the other case first. It could easily be the case that the directory
is non-empty, so the /var/share mount failed even though it was properly
attempted
This is the problem:
'/var/share': directory is not empty
Figure out what is in there and deal with this appropriately.
I don't personally love that ZFS refuses to mount on non-empty
directories, but most of the time, the fact that the directory is non-
empty is the real problem.
** Changed in: