The boot process also failed once on my Xubuntu 24.04 VM. It happened
after more than a week without an issue. The second boot was OK.
** Attachment added: "prevboot.txt"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2076520/+attachment/5809034/+files/prevboot.txt
The info about the BIOS:
$ sudo dmidecode
[sudo] password for bertadmin:
# dmidecode 3.5
Getting SMBIOS data from sysfs.
SMBIOS 2.5 present.
10 structures occupying 456 bytes.
Table at 0x000E1000.
Handle 0x, DMI type 0, 20 bytes
BIOS Information
	Vendor: innotek GmbH
	Version: V
I don't know the design, so I can't add much information. But I expect
it is a timing issue between the kernel and one of the VirtualBox kernel
modules. They do things in two threads (one from the kernel and one from
VirtualBox) and they are not properly synced. So on some systems it
will always work
DKMS was not installed, and after installing dkms I got the same error;
see the annex: errordkms.
I assume vboxadd is from VirtualBox, and for 10 years I have used the
latest version from the www.virtualbox.org website. The only problems I
remember are with the newest Linux kernels, not yet supported by VirtualBox.
Public bug reported:
The upgrade fails; note that Ubuntu Studio is the only flavor that
failed, as follows:
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up linux-image-5.15.0-87-lowlatency (5.15.0-87.96~
A screenshot of my settings in VirtualBox is in the attachment. If
needed I can also send the vbox file. Note that the first thing I do, if
I run into display/video issues, is switch off 3D acceleration, but
that did not help this time.
** Attachment added: "Screenshot"
https://bugs.launch
In both cases I get the login screen of the host and not the one of the
VM :)
Even Right-Ctrl followed by Ctrl+Alt+F1 gives the host screen.
In Xorg I've tried the following command:
bertadmin@VM-Ubuntu-2210:~$ ps $(pgrep Xorg)
PID TTY   STAT  TIME COMMAND
1749 tty2 Sl+   0:16 /us
I did the command in the xorg session, since in the Wayland session I
have no display.
bertadmin@VM-Ubuntu-2210:~$ find /lib/modules -name vmwgfx.ko
/lib/modules/5.15.0-27-generic/kernel/drivers/gpu/drm/vmwgfx/vmwgfx.ko
/lib/modules/5.19.0-15-generic/kernel/drivers/gpu/drm/vmwgfx/vmwgfx.ko
bertadm
That change did not help; the display still froze.
The content of /etc/environment was:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
MUTTER_DEBUG_ENABLE_ATOMIC_KMS=0
MUTTER_DEBUG_FORCE_KMS_MODE=simple
The host OS is a minimal insta
Well, a summary: the Ubuntu 22.04 LTS VM works fine, and the Ubuntu 22.10 VM
also works fine on Xorg after disabling Wayland.
If I use Wayland it freezes and does not display the login screen. The last
message I see flashing by during start-up is about starting the gnome
display man
I read that gdm3 takes care of the login screen; maybe that is wrong or
incomplete, but whatever the case, I have news:
I changed /etc/gdm3/custom.conf and disabled Wayland (see the sketch
below). Both systems, Ubuntu 22.10 and Ubuntu Unity 22.10, now work with
Xorg without any issues.
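For reference, the usual way to disable Wayland there is a single line (a minimal sketch; the rest of the stock custom.conf is omitted):
# /etc/gdm3/custom.conf
[daemon]
# Forces gdm3 to start an Xorg session instead of Wayland.
WaylandEnable=false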
I don't expect it is a problem in VirtualBox as you assume, since I use
exactly the same VBox release in Ubuntu 22.04 LTS, Ubuntu 20.04 LTS and
Fedora 36. I only have a problem with the 22.10 systems using the latest
gdm3. I've tried Ubuntu Mate 22.10 and that one worked fine with
VirtualBox 6.1.3x
Another try to send the journalctl file, this time using email.
On Tue, 2022-09-20 at 10:30 +, Daniel van Vugt wrote:
> Please use one of these commands to provide a full log.
>
> If you have not rebooted since the freeze:
>
> journalctl -b0 > journal.txt
>
> Or after you have rebooted:
>
> jou
Again the program crashes if I add the text file, so I attach the last
lines:
sep 21 16:08:33 VM-Ubuntu-2210 systemd[1082]:
snap.snapd-desktop-integration.snapd-desktop-integration.service: Scheduled
restart job, restart counter is at 4.
sep 21 16:08:33 VM-Ubuntu-2210 systemd[1082]: Starting Vi
Let me try a wild guess: it looks like buffer fragmentation related to
network/ssh/OpenZFS. The software might desperately be trying to defragment
many small spaces, and that would explain the 100% CPU load during the
low-speed transfers. If the defragmentation happens in the kernel, it
also might explain
The error only occurs on the backup of the large datasets with
incremental updates of say 40 GB to 80 GB. Those datasets are around
250 GB and 450 GB and they contain virtual machines. On smaller datasets
I have no issues, and at the beginning of the transfers there are no
problems either. The perfor
Public bug reported:
Since OpenZFS, the reception of incremental backups over ssh is very slow;
FreeBSD 13.0 with OpenZFS running on a 2003 Pentium 4 HT 3.0GHz (1.5GB DDR) is
faster than Ubuntu 21.10/22.04 with OpenZFS running on my laptop with an
i5-2520M (8GB DDR3).
FreeBSD runs at 21 MiB/s
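For context, such throughput figures can be watched on pipelines of this kind (a sketch; pool, dataset, snapshot and host names are hypothetical; pv prints the live transfer rate):
$ zfs send -i tank/data@prev tank/data@curr | pv | ssh backup-host zfs receive -F dpool/data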
I show you a screenshot of the moment the transfer stalls
** Attachment removed: "Screenshot from 2022-02-19 19-55-54.png"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1961499/+attachment/5562212/+files/Screenshot%20from%202022-02-19%2019-55-54.png
** Attachment added: "Screenshot from 2022-02-19 19-55-54.png"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1961499/+attachment/5562212/+files/Screenshot%20from%202022-02-19%2019-55-54.png
Public bug reported:
Weekly I back up my stuff to a Pentium 4 HT on FreeBSD 13.0 and to this
i5-2520M. FreeBSD on the Pentium is faster than Ubuntu 21.10 on the i5;
the Pentium runs constantly at ~92% CPU load on one thread and it
succeeds in transferring the stuff at ~22 Mbps. The i5 is slower an
Agreed, it happened after the Linux upgrade, so I selected dist-upgrade in
the ubuntu-bug menu because that seemed closest, since I sometimes used
apt dist-upgrade for a normal upgrade too.
On Wed, 2021-12-01 at 12:07 +, Chris Guiver wrote:
> The release-upgrade to 21.10 occurred 47 days ago, so
I ran an update of only one file and I installed another
application. Rebooting and reverting to the last snapshot worked
as expected. The application has been removed and I could rerun the
update.
According to the Internet the next commands would stop all autozsys snapshots
s
Public bug reported:
Since 2019 I have run scripts to incrementally back up my zfs snapshots. In
OpenZFS 2.06 it refuses to send empty snapshots.
It is a serious problem, because in the end each dataset will have a different
last snapshot and I have to do every backup of every dataset by hand or I hav
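The scripts do chains of incremental sends, roughly like the following (a sketch; dataset, snapshot and host names are hypothetical). If one empty snapshot in the chain cannot be sent, the receiving side ends up with a different last snapshot, and the next incremental no longer has a common base:
$ zfs snapshot tank/data@week-49
$ zfs send -i tank/data@week-48 tank/data@week-49 | ssh backup-host zfs receive dpool/data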
The change worked on my PC. The difference has been tested with limited changes
to my user data, but I will try tomorrow with the next set of updates. The
current snapshots are:
bertadmin@VM-Xubuntu-2110:~$ zfs list -t snapshot
NAME USED AV
Public bug reported:
I have done some testing with Ubuntu 21.04 and OpenZFS. I mostly run
virtual machines, and I noticed a huge difference in VM disk I/O
performance between the same VM stored on an lz4-compressed dataset
and on an uncompressed dataset. The difference in the VM disk thr
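For anyone reproducing the comparison: compression is a per-dataset property, so the two cases can be set up directly (a sketch; pool/dataset names are hypothetical, and compression only applies to blocks written after it is set):
$ zfs create -o compression=lz4 tank/vms-lz4
$ zfs create -o compression=off tank/vms-plain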
I'm using Ubuntu 21.04 Beta and FreeBSD 13.0-RC5 now on my desktop and
backup server. I did check it and it works without any issue, even after
I changed the setting I had used to force compatibility: I changed
'dnodesize' back to 'auto' from the 'legacy' value I had used to force
compatibility.
I already added this text to the bug report.
On Wed, 2021-04-07 at 13:24 +, Colin Ian King wrote:
It looks like it is a hardware problem. The sshd has a high number of
uncorrectable read errors, in the hundreds, and the reallocated sector
count is also >100.
Public bug reported:
The last update of Ubuntu 20.04 incl. ZFS destroyed my boot process; fortunately
I dual-boot between zfs and ext4.
My zfs boot is a manual boot that I used in 2018 with Ubuntu Mate 18.04. The
system has been upgraded, I think through 19.04 and 19.10, to 20.04. The last
system
That report is from so long ago; I no longer know what happened.
Basically I back up every weekend again from my 64-bit Ubuntu to my
32-bit FreeBSD. I solved the issue by setting the property
dnodesize=legacy for the datasets I wanted to back up (a sketch follows
below). I also reloaded those datasets once to get the storage with the
right dnodesize everywhere.
The problem has been in
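A sketch of that workaround (dataset names are hypothetical). Since dnodesize only affects newly written dnodes, existing data has to be rewritten once, which is the reload step mentioned above:
$ sudo zfs set dnodesize=legacy tank/backup
$ sudo zfs create tank/backup-reloaded
$ sudo cp -a /tank/backup/. /tank/backup-reloaded/   # rewrite so all dnodes get the legacy size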
Any reason to invalidate the bug report?
Thank you, but I prefer Conky, because every 2 seconds it displays RAM
size, SWAP size and SWAP throughput :)
On Tue, 2020-07-07 at 01:48 +, Daniel van Vugt wrote:
> You can see the memory usage by running:
>
> free -h
>
Normally I run the VM with Transmission, Firefox/WhatsApp and Evolution.
Sometimes I add a second Firefox window or LibreOffice Calc, but that's
all. I never noticed a crash and I have no problems with responsiveness,
despite a relatively large SWAP usage, since the VM runs on top of ZFS
(lz4 compres
apport information
** Attachment added: "ProcModules.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390243/+files/ProcModules.txt
apport information
** Attachment added: "PulseList.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390244/+files/PulseList.txt
apport information
** Attachment added: "ProcCpuinfo.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390239/+files/ProcCpuinfo.txt
apport information
** Attachment added: "Lspci-vt.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390237/+files/Lspci-vt.txt
apport information
** Attachment added: "CRDA.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390234/+files/CRDA.txt
apport information
** Attachment added: "Lspci.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390236/+files/Lspci.txt
apport information
** Attachment added: "WifiSyslog.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390246/+files/WifiSyslog.txt
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
apport information
** Attachment added: "CurrentDmesg.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390235/+files/CurrentDmesg.txt
apport information
** Attachment added: "UdevDb.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390245/+files/UdevDb.txt
apport information
** Attachment added: "ProcEnviron.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390241/+files/ProcEnviron.txt
apport information
** Attachment added: "Lsusb-v.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390238/+files/Lsusb-v.txt
apport information
** Tags added: apport-collected
** Description changed:
I run my "work" VMs, and in Xubuntu PulseAudio often does not start. In
this dump I tried to start music through Firefox and through my media
player Quodlibet, and none of the audio starts.
See attachment with t
apport information
** Attachment added: "ProcCpuinfoMinimal.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390240/+files/ProcCpuinfoMinimal.txt
apport information
** Attachment added: "ProcInterrupts.txt"
https://bugs.launchpad.net/bugs/1886330/+attachment/5390242/+files/ProcInterrupts.txt
OK, close the bug-report.
On Thu, 2020-03-26 at 22:47 +, Colin Ian King wrote:
> I'll close this bug report if that's OK.
>
I moved to Ubuntu 20.04 and I do not have that problem anymore.
I stopped writing the value to /sys/module/zfs/parameters/zfs_arc_max
directly after login.
I now rely only on "options zfs zfs_arc_max=3221225472" in
/etc/modprobe.d/zfs.conf.
Problem solved in 20.04, and I tried i
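For reference, the two methods mentioned here, with 3221225472 bytes being 3 GiB (a sketch; the paths are as named above):
$ echo "options zfs zfs_arc_max=3221225472" | sudo tee /etc/modprobe.d/zfs.conf   # persistent, applied when the zfs module loads
$ echo 3221225472 | sudo tee /sys/module/zfs/parameters/zfs_arc_max             # runtime only, lost at reboot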
I display it constantly using Conky (see attachment), and the Conky config uses:
${color}L1ARC / ${color0}Hits%: ${color}${alignr}(${exec cat /proc/spl/kstat/zfs/arcstats | grep -m 1 uncompressed_size | awk '{printf "%4.2f",$3/1073741824}'}) ${exec cat /proc/spl/kstat/zfs/arcstats | grep -m 1 size | awk '{
Yes, you did miss something. Of course the value is changed in
/etc/modprobe.d/zfs.conf; I changed it myself! However, zfs is ignoring
that value and is using considerably more memory for the L1ARC.
If I wanted to limit the size of the L1ARC, I had to change the value in
/sys/module/zfs/parameters/zfs_arc_m
Public bug reported:
Before the zsys update of 27 Feb I had created a snapshot for the system
BOOT, ROOT and USERDATA datasets. These snapshots are reported by
"zsysctl show" for the USERDATA datasets, but they are ignored for the
BOOT and ROOT datasets.
See annex with:
zfs list -t snapshot
zsysc
The system uses xattr=sa, but I did not set it myself. As you can see
in the annex, it has been inherited from rpool everywhere; it was set
by the installer.
I annexed the properties of:
- the Ubuntu home, whose properties I did not touch at all (except
canmount), and
- my main dataset with
I did not blame you; I only noticed a seemingly missing process in the
communication between the two groups (ZOL/ZOF, or Ubuntu/FreeBSD).
I said I already tried your proposed case in a different way. I'm always
willing to try something else, but I need to understand why. I'm not a monkey
that has t
I have now also filed bug report 243730 for FreeBSD, with the following
ending:
Ubuntu and FreeBSD chose different defaults for large dnodes and
dnode sizes, but to solve bugs related to feature incompatibility both
groups have to communicate! The problem will not disappear completely,
because
Garret Fields also specified some tests, and the results of those tests
were as specified here.
I used Ubuntu 19.10 and FreeBSD 12.1; I detected the issue running
FreeBSD 12.0.
Both systems have the large-dnode feature active! Weekly I send the
data with the param -c; there is however one uncompress
"dpool" is another datapool created with Ubuntu 19.10 and it had the
same defaults with respect to "large-dnode" as rpool. My main problem
has been with rpool, since it took my whole nvme-SSD. By the way the
same happened in FreeBSD with zroot, during the install it also took
all space on my stripe
What do you mean by "a compact reproducer"? I can reproduce the error
easily, but I have no clue how to produce more info. I'm still
relatively new to FreeBSD.
On Wed, 2020-01-29 at 01:20 +, Richard Laager wrote:
> So, one of two things is true:
> A) ZFS on Linux is generating the stream in
>
> That's really bizarre. If it supports large_dnode, it should be able
> to
> receive that stream. Ideally, this needs more troubleshooting,
> particularly on the receive side. "It said (dataset does not exist)
> after a long transfer." is not particularly clea
The easy, lazy solution is to close this bug report. However, if you
start to use this install option on servers in server farms, you might
see this problem more frequently. The OpenZFS website has a matrix
of which features are supported by which OSes. It would be relatively
easy to implement tha
I did hide my errors :)
On the ZOL site Richard Laager advised me to also look at the dnodesize
property of the datasets, since both systems had the zpool feature large-dnode
"active". He assumed that the send/receive should work. The FreeBSD system had
all dnodesizes of all datasets set to
I updated both Ubuntus and I upgraded FreeBSD to 12.1, but it did not
help. Both systems remain incompatible. A serious regression that forces
me to fall back on rsync backups, or on an ext4 boot with correctly featured
user datapools with large_dnode disabled.
So what else should we learn:
- Never break the
I updated both Ubuntus and I upgraded FreeBSD to 12.1, but it did not
help. Both systems remain incompatible.
So what should we learn:
- Never break the send/receive compatibility, especially not if you demand
during the install the complete disk (Ubuntu) or all complete disks (RAID-0 or
1) (FreeBS
I will add an overview of the features of all involved datasets.
The feature@userobj_accounting and feature@project_quota features are not
supported by FreeBSD 12.0 and are not part of the FreeBSD feature list as
annexed, but they are active in Ubuntu. Why? I don't use them and don't
intend to use them ever
I decided to add the properties of my Dec 2018 archives datapool and
dataset of the Ubuntu PC. That dataset backup still works between Ubuntu
19.10 and FreeBSD 12.0; I sent an incremental update a few days ago.
I noticed that the archive's feature@large_dnode is enabled and not
active; maybe tha
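For anyone checking this distinction: 'enabled' means the feature is available but not yet used on disk, while 'active' means data on disk depends on it. The state can be read per pool (pool name is hypothetical):
$ zpool get feature@large_dnode rpool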
Public bug reported:
After I tried to back up my datapools from Ubuntu 19.10 to FreeBSD 12.0,
as I have done each week since June, I found out it did not work
anymore. The regression occurred after I reinstalled Ubuntu on my new
NVMe drive. I also had to reorganize my own datapools/datasets, becaus
Public bug reported:
In the past I could limit the size of the L1ARC by specifying "options zfs
zfs_arc_max=3221225472" in /etc/modprobe.d/zfs.conf. I even tried to
fill /sys/module/zfs/parameters/zfs_arc_max directly after login, but
none of those methods limits the size of the L1ARC. It worked nicely in
Well, the people involved in the design and maintenance, Richard Laager
and Didier Roche, warned me NOT to upgrade, so I did not upgrade the
pool. My bug was about the confusion caused by that message.
Public bug reported:
I use VirtualBox for Ubuntu 19.10. I have an installation with two disks, one
to boot from zfs and one to boot from ext4. The latter also has zfs
installed, but the update of zfs failed with many "directories not empty"
messages. I'm 74, but still learning, and one day ago I
OK, thank you, that solved the problem.
** Description changed:
I boot from ZFS since the begin of the year. I have a multi-boot situation
with:
- Ubuntu 19.10 with Virtualbox booting from ZFS
- - Ubuntu Mate 19.10 with QEMO/KVM booting from ZFS
+ - Ubuntu Mate 19.10 with QEMU/KVM booting from ZFS
- Ubuntu 19.10 booting from ext4
Public bug reported:
I have booted from ZFS since the beginning of the year. I have a multi-boot situation with:
- Ubuntu 19.10 with VirtualBox booting from ZFS
- Ubuntu Mate 19.10 with QEMU/KVM booting from ZFS
- Ubuntu 19.10 booting from ext4
I have two problems with zfs:
- the last update of zfs failed,
I have the same problem with my custom install, inherited and upgraded
from my Ubuntu 19.04 system. After each boot I run a small script to
export and import my three pools again. zsys is not installed anymore on
this manual install from early 2019. The system worked without problems
on Ubuntu 19.0
Public bug reported:
I have created a new user using the standard Ubuntu utility, but no new
ZFS dataset is created, unlike for the first user, which was created
during installation. See the screenshot of the home directory and zfs
datasets.
ProblemType: Bug
DistroRelease: Ubuntu 19.10
Package: z
I found it on Tuesday at 14:30 local time, 19:30 GMT, on the "Ubuntu
daily builds pending", as far as I can reconstruct it. I also downloaded
Ubuntu Mate that day, and the first dirty install worked, but when I
tried to install the same ISO again, it showed the known bug.
On Thu, 2019-10-10 at 17:43
OK, I reached the same conclusion somewhat later.
That dataset has no mountpoint.
On Thu, 2019-10-10 at 17:39 +, Richard Laager wrote:
> This is not a bug as far as I can see. This looks like the snapshot
> has
> no unique data so its USED is 0. Note that REFER is non-zero.
>
> ** Changed in: z
** Attachment added: "zfslist.png"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847637/+attachment/5296278/+files/zfslist.png
Public bug reported:
Yesterday I made a manual snapshot.
Today I upgraded the system, and part of that large upgrade was Linux 5.3.0-17.
Afterwards I took another snapshot.
I expect the columns "used" and "refer" to contain realistic
values, not "0" or some standard value. See screenshot.
**
** Attachment added: "Screenshot from 2019-10-10 12-51-17.png"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847632/+attachment/5296270/+files/Screenshot%20from%202019-10-10%2012-51-17.png
** Attachment added: "zfs-update-grub-fails"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847632/+attachment/5296269/+files/zfs-update-grub-fails
Public bug reported:
It is probably a minor problem. I ran an upgrade of my official ZFS
installation in a VM.
It produced an error:
device-mapper: reload ioctl on osprober-linux-sdb6 failed: Device or resource
busy
Command failed.
I have a dual boot, so afterwards I booted from ext4
I will read it a few more times, because it is complex. In the past I
booted from ext4 and I had stored all my ~15 virtual machines in vms/vms
(and on the desktop I have vms/kvm too). I was pleased with the
instantaneous response times in the VMs, because the Linux VMs did
almost completely run
Public bug reported:
I often have problems with zfsutils-linux updates. In a VirtualBox VM I have a
dual boot from two disks:
- one from ext4 (sda), the one with the failing update, and
- another one from zfs (sdb).
The whole update process is far from robust; any fly on the road causes a car
OK, the next zfs-linux update worked for the system where I did the rmdir. For
the system I re-installed, the error reoccurred, so I was wrong.
But I still have no idea why rmdir did the job.
Public bug reported:
After installing Ubuntu 19.10 on ZFS, the boot process is slow because
the following file is empty: /etc/initramfs-tools/conf.d/resume
I have added RESUME=none; a sketch follows below.
** Affects: zfs-linux (Ubuntu)
Importance: Undecided
Status: New
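A sketch of the full fix, assuming the change must also be propagated into the initramfs with update-initramfs:
$ echo RESUME=none | sudo tee /etc/initramfs-tools/conf.d/resume
$ sudo update-initramfs -u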
Public bug reported:
The bpool status is confusing. Should I upgrade the pool, or is it on
purpose that bpool is like this? I do not like to see this warning
after installing the system on ZFS from scratch.
See screenshot
** Affects: zfs-linux (Ubuntu)
Importance: Undecided
Statu
Public bug reported:
I installed Ubuntu Mate 19.10 on ZFS in VirtualBox on sdb, while sda has
Ubuntu 19.10 on ext4 with both zfsutils-linux and zfs-initramfs
installed. The installer refused to install a boot loader on both sda
and sdb, but after booting from ext4 and:
updating /etc/default/g
I used the following commands on both systems:
sudo zpool export hp-data
sudo zfs list            # the other command produces a lot of snap mounts
sudo umount -l /vms/vms  # the only other mounted dataset
sudo rmdir /hp-data
sudo rmdir /vms/vms
Update and upgrade the system.
sudo zpool import hp-data
sudo zf
Sorry, you were right about the meaning of import/export and rmdir; my
excuse is that it was very early in the morning.
However, I have the same update problem on an ext4 installation of Ubuntu
19.10 on the same laptop. It gives the same error on two zfs modules,
but of course without the
Sorry, but this is ridiculous. After the upgrade to 19.10, the system
worked fine for a week. I should export/import ~700 GB of data to get a
standard update working? Why should your advice help this time? If I'm
stupid enough to move 700 GB of data around, I will probably have the
same problem aga
Note that the system also refuses to mount my hp-data datapool, a
datapool for all my data and only data, since the directory is not
empty.
I tried the change but it did not work; I tried it a second time after
update-grub, still not working.
I forced zfs re-installation by removing a non-existent module, zfs-dkms.
** Attachment added: "mountpoint-change"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+attachment/52952
The output related to the mountpoints; see attachment.
** Attachment added: "mountpoints"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+attachment/5295259/+files/mountpoints
Ignore the version numbers in the roots sub-dataset; I did not change
the dataset names, but the system version stored there is 19.10.