[Bug 1964328] [NEW] Lubuntu Installer unmounts luks volumes during install resulting in failed mountpoints for installer
Public bug reported:

When trying to install Lubuntu on custom-created LUKS volumes, the installer
unmounts the LUKS volumes, thereby removing the filesystems inside them, and
fails to install.

# These are the steps to reproduce.

# Set up the initial partitions:
#   /dev/sda1 as FAT32 EFI
#   /dev/sda2 as LUKS encrypted volume
#   /dev/sda3 as LUKS encrypted volume

gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-104857566, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-104857566, default = 104857566) or {+-}size{KMGTP}: +550M
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI system partition'

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-104857566, default = 1128448) or {+-}size{KMGTP}:
Last sector (1128448-104857566, default = 104857566) or {+-}size{KMGTP}: +41G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): n
Partition number (3-128, default 3):
First sector (34-104857566, default = 87111680) or {+-}size{KMGTP}:
Last sector (87111680-104857566, default = 104857566) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): p
Disk /dev/sda: 104857600 sectors, 50.0 GiB
Model: VBOX HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): D7862606-BF94-427B-9708-E767F97B1319
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 104857566
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1              2048       1128447  550.0 MiB  EF00  EFI system partition
   2           1128448      87111679  41.0 GiB   8300  Linux filesystem
   3          87111680     104857566  8.5 GiB    8300  Linux filesystem

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.

# Encrypt the volumes

cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --iter-time 5000 --type=luks1 --use-random luksFormat /dev/sda2

WARNING!
This will overwrite data on /dev/sda2 irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sda2:
Verify passphrase:
Key slot 0 created.
Command successful.

cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --iter-time 5000 --type=luks1 --use-random luksFormat /dev/sda3

WARNING!
This will overwrite data on /dev/sda3 irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sda3:
Verify passphrase:
Key slot 0 created.
Command successful.

# Unlock the LUKS volumes

root@lubuntu:/home/lubuntu# cryptsetup luksOpen /dev/sda2 LubuntuRootCrypt
Enter passphrase for /dev/sda2:
root@lubuntu:/home/lubuntu# cryptsetup luksOpen /dev/sda3 LubuntuRootSwap
Enter passphrase for /dev/sda3:

# Format the volumes. I used kpartitionmanager to create the FAT32 for EFI.
# Then the commands for the "root" and "swap" partitions:

mkfs.btrfs -L "Lubuntu LTS Root" -m dup /dev/mapper/LubuntuRootCrypt
btrfs-progs v5.16.2
See http://btrfs.wiki.kernel.org for more information.

NOTE: several default settings have changed in version 5.15, please make sure
      this does not affect your deployments:
      - DUP for metadata (-m dup)
      - enabled no-holes (-O no-holes)
      - enabled free-space-tree (-R free-space-tree)

Label:              Lubuntu LTS Root
UUID:               8b84f49d-969d-4fa7-b38a-18ddc69a859d
Node size:          16384
Sector size:        4096
Filesystem size:    41.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP             256.00MiB
  System:           DUP               8.00MiB
SSD detected:       no
Zoned device:       no
Incompat features:  extref, skinny-metadata, no-holes
Runtime features:   free-space-tree
Checksum:           crc32c
Number of devices:  1
Devices:
  ID  SIZE      PATH
   1  41.00GiB  /dev/mapper/LubuntuRootCrypt

mkswap /dev/mapper/LubuntuRootSwap
Setting up swapspace version 1, size = 8.5 GiB (9083789312 bytes)
no label, UUID=a9b1d684-a764-426a-afe6-4b31d8444652

# The partition layout and unlocked LUKS volumes now look like:

lsblk
NAME MAJ:MIN
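For anyone retrying this, the interactive steps above can be condensed into a script. This is a sketch under assumptions, not part of the original report: it substitutes non-interactive sgdisk for the interactive gdisk session, keeps the report's device names (adjust /dev/sda for your machine), and is written as a dry run whose run() helper only prints each command instead of executing it.

```shell
# Dry-run sketch of the reproduction steps above. The run() helper only
# prints each command; replace its body with "$@" to execute for real.
DISK=/dev/sda           # device from the report; adjust for your machine
run() { echo "+ $*"; }

# Partition: EFI plus two future LUKS volumes (sgdisk instead of gdisk)
run sgdisk -n 1:0:+550M -t 1:ef00 "$DISK"
run sgdisk -n 2:0:+41G  -t 2:8300 "$DISK"
run sgdisk -n 3:0:0     -t 3:8300 "$DISK"

# Encrypt sda2/sda3 with the same cryptsetup options as the report
for part in "${DISK}2" "${DISK}3"; do
  run cryptsetup -v --cipher aes-xts-plain64 --key-size 512 \
      --iter-time 5000 --type=luks1 --use-random luksFormat "$part"
done

# Unlock and create filesystems
run cryptsetup luksOpen "${DISK}2" LubuntuRootCrypt
run cryptsetup luksOpen "${DISK}3" LubuntuRootSwap
run mkfs.btrfs -L "Lubuntu LTS Root" -m dup /dev/mapper/LubuntuRootCrypt
run mkswap /dev/mapper/LubuntuRootSwap
```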
[Bug 1939210] Re: When using HWE, zfs-kmod and zfs user tools versions must match
I see a lot of people subscribed to this bug, but only 17 have registered
that it affects them. Could I politely ask more people to vote for this
issue, so that it might be noticed before 22.04? We need a fix before then,
or we will live with this for two more years in the new LTS.

Best regards,
Darkyere

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939210

Title:
  When using HWE, zfs-kmod and zfs user tools versions must match

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1939210/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1939210] Re: When using HWE, zfs-kmod and zfs user tools versions must match
I have a proposal for what could be done to fix this, and I hope the
original creator of this bug agrees, as well as others in this thread.
If we have an HWE kernel that gives us the newest stable kernel, wouldn't
it be an idea to have a zfsutils-linux-hwe package that matches the ZFS
kmod in that kernel? Depending on which kernel a person uses, it would be
as simple as installing either zfsutils-linux or zfsutils-linux-hwe.

Best regards,
Darkyere

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939210

Title:
  When using HWE, zfs-kmod and zfs user tools versions must match

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1939210/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
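The mismatch this bug is about can be checked for mechanically. A sketch that parses the two lines `zfs version` prints (one for the userland tools, one for the kernel module); the sample version strings in the comments are illustrative, not taken from this report:

```shell
# Sketch: detect a userland/kmod ZFS version mismatch from `zfs version`
# output, which looks roughly like (illustrative values):
#   zfs-2.1.4-1ubuntu3
#   zfs-kmod-2.1.5-1   <- from the HWE kernel, may be newer
zfs_versions_match() {
  # extract "x.y.z" from each of the two lines
  userland=$(printf '%s\n' "$1" | sed -n 's/^zfs-\([0-9][0-9.]*\).*/\1/p')
  kmod=$(printf '%s\n' "$1" | sed -n 's/^zfs-kmod-\([0-9][0-9.]*\).*/\1/p')
  [ -n "$userland" ] && [ "$userland" = "$kmod" ]
}
# usage on a real system: zfs_versions_match "$(zfs version)" || echo mismatch
```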
[Bug 1931801] Re: Apt install postfix fails with bad delimiter
*** This bug is a duplicate of bug 1906970 ***
    https://bugs.launchpad.net/bugs/1906970

Your assessment was correct. Commenting out "search ." and then installing
postfix resulted in a successful install. I'm just not sure that fixing it
like this is the right way, though. I have an Ubuntu 20.04 machine that
didn't fail the install or upgrade of postfix, and I have a friend on 20.10
who didn't fail the install or upgrade of postfix either. This seems to be
a Hirsute problem. I even found this on the net (he decided to null out the
main.cf file, though, instead of finding a solution):

https://askubuntu.com/questions/1333199/apt-update-encounters-an-error-configuring-postfix-after-upgrade-to-21-04/1346306#1346306

My point being: one should rather figure out why this happens and find a
proper fix, or more people will probably show up with the bug.

Best Regards,
Mark

** This bug has been marked a duplicate of bug 1906970
   dpkg hook hostname error

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931801

Title:
  Apt install postfix fails with bad delimiter

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/postfix/+bug/1931801/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
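The workaround described above (commenting out a bare "search ." line before installing postfix) can be sketched as a small function. It is shown taking a file path so it can be exercised on a copy; on a real system the target would be /etc/resolv.conf, which systemd-resolved may regenerate, so treat this as illustrative rather than a permanent fix:

```shell
# Comment out a bare "search ." line in a resolv.conf-style file.
# Pass a copy of the file; on a live system you'd target /etc/resolv.conf
# (note: systemd-resolved may rewrite that file).
fix_resolv() {
  sed -i 's/^search \.$/# search ./' "$1"
}
```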
[Bug 1931801] Re: Apt install postfix fails with bad delimiter
The Ubuntu version I am using is Hirsute, on an RPi4. On previous installs
it used the "mail name" given during install (e.g. "rpi4-ubuntudesktop") to
generate myhostname = rpi4-ubuntudesktop, and it would just work. Now
myhostname is created as "rpi4-ubuntudesktop.." (notice the two added dots),
which it didn't do before.

# My main.cf file has not been manually edited yet, since I have a failed
# package install.

# Every time I try to install something after installing postfix I get
# "1 not fully installed or removed". Running "apt install --fix-missing"
# reverts me back to the original install error of postfix. Running
# "apt remove --purge postfix" and then "apt install postfix" gives me the
# same errors as posted before.

This is my /etc/postfix/main.cf file after a failed install. I haven't
edited anything:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = rpi4-ubuntudesktop..
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, rpi4-ubuntudesktop, localhost.localdomain, , localhost
relayhost =
mynetworks = 127.0.0.0/8 [:::127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

** Changed in: postfix (Ubuntu)
       Status: Incomplete => New

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931801

Title:
  Apt install postfix fails with bad delimiter

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/postfix/+bug/1931801/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
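The broken value above ("rpi4-ubuntudesktop..") fails postfix's hostname validation, which is what makes newaliases abort. Stripping the trailing dots is trivial; a sketch, with a plausible (untested here) recovery sequence in the comments that uses the real postconf/dpkg tools:

```shell
# Strip trailing dots from a hostname value like "rpi4-ubuntudesktop..",
# which postfix's valid_hostname() check rejects.
strip_trailing_dots() {
  printf '%s\n' "$1" | sed 's/\.*$//'
}

# Plausible recovery once the value is clean (not verified on the reporter's
# system; postconf -e edits main.cf, dpkg --configure -a retries the
# half-configured package):
#   sudo postconf -e "myhostname=$(strip_trailing_dots "$(postconf -h myhostname)")"
#   sudo dpkg --configure -a
```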
[Bug 1931801] Re: Apt install postfix fails with bad delimiter
I of course made a typo. The steps I'm taking are valid from # -> #.

# Here is the info in the interactive part:
choose internet
choose mail name -> rpi4-ubuntudesktop
# -<

And nano /etc/hostname
# really is rpi4-ubuntudesktop

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931801

Title:
  Apt install postfix fails with bad delimiter

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/postfix/+bug/1931801/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1931801] [NEW] Apt install postfix fails with bad delimiter
Public bug reported:

# Trying to install postfix keeps adding a delimiter (two dots at the end of
# the hostname), making newaliases and then the postfix installation fail.
# I have tried fixing it many times by changing the hostname and the info in
# the hosts file, and also by changing myhostname in /etc/postfix/main.cf
# and running "apt install --fix-broken". But it just changes myhostname
# back to rpi4-ubuntudesktop.. (note the two dots at the end). It just
# continuously fails to install. I believe I need help to fix this, or it is
# quite possibly a bug. Here are my steps and info:

sudo apt install postfix
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
postfix is already the newest version (3.5.6-1).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up postfix (3.5.6-1) ...

# -> # Here is the info in the interactive part:
choose internet
choose mail name -> rpi4-ubuntudesktop
# -<

sudo apt install postfix
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
  procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre
  postfix-lmdb postfix-sqlite sasl2-bin | dovecot-common postfix-cdb
  postfix-doc
The following packages will be REMOVED:
  lsb-invalid-mta
The following NEW packages will be installed:
  postfix
0 upgraded, 1 newly installed, 1 to remove and 1 not upgraded.
Need to get 0 B/1,200 kB of archives.
After this operation, 4,233 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Preconfiguring packages ...
dpkg: lsb-invalid-mta: dependency problems, but removing anyway as you requested:
 lsb-core depends on lsb-invalid-mta (>= 11.1.0ubuntu2) | mail-transport-agent; however:
  Package lsb-invalid-mta is to be removed.
  Package mail-transport-agent is not installed.
  Package lsb-invalid-mta which provides mail-transport-agent is to be removed.
 lsb-core depends on lsb-invalid-mta (>= 11.1.0ubuntu2) | mail-transport-agent; however:
  Package lsb-invalid-mta is to be removed.
  Package mail-transport-agent is not installed.
  Package lsb-invalid-mta which provides mail-transport-agent is to be removed.
(Reading database ... 162385 files and directories currently installed.)
Removing lsb-invalid-mta (11.1.0ubuntu2) ...
Selecting previously unselected package postfix.
(Reading database ... 162379 files and directories currently installed.)
Preparing to unpack .../postfix_3.5.6-1_arm64.deb ...
Unpacking postfix (3.5.6-1) ...
Setting up postfix (3.5.6-1) ...
Adding group `postfix' (GID 135) ...
Done.
Adding system user `postfix' (UID 128) ...
Adding new user `postfix' (UID 128) with group `postfix' ...
Not creating home directory `/var/spool/postfix'.
Creating /etc/postfix/dynamicmaps.cf
Adding group `postdrop' (GID 136) ...
Done.
setting myhostname: rpi4-ubuntudesktop..
setting alias maps
setting alias database
mailname is not a fully qualified domain name.  Not changing /etc/mailname.
setting destinations: $myhostname, rpi4-ubuntudesktop, localhost.localdomain, , localhost
setting relayhost:
setting mynetworks: 127.0.0.0/8 [:::127.0.0.0]/104 [::1]/128
setting mailbox_size_limit: 0
setting recipient_delimiter: +
setting inet_interfaces: all
setting inet_protocols: all

Postfix (main.cf) is now set up with a default configuration. If you need to
make changes, edit /etc/postfix/main.cf (and others) as needed. To view
Postfix configuration values, see postconf(1).

After modifying main.cf, be sure to run 'systemctl reload postfix'.

Running newaliases
newaliases: warning: valid_hostname: misplaced delimiter: rpi4-ubuntudesktop..
newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: rpi4-ubuntudesktop..
dpkg: error processing package postfix (--configure):
 installed postfix package post-installation script subprocess returned error exit status 75
Processing triggers for ufw (0.36-7.1) ...
Rules updated for profile 'Samba'
Skipped reloading firewall
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for rsyslog (8.2102.0-2ubuntu1) ...
Processing triggers for libc-bin (2.33-0ubuntu5) ...
Errors were encountered while processing:
 postfix
E: Sub-process /usr/bin/dpkg returned an error code (1)

nano /etc/hostname
rpi4ubuntudesktop

nano /etc/hosts
127.0.0.1 localhost
127.0.1.1 rpi4-ubuntudesktop

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

** Affects: postfix (Ubuntu)
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931801

Title:
  Apt install postfix fails with bad
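One thing worth ruling out when postfix rejects myhostname: in the listing above, /etc/hostname reads "rpi4ubuntudesktop" while /etc/hosts maps 127.0.1.1 to "rpi4-ubuntudesktop" (a follow-up in this thread says the hostname line was a transcription typo, but on a real machine such a mismatch would matter). A sketch of the check, written against file paths so it can be run on copies:

```shell
# Check that the name in an /etc/hostname-style file appears somewhere in
# an /etc/hosts-style file. Takes both paths as arguments so it can be
# exercised on copies rather than the live system files.
hostname_in_hosts() {
  name=$(cat "$1")
  grep -qw "$name" "$2"   # -w: match the whole name, not a substring
}
# usage on a real system: hostname_in_hosts /etc/hostname /etc/hosts || echo mismatch
```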
[Bug 1870616] [NEW] [MEDION E122X] hibernate/resume failure
Public bug reported:

Had terminal and firefox open, then attempted to see if I could hibernate
with pm-hibernate, just to see if it worked.

ProblemType: KernelOops
DistroRelease: Ubuntu 20.04
Package: linux-image-5.4.0-21-generic 5.4.0-21.25
ProcVersionSignature: Ubuntu 5.4.0-21.25-generic 5.4.27
Uname: Linux 5.4.0-21-generic x86_64
Annotation: This occurred during a previous hibernation, and prevented the system from resuming properly.
ApportVersion: 2.20.11-0ubuntu22
Architecture: amd64
AudioDevicesInUse:
 USER     PID ACCESS COMMAND
 /dev/snd/controlC0: darkyere 1121 F pulseaudio
Date: Fri Apr 3 21:57:45 2020
DuplicateSignature: hibernate/resume:MEDION E122X:Medion-E122X_0107
ExecutablePath: /usr/share/apport/apportcheckresume
Failure: hibernate/resume
InstallationDate: Installed on 2020-04-02 (1 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Beta amd64 (20200401)
InterpreterPath: /usr/bin/python3.8
MachineType: MEDION E122X
ProcCmdline: /usr/bin/python3 /usr/share/apport/apportcheckresume
ProcFB: 0 i915drmfb
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-21-generic root=/dev/mapper/LubuntuRootLVM-Root ro quiet splash vt.handoff=7
PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No PulseAudio daemon running, or not running as session daemon.
Python3Details: /usr/bin/python3.8, Python 3.8.2, python3-minimal, 3.8.2-0ubuntu1
PythonDetails: N/A
RelatedPackageVersions:
 linux-restricted-modules-5.4.0-21-generic N/A
 linux-backports-modules-5.4.0-21-generic  N/A
 linux-firmware                            1.187
SourcePackage: linux
Title: [MEDION E122X] hibernate/resume failure
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups:
dmi.bios.date: 05/11/2010
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: Medion-E122X_0107
dmi.board.asset.tag: To Be Filled By O.E.M.
dmi.board.name: E122X
dmi.board.vendor: MEDION
dmi.board.version: Rev 1.0
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 10
dmi.chassis.vendor: MEDION
dmi.chassis.version: To Be Filled By O.E.M.
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvrMedion-E122X_0107:bd05/11/2010:svnMEDION:pnE122X:pvrVer01.07:rvnMEDION:rnE122X:rvrRev1.0:cvnMEDION:ct10:cvrToBeFilledByO.E.M.:
dmi.product.family: To Be Filled By O.E.M.
dmi.product.name: E122X
dmi.product.sku: To Be Filled By O.E.M.
dmi.product.version: Ver: 01.07
dmi.sys.vendor: MEDION

** Affects: linux (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-kerneloops focal hibernate resume

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870616

Title:
  [MEDION E122X] hibernate/resume failure

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1870616/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1100843] Re: Live Migration Causes Performance Issues
Can anyone confirm whether you see similar slowdowns if you leave the VM
running for a few days? I thought it was related to live migration, but I
saw my performance degrade if the VM/physical host was up and idle for a
couple of days.

--
You received this bug notification because you are a member of Ubuntu Server
Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1100843] Re: Live Migration Causes Performance Issues
@ccormier I've thought all along it might be a libc issue, but testing libc
2.13 on precise would be rather difficult. To some extent I feel like this
rules out the kernel as an issue, though, since the same kernel on
precise/lucid yields different results. Have you tried letting a precise VM
idle without a live migration to see if performance degrades? If not,
perhaps you could leave one idle over the weekend and performance test on
Monday? I assume you're testing with qemu-kvm 1.0.0? I've been testing with
qemu-kvm 1.2.0, as the performance is remarkably better for me. This would
seem to indicate it's not qemu-kvm at fault either.

--
You received this bug notification because you are a member of Ubuntu Server
Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1122328] Re: KVM Crash: Use root_domain of rt_rq not current processor
Will this be added to a proposed kernel anytime son? -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1122328 Title: KVM Crash: Use root_domain of rt_rq not current processor To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1122328/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1122328] Re: KVM Crash: Use root_domain of rt_rq not current processor
Oops, I meant anytime soon? -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1122328 Title: KVM Crash: Use root_domain of rt_rq not current processor To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1122328/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1122328] [NEW] KVM Crash: Use root_domain of rt_rq not current processor
Public bug reported: I'm still seeing crashes after patching bug 1116362 on KVM. Using the stable patch below seems to have made things more stable. We'll see if I can manage to not crash the box, but this one is less predictable than 1116362 was. https://patchwork.kernel.org/patch/1973201/ Could this be included with precise? ** Affects: linux (Ubuntu) Importance: Undecided Status: Confirmed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1122328 Title: KVM Crash: Use root_domain of rt_rq not current processor To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1122328/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1122328] Re: KVM Crash: Use root_domain of rt_rq not current processor
There are no logs, just a 'kernel: sched: RT throttling activated' on the
VNC console, and the box becomes unavailable while spinning the CPUs on the
physical host.

** Changed in: linux (Ubuntu)
       Status: New => Confirmed

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1122328

Title:
  KVM Crash: Use root_domain of rt_rq not current processor

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1122328/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1116362] Re: KVM Lockup Bug
It's an old patch so if it didn't make it to stable it's probably not going to without some intervention. Do you want me to see about getting it submitted to stable? Did it apply cleanly against precise? -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1116362 Title: KVM Lockup Bug To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1116362/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1116362] [NEW] KVM Lockup Bug
Public bug reported:

I've hit a bug present in 3.2.0-37 that causes my KVM virtual machines to
lock up on occasion, possibly related to this change in 3.2.0-32:

  * time: Avoid making adjustments if we haven't accumulated anything
    - LP: #1053039

I believe there is already an upstream fix to this patch that resolves my
issue. I'd like to request that this patch be included in precise.

Commit-ID:  1d17d17484d40f2d5b35c79518597a2b25296996
Gitweb:     http://git.kernel.org/tip/1d17d17484d40f2d5b35c79518597a2b25296996
Author:     Ingo Molnar mi...@kernel.org
AuthorDate: Sat, 4 Aug 2012 21:21:14 +0200
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Sun, 5 Aug 2012 12:37:14 +0200

time: Fix adjustment cleanup bug in timekeeping_adjust()

Tetsuo Handa reported that sporadically the system clock starts counting up
too quickly, which is enough to confuse the hangcheck timer to print a
bogus stall warning.

Commit 2a8c0883 "time: Move xtime_nsec adjustment underflow handling
timekeeping_adjust" overlooked this exit path:

        } else
                return;

which should really be a proper exit sequence, fixing the bug as a side
effect. Also make the flow more readable by properly balancing curly
braces.

Reported-by: Tetsuo Handa penguin-ker...@i-love.sakura.ne.jp
Tested-by: Tetsuo Handa penguin-ker...@i-love.sakura.ne.jp
Signed-off-by: Ingo Molnar mi...@kernel.org

** Affects: linux (Ubuntu)
   Importance: Undecided
       Status: Incomplete

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1116362

Title:
  KVM Lockup Bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1116362/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1116362] Re: KVM Lockup Bug
The issue is only seen briefly on the console before the machine locks up.
Nothing makes it into the logs. It's already a known Linux bug.

** Changed in: linux (Ubuntu)
       Status: Incomplete => Confirmed

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1116362

Title:
  KVM Lockup Bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1116362/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1116362] Re: KVM Lockup Bug
I was under the impression Ingo's patch landed in 3.6-rc2, is this not the case? https://patchwork.kernel.org/patch/1275411/ http://git.kernel.org/tip/1d17d17484d40f2d5b35c79518597a2b25296996 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1116362 Title: KVM Lockup Bug To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1116362/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1100843] Re: Live Migration Causes Performance Issues
I tested with qemu-kvm 1.3.0. It seems that the issue still exists, but that
it exists without a live migration if you wait long enough. That is, if you
start a VM on one node, run phoronix batch-run pts/compilation, wait 4 hours
(with the VM and physical host doing nothing else) and re-run the test,
you'll get results on the VM similar to running the test and then live
migrating to a new host. I have no idea what's causing this behavior, but it
seems to be reproducible. For now this can probably be closed. I'll resubmit
a new bug (possibly upstream) if I can figure out how to get more details to
help properly diagnose how/when/why the VMs slow down over time.

--
You received this bug notification because you are a member of Ubuntu Server
Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1100843] Re: Live Migration Causes Performance Issues
There's nothing in syslog for the VM or host that would imply performance
degradation. I have done this with hugepages and made sure huge page use was
consistent. Previously I disabled hugepages and didn't see a difference, but
I haven't tested again. I'm using (C)LVM backed off FCoE/SAN, but I haven't
tried local LVM/qcow2-type backing. I'm using libvirt, but the command line
ends up looking like this; if you would like, I can provide the XML for
libvirt as well.

/usr/bin/kvm -name one-10 -S -M pc-1.3 -cpu Westmere -enable-kvm -m 73728 \
  -smp 16,sockets=2,cores=8,threads=1 \
  -uuid 5ee0afd3-df3f-fb1f-02bd-7cde2bc4ee95 \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-10.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc,driftfix=slew -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/var/lib/one//datastores/0/10/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -drive file=/var/lib/one//datastores/0/10/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
  -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -netdev tap,fd=23,id=hostnet0 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fd,bus=pci.0,addr=0x3 \
  -vnc 0.0.0.0:10,password -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

I'll try experimenting more with THP disabled and different IO backends.

--
You received this bug notification because you are a member of Ubuntu Server
Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843 Title: Live Migration Causes Performance Issues To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
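For the THP experiment mentioned above, the active mode can be read from the host's sysfs (the standard path on 3.x kernels). This is a minimal sketch; `thp_mode` is an illustrative helper, not anything from the report.

```shell
# The active THP mode is the bracketed word in the sysfs file,
# e.g. "always madvise [never]". thp_mode is a hypothetical helper
# that extracts it from such a string.
thp_mode() {
    printf '%s\n' "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

# On a live host (the echo requires root):
#   thp_mode "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

Disabling THP this way only affects the current boot, which is convenient for an A/B test across a migration.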
[Bug 1100843] Re: Live Migration Causes Performance Issues
I don't see qemu-kvm 1.3.0 yet. Will test when you get it pushed, hopefully Tuesday (01/22/2013) if you've pushed by then. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu. https://bugs.launchpad.net/bugs/1100843 Title: Live Migration Causes Performance Issues To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1100843] [NEW] Live Migration Causes Performance Issues
Public bug reported:

I have 2 physical hosts running Ubuntu Precise, with qemu-kvm 1.0+noroms-0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder). I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet. I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and backported linux-image-3.2.0-35-generic as well). I first noticed this problem with phoronix doing compilation tests, and then tried lmbench, where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was that it may be related to transparent hugepages not being used after migration, but this didn't pan out.
Someone else has a similar problem here: http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592

qemu command line example:

/usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Disk backend is LVM running on a SAN via FC connection (using a symlink from /var/lib/one/datastores/0/2/disk.0 above).

ubuntu-12.04 - first boot
==
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds

Using phoronix pts/compilation:
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s

ubuntu-12.04 - post live migration
==
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds

Using phoronix pts/compilation:
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s

I don't have phoronix results for 10.04 handy, but they were within 1% of each other...
ubuntu-10.04 - first boot
==
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds

ubuntu-10.04 - post live migration
==
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds

** Affects: qemu-kvm (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843

Title: Live Migration Causes Performance Issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
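For a rough sense of scale, the 12.04 numbers quoted in this report work out to large regressions while the 10.04 numbers are flat. A quick sketch of that arithmetic (values hard-coded from the report; `slowdown` is an illustrative helper, not part of any tool mentioned here):

```shell
# slowdown BEFORE AFTER -> percentage increase, rounded to a whole percent
slowdown() {
    awk -v b="$1" -v a="$2" 'BEGIN { printf "%.0f%%\n", (a - b) / b * 100 }'
}

slowdown 0.1143 0.2485   # lmbench "Simple read" on 12.04: 117%
slowdown 43.91 76.67     # Linux Kernel 3.1 compile on 12.04: 75%
slowdown 0.1135 0.1075   # lmbench "Simple read" on 10.04: -5%
```

So post-migration 12.04 guests are not marginally slower; simple reads take more than twice as long, while 10.04 guests are unchanged within noise.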
[Bug 833368] Re: clustered lvm commands fail with activation/monitoring=0 is incompatible with clustered Volume Group error
As a user of Ubuntu I would like to see it supported. I use it extensively on physical hosts to support virtual servers. It's also particularly useful for OCFS2, GFS/GFS2, ceph, etc. Apparently this removal could brick servers using the cluster feature as well (Debian bug #697676).

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/833368

Title: clustered lvm commands fail with activation/monitoring=0 is incompatible with clustered Volume Group error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/833368/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1028933] Re: CPUSet is no more working on 3.2.0.26+
Is there a workaround? Is cgroup-lite really necessary for libvirt-bin? -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1028933 Title: CPUSet is no more working on 3.2.0.26+ To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/cpuset/+bug/1028933/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1075997] [NEW] cgroup-lite seems to break cpuset in recent kernels
Public bug reported:

cgroup-lite, in conjunction with libvirt-bin, seems to break cset, in conjunction with this kernel patch: http://gitorious.org/linux-omap/mainline/commit/f9ab5b5b0f5be506640321d710b0acd3dca6154a

root@localhost:~/tools# mount -t cgroup none /cpusets/ -o cpuset,noprefix
root@localhost:~/tools# cset set
cset: /cpusets/libvirt/lxc is not a cpuset directory
cset: ** /cpusets/libvirt/lxc is not a cpuset directory

Is cgroup-lite really necessary for libvirt-bin? This can easily be reproduced by installing cset and testing (`cset set` will work), then installing libvirt-bin and restarting; `cset set` will fail.

** Affects: cgroup-lite (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1075997

Title: cgroup-lite seems to break cpuset in recent kernels

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cgroup-lite/+bug/1075997/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
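The "is not a cpuset directory" error above suggests cset is looking for cpuset control files in the directory. A sketch of such a check, under the assumption of pre-cgroup-v2 file names (`is_cpuset_dir` is a hypothetical helper, not cset's actual code):

```shell
# A cpuset cgroup directory exposes cpuset control files; with a
# "noprefix" mount (as in the reproduction above) the "cpuset."
# prefix is dropped. is_cpuset_dir is a hypothetical helper.
is_cpuset_dir() {
    [ -f "$1/cpuset.cpus" ] || [ -f "$1/cpus" ]
}

# Usage on a live system:
#   is_cpuset_dir /cpusets/libvirt/lxc || echo "not a cpuset directory"
```

If libvirt's own cgroup mounts create /cpusets/libvirt/lxc without those files, a check like this would fail exactly as reported.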
[Bug 1065150] Re: Kernel bridge driver dropping packets as invalid header
I tested this and it is working for me in precise. ** Tags removed: verification-needed-precise ** Tags added: verification-done-precise -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1065150 Title: Kernel bridge driver dropping packets as invalid header To manage notifications about this bug go to: https://bugs.launchpad.net/emulex/+bug/1065150/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 695731] [NEW] sync hangs and can not be killed
Public bug reported:

Sync hangs. strace stops logging anything after:

sync(

The process can not be killed and the server needs to be restarted. This is similar to other reports of the same issue, which I will link, but I have no USB/NFS/CIFS/etc filesystems. I do have lvm2, an HP cciss controller, and ext4 filesystems.

ProblemType: Bug
DistroRelease: Ubuntu 10.04
Package: coreutils 7.4-2ubuntu3
ProcVersionSignature: Ubuntu 2.6.32-25.45-server 2.6.32.21+drm33.7
Uname: Linux 2.6.32-25-server x86_64
Architecture: amd64
Date: Thu Dec 30 09:00:58 2010
ExecutablePath: /bin/sync
ProcEnviron:
 SHELL=/bin/bash
 LANG=en_US.UTF-8
 LANGUAGE=en_US:en
SourcePackage: coreutils

** Affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: amd64 apport-bug lucid

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/695731

Title: sync hangs and can not be killed

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
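A sync that cannot be killed is typically stuck in uninterruptible ("D") sleep waiting on block I/O. A diagnostic sketch for spotting such processes (`d_state` is an illustrative helper, not from the report):

```shell
# Filter `ps -eo pid,stat,comm` output for processes in uninterruptible
# sleep (STAT beginning with "D") -- the state an unkillable sync sits
# in. Signals, including SIGKILL, are not delivered in this state.
d_state() {
    awk 'NR > 1 && $2 ~ /^D/ { print $1, $3 }'
}

# Usage on a live system:
#   ps -eo pid,stat,comm | d_state
```

Seeing sync (and possibly kernel flusher threads) pinned in D state would point at the storage stack (here lvm2 on a cciss controller) rather than coreutils, consistent with the bug being filed against linux.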
[Bug 695731] Re: sync hangs and can not be killed
I could not figure out how to link to other bugs in launchpad, but this is similar to these bugs:

Bug 624229 (https://bugs.launchpad.net/bugs/624229)
Bug 537241 (https://bugs.launchpad.net/bugs/537241)

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/695731

Title: sync hangs and can not be killed

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 65788] Re: Hangs at boot on AMD64
Similar to ddabe and RJ. Running a Sun SunFire V20Z. The Xen kernel starts to boot, runs scripts/local-premount then scripts/local-bottom, and after scripts/init-bottom it hangs. CTRL-ALT-DELETE is picked up for a reboot; "Sync SCSI Data" or something like that is shown before the reboot. Everything is running off LVM2 on an LSI SCSI RAID card.

-- 
Hangs at boot on AMD64
https://launchpad.net/bugs/65788

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 76461] Re: Problem with Default Config File
I didn't realize this was fixed in Debian GNU/Linux, as of version 2.6-2. -- Problem with Default Config File https://launchpad.net/bugs/76461 -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 76461] Problem with Default Config File
Public bug reported:

Binary package hint: nsca

The shipped default config contains:

# and this is the default location for nagios2:
command_file=/var/run/nagios2/rw/nagios.cmd

This is not the default location for nagios2. The default config file should be:

# and this is the default location for nagios2:
command_file=/var/lib/nagios2/rw/nagios.cmd

** Affects: nsca (Ubuntu)
   Importance: Undecided
   Status: Unconfirmed

** Affects: nsca (Debian)
   Importance: Unknown
   Status: Unknown

** Bug watch added: Debian Bug tracker #396343
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=396343

** Also affects: nsca (Debian) via http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=396343
   Importance: Unknown
   Status: Unknown

-- 
Problem with Default Config File
https://launchpad.net/bugs/76461

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
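To verify the mismatch on an affected machine, one can compare the command_file nsca is configured with against the FIFO nagios2 actually creates. A sketch (`command_file_of` is my illustrative helper; the /etc/nsca.cfg path is an assumption about the package's config location):

```shell
# Extract the command_file value from nsca-style config text.
# command_file_of is a hypothetical helper for illustration.
command_file_of() {
    sed -n 's/^command_file=//p'
}

# Usage on a live system (config path assumed):
#   command_file_of < /etc/nsca.cfg
#   test -p /var/lib/nagios2/rw/nagios.cmd && echo "nagios2 FIFO present"
```

If the extracted path points at /var/run/... while the FIFO exists under /var/lib/..., nsca writes into nothing and passive check results silently disappear, which is what makes this default worth fixing.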