[Bug 2056064] Re: Nbdkit should include VDDK plugin to support mounting VMware datastores
Providing a clarification: the change should enable building the plugin sub-package: https://git.launchpad.net/ubuntu/+source/nbdkit/tree/configure?h=ubuntu/mantic#n27332

I understand this will make the built package deviate from Debian, so please recommend the proper path to address this.
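For illustration, the kind of packaging change being requested might look like the sketch below, assuming a dh-based debian/rules and an --enable-vddk style configure switch (the flag name and the override are assumptions, not taken from the actual nbdkit packaging):

    #!/usr/bin/make -f
    # debian/rules (excerpt) -- hypothetical override enabling the VDDK plugin
    %:
    	dh $@

    override_dh_auto_configure:
    	dh_auto_configure -- --enable-vddk   # flag name is an assumption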
[Bug 2056064] [NEW] Nbdkit should include VDDK plugin to support mounting VMware datastores
Public bug reported:

Currently nbdkit includes a number of plugins but does not provide the ability to use VDDK to remotely access VMware datastores. Upstream libguestfs documents the plugin: https://libguestfs.org/nbdkit-vddk-plugin.1.html The plugin's codebase is https://gitlab.com/nbdkit/vddk-remote

Please build nbdkit with VDDK support.

** Affects: nbdkit (Ubuntu)
     Importance: Undecided
         Status: New
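Once the plugin is available, usage would look roughly like the following (parameter names follow the upstream man page linked above; the server, thumbprint, VM moref, and disk path are placeholders):

    nbdkit vddk \
        libdir=/opt/vmware-vix-disklib-distrib \
        server=vcenter.example.com user=root password=+/tmp/passwd \
        thumbprint=xx:xx:xx:xx \
        vm=moref=vm-16 \
        "[datastore1] guest/guest.vmdk"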
[Bug 1956764] [NEW] Proxy should be set up only for UA-related repos
Public bug reported:

Currently, if a proxy is configured for the UA client, it sets up a global proxy for the whole of apt:

    /*
     * To change what ubuntu-advantage-tools sets, run one of the following:
     * Substitute "apt_https_proxy" for "apt_http_proxy" as necessary.
     *   sudo ua config set apt_http_proxy=
     *   sudo ua config unset apt_http_proxy
     */
    Acquire::http::Proxy "http://:3128";
    Acquire::https::Proxy "http://:3128";

In the clouds, though, all packages come from cloud-based mirrors except the UA-related ones. There is currently a use case where a customer wants to set up a proxy to reach the UA repositories, but the proxy throttles bandwidth and it is impossible to pull all the packages through it.

Suggesting instead to set up the proxy in apt scoped to the repository names, such as:

    Acquire::http::Proxy:: "http://your.proxy.host/";
    Acquire::https::Proxy:: "http://your.proxy.host/";

** Affects: ubuntu-advantage-tools (Ubuntu)
     Importance: Undecided
         Status: New
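For illustration, a fully scoped version of the suggestion above might look like this (esm.ubuntu.com is assumed here as the UA repository host and should be verified against the machine's actual UA sources; "DIRECT" keeps all other apt traffic off the proxy):

    Acquire::http::Proxy::esm.ubuntu.com "http://your.proxy.host/";
    Acquire::https::Proxy::esm.ubuntu.com "http://your.proxy.host/";
    Acquire::http::Proxy "DIRECT";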
[Bug 1936438] Re: Required to have the "http" engine built into the package
Essentially this requires rebuilding the existing package with the "curl", "ssl", and "crypto" libraries as build dependencies. See the configure script:

    if test "$disable_http" != "yes"; then
      HTTP_LIBS="-lcurl -lssl -lcrypto"
      if compile_prog "" "$HTTP_LIBS" "curl-new-ssl"; then
        output_sym "CONFIG_HAVE_OPAQUE_HMAC_CTX"
        http="yes"
      elif mv $TMPC2 $TMPC && compile_prog "" "$HTTP_LIBS" "curl-old-ssl"; then
        http="yes"
      fi
    fi
    print_config "http engine" "$http"
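In packaging terms this would mean adding the corresponding -dev packages as build dependencies, roughly like the following debian/control excerpt (the package names are the standard Debian ones; the exact list in the fio packaging may differ):

    Build-Depends: debhelper (>= 11),
                   libcurl4-openssl-dev,
                   libssl-dev,
                   ...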
[Bug 1936438] [NEW] Required to have the "http" engine built into the package
Public bug reported: "http" engine is required to be present in order to be able to benchmark S3 and Swift functionality of the remote endpoint (Ceph RadosGW) ** Affects: fio (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1936438 Title: Required to have "http" engine built in to the package To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/fio/+bug/1936438/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1921453] Re: multi-zone replication doesn't work
"the system user is absent from the slave deployment" can you elaborate on this? In the procedure on docs.ceph.com there is no step to configure the system user on the secondary site - it is supposed to come through the replication once I pull the realm. And just to add once again - if I follow the procedure manually - I'm not observing this issue of improper authentication, however the metadata isn't being synced. I'm talking https://docs.ceph.com/en/latest/radosgw/multisite/ (which actually has a few mistakes) and https://dokk.org/documentation/ceph/v10.2.8/radosgw/multisite/ hat was a great manual. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1921453 Title: multi-zone replication doesn't work To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-radosgw/+bug/1921453/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1796292] Re: Tight timeout for bcache removal causes spurious failures
Is there an estimate for getting this package into bionic-updates, please?
[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Yes, it is the latest; the cluster is being re-deployed as part of a Bootstack handover.

Corey, the bug you point to fixes the ordering of ceph/udev. Here, however, udev cannot create any devices because they do not exist at the time udev runs: when the host boots and settles down, no PVs exist at all.
[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Steve, it is MAAS that creates these udev rules. We requested this feature so that we could use persistent names in subsequent service configuration (via templating). We could not go with /dev/sdX names as they may change after a reboot, and we cannot use WWN names as they are unique per node and do not allow us to use templates with FCB.
[Bug 1796292] Re: Tight timeout for bcache removal causes spurious failures
@jhobbs Here is the script that cleans up bcache devices on recommission: https://pastebin.ubuntu.com/p/6WCGvM4Q32/
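The pastebin contents are not reproduced here, but a cleanup of this kind typically stops the composed bcache device and wipes the superblocks from its members, along these lines (a sketch under that assumption, not the linked script; device names are placeholders):

    # stop the composed bcache device via sysfs
    echo 1 > /sys/block/bcache0/bcache/stop
    # wipe bcache superblocks from the backing and cache devices
    wipefs -a /dev/sdb
    wipefs -a /dev/nvme0n1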
[Bug 1828617] [NEW] Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Public bug reported:

Ubuntu 18.04.2 Ceph deployment. The Ceph OSD devices use LVM volumes on top of udev-created physical devices. LVM is supposed to create PVs from the devices using the links in the /dev/disk/by-dname/ folder that are created by udev. However, on reboot it sometimes happens (not always; it looks like a race condition) that the Ceph services cannot start, and pvdisplay does not show any volumes created. The /dev/disk/by-dname/ folder nevertheless has all the necessary devices created by the end of the boot process. The behaviour can be fixed manually by re-activating the LVM components, after which the services can be started:

    /sbin/lvm pvscan --cache --activate ay /dev/nvme0n1

** Affects: systemd (Ubuntu)
     Importance: Undecided
         Status: New
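Until the race itself is fixed, a crude workaround would be a oneshot unit that re-runs the scan late in boot, something like the sketch below (the unit name and ordering are assumptions; running pvscan with no device argument scans and activates everything):

    # /etc/systemd/system/lvm-rescan-workaround.service (hypothetical)
    [Unit]
    Description=Re-scan and activate LVM PVs after udev settles
    After=systemd-udev-settle.service
    Wants=systemd-udev-settle.service

    [Service]
    Type=oneshot
    ExecStart=/sbin/lvm pvscan --cache --activate ay

    [Install]
    WantedBy=multi-user.target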
[Bug 1316812] Re: ubuntu qemu-kvm package attempts to start a service and fails
Why do we have to have such a service running on the controller node? As far as I understand, it only needs to run on the compute nodes. Once virtualization has been switched off in the BIOS of the controller, the node fails to deploy.