Providing a clarification:
The change should be made to enable the plugin sub-package to be built:
https://git.launchpad.net/ubuntu/+source/nbdkit/tree/configure?h=ubuntu/mantic#n27332
I understand this will make the built package deviate from Debian, so
please recommend a proper path to address this.
Public bug reported:
Currently nbdkit includes a number of plugins but doesn't provide the
ability to use VDDK to remotely access VMware datastores.
Upstream libguestfs describes the plugin:
https://libguestfs.org/nbdkit-vddk-plugin.1.html
The codebase of the plugin is https://gitlab.com/nbdkit/v
Public bug reported:
Currently, if a proxy is configured for the UA client, it sets up a
global proxy for all of apt:
* To change what ubuntu-advantage-tools sets, run one of the following:
* Substitute "apt_https_proxy" for "apt_http_proxy" as necessary.
* sudo ua config set apt_http_p
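The quoted command is cut off above; in full, the two variants would look roughly like the following sketch (the proxy URL is a placeholder, not a value from the original report):

```shell
# Hedged sketch of the truncated commands above; the proxy URL is a
# placeholder. Substitute apt_https_proxy for apt_http_proxy as needed.
sudo ua config set apt_http_proxy=http://squid.internal:3128
sudo ua config set apt_https_proxy=http://squid.internal:3128
```
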
Essentially this requires rebuilding the existing package with the
"curl", "ssl" and "crypto" libraries as build dependencies. See the
configure script:
if test "$disable_http" != "yes"; then
  HTTP_LIBS="-lcurl -lssl -lcrypto"
  if compile_prog "" "$HTTP_LIBS" "curl-new-ssl"; then
    outp
Public bug reported:
"http" engine is required to be present in order to be able to benchmark
S3 and Swift functionality of the remote endpoint (Ceph RadosGW)
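With the engine built in, an S3 benchmark against RadosGW can be sketched as below; the host, bucket, object name, and credentials are placeholders, not values from the report:

```shell
# Hedged sketch: a write benchmark against an S3 endpoint using fio's
# http ioengine. All endpoint details and credentials are placeholders.
fio --name=s3-write --ioengine=http \
    --http_host=rgw.example.com --http_mode=s3 --https=off \
    --http_s3_keyid=ACCESSKEY --http_s3_key=SECRETKEY \
    --filename=/testbucket/fio-test-object \
    --rw=write --bs=4m --size=64m
```

Swapping --http_mode=s3 for swift (plus a Swift auth token) exercises the Swift side of the same gateway.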
** Affects: fio (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
"the system user is absent from the slave deployment"
Can you elaborate on this? In the procedure on docs.ceph.com there is no step
to configure the system user on the secondary site - it is supposed to come
through replication once I pull the realm.
And just to add once again - if I follow t
Is there an estimate for getting this package into bionic-updates, please?
--
https://bugs.launchpad.net/bugs/1796292
Title:
Tight timeout for bcache removal causes spurious failures
Yes, it is the latest - the cluster is being re-deployed as part of the
Bootstack handover.
Corey,
The bug you point to fixes the ordering of ceph/udev. Here, however, udev
can't create any devices, as they seemingly don't exist at the time udev runs
- when the host boots and settles down - there is
Steve,
It is MAAS that creates these udev rules. We requested this feature to be
implemented so that we could use persistent names in further service
configuration (using templating). We couldn't go with /dev/sdX names, as they
may change after a reboot, and we can't use WWN names as they
@jhobbs
Here is the script that cleans up bcache devices on recommission:
https://pastebin.ubuntu.com/p/6WCGvM4Q32/
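The linked script is not reproduced here; as a rough illustration of what such a cleanup has to do, stopping a bcache device and wiping its superblock, per the kernel's bcache documentation, looks something like this (device names are placeholders, and the commands are destructive):

```shell
# Illustrative sketch only (not the linked script): stop a bcache
# backing device via sysfs, then clear its superblock so the device
# can be recommissioned. bcache0 and /dev/sdb are placeholders.
echo 1 | sudo tee /sys/block/bcache0/bcache/stop   # stop the backing device
sudo wipefs -a /dev/sdb                            # wipe the bcache superblock
```
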
Public bug reported:
Ubuntu 18.04.2 Ceph deployment.
The Ceph OSD devices utilize LVM volumes pointing to udev-based physical devices.
The LVM module is supposed to create PVs from the devices using the links in
the /dev/disk/by-dname/ folder that are created by udev.
However, on reboot it happens (not always,
Why do we have to have such a service running on the controller node? As
far as I understand, we need it to run on the compute nodes only. Once
virtualization has been switched off in the BIOS of the controller, the
node will fail to deploy.