I do not think there is any remaining work for bcache-tools to do, and this appears to be fixed, so I am marking the bcache-tools task as Invalid. If my assumption is incorrect, please feel free to respond to this bug with any work that remains open. Thanks!
** Changed in: bcache-tools (Ubuntu)
Status: Incomplete => Invalid
** Changed in: charm-ceph-osd
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1812925
Title:
  No OSDs has been initialized in random unit with "No block devices detected"
** Changed in: charm-ceph-osd
Milestone: None => 19.04
Reviewed: https://review.openstack.org/632738
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=8655dbd9e34a31b11b276e34118e190f6bf8467e
Submitter: Zuul
Branch: master
commit 8655dbd9e34a31b11b276e34118e190f6bf8467e
Author: Liam Young
Date: Wed Jan 23 14:46:26
** Changed in: charm-ceph-osd
Status: Incomplete => In Progress
For information: I have redeployed the same bundle (the only difference being that the Livepatch references were removed) on similar hardware, and there is no reproducer here: all symlinks are in place.
I've recreated a VM instance that uses the curtin config attached to this bug.
I can recreate the additional registrations and the delete/create of the dname symlinks to the bcache devices by simply running 'udevadm trigger' in a loop and watching the dname directory with inotifywait (inotifywait -mr ...).
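For reference, a minimal sketch of that reproducer, assuming a plain retrigger loop in one shell and the inotifywait monitor in another (the exact flags and loop body are my assumption, not the original commands):
# terminal 1: watch for create/delete events on the dname symlinks
$ inotifywait -m -r /dev/disk/by-dname/
# terminal 2: force repeated coldplug-style events on the block subsystem
$ while true; do sudo udevadm trigger --subsystem-match=block --action=add; sleep 1; done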
On Tue, Jan 29, 2019 at 5:50 PM Ryan Harper <1812...@bugs.launchpad.net>
wrote:
> I'm not 100% sure at which point ceph/charms make use of the dname
> symlinks, but ceph appears to be up much earlier than this.
>
> $ journalctl -o short-monotonic -D fd6c366f929d47a195590b3c6dc9df5a -u
> ceph*
>
After looking at the journal I can conclude a few things.
1) the bcache devices are found, the rules run, and change events are emitted fairly early during boot:
[8.873909] ubuntu kernel: bcache: bch_journal_replay() journal replay done, 673 keys in 35 entries, seq 724
[8.875099] ubuntu ...
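(Not part of the original comment, but one way to confirm those early registration/change events from the boot journal is something like:)
$ journalctl -o short-monotonic -b -k | grep -i bcache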
I spent time looking at a udev debug log and I can confirm that during a cold plug of the storage subsystem the bcache symlinks (kernel: /dev/bcache/by-{uuid,label}; dname: /dev/disk/by-dname/) will be removed due to how the bcache driver works (it binds a backing and a cache device) and when ...
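For anyone who wants to capture that kind of udev debug log, a rough sketch of doing so during a manual coldplug (these commands are my assumption, not the ones used for the analysis above):
$ sudo udevadm control --log-priority=debug
$ sudo udevadm trigger --subsystem-match=block --action=add    # simulate the coldplug
$ sudo udevadm settle
$ journalctl -b -u systemd-udevd > udev-debug.log
$ sudo udevadm control --log-priority=info                     # restore normal verbosity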
AIUI there is no action for bcache-tools right now so I'm marking as
Incomplete.
** Changed in: bcache-tools (Ubuntu)
Status: New => Incomplete
Workaround for the time being, until the actual root cause is identified:
juju run --application ceph-osd "sudo udevadm trigger --subsystem-match=block --action=add"
before unsealing and authorising the vault charm. The charm task is marked incomplete pending diagnosis of what's actually mangling the symlinks.
I have just run 'udevadm trigger' before unsealing Vault, and it looks like the links come back:
ubuntu@ln-sv-infr01:~$ juju run --application ceph-osd 'sudo udevadm trigger --subsystem-match=block --action=add'
- Stdout: ""
Third redeployment:
$ juju status | pastebinit
http://paste.ubuntu.com/p/Y2Sdv4QJFm/
$ juju run --application ceph-osd 'ls -lah /dev/disk/by-dname | wc -l' | pastebinit
http://paste.ubuntu.com/p/7CxBPzrqhY/
It's failing somewhere in between:
https://pastebin.canonical.com/p/bBSkf7mh9f/ (take a look at ln-sv-ostk05, it's already missing some bcache symlinks); however, they were there earlier (see the previous comment).
I just captured this once more after starting an openstack bundle deployment: https://pastebin.canonical.com/p/dhc8tBqt8Q/
So, the workaround of inserting 'udevadm settle/trigger' into cloudinit-userdata will not help here, as it is executed at initial node deployment, when the nodes already have all the symlinks in place.
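For context, the attempted workaround looks roughly like the following (the YAML and file name are my assumption of the approach being described; as noted above, it runs too early to help):
$ cat > retrigger.yaml <<'EOF'
postruncmd:
  - ["udevadm", "trigger", "--subsystem-match=block", "--action=add"]
  - ["udevadm", "settle"]
EOF
$ juju model-config cloudinit-userdata="$(cat retrigger.yaml)"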
Just did two deployments of a simple single charm in a row to rule out influence from the OpenStack charms:
Pass 1: https://paste.ubuntu.com/p/qCt7WhHbzv/, http://paste.ubuntu.com/p/cxQ9qgY7gZ/
Pass 2: http://paste.ubuntu.com/p/Yn7ZYVSNzH/, http://paste.ubuntu.com/p/Hp5zcdGZKH/
After the systems have been idle for ~2h, it currently looks like they have all the symlinks in place:
$ juju run --application ceph-osd 'ls -lah /dev/disk/by-dname | wc -l' | pastebinit
http://paste.ubuntu.com/p/rDH57vGjKP/
$ ls -l /dev/disk/by-dname/
total 0
lrwxrwxrwx 1 root root 13 Jan 24 15:16 ...
@raharper the /dev/disk/by-dname/* entries are created by the /etc/udev/rules.d/bcache*.rules written by maas, whereas the /dev/bcache/by-* entries are created by /lib/udev/rules.d/69-bcache.rules from bcache-tools. The deployment in this bug is not using the /dev/bcache/by-* links and is only using the /dev/disk/by-dname/ ones.
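A quick way to compare the two rule sources and the links they create on an affected node (just a suggestion, not from the comment above):
$ cat /etc/udev/rules.d/bcache*.rules     # by-dname rules written by maas
$ cat /lib/udev/rules.d/69-bcache.rules   # by-uuid/by-label rules from bcache-tools
$ ls -l /dev/disk/by-dname/ /dev/bcache/by-uuid/ /dev/bcache/by-label/ 2>/dev/null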
Looking at the syslog and kern.log, the kernel has emitted the change events which trigger the bcache rules, for all 12 devices, each time. So what remains to be understood is whether udevd ran the hook (69-bcache.rules), and I see no reason it wouldn't.
% grep kernel.*register_bcache ...
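(My assumption is that the truncated grep above was checking for the kernel's register_bcache messages in the logs, e.g.:)
% grep 'register_bcache' /var/log/kern.log /var/log/syslog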
** Changed in: charm-ceph-osd
Importance: Critical => High
** Changed in: charm-ceph-osd
Status: In Progress => Incomplete
Re-triggering udev outside of the charm creates the missing by-dname
entries:
sudo udevadm trigger --subsystem-match=block --action=add
Raising a bug task for bcache-tools
** Also affects: bcache-tools (Ubuntu)
Importance: Undecided
Status: New
An obvious workaround would be to trigger a re-scan before the ceph-osd charm tries to osdize the bcache devices. I would consider this a workaround only, as the by-dname devices really should be present before the charm executes.