On Fri, Aug 23, 2019 at 3:54 PM Florian Haas wrote:
>
> On 23/08/2019 13:34, Paul Emmerich wrote:
> > Is this reproducible with crushtool?
>
> Not for me.
>
> > ceph osd getcrushmap -o crushmap
> > crushtool -i crushmap --update-item XX 1.0 osd.XX --loc host
> > hostname-that-doesnt-exist-yet -o crushmap.modified
Ok, thanks. What could be the reason for this issue, and how can I rectify it?
On Fri, 23 Aug 2019, 17:18 Jason Dillaman, wrote:
> On Fri, Aug 23, 2019 at 7:38 AM Ajitha Robert wrote:
> >
> > Sir,
> >
> > I have a running DR setup with Ceph, but when I did the same for another
> > two sites (it is actually a direct L2 connectivity link between the sites),
> > I am getting a repeated error:
The WPQ scheduler may help your clients back off when things get busy.
Put this in your ceph.conf and restart your OSDs.
osd op queue = wpq
osd op queue cut off = high
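For example, a minimal sketch of how this might look (just a sketch; the [osd]
section placement and the admin-socket check are assumptions, and osd.0 is a
placeholder):

[osd]
    osd op queue = wpq
    osd op queue cut off = high

# after restarting an OSD, confirm the values are active via its admin socket
ceph daemon osd.0 config show | grep osd_op_queue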
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On 23/08/2019 13:34, Paul Emmerich wrote:
> Is this reproducible with crushtool?
Not for me.
> ceph osd getcrushmap -o crushmap
> crushtool -i crushmap --update-item XX 1.0 osd.XX --loc host
> hostname-that-doesnt-exist-yet -o crushmap.modified
> Replacing XX with the osd ID you tried to add.
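For reference, the full offline round-trip might look roughly like this
(osd.12 and the host name newhost are placeholders; the --test and setcrushmap
steps are optional extras, and you should only inject the map back if you
actually intend to apply the change):

# grab the current CRUSH map in binary form
ceph osd getcrushmap -o crushmap

# retry the failing item update offline against the saved map
crushtool -i crushmap --update-item 12 1.0 osd.12 \
    --loc host newhost -o crushmap.modified

# inspect the result, or run the built-in mapping test as a sanity check
crushtool -d crushmap.modified -o crushmap.txt
crushtool -i crushmap.modified --test --show-statistics

# only if you actually want to apply the modified map to the live cluster
ceph osd setcrushmap -i crushmap.modified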
On Fri, Aug 23, 2019 at 7:38 AM Ajitha Robert wrote:
>
> Sir,
>
> I have a running DR setup with Ceph, but when I did the same for another
> two sites (it is actually a direct L2 connectivity link between the sites),
> I am getting a repeated error:
>
> rbd::mirror::InstanceWatcher:
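For anyone debugging similar rbd-mirror errors, a rough sketch of the usual
first checks (the pool name rbd and the daemon instance name admin are
placeholders; run the status commands on both sites):

# mirroring mode and configured peers for the pool
rbd mirror pool info rbd

# per-image replication state and daemon health summary
rbd mirror pool status rbd --verbose

# the rbd-mirror daemon itself and its recent log output
systemctl status ceph-rbd-mirror@admin
journalctl -u ceph-rbd-mirror@admin --since "1 hour ago"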
Hi everyone,
there are a couple of bug reports about this in Redmine, but only one
(unanswered) mailing list message [1] that I could find. So I figured I'd
raise the issue here again and copy the original reporters of the bugs
(they are BCC'd, because in case they are no longer subscribed it
Just following up here to report back and close the loop:
On 21/08/2019 16:51, Jason Dillaman wrote:
> It just looks like this was an oversight from the OpenStack developers
> when Nova RBD "direct" ephemeral image snapshot support was added [1].
> I would open a bug ticket against Nova for the
What about getting the disks out of the SAN enclosure and putting them in
some SAS expander? And just hooking it up to an existing Ceph OSD server
via a SAS/SATA JBOD adapter with an external SAS port?
Something like this:
https://www.raidmachine.com/products/6g-sas-direct-connect-jbods/