[ceph-users] Re: Luminous and mimic: adding OSD can crash mon(s) and lead to loss of quorum

2019-08-23 Thread Paul Emmerich
On Fri, Aug 23, 2019 at 3:54 PM Florian Haas wrote:
>
> On 23/08/2019 13:34, Paul Emmerich wrote:
> > Is this reproducible with crushtool?
>
> Not for me.
>
> > ceph osd getcrushmap -o crushmap
> > crushtool -i crushmap --update-item XX 1.0 osd.XX --loc host
> > hostname-that-doesnt-exist-yet -o

[ceph-users] Re: Failed to get omap key when mirroring of image is enabled

2019-08-23 Thread Ajitha Robert
OK, thanks. What could be the reason for this issue, and how can I rectify it?

On Fri, 23 Aug 2019, 17:18 Jason Dillaman, wrote:
> On Fri, Aug 23, 2019 at 7:38 AM Ajitha Robert
> wrote:
> >
> > Sir,
> >
> > I have a running DR setup with ceph.. but i did the same for another two
> sites.. Its

[ceph-users] Re: ceph fs crashes on simple fio test

2019-08-23 Thread Robert LeBlanc
The WPQ scheduler may help your clients back off when things get busy. Put this in your ceph.conf and restart your OSDs:

osd op queue = wpq
osd op queue cut off = high

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Fri, Aug 23, 2019 at 5:03
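[Editor's sketch] A minimal example of how these settings are commonly placed in ceph.conf; the [osd] section is an assumption about your config layout, the option values are the ones Robert quotes above:

    [osd]
    # weighted priority queue scheduler, so client ops can back off under load
    osd op queue = wpq
    # send only the very highest-priority ops (e.g. peering) to the strict queue
    osd op queue cut off = high

As noted above, the OSDs need to be restarted for the queue settings to take effect.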

[ceph-users] Re: Luminous and mimic: adding OSD can crash mon(s) and lead to loss of quorum

2019-08-23 Thread Florian Haas
On 23/08/2019 13:34, Paul Emmerich wrote:
> Is this reproducible with crushtool?

Not for me.

> ceph osd getcrushmap -o crushmap
> crushtool -i crushmap --update-item XX 1.0 osd.XX --loc host
> hostname-that-doesnt-exist-yet -o crushmap.modified
> Replacing XX with the osd ID you tried to add.
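[Editor's sketch] Spelled out, the reproduction attempt Paul suggests looks roughly like this; XX stands in for the OSD id being added, the hostname is a placeholder for a not-yet-existing host bucket, and the final decompile step is an optional addition for inspecting the result:

    # dump the current CRUSH map in its compiled (binary) form
    ceph osd getcrushmap -o crushmap
    # try to add osd.XX with weight 1.0 under a host bucket that does not exist yet
    crushtool -i crushmap --update-item XX 1.0 osd.XX \
        --loc host hostname-that-doesnt-exist-yet -o crushmap.modified
    # optionally decompile the result to check what crushtool produced
    crushtool -d crushmap.modified -o crushmap.modified.txt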

[ceph-users] Re: Failed to get omap key when mirroring of image is enabled

2019-08-23 Thread Jason Dillaman
On Fri, Aug 23, 2019 at 7:38 AM Ajitha Robert wrote:
>
> Sir,
>
> I have a running DR setup with ceph.. but i did the same for another two
> sites.. Its actually direct L2 connectivity link between sites.. I m getting
> repeated error
>
> rbd::mirror::InstanceWatcher:
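[Editor's aside] A few standard diagnostics that can help narrow down an rbd-mirror error like this one; pool and image names are placeholders, and the systemd unit name may differ between setups:

    # summary of mirroring health for the pool, including per-image state
    rbd mirror pool status --verbose <pool-name>
    # detailed status for a single image
    rbd mirror image status <pool-name>/<image-name>
    # the rbd-mirror daemon log usually shows the underlying omap error
    journalctl -u ceph-rbd-mirror@<instance-id>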

[ceph-users] Luminous and mimic: adding OSD can crash mon(s) and lead to loss of quorum

2019-08-23 Thread Florian Haas
Hi everyone,

There are a couple of bug reports about this in Redmine, but only one (unanswered) mailing list message[1] that I could find. So I figured I'd raise the issue here again and copy the original reporters of the bugs (they are BCC'd, because in case they are no longer subscribed it

[ceph-users] Re: RBD, OpenStack Nova, libvirt, qemu-guest-agent, and FIFREEZE: is this working as intended?

2019-08-23 Thread Florian Haas
Just following up here to report back and close the loop:

On 21/08/2019 16:51, Jason Dillaman wrote:
> It just looks like this was an oversight from the OpenStack developers
> when Nova RBD "direct" ephemeral image snapshot support was added [1].
> I would open a bug ticket against Nova for the

[ceph-users] Re: Strange Ceph architect with SAN storages

2019-08-23 Thread Marc Roos
What about getting the disks out of the SAN enclosure and putting them in a SAS expander, then just hooking it up to an existing Ceph OSD server via a SAS/SATA JBOD adapter with an external SAS port? Something like this: https://www.raidmachine.com/products/6g-sas-direct-connect-jbods/