Re: [ceph-users] Qemu RBD image usage

2019-12-09 Thread Marc Roos
This should get you started with using rbd. cat > secret.xml <<EOF [...] client.rbd.vps secret [...] EOF virsh
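A minimal sketch of the libvirt secret workflow the truncated heredoc above appears to outline, assuming the cephx user client.rbd.vps from the preview; the file name and the keyring lookup are illustrative:

    # write the secret definition for the cephx user
    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.rbd.vps secret</name>
      </usage>
    </secret>
    EOF

    # register the secret with libvirt and note the UUID it prints
    virsh secret-define --file secret.xml

    # store the cephx key for that user in the secret (UUID from the previous step)
    virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.rbd.vps)"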

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-09 Thread Lars Täuber
Hi Anthony! Mon, 9 Dec 2019 17:11:12 -0800 Anthony D'Atri ==> ceph-users : > > How is that possible? I don't know how much more proof I need to present > > that there's a bug. > > FWIW, your pastes are hard to read with all the ? in them. Pasting > non-7-bit-ASCII? I don't see much "?"

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-09 Thread Nathan Fish
You can loop over the creation of fixed-size LVs on the SSD, then loop over creating OSDs assigned to each of them. That is what we did; it wasn't bad. On Mon, Dec 9, 2019 at 9:32 PM Philip Brown wrote: > > I have a bunch of hard drives I want to use as OSDs, with ceph nautilus. > >
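A minimal sketch of that approach, assuming the SSD is already a PV in a volume group called ssd-db; the LV size, the number of LVs and the HDD device names are illustrative:

    # carve fixed-size DB LVs out of the SSD
    for i in 0 1 2 3; do
        lvcreate -L 60G -n db-$i ssd-db
    done

    # create one OSD per HDD, pairing each with one of the SSD LVs
    i=0
    for dev in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
        ceph-volume lvm create --data $dev --block.db ssd-db/db-$i
        i=$((i+1))
    done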

[ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-09 Thread Philip Brown
I have a bunch of hard drives I want to use as OSDs, with ceph nautilus. ceph-volume lvm create makes straight raw dev usage relatively easy, since you can just do ceph-volume lvm create --data /dev/sdc or whatever. It's nice that it takes care of all the LVM jiggery-pokery automatically. but..

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-09 Thread Anthony D'Atri
> How is that possible? I don't know how much more proof I need to present that > there's a bug. FWIW, your pastes are hard to read with all the ? in them. Pasting non-7-bit-ASCII? > |I increased PGs and see no difference. From what pgp_num to what new value? Numbers that are not a power of
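The preview cuts off at "a power of [two]"; a hedged example of checking the current values and moving to a power-of-two count (pool name and target value are illustrative):

    ceph osd pool get <pool> pg_num
    ceph osd pool get <pool> pgp_num
    # bump both to the next power of two, e.g. 512
    ceph osd pool set <pool> pg_num 512
    ceph osd pool set <pool> pgp_num 512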

Re: [ceph-users] Annoying PGs not deep-scrubbed in time messages in Nautilus.

2019-12-09 Thread Robert LeBlanc
On Mon, Dec 9, 2019 at 11:58 AM Paul Emmerich wrote: > solved it: the warning is of course generated by ceph-mgr and not ceph-mon. > > So for my problem that means: should have injected the option in ceph-mgr. > That's why it obviously worked when setting it on the pool... > > The solution for

Re: [ceph-users] Annoying PGs not deep-scrubbed in time messages in Nautilus.

2019-12-09 Thread Paul Emmerich
solved it: the warning is of course generated by ceph-mgr and not ceph-mon. So for my problem that means: should have injected the option in ceph-mgr. That's why it obviously worked when setting it on the pool... The solution for you is to simply put the option under global and restart ceph-mgr
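A minimal sketch of that fix, assuming the cluster still carries this option in ceph.conf; the interval value and the mgr id are illustrative:

    # ceph.conf -- under [global] so ceph-mgr sees it, not only the OSDs
    [global]
    osd_deep_scrub_interval = 1209600    # e.g. 14 days, in seconds

    # restart the active mgr so it re-reads the option
    systemctl restart ceph-mgr@<id>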

Re: [ceph-users] Annoying PGs not deep-scrubbed in time messages in Nautilus.

2019-12-09 Thread Paul Emmerich
On Mon, Dec 9, 2019 at 5:17 PM Robert LeBlanc wrote: > I've increased the deep_scrub interval on the OSDs on our Nautilus cluster > with the following added to the [osd] section: > should have read the beginning of your email; you'll need to set the option on the mons as well because they

Re: [ceph-users] Annoying PGs not deep-scrubbed in time messages in Nautilus.

2019-12-09 Thread Paul Emmerich
Hi, nice coincidence that you mention that today; I've just debugged the exact same problem on a setup where deep_scrub_interval was increased. The solution was to set the deep_scrub_interval directly on all pools instead (which was better for this particular setup anyways): ceph osd pool set
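The command is cut off in the preview; a hedged completion of the per-pool form (pool name and interval in seconds are illustrative):

    ceph osd pool set <pool> deep_scrub_interval 1209600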

[ceph-users] Qemu RBD image usage

2019-12-09 Thread Liu, Changcheng
Hi all, I want to attach another RBD image into the Qemu VM to be used as a disk. However, it always failed. The VM definition xml is attached. Could anyone tell me where I went wrong? || nstcc3@nstcloudcc3:~$ sudo virsh start ubuntu_18_04_mysql --console || error: Failed to start
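For reference, a minimal sketch of a network-backed RBD <disk> element for the domain XML, assuming a pool named rbd, an image named vps-disk1, a cephx user rbd.vps and an already-defined libvirt secret; the names, the monitor address and the secret UUID are all illustrative:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='rbd.vps'>
        <secret type='ceph' uuid='PUT-SECRET-UUID-HERE'/>
      </auth>
      <source protocol='rbd' name='rbd/vps-disk1'>
        <host name='192.168.0.1' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>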

[ceph-users] Annoying PGs not deep-scrubbed in time messages in Nautilus.

2019-12-09 Thread Robert LeBlanc
I've increased the deep_scrub interval on the OSDs on our Nautilus cluster with the following added to the [osd] section: osd_deep_scrub_interval = 260 And I started seeing 1518 pgs not deep-scrubbed in time in ceph -s. So I added mon_warn_pg_not_deep_scrubbed_ratio = 1 since the default

Re: [ceph-users] Cluster in ERR status when rebalancing

2019-12-09 Thread Paul Emmerich
This is a (harmless) bug that existed since Mimic and will be fixed in 14.2.5 (I think?). The health error will clear up without any intervention. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München

Re: [ceph-users] Cluster in ERR status when rebalancing

2019-12-09 Thread Eugen Block
Hi, since we upgraded our cluster to Nautilus we also see those messages sometimes when it's rebalancing. There are several reports about this [1] [2], we didn't see it in Luminous. But eventually the rebalancing finished and the error message cleared, so I'd say there's (probably)

Re: [ceph-users] Cluster in ERR status when rebalancing

2019-12-09 Thread Simone Lazzaris
On Monday, 9 December 2019 at 11:46:34 CET, huang jun wrote: > what about the pool's backfill_full_ratio value? > That value, as far as I can see, is 0.9000, which is not reached by any OSD: root@s1:~# ceph osd df ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL
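A quick way to double-check the cluster-wide thresholds the question refers to, alongside ceph osd df:

    # shows full_ratio, backfillfull_ratio and nearfull_ratio for the whole cluster
    ceph osd dump | grep ratio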

Re: [ceph-users] Cluster in ERR status when rebalancing

2019-12-09 Thread huang jun
what about the pool's backfill_full_ratio value? Simone Lazzaris wrote on Mon, 9 Dec 2019 at 6:38 PM: > > Hi all, > > Long story short, I have a cluster of 26 OSDs in 3 nodes (8+9+9). One of the > disks is showing some read errors, so I've added an OSD in the faulty node > (OSD.26) and set the (re)weight of

[ceph-users] Cluster in ERR status when rebalancing

2019-12-09 Thread Simone Lazzaris
Hi all; Long story short, I have a cluster of 26 OSDs in 3 nodes (8+9+9). One of the disks is showing some read errors, so I've added an OSD in the faulty node (OSD.26) and set the (re)weight of the faulty OSD (OSD.12) to zero. The cluster is now rebalancing, which is fine, but I now have 2 PG
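One possible way to do the draining step the message describes (the OSD id is taken from the message; crush reweight is an alternative the message may have used instead):

    # stop sending new data to the failing OSD and let its PGs move elsewhere
    ceph osd reweight 12 0

    # watch the rebalance
    ceph -s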

Re: [ceph-users] Missing Ceph perf-counters in Ceph-Dashboard or Prometheus/InfluxDB...?

2019-12-09 Thread Stefan Kooman
Quoting Ernesto Puerta (epuer...@redhat.com): > The default behaviour is that only perf-counters with priority > PRIO_USEFUL (5) or higher are exposed (via `get_all_perf_counters` API > call) to ceph-mgr modules (including Dashboard, DiskPrediction or > Prometheus/InfluxDB/Telegraf exporters). >
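A hedged way to inspect a daemon's counters and, in recent releases, the priority field Ernesto refers to, straight from the admin socket; osd.0 is illustrative:

    # schema lists each counter with its metadata (including a priority field)
    ceph daemon osd.0 perf schema

    # dump shows the current values
    ceph daemon osd.0 perf dump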