Thanks, Sean!
BTW, is it a good idea to turn off scrub and deep-scrub on the bucket.index pool?
We have something like 5 million objects in it, and while it is
scrubbing RGW just stops working until the scrub finishes...
Or would setting the "idle" IO priority for scrub help?
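(For reference, I mean something like the following; this is just a sketch
on my side, using the cluster-wide flags, since I am not sure a per-pool
knob exists:)

    # stop all scrubbing / deep-scrubbing until the flags are unset again
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

    # or lower the scrub disk-thread IO priority (as far as I know this
    # only takes effect with the CFQ disk scheduler)
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'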
2016-06-12 16:07 GMT+03:00 Se
.rgw.buckets is all we have as EC. The remainder are replicated.
Thanks,
CJ
On Sun, Jun 12, 2016 at 4:12 AM, Василий Ангапов wrote:
> Hello!
>
> I have a question regarding RGW pool types: which pools can be Erasure
> Coded?
> More specifically, I have the following pools:
>
> .rgw.root (EC)
> ed-1
I don't know. That is why I'm asking here.
2016-06-12 6:36 GMT+03:00 Ken Peng :
> Hi,
>
> We have experienced a similar error: when writing to an RBD block device
> with multiple threads using fio, some OSDs got errors and went down.
> Are we talking about the same issue?
>
> 2016-06-11 0:37 GMT+08:00 Юрий Соколов :
>>
Hello!
I did not find any information on how to move an existing RGW bucket
index pool to a new one.
I want to move my bucket indices to SSD disks; do I have to shut down
the whole RGW or not? I would be very grateful for any tips.
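The approach I am considering is just a sketch from my reading of the
docs; the rule name and pool name below are mine:

    # assuming a CRUSH root named "ssd" already exists
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd crush rule dump            # note the new ruleset id

    # repoint the index pool; Ceph migrates the PGs in place, so in
    # theory RGW keeps running while the data moves
    ceph osd pool set .rgw.buckets.index crush_ruleset <id>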
Regards, Vasily.
Wade,
I'm having the same problem as you. We currently have 5+ million
objects in a bucket, and it is not even sharded, so we are seeing many
problems with that. Did you manage to test RGW with tons of files?
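In case it helps others: the option below is what I believe controls index
sharding, and only for newly created buckets, so it would not fix an
existing bucket like ours:

    # ceph.conf, in the RGW section (section name is illustrative)
    [client.radosgw.gateway]
    rgw override bucket index max shards = 32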
2016-05-24 2:45 GMT+03:00 Wade Holler :
> We (my customer) are trying to test at Jewel
Hi to myself =)
Just in case others run into the same:
#1: You will have to update parted from version 3.1 to 3.2 (for example,
simply take the Fedora package, which is newer, and replace it with that);
parted is the package responsible for partprobe (rough commands below).
#2: Software RAID will still not work, because of the GUID of the
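In shell terms, the parted part was roughly this (the exact package file
name from Fedora may differ):

    parted --version            # stock CentOS 7 ships 3.1
    rpm -Uvh parted-3.2-*.rpm   # newer build taken from Fedora
    partprobe /dev/sdb          # should now run cleanly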
Hi Brad,
thank you very much for your answer.
After digging for quite a while, I found the information about updating
the parted version on CentOS from 3.1 to 3.2.
In the end, this made partprobe run cleanly with ceph-deploy.
So I could leave the path of tryi
> Current cluster health:
>     cluster 537a3e12-95d8-48c3-9e82-91abbfdf62e0
>      health HEALTH_WARN
>             5 pgs degraded
>             8 pgs down
>             48 pgs incomplete
>             3 pgs recovering
>             1 pgs recovery_wait
>             76 pgs stale
>             5 pgs stuck d
> The GUID for a CEPH journal partition should be
> "45B0969E-9B03-4F30-B4C6-B4B80CEFF106"
> I haven't been able to find this info in the documentation on the ceph site
The GUID typecodes are listed in the /usr/sbin/ceph-disk script.
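For example (the device and partition number below are placeholders):

    # confirm the typecodes ceph-disk knows about
    grep -i '45b0969e' /usr/sbin/ceph-disk

    # tag partition 2 of /dev/sdb as a Ceph journal
    sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdb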
I had an issue a couple of years ago where a subset of OSDs in o
Hello!
I have a question regarding RGW pool types: which pools can be Erasure Coded?
More specifically, I have the following pools:
.rgw.root (EC)
ed-1.rgw.control (EC)
ed-1.rgw.data.root (EC)
ed-1.rgw.gc (EC)
ed-1.rgw.intent-log (EC)
ed-1.rgw.buckets.data (EC)
ed-1.rgw.meta (EC)
ed-1.rgw.users.keys (R
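For completeness, this is how I am listing the pool types (each pool line
shows "replicated" or "erasure"):

    ceph osd dump | grep ^pool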