Thanks Eugen
On 22/06/2020 10:27 pm, Eugen Block wrote:
Regarding the inactive PGs, how are your pools configured? Can you share
ceph osd pool ls detail
It could be an issue with min_size (is it also set to 3?).
pool 2 'ceph' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins
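With size 3 the usual recommendation is min_size 2: min_size equal to size
makes PGs go inactive as soon as a single OSD is down, while min_size 1 risks
data loss. It can be adjusted per pool, e.g. (pool name taken from the output
above):
ceph osd pool set ceph min_size 2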
For those who responded to me directly with some helpful tips, thank you!
I thought I'd answer my own question here, since it might be useful to others.
I actually did not find useful examples, but maybe I was not looking for the
right things...
First off, s3cmd kept giving me HTTP 405 errors.
Thanks a lot, I've run that and that was perfectly ok :)
On Mon, Jun 15, 2020 at 5:24 PM Matthew Vernon wrote:
> On 14/06/2020 17:07, Khodayar Doustar wrote:
>
> > Now I want to add the other two nodes as monitor and rgw.
> >
> > Can I just modify the ansible host file and re-run the site.yml?
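For reference, with ceph-ansible this is typically how it is done: add the
new nodes to the relevant inventory groups (hostnames below are placeholders):
[mons]
node1
node2
node3
[rgws]
node2
node3
and then re-run the playbook:
ansible-playbook -i hosts site.yml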
Ouch, ok.
-----Original Message-----
From: Michael Fladischer
Sent: 22 June 2020 15:57
To: St-Germain, Sylvain (SSC/SPC);
ceph-users@ceph.io
Subject: Re: [ceph-users] Re: OSD crash with assertion
Hi Sylvain,
Yeah, that's the best and safest way to do it. The pool I wrecked was
fortunately a dummy-pool.
The pool whose EC profile I want to change is ~4PiB, so moving all
files on it (the pool is used by CephFS) to a new pool might take
some time, and I was hoping for an in-place solution.
The way I did it: I created a new pool, copied the data onto it, and put the
new pool in place of the old one after deleting the former pool:
echo ""
echo " Create a new pool with erasure coding"
echo
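A minimal sketch of that pool-swap approach (pool names, PG counts and EC
parameters are placeholders; note that rados cppool does not preserve
snapshots, and that CephFS references data pools by id, so a plain rename
swap is not enough for CephFS pools):
# create an EC profile and a new erasure-coded pool
ceph osd erasure-code-profile set myprofile k=4 m=2
ceph osd pool create newpool 128 128 erasure myprofile
# copy all objects from the old pool
rados cppool oldpool newpool
# swap the names so clients keep using the original pool name
ceph osd pool rename oldpool oldpool.old
ceph osd pool rename newpool oldpool
# finally delete the former pool
ceph osd pool delete oldpool.old oldpool.old --yes-i-really-really-mean-it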
Turns out, I really messed up when changing the EC profile. Removing the
pool did not get rid of its PGs on the OSDs that have crashed.
To get my OSDs back up I used ceph-objectstore-tool to remove the leftover
PGs of the deleted pool, roughly like this (POOL_ID stands for the deleted
pool's id):
for PG in $(ceph-objectstore-tool --data-path $DIR --type=bluestore --op=list-pgs | grep "^${POOL_ID}\."); do
    ceph-objectstore-tool --data-path $DIR --type=bluestore --pgid "$PG" --op=remove --force
done
Use
ceph fs set fs_name down true
after this, all MDSes of fs fs_name will become standbys. Now you can cleanly
remove everything.
Wait for the fs to be shown as down in ceph status; the command above is
non-blocking, but the shutdown takes a long time. Try to disconnect all clients
first.
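Putting the commands from this thread together, the full removal sequence
looks roughly like this (the data/metadata pool names are placeholders, and
deleting pools also requires mon_allow_pool_delete=true):
ceph fs set fs1 down true
ceph status        # wait until fs1 is shown as down
ceph fs rm fs1 --yes-i-really-mean-it
ceph osd pool delete fs1_metadata fs1_metadata --yes-i-really-really-mean-it
ceph osd pool delete fs1_data fs1_data --yes-i-really-really-mean-it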
Best
Hi,
a lot of our OSDs crashed a few hours ago because of a failed assertion:
/build/ceph-15.2.3/src/osd/ECUtil.h: 34: FAILED ceph_assert(stripe_width % stripe_size == 0)
Full output here:
https://pastebin.com/D1SXzKsK
All OSDs are on bluestore and run 15.2.3.
I think I messed up when I changed the EC profile.
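For reference, the parameters of an EC profile (k, m, stripe_unit, etc.) can
be inspected with (profile name is a placeholder):
ceph osd erasure-code-profile get myprofile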
On Mon, Jun 22, 2020 at 7:29 AM Frank Schilder wrote:
>
> Use
>
> ceph fs set fs_name down true
>
> after this, all MDSes of fs fs_name will become standbys. Now you can cleanly
> remove everything.
>
> Wait for the fs to be shown as down in ceph status; the command above is
> non-blocking, but the shutdown takes a long time.
ceph fs set fs_name down true
That's much better!
Quoting Frank Schilder:
Use
ceph fs set fs_name down true
after this, all MDSes of fs fs_name will become standbys. Now you can
cleanly remove everything.
Wait for the fs to be shown as down in ceph status; the command
above is non-blocking, but the shutdown takes a long time.
Hi,
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it
and then delete the data and metadata pools with ceph osd pool delete
but in many threads I noticed that you must shut down the MDS before
running ceph fs rm.
Is that still the case?
Yes.
What happens in my
Hello,
I have a ceph cluster (nautilus 14.2.8) with 2 filesystems and 3 MDS daemons.
mds1 manages fs1
mds2 manages fs2
mds3 is standby
I want to completely remove fs1.
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it
and then delete the data and metadata pools with ceph osd pool delete
I have 3 ceph clusters on nautilus 14.2.9 (same configuration through
puppet). 2 of them are automatically sharding RGW buckets, one of them is
not.
When I run
radosgw-admin reshard stale-instances list
on the cluster where it does not work, I get:
reshard stale-instances list
Resharding disabled
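"Resharding disabled" suggests that dynamic resharding is switched off on
that cluster (Nautilus also disables it automatically in multisite
configurations). If that is the cause, it can be re-enabled with something
like:
ceph config set client.rgw rgw_dynamic_resharding true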
Your pg_num is fine, there's no reason to change it if you don't
encounter any issues. One could argue that your smaller OSDs have too
few PGs but the larger OSDs have reasonable values. I would probably
leave it as it is.
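The per-OSD PG counts can be checked in the PGS column of:
ceph osd df tree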
Regarding the inactive PGs, how are your pools configured? Can you share
ceph osd pool ls detail?
No, that is not the case actually. There is no sync happening for any of the
new buckets created in the primary zone. Later, when I manually restart
radosgw on the secondary site, it starts syncing.
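For diagnosing this, the replication state can be checked from the secondary
zone (the source zone name is a placeholder):
radosgw-admin sync status
radosgw-admin data sync status --source-zone=primary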
Hi all,
One question please. Does Ceph use the Linux multi-queue block I/O layer?
BR
Bobby
Hello Ceph users,
We are experiencing an issue with ceph 14.2.9 / RGW Beast frontend. We are
seeing this across our two separate clusters.
Over a few weeks, the qlen and qactive counters climb and do not return to
zero. At some point performance starts to degrade and we need to reboot the
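For context, qlen and qactive are RGW perf counters and can be watched via
the admin socket (the socket path/name is a placeholder):
ceph daemon /var/run/ceph/ceph-client.rgw.NAME.asok perf dump | grep -E 'qlen|qactive'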