I removed cephfs and its pools, created everything again using the default
crush ruleset, which is for the HDD, and now ceph health is OK.
I appreciate your help. Thank you very much.
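(For reference, the recreate step would look roughly like the sketch below; the commands and PG counts are copied from the creation commands quoted later in the thread, and replicated_ruleset is only an assumed name for the default HDD rule on a Jewel-era cluster. Leaving the last two arguments off also works, since a new pool then picks up the cluster's default ruleset.)

ceph osd pool create cephfs_metadata 128 128 replicated replicated_ruleset
ceph osd pool create cephfs_data 128 128 replicated replicated_ruleset
ceph fs new cephfs cephfs_metadata cephfs_data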
On Tue, Nov 15, 2016 at 11:48 AM Webert de Souza Lima wrote:
> Right, thank you.
Right, thank you.
On this particular cluster it would be OK to have everything on the HDD. No
big traffic here.
In order to do that, do I need to delete this cephfs, delete its pools and
create them again?
After that I assume I would run ceph osd pool set cephfs_metadata
crush_ruleset 0, as 0 is the default (HDD) ruleset.
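(For anyone searching later: crush_ruleset can also be changed on a live pool, so the in-place variant would be to run that against both CephFS pools and then verify it. This is a sketch using the pre-Luminous setting name; Luminous and later call it crush_rule. Existing PGs get remapped onto OSDs covered by the new ruleset.)

ceph osd pool set cephfs_metadata crush_ruleset 0
ceph osd pool set cephfs_data crush_ruleset 0
ceph osd pool get cephfs_metadata crush_ruleset
ceph osd pool get cephfs_data crush_ruleset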
Hi,

On 11/15/2016 01:55 PM, Webert de Souza Lima wrote:
> sure, as requested:
> *cephfs* was created using the following command:
> ceph osd pool create cephfs_metadata 128 128
> ceph osd pool create cephfs_data 128 128
> ceph fs new cephfs cephfs_metadata cephfs_data
> *ceph.conf:*
sure, as requested:
*cephfs* was created using the following command:
ceph osd pool create cephfs_metadata 128 128
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_metadata cephfs_data
*ceph.conf:*
https://paste.debian.net/895841/
*# ceph osd crush
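(The snippet above is cut off at the crush output. For anyone reproducing this, the usual Jewel-era commands for inspecting the CRUSH rules and the OSDs available to them are roughly these; comparing the root each rule selects against the output of ceph osd tree shows whether there are actually OSDs behind it.)

ceph osd crush rule ls
ceph osd crush rule dump
ceph osd tree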
Hi,

On 11/15/2016 01:27 PM, Webert de Souza Lima wrote:
> Not that I know of. On 5 other clusters it works just fine and the
> configuration is the same for all.
> On this cluster I was using only radosgw; cephfs was not in use, but it
> had already been created following our procedures.
Not that I know of. On 5 other clusters it works just fine and the
configuration is the same for all.
On this cluster I was using only radosgw; cephfs was not in use, but it
had already been created following our procedures.
This happened right after mounting it.
On Tue, Nov 15, 2016 at 10:24 AM
On Tue, Nov 15, 2016 at 12:14 PM, Webert de Souza Lima wrote:
> Hey John.
>
> Just to be sure; by "deleting the pools" you mean the cephfs_metadata and
> cephfs_metadata pools, right?
> Does it have any impact over radosgw? Thanks.
Yes, I meant the cephfs pools. It
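(For completeness: on a Jewel-era cluster, removing the filesystem and its pools looks roughly like the sketch below. The MDS has to be stopped or failed before ceph fs rm is accepted, pool deletion wants the pool name twice plus the safety flag, and newer releases additionally require mon_allow_pool_delete to be enabled. None of this touches the radosgw pools, which are separate.)

ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it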
I'm sorry, I meant *cephfs_data* and *cephfs_metadata*.
On Tue, Nov 15, 2016 at 10:15 AM Webert de Souza Lima wrote:
> Hey John.
>
> Just to be sure; by "deleting the pools" you mean the *cephfs_metadata*
> and *cephfs_metadata* pools, right?
> Does it have any impact over radosgw? Thanks.
Hey John.
Just to be sure; by "deleting the pools" you mean the *cephfs_metadata* and
*cephfs_metadata* pools, right?
Does it have any impact over radosgw? Thanks.
On Tue, Nov 15, 2016 at 10:10 AM John Spray wrote:
> On Tue, Nov 15, 2016 at 11:58 AM, Webert de Souza Lima wrote:
On Tue, Nov 15, 2016 at 11:58 AM, Webert de Souza Lima wrote:
> Hi,
>
> after running a cephfs on my ceph cluster I got stuck with the following
> health status:
>
> # ceph status
> cluster ac482f5b-dce7-410d-bcc9-7b8584bd58f5
> health HEALTH_WARN
> 128 pgs degraded
Also, I instructed all unclean pgs to repair and nothing happened. I did it
like this:
~# for pg in `ceph pg dump_stuck unclean 2>&1 | grep -Po '[0-9]+\.[A-Za-z0-9]+'`; do ceph pg repair $pg; done
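(A side note for readers: ceph pg repair targets inconsistent PGs found by scrubbing; PGs that are undersized/degraded because CRUSH cannot place enough replicas will not be helped by it. Querying one of the stuck PGs usually shows why it is stuck. A rough sketch, where 1.7f stands in for any PG id taken from the dump_stuck output:)

ceph pg dump_stuck unclean
ceph pg 1.7f query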
On Tue, Nov 15, 2016 at 9:58 AM Webert de Souza Lima wrote:
> Hi,
>
> after
Hi,
after running a cephfs on my ceph cluster I got stuck with the following
health status:
# ceph status
    cluster ac482f5b-dce7-410d-bcc9-7b8584bd58f5
     health HEALTH_WARN
            128 pgs degraded
            128 pgs stuck unclean
            128 pgs undersized
            recovery
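(As the rest of the thread suggests, all 128 PGs of a freshly created pool staying undersized usually means the pool's CRUSH ruleset cannot map enough OSDs, here presumably a ruleset pointing at a tier with no usable OSDs. A rough way to confirm, using the same Jewel-era command names as above, is to read the per-PG detail and the list of undersized PGs, then compare the pool's ruleset against the CRUSH tree:)

ceph health detail
ceph pg dump_stuck undersized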