[ceph-users] PG_DAMAGED: Possible data damage: 4 pgs recovery_unfound

2022-08-17 Thread Eric Dold
Hi everyone,

It seems like I hit Bug #44286 ("Cache tiering shows unfound objects after
OSD reboots").

I stopped some OSDs to compact the RocksDB on them; noout was set during
this time.
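
For reference, the per-OSD procedure looked roughly like this (from memory; offline compaction with ceph-kvstore-tool, OSD id 42 and the default data path are just examples):

ceph osd set noout
systemctl stop ceph-osd@42
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-42 compact
systemctl start ceph-osd@42
ceph osd unset noout
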
Soon after that I got:

[ERR] PG_DAMAGED: Possible data damage: 4 pgs recovery_unfound
    pg 8.8 is active+recovery_unfound+degraded, acting [42,43,39], 1 unfound
    pg 8.14 is active+recovery_unfound+degraded, acting [43,40,42], 1 unfound
    pg 8.3b is active+recovery_unfound+degraded, acting [36,40,43], 1 unfound
    pg 8.50 is active+recovery_unfound+degraded, acting [39,38,36], 1 unfound

ceph pg 8.8 list_unfound
{
    "num_missing": 1,
    "num_unfound": 1,
    "objects": [
        {
            "oid": {
                "oid": "hit_set_8.8_archive_2022-08-12 12:12:06.515941Z_2022-08-12 12:18:16.186156Z",
                "key": "",
                "snapid": -2,
                "hash": 8,
                "max": 0,
                "pool": 8,
                "namespace": ".ceph-internal"
            },
            "need": "118438'7610615",
            "have": "0'0",
            "flags": "none",
            "clean_regions": "clean_offsets: [], clean_omap: 0, new_object: 1",
            "locations": []
        }
    ],
    "state": "NotRecovering",
    "available_might_have_unfound": true,
    "might_have_unfound": [],
    "more": false
}

The other missing objects look the same; the oids are all hit_set_*, so I
guess no actual data is affected. The question is how to get rid of the error.

This is a cache pool with 3x replication in front of a CephFS data pool with
EC 6+2. The affected objects are the hit set objects of the cache pool.
Everything seems to work so far, but the cluster stays in "HEALTH_ERR:
Possible data damage: 4 pgs recovery_unfound".
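
For what it's worth, they should be listable with something like this (the pool name is a placeholder):

rados -p <cache-pool> --namespace .ceph-internal ls | grep hit_set_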

I could not get the PGs to deep-scrub to find the missing objects. It also
did not work when I disabled scrubbing on all OSDs except the affected ones.
Repairing the PGs does not start either, since repair is a scrub operation as
well. They are just queued for deep scrub, but nothing happens.

I tried:
ceph pg deep-scrub 8.8
ceph pg repair 8.8
I also tried setting one of the primary OSDs out, but the affected PG stayed
on that OSD.
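(For pg 8.8 that was, if I remember the ID right, ceph osd out 42 and then ceph osd in 42 again.)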

What's the best course of action to get the cluster back to a healthy state?

Should I run

ceph pg 8.8 mark_unfound_lost revert
or
ceph pg 8.8 mark_unfound_lost delete

Or is there another way?
Would the cache pool still work after that?

Thanks,
Eric
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-18 Thread Eric Dold
Hi Patrick

Thanks a lot!

After setting
ceph fs compat cephfs add_incompat 7 "mds uses inline data"
the filesystem is working again.

So should I leave this setting as it is now, or do I have to remove it
again in a future update?

On Sat, Sep 18, 2021 at 2:28 AM Patrick Donnelly 
wrote:

> On Fri, Sep 17, 2021 at 6:57 PM Eric Dold  wrote:
> >
> > Hi Patrick
> >
> > Here's the output of ceph fs dump:
> >
> > e226256
> > enable_multiple, ever_enabled_multiple: 0,1
> > default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
> > writeable ranges,3=default file layouts on dirs,4=dir inode in separate
> > object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> > anchor table,9=file layout v2,10=snaprealm v2}
> > legacy client fscid: 2
> >
> > Filesystem 'cephfs' (2)
> > fs_name cephfs
> > epoch   226254
> > flags   12
> > created 2019-03-20T14:06:32.588328+0100
> > modified    2021-09-17T14:47:08.513192+0200
> > tableserver 0
> > root    0
> > session_timeout 60
> > session_autoclose   300
> > max_file_size   1099511627776
> > required_client_features    {}
> > last_failure    0
> > last_failure_osd_epoch  91941
> > compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> > ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds
> > uses versioned encoding,6=dirfrag is stored in omap,8=no anchor
> > table,9=file layout v2,10=snaprealm v2}
> > max_mds 1
> > in  0,1
> > up  {}
> > failed  0,1
>
> Run:
>
> ceph fs compat add_incompat cephfs 7 "mds uses inline data"
>
>
> It's interesting you're in the same situation (two ranks). Are you
> using cephadm? If not, were you not aware of the MDS upgrade procedure
> [1]?
>
> [1] https://docs.ceph.com/en/pacific/cephfs/upgrading/
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Eric Dold
Hi Patrick

Here's the output of ceph fs dump:

e226256
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 2

Filesystem 'cephfs' (2)
fs_name cephfs
epoch   226254
flags   12
created 2019-03-20T14:06:32.588328+0100
modified    2021-09-17T14:47:08.513192+0200
tableserver 0
root    0
session_timeout 60
session_autoclose   300
max_file_size   1099511627776
required_client_features    {}
last_failure    0
last_failure_osd_epoch  91941
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,8=no anchor
table,9=file layout v2,10=snaprealm v2}
max_mds 1
in  0,1
up  {}
failed  0,1
damaged
stopped
data_pools  [3]
metadata_pool   4
inline_data disabled
balancer
standby_count_wanted    1


Standby daemons:

[mds.ceph3{-1:4694171} state up:standby seq 1 addr [v2:
192.168.1.72:6800/2991378711,v1:192.168.1.72:6801/2991378711] compat
{c=[1],r=[1],i=[7ff]}]
dumped fsmap epoch 226256

On Fri, Sep 17, 2021 at 4:41 PM Patrick Donnelly 
wrote:

> On Fri, Sep 17, 2021 at 8:54 AM Eric Dold  wrote:
> >
> > Hi,
> >
> > I get the same after upgrading to 16.2.6. All mds daemons are standby.
> >
> > After setting
> > ceph fs set cephfs max_mds 1
> > ceph fs set cephfs allow_standby_replay false
> > the mds still wants to be standby.
> >
> > 2021-09-17T14:40:59.371+0200 7f810a58f600  0 ceph version 16.2.6
> > (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable), process
> > ceph-mds, pid 7113
> > 2021-09-17T14:40:59.371+0200 7f810a58f600  1 main not setting numa
> affinity
> > 2021-09-17T14:40:59.371+0200 7f810a58f600  0 pidfile_write: ignore empty
> > --pid-file
> > 2021-09-17T14:40:59.375+0200 7f8105cf1700  1 mds.ceph3 Updating MDS map
> to
> > version 226251 from mon.0
> > 2021-09-17T14:41:00.455+0200 7f8105cf1700  1 mds.ceph3 Updating MDS map
> to
> > version 226252 from mon.0
> > 2021-09-17T14:41:00.455+0200 7f8105cf1700  1 mds.ceph3 Monitors have
> > assigned me to become a standby.
> >
> > Setting add_incompat 1 also does not work:
> > # ceph fs compat cephfs add_incompat 1
> > Error EINVAL: adding a feature requires a feature string
> >
> > Any ideas?
>
> Please share `ceph fs dump`.
>
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Eric Dold
Hi,

I get the same after upgrading to 16.2.6. All mds daemons are standby.

After setting
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
the mds still wants to be standby.

2021-09-17T14:40:59.371+0200 7f810a58f600  0 ceph version 16.2.6
(ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable), process
ceph-mds, pid 7113
2021-09-17T14:40:59.371+0200 7f810a58f600  1 main not setting numa affinity
2021-09-17T14:40:59.371+0200 7f810a58f600  0 pidfile_write: ignore empty
--pid-file
2021-09-17T14:40:59.375+0200 7f8105cf1700  1 mds.ceph3 Updating MDS map to
version 226251 from mon.0
2021-09-17T14:41:00.455+0200 7f8105cf1700  1 mds.ceph3 Updating MDS map to
version 226252 from mon.0
2021-09-17T14:41:00.455+0200 7f8105cf1700  1 mds.ceph3 Monitors have
assigned me to become a standby.

Setting add_incompat 1 also does not work:
# ceph fs compat cephfs add_incompat 1
Error EINVAL: adding a feature requires a feature string

Any ideas?

On Fri, Sep 17, 2021 at 2:19 PM Joshua West  wrote:

> Thanks Patrick,
>
> Similar to Robert, when trying that, I simply receive "Error EINVAL:
> adding a feature requires a feature string" 10 times.
>
> I attempted to downgrade, but wasn't able to successfully get my mons
> to come back up, as they had quincy specific "mon data structure
> changes" or something like that.
> So, I've settled into "17.0.0-6762-g0ff2e281889" on my cluster.
>
> cephfs is still down all this time later. (Good thing this is a
> learning cluster not in production, haha)
>
> I began to feel more and more that the issue was related to a damaged
> cephfs, from a recent set of server malfunctions on a single node
> causing mayhem on the cluster.
> (I went away for a bit, came back and one node had been killing itself
> every hour for 2 weeks, as it went on strike from the heat in the
> garage where it was living.)
>
> Recently went through the cephfs disaster recovery steps per the docs,
> with breaks per the docs to check if things were working in between
> some steps:
> cephfs-journal-tool --rank=cephfs:0 journal inspect
> cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
> cephfs-journal-tool --rank=cephfs:0 journal reset
> ceph fs reset cephfs --yes-i-really-mean-it
> #Check if working
> cephfs-table-tool all reset session
> cephfs-table-tool all reset snap
> cephfs-table-tool all reset inode
> #Check if working
> cephfs-data-scan init
>
> for ID in `seq 511`; do cephfs-data-scan scan_extents --worker_n $ID
> --worker_m 512 cephfs_data & done
> for ID in `seq 511`; do cephfs-data-scan scan_inodes --worker_n $ID
> --worker_m 512 cephfs_data & done
> (If anyone here can update the docs: cephfs-data-scan scan_extents and
> scan_inodes could use a for loop with many workers. I had to abandon the
> 4-worker run from the docs after over a week, but running 512 workers
> finished in a day.)
>
> cephfs-data-scan scan_links
> cephfs-data-scan cleanup cephfs_data
>
> But mds still fail to come up, though the error has changed.
>
> ceph fs set cephfs max_mds 1
> ceph fs set cephfs allow_standby_replay false
>
> systemctl start ceph-mds@rog
> SEE ATTACHED LOGS
>
>
>
>
> Any guidance that can be offered would be greatly appreciated, as I've
> been without my cephfs data for almost 3 months now.
>
> Joshua
>
> On Fri, Sep 17, 2021 at 3:53 AM Robert Sander
>  wrote:
> >
> > Hi,
> >
> > I had to run
> >
> > ceph fs set cephfs max_mds 1
> > ceph fs set cephfs allow_standby_replay false
> >
> > and stop all MDS and NFS containers and start one after the other again
> > to clear this issue.
> >
> > Regards
> > --
> > Robert Sander
> > Heinlein Consulting GmbH
> > Schwedter Str. 8/9b, 10119 Berlin
> >
> > https://www.heinlein-support.de
> >
> > Tel: 030 / 405051-43
> > Fax: 030 / 405051-19
> >
> > Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] diskprediction_local fails with python3-sklearn 0.22.2

2020-06-04 Thread Eric Dold
Hello

The mgr module diskprediction_local fails under Ubuntu 20.04 (Focal) with
python3-sklearn version 0.22.2.
The Ceph version is 15.2.3.

When the module is enabled I get the following error:

  File "/usr/share/ceph/mgr/diskprediction_local/module.py", line 112, in
serve
self.predict_all_devices()
  File "/usr/share/ceph/mgr/diskprediction_local/module.py", line 279, in
predict_all_devices
result = self._predict_life_expentancy(devInfo['devid'])
  File "/usr/share/ceph/mgr/diskprediction_local/module.py", line 222, in
_predict_life_expentancy
predicted_result = obj_predictor.predict(predict_datas)
  File "/usr/share/ceph/mgr/diskprediction_local/predictor.py", line 457,
in predict
pred = clf.predict(ordered_data)
  File "/usr/lib/python3/dist-packages/sklearn/svm/_base.py", line 585, in
predict
if self.break_ties and self.decision_function_shape == 'ovo':
AttributeError: 'SVC' object has no attribute 'break_ties'
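
As a stopgap (not a fix), the module can be disabled again so the mgr stops hitting this:

ceph mgr module disable diskprediction_local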

Best Regards
Eric
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: verify_upmap number of buckets 5 exceeds desired 4

2019-09-25 Thread Eric Dold
After updating the CRUSH rule from

rule cephfs_ec {
id 1
type erasure
min_size 8
max_size 8
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choose indep 4 type host
step choose indep 2 type osd
step emit
}

to

rule cephfs_ec {
id 1
type erasure
min_size 8
max_size 12
#step set_chooseleaf_tries 6
step set_choose_tries 100
step take default
step choose indep 6 type host
step choose indep 2 type osd
step emit
}

upmap is not complaining anymore and is working with the six hosts.

It seems like CRUSH does not stop picking hosts after the first four with the
first rule, and complains when it gets the fifth host.
Is this a bug or intended behaviour?
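
For reference, I swapped the rule with the usual getcrushmap/crushtool round trip (roughly, from memory):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the cephfs_ec rule in crushmap.txt
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin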

Regards
Eric

On Tue, Sep 17, 2019 at 3:55 PM Eric Dold  wrote:

> With ceph 14.2.4 it's the same.
> The upmap balancer is not working.
>
> Any ideas?
>
> On Wed, Sep 11, 2019 at 11:32 AM Eric Dold  wrote:
>
>> Hello,
>>
>> I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently
>> upgraded this from four hosts.
>>
>> The cluster is running fine, but I get this in my logs:
>>
>> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953
>> 7f26023a6700 -1 verify_upmap number of buckets 5 exceeds desired 4
>> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953
>> 7f26023a6700 -1 verify_upmap number of buckets 5 exceeds desired 4
>> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953
>> 7f26023a6700 -1 verify_upmap number of buckets 5 exceeds desired 4
>>
>> It looks like the balancer is not doing any work.
>>
>> Here are some infos about the cluster:
>>
>> ceph1 ~ # ceph osd crush rule ls
>> replicated_rule
>> cephfs_ec
>> ceph1 ~ # ceph osd crush rule dump replicated_rule
>> {
>> "rule_id": 0,
>> "rule_name": "replicated_rule",
>> "ruleset": 0,
>> "type": 1,
>> "min_size": 1,
>> "max_size": 10,
>> "steps": [
>> {
>> "op": "take",
>> "item": -1,
>> "item_name": "default"
>> },
>> {
>> "op": "chooseleaf_firstn",
>> "num": 0,
>> "type": "host"
>> },
>> {
>> "op": "emit"
>> }
>> ]
>> }
>>
>> ceph1 ~ # ceph osd crush rule dump cephfs_ec
>> {
>> "rule_id": 1,
>> "rule_name": "cephfs_ec",
>> "ruleset": 1,
>> "type": 3,
>> "min_size": 8,
>> "max_size": 8,
>> "steps": [
>> {
>> "op": "set_chooseleaf_tries",
>> "num": 5
>> },
>> {
>> "op": "set_choose_tries",
>> "num": 100
>> },
>> {
>> "op": "take",
>> "item": -1,
>> "item_name": "default"
>> },
>> {
>> "op": "choose_indep",
>> "num": 4,
>> "type": "host"
>> },
>> {
>> "op": "choose_indep",
>> "num": 2,
>> "type": "osd"
>> },
>> {
>> "op": "emit"
>> }
>> ]
>> }
>>
>> ceph1 ~ # ceph osd erasure-code-profile ls
>> default
>> isa_62
>> ceph1 ~ # ceph osd erasure-code-profile get default
>> k=2
>> m=1
>> plugin=jerasure
>> technique=reed_sol_van
>> ceph1 ~ # ceph osd erasure-code-profile get isa_62
>> crush-device-class=
>> crush-failure-domain=osd
>> crush-root=default
>> k=6
>> m=2
>> plugin=isa
>> technique=reed_sol_van
>>
>> The idea with four hosts was that the EC profile should take two OSDs on
>> each host for the eight buckets.
>> Now with six hosts I guess two hosts will have two buckets on two OSDs
>> and four hosts will each have one bucket for a piece of data.
>>
>> Any idea how to resolve this?
>>
>> Regards
>> Eric
>>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: verify_upmap number of buckets 5 exceeds desired 4

2019-09-17 Thread Eric Dold
With ceph 14.2.4 it's the same.
The upmap balancer is not working.

Any ideas?

On Wed, Sep 11, 2019 at 11:32 AM Eric Dold  wrote:

> Hello,
>
> I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently
> upgraded this from four hosts.
>
> The cluster is running fine, but I get this in my logs:
>
> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
> -1 verify_upmap number of buckets 5 exceeds desired 4
> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
> -1 verify_upmap number of buckets 5 exceeds desired 4
> Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
> -1 verify_upmap number of buckets 5 exceeds desired 4
>
> It looks like the balancer is not doing any work.
>
> Here are some infos about the cluster:
>
> ceph1 ~ # ceph osd crush rule ls
> replicated_rule
> cephfs_ec
> ceph1 ~ # ceph osd crush rule dump replicated_rule
> {
> "rule_id": 0,
> "rule_name": "replicated_rule",
> "ruleset": 0,
> "type": 1,
> "min_size": 1,
> "max_size": 10,
> "steps": [
> {
> "op": "take",
> "item": -1,
> "item_name": "default"
> },
> {
> "op": "chooseleaf_firstn",
> "num": 0,
> "type": "host"
> },
> {
> "op": "emit"
> }
> ]
> }
>
> ceph1 ~ # ceph osd crush rule dump cephfs_ec
> {
> "rule_id": 1,
> "rule_name": "cephfs_ec",
> "ruleset": 1,
> "type": 3,
> "min_size": 8,
> "max_size": 8,
> "steps": [
> {
> "op": "set_chooseleaf_tries",
> "num": 5
> },
> {
> "op": "set_choose_tries",
> "num": 100
> },
> {
> "op": "take",
> "item": -1,
> "item_name": "default"
> },
> {
> "op": "choose_indep",
> "num": 4,
> "type": "host"
> },
> {
> "op": "choose_indep",
> "num": 2,
> "type": "osd"
> },
> {
> "op": "emit"
> }
> ]
> }
>
> ceph1 ~ # ceph osd erasure-code-profile ls
> default
> isa_62
> ceph1 ~ # ceph osd erasure-code-profile get default
> k=2
> m=1
> plugin=jerasure
> technique=reed_sol_van
> ceph1 ~ # ceph osd erasure-code-profile get isa_62
> crush-device-class=
> crush-failure-domain=osd
> crush-root=default
> k=6
> m=2
> plugin=isa
> technique=reed_sol_van
>
> The idea with four hosts was that the EC profile should take two OSDs on
> each host for the eight buckets.
> Now with six hosts I guess two hosts will have two buckets on two OSDs and
> four hosts will each have one bucket for a piece of data.
>
> Any idea how to resolve this?
>
> Regards
> Eric
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] verify_upmap number of buckets 5 exceeds desired 4

2019-09-11 Thread Eric Dold
Hello,

I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently
upgraded this from four hosts.

The cluster is running fine, but I get this in my logs:

Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
-1 verify_upmap number of buckets 5 exceeds desired 4
Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
-1 verify_upmap number of buckets 5 exceeds desired 4
Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
-1 verify_upmap number of buckets 5 exceeds desired 4

It looks like the balancer is not doing any work.

Here are some infos about the cluster:

ceph1 ~ # ceph osd crush rule ls
replicated_rule
cephfs_ec
ceph1 ~ # ceph osd crush rule dump replicated_rule
{
"rule_id": 0,
"rule_name": "replicated_rule",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}

ceph1 ~ # ceph osd crush rule dump cephfs_ec
{
"rule_id": 1,
"rule_name": "cephfs_ec",
"ruleset": 1,
"type": 3,
"min_size": 8,
"max_size": 8,
"steps": [
{
"op": "set_chooseleaf_tries",
"num": 5
},
{
"op": "set_choose_tries",
"num": 100
},
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "choose_indep",
"num": 4,
"type": "host"
},
{
"op": "choose_indep",
"num": 2,
"type": "osd"
},
{
"op": "emit"
}
]
}

ceph1 ~ # ceph osd erasure-code-profile ls
default
isa_62
ceph1 ~ # ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
ceph1 ~ # ceph osd erasure-code-profile get isa_62
crush-device-class=
crush-failure-domain=osd
crush-root=default
k=6
m=2
plugin=isa
technique=reed_sol_van

The idea with four hosts was that the EC profile should take two OSDs on
each host for the eight buckets.
Now with six hosts I guess two hosts will have two buckets on two OSDs and
four hosts will each have one bucket for a piece of data.

Any idea how to resolve this?

Regards
Eric
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io