[ceph-users] native linux distribution host running ceph container ?

2021-06-25 Thread marc boisis
Hi,

We have a containerized Ceph cluster in version 16.2.4 (15 hosts, 180 OSDs)
deployed with ceph-ansible.
Our hosts run CentOS 7 (kernel 3.10) with a ceph-daemon Docker image based on
CentOS 8.

I cannot find in the documentation which host distribution is recommended;
should it be the same as the Docker image (CentOS 8)?

Given that CentOS 8 support ends at the end of the year, which distribution
will Ceph use in its Docker images?

Thanks

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] iscsi and iser

2020-12-14 Thread Marc Boisis


Hi,

I would like to know whether gwcli supports iSER the way traditional
targetcli does, or whether this is planned for a future version of Ceph?

Thanks

Marc


[ceph-users] Re: rbd map on octopus from luminous client

2020-09-17 Thread Marc Boisis
it works
Thanks

> On 17 Sep 2020, at 15:17, Ilya Dryomov  wrote:
> 
> On Thu, Sep 17, 2020 at 1:56 PM Marc Boisis <mailto:marc.boi...@univ-lr.fr> wrote:
>> 
>> 
>> Hi,
>> 
>> I had to map an RBD image from an Ubuntu Trusty (luminous) client on an
>> Octopus cluster.
>> 
>> client dmesg :
>> feature set mismatch, my 4a042a42 < server's 14a042a42, missing 
>> 1
>> 
>> I downgraded my OSD tunables to bobtail, but it still doesn't work
> 
> Hi Marc,
> 
> Right, that's not enough in your case.
> 
>> 
>> ceph osd crush show-tunables
>> {
>>   "choose_local_tries": 0,
>>   "choose_local_fallback_tries": 0,
>>   "choose_total_tries": 50,
>>   "chooseleaf_descend_once": 1,
>>   "chooseleaf_vary_r": 0,
>>   "chooseleaf_stable": 0,
>>   "straw_calc_version": 1,
>>   "allowed_bucket_algs": 22,
>>   "profile": "bobtail",
>>   "optimal_tunables": 0,
>>   "legacy_tunables": 0,
>>   "minimum_required_version": "hammer",
>>   "require_feature_tunables": 1,
>>   "require_feature_tunables2": 1,
>>   "has_v2_rules": 0,
>>   "require_feature_tunables3": 0,
>>   "has_v3_rules": 0,
>>   "has_v4_buckets": 1,
> 
> This is the problem.  straw2 buckets require CRUSH_V4 feature bit.
> If you don't want to upgrade the kernel on the client, you would need
> to convert straw2 buckets to legacy straw buckets by manually editing
> your CRUSH map.
> 
> Thanks,
> 
>Ilya
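
The conversion Ilya describes can be sketched as follows. The cluster-side
commands are standard `ceph`/`crushtool` usage (shown as comments, since they
need a live cluster); the bucket excerpt below is hypothetical, just to
demonstrate the edit itself:

```shell
# Full flow on a real cluster (sketch, verify before running):
#   ceph osd getcrushmap -o crushmap.bin
#   crushtool -d crushmap.bin -o /tmp/crushmap.txt
#   ...edit /tmp/crushmap.txt as below...
#   crushtool -c /tmp/crushmap.txt -o crushmap.new
#   ceph osd setcrushmap -i crushmap.new   # expect data movement
#
# The edit itself, on a hypothetical decompiled bucket:
cat > /tmp/crushmap.txt <<'EOF'
host node1 {
	id -2
	alg straw2
	hash 0	# rjenkins1
}
EOF
# Change every straw2 bucket to a legacy straw bucket:
sed -i 's/alg straw2/alg straw/' /tmp/crushmap.txt
grep 'alg' /tmp/crushmap.txt
```

Note that legacy straw buckets compute placement slightly differently, so
expect some rebalancing after `setcrushmap`.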



[ceph-users] rbd map on octopus from luminous client

2020-09-17 Thread Marc Boisis


Hi,

I had to map an RBD image from an Ubuntu Trusty (luminous) client on an Octopus cluster.

client dmesg :
feature set mismatch, my 4a042a42 < server's 14a042a42, missing 
1

I downgraded my OSD tunables to bobtail, but it still doesn't work.

ceph osd crush show-tunables
{
   "choose_local_tries": 0,
   "choose_local_fallback_tries": 0,
   "choose_total_tries": 50,
   "chooseleaf_descend_once": 1,
   "chooseleaf_vary_r": 0,
   "chooseleaf_stable": 0,
   "straw_calc_version": 1,
   "allowed_bucket_algs": 22,
   "profile": "bobtail",
   "optimal_tunables": 0,
   "legacy_tunables": 0,
   "minimum_required_version": "hammer",
   "require_feature_tunables": 1,
   "require_feature_tunables2": 1,
   "has_v2_rules": 0,
   "require_feature_tunables3": 0,
   "has_v3_rules": 0,
   "has_v4_buckets": 1,
   "require_feature_tunables5": 0,
   "has_v5_rules": 0
}
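
As a side note, the dmesg masks are plain hex feature bitmaps, so the bits
the client lacks can be computed directly from the two values in the message:

```shell
# Values taken from the dmesg line above; the bits the server requires
# that the client does not have are server & ~client.
client=0x4a042a42
server=0x14a042a42
printf 'missing %x\n' $(( server & ~client ))   # -> missing 100000000
```

A single high bit, consistent with one missing CRUSH feature (the straw2
bucket requirement, per Ilya's reply in this thread).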

Thanks for your help.

Marc


[ceph-users] ceph-ansible replicated crush rule

2020-05-14 Thread Marc Boisis
Hello,

With ceph-ansible, the default replicated CRUSH rule is:
{
   "rule_id": 0,
   "rule_name": "replicated_rule",
   "ruleset": 0,
   "type": 1,
   "min_size": 1,
   "max_size": 10,
   "steps": [
   {
   "op": "take",
   "item": -1,
   "item_name": "default"
   },
   {
   "op": "chooseleaf_firstn",
   "num": 0,
   "type": "host"
   },
   {
   "op": "emit"
   }
   ]
   }

And I would like to have this:
{
   "rule_id": 0,
   "rule_name": "replicated_rule",
   "ruleset": 0,
   "type": 1,
   "min_size": 2,
   "max_size": 4,
   "steps": [
   {
   "op": "take",
   "item": -1,
   "item_name": "default"
   },
   {
   "op": "chooseleaf_firstn",
   "num": 0,
   "type": "rack"
   },
   {
   "op": "emit"
   }
   ]
   }

How can I do this within the ceph-ansible playbook?
I tried:
crush_rule_replicated:
 name: replicated_rule
 root: default
 ruleset: 0
 type: replicated
 min_size: 2
 max_size: 4
 step: 
   - take default
   - chooseleaf firstn 0 type rack
   - emit
 default: true

crush_rules:
 - "{{ crush_rule_replicated }}" 


The task at roles/ceph-osd/tasks/crush_rules.yml:32 tries to change it, but it fails:
stdout_lines:
   - ''
   - 
'{"rule_id":0,"rule_name":"replicated_rule","ruleset":0,"type":1,"min_size":1,"max_size":10,"steps":[{"op":"take","item":-1,"item_name":"default"},{"op":"chooseleaf_firstn","num":0,"type":"host"},{"op":"emit"}]}'

Does anyone have an idea?
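
For what it's worth, ceph-ansible's `crush_rules` variable only drives
`ceph osd crush rule create-replicated <name> <root> <failure-domain> [<class>]`
under the hood, so custom steps and `min_size`/`max_size` are not expressible
there; the failure domain is the one knob, and replica count is a pool
property (`size`/`min_size` on the pool), not a rule property. A hedged sketch
of the supported form (verify the exact keys against your ceph-ansible
version's group_vars):

```yaml
crush_rule_config: true
crush_rules:
  - name: replicated_rule_rack
    root: default
    type: rack          # failure domain; this is what you can customize
    default: true
```

Alternatively, create the rule by hand with
`ceph osd crush rule create-replicated replicated_rule_rack default rack`
and assign it to your pools.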
Regards

Marc