Re: [ceph-users] Full Ratio

2018-01-24 Thread QR
ceph osd set-backfillfull-ratio 0.89



Original message
From: Karun Josy
To: Jean-Charles Lopez
Cc: ceph-users@lists.ceph.com
Sent: Thursday, January 25, 2018, 04:42
Subject: Re: [ceph-users] Full Ratio

Thank you!
Ceph version is 12.2
Also, can you let me know the format to set osd_backfill_full_ratio?
Is it "ceph osd set -backfillfull-ratio .89"?







Karun Josy
On Thu, Jan 25, 2018 at 1:29 AM, Jean-Charles Lopez  wrote:
Hi,

if you are using an older Ceph version, note that mon_osd_nearfull_ratio
and mon_osd_full_ratio must be set in the config file on the MON hosts first,
and then the MONs restarted one after the other.

If using a recent version, there are the commands ceph osd set-full-ratio and
ceph osd set-nearfull-ratio.
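
For example, on Luminous (12.2) the ratios can be changed at runtime and then
verified from the OSD map; a minimal sketch with placeholder values (ceph osd
set-backfillfull-ratio is the command the backfill question above asks about):

$ ceph osd set-nearfull-ratio 0.85
$ ceph osd set-backfillfull-ratio 0.89
$ ceph osd set-full-ratio 0.97
$ ceph osd dump | grep -i ratio    # shows full_ratio, backfillfull_ratio, nearfull_ratio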

Regards
JC

> On Jan 24, 2018, at 11:07, Karun Josy  wrote:
>
> Hi,
>
> I am trying to increase the full ratio of OSDs in a cluster.
> While adding a new node, one of the new disks got backfilled to more than 95%
> and the cluster froze. So I am trying to prevent that from happening again.
>
>
> I tried the pg set command, but it is not working:
> $ ceph pg set_nearfull_ratio 0.88
> Error ENOTSUP: this command is obsolete
>
> I had initially increased the full ratio on the OSDs using injectargs, but it
> didn't work: when a disk reached 95%, the cluster still reported the OSD as full.
>
> $ ceph tell osd.* injectargs '--mon_osd_full_ratio 0.97'
> osd.0: mon_osd_full_ratio = '0.97' (not observed, change may require restart)
> osd.1: mon_osd_full_ratio = '0.97' (not observed, change may require restart)
> 
> 
>
> How can I set full ratio to more than 95% ?
>
> Karun

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph df shows 100% used

2018-01-19 Thread QR


'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be 
used before the first OSD becomes full, and not the sum of all free space 
across a set of OSDs.
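
A quick way to spot the OSD that is limiting MAX AVAIL is to sort the per-OSD
utilization; a rough sketch, assuming the Luminous 'ceph osd df' layout where
%USE is the 8th column (the column position can differ between releases):

~# ceph osd df | sort -rnk8 | head    # fullest OSDs first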
Original message
From: Webert de Souza Lima
To: ceph-users
Sent: Friday, January 19, 2018, 20:20
Subject: Re: [ceph-users] ceph df shows 100% used

While it seemed to be solved yesterday, today the %USED has grown a lot again. See:
~# ceph osd df tree http://termbin.com/0zhk

~# ceph df detail
http://termbin.com/thox

It shows 94% USED while there is about 21 TB worth of data; size = 2 means
~42 TB of raw usage, but the OSDs in that root sum to ~70 TB of available space
(42 of 70 TB would be roughly 60% used, nowhere near 94%).

Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Thu, Jan 18, 2018 at 8:21 PM, Webert de Souza Lima  
wrote:
With the help of robbat2 and llua on the IRC channel I was able to solve this
situation by taking down the hosts that had only 2 OSDs.
After crush-reweighting OSDs 8 and 23 from host mia1-master-fe02 to 0, ceph df
showed the expected storage capacity usage (about 70%).
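
For reference, the crush reweighting described above would look roughly like
this (a sketch; osd.8 and osd.23 are the IDs named above, and the final command
simply re-checks usage once backfill settles):

~# ceph osd crush reweight osd.8 0
~# ceph osd crush reweight osd.23 0
~# ceph df detail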


With this in mind, those guys told me that it is due to the cluster being
uneven and unable to balance properly. It makes sense, and it worked.
But for me it is still very unexpected behaviour for Ceph to say that the
pools are 100% full and the available space is 0.
There were 3 hosts and repl. size = 2; even if the host with only 2 OSDs had
been full (it wasn't), Ceph could still have used space from the OSDs on the
other hosts.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Is narkive down? There have been no updates for a week (EOF)

2018-01-05 Thread QR
http://ceph-users.ceph.narkive.com/



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] The return code for creating bucket is wrong

2017-12-24 Thread QR
When creating a bucket that already exists with s3cmd, the return code is not
BucketAlreadyExists.

~# s3cmd ls
2017-12-15 06:37  s3://lyb        <-- the bucket already exists
2017-12-21 01:46  s3://myz
~# s3cmd mb s3://lyb
Bucket 's3://lyb/' created        <-- yet the creation succeeds
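
One way to confirm what radosgw actually returns on the wire is to inspect the
raw HTTP response; a sketch, assuming s3cmd's --debug output includes the
response status (the exact debug format varies between s3cmd versions):

~# s3cmd --debug mb s3://lyb 2>&1 | grep -i status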

Digging into the code, ERR_BUCKET_EXISTS is changed to zero in the function
send_response():

void RGWCreateBucket::execute()
{
  . . .
  op_ret = rgw_link_bucket(store, s->user->user_id, s->bucket,
                           info.creation_time, false);
  if (op_ret && !existed && op_ret != -EEXIST) {
    /* if it exists (or previously existed), don't remove it! */
    op_ret = rgw_unlink_bucket(store, s->user->user_id, s->bucket.tenant,
                               s->bucket.name);
    if (op_ret < 0) {
      ldout(s->cct, 0) << "WARNING: failed to unlink bucket: ret=" << op_ret
                       << dendl;
    }
  } else if (op_ret == -EEXIST || (op_ret == 0 && existed)) {
    op_ret = -ERR_BUCKET_EXISTS;   // <-- the return code is set to ERR_BUCKET_EXISTS
  }
  . . .
}

void RGWCreateBucket_ObjStore_S3::send_response()
{
  if (op_ret == -ERR_BUCKET_EXISTS)   // <-- ERR_BUCKET_EXISTS is changed to 0
    op_ret = 0;
  . . .
}

Does anyone know the reason that ERR_BUCKET_EXISTS is changed to zero here? Thanks.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com