Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
Thanks Casey. This helped me understand the purpose of this pool. I
trimmed the usage logs, which significantly reduced the number of keys
stored in that index, and I may even disable the usage log entirely as I
don't believe we use it for anything.
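
For anyone else chasing the same large-omap warning, the check-and-trim is
roughly this (a sketch; the date range is a placeholder, adjust it to
whatever you actually want to keep):

```
# Summarize what's in the usage log without dumping every entry
radosgw-admin usage show --show-log-entries=false

# Trim entries inside a date range; widen the range to discard everything
radosgw-admin usage trim --start-date=2018-01-01 --end-date=2019-05-01
```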

On Fri, May 24, 2019 at 3:51 PM Casey Bodley  wrote:
>
>
> On 5/24/19 1:15 PM, shubjero wrote:
> > Thanks for chiming in Konstantin!
> >
> > Wouldn't setting this value to 0 disable the sharding?
> >
> > Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
> >
> > rgw override bucket index max shards
> > Description:Represents the number of shards for the bucket index
> > object, a value of zero indicates there is no sharding. It is not
> > recommended to set a value too large (e.g. thousand) as it increases
> > the cost for bucket listing. This variable should be set in the client
> > or global sections so that it is automatically applied to
> > radosgw-admin commands.
> > Type:Integer
> > Default:0
> >
> > rgw dynamic resharding is enabled:
> > ceph daemon mon.controller1 config show | grep rgw_dynamic_resharding
> >  "rgw_dynamic_resharding": "true",
> >
> > I'd like to know more about the purpose of our .usage pool and the
> > 'usage_log_pool' in general, as I can't find much about this component
> > of Ceph.
>
> You can find docs for the usage log at
> http://docs.ceph.com/docs/master/radosgw/admin/#usage
>
> Unless trimmed, the usage log will continue to grow. If you aren't using
> it, I'd recommend turning it off and trimming it all.
>
> >
> > On Thu, May 23, 2019 at 11:24 PM Konstantin Shalygin  wrote:
> >> in the config.
> >> ```"rgw_override_bucket_index_max_shards": "8",```. Should this be
> >> increased?
> >>
> >> Should be decreased to default `0`, I think.
> >>
> >> Modern Ceph releases resolve large omaps automatically via bucket dynamic 
> >> resharding:
> >>
> >> ```
> >>
> >> {
> >>  "option": {
> >>  "name": "rgw_dynamic_resharding",
> >>  "type": "bool",
> >>  "level": "basic",
> >>  "desc": "Enable dynamic resharding",
> >>  "long_desc": "If true, RGW will dynamicall increase the number of 
> >> shards in buckets that have a high number of objects per shard.",
> >>  "default": true,
> >>  "daemon_default": "",
> >>  "tags": [],
> >>  "services": [
> >>  "rgw"
> >>  ],
> >>  "see_also": [
> >>  "rgw_max_objs_per_shard"
> >>  ],
> >>  "min": "",
> >>  "max": ""
> >>  }
> >> }
> >> ```
> >>
> >> ```
> >>
> >> {
> >>  "option": {
> >>  "name": "rgw_max_objs_per_shard",
> >>  "type": "int64_t",
> >>  "level": "basic",
> >>  "desc": "Max objects per shard for dynamic resharding",
> >>  "long_desc": "This is the max number of objects per bucket index 
> >> shard that RGW will allow with dynamic resharding. RGW will trigger an 
> >> automatic reshard operation on the bucket if it exceeds this number.",
> >>  "default": 10,
> >>  "daemon_default": "",
> >>  "tags": [],
> >>  "services": [
> >>  "rgw"
> >>  ],
> >>  "see_also": [
> >>  "rgw_dynamic_resharding"
> >>  ],
> >>  "min": "",
> >>  "max": ""
> >>  }
> >> }
> >> ```
> >>
> >>
> >> So when a bucket exceeds 100k objects per shard, RGW will reshard it
> >> automatically.
> >>
> >> Some old buckets may not be sharded, like your ancient ones from Giant. You
> >> can check fill status like this: `radosgw-admin bucket limit check | jq
> >> '.[]'`. If some buckets are not resharded you can shard them by hand via
> >> `radosgw-admin reshard add ...`. Also, there may be some stale reshard
> >> instances (fixed in ~12.2.11); you can check them via `radosgw-admin
> >> reshard stale-instances list` and then remove them via `reshard
> >> stale-instances rm`.
> >>
> >>
> >>
> >> k
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread Casey Bodley



On 5/24/19 1:15 PM, shubjero wrote:

Thanks for chiming in Konstantin!

Wouldn't setting this value to 0 disable the sharding?

Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/

rgw override bucket index max shards
Description:Represents the number of shards for the bucket index
object, a value of zero indicates there is no sharding. It is not
recommended to set a value too large (e.g. thousand) as it increases
the cost for bucket listing. This variable should be set in the client
or global sections so that it is automatically applied to
radosgw-admin commands.
Type:Integer
Default:0

rgw dynamic resharding is enabled:
ceph daemon mon.controller1 config show | grep rgw_dynamic_resharding
 "rgw_dynamic_resharding": "true",

I'd like to know more about the purpose of our .usage pool and the
'usage_log_pool' in general, as I can't find much about this component
of Ceph.


You can find docs for the usage log at 
http://docs.ceph.com/docs/master/radosgw/admin/#usage


Unless trimmed, the usage log will continue to grow. If you aren't using 
it, I'd recommend turning it off and trimming it all.
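
If you do turn it off, it's a one-line ceph.conf change on the gateways plus
a radosgw restart; a minimal sketch (the section name below is just an
example, use whatever your rgw instances are called):

```
# ceph.conf on each RGW host -- section name is an example
[client.rgw.gateway1]
    rgw enable usage log = false
```

After the restart no new entries are written, and the existing ones can be
removed with `radosgw-admin usage trim`.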




On Thu, May 23, 2019 at 11:24 PM Konstantin Shalygin  wrote:

in the config.
```"rgw_override_bucket_index_max_shards": "8",```. Should this be
increased?

Should be decreased to default `0`, I think.

Modern Ceph releases resolve large omaps automatically via bucket dynamic 
resharding:

```

{
 "option": {
 "name": "rgw_dynamic_resharding",
 "type": "bool",
 "level": "basic",
 "desc": "Enable dynamic resharding",
 "long_desc": "If true, RGW will dynamicall increase the number of shards in 
buckets that have a high number of objects per shard.",
 "default": true,
 "daemon_default": "",
 "tags": [],
 "services": [
 "rgw"
 ],
 "see_also": [
 "rgw_max_objs_per_shard"
 ],
 "min": "",
 "max": ""
 }
}
```

```

{
 "option": {
 "name": "rgw_max_objs_per_shard",
 "type": "int64_t",
 "level": "basic",
 "desc": "Max objects per shard for dynamic resharding",
 "long_desc": "This is the max number of objects per bucket index shard that 
RGW will allow with dynamic resharding. RGW will trigger an automatic reshard operation on the 
bucket if it exceeds this number.",
 "default": 10,
 "daemon_default": "",
 "tags": [],
 "services": [
 "rgw"
 ],
 "see_also": [
 "rgw_dynamic_resharding"
 ],
 "min": "",
 "max": ""
 }
}
```


So when a bucket exceeds 100k objects per shard, RGW will reshard it
automatically.

Some old buckets may not be sharded, like your ancient ones from Giant. You can
check fill status like this: `radosgw-admin bucket limit check | jq '.[]'`. If
some buckets are not resharded you can shard them by hand via `radosgw-admin
reshard add ...`. Also, there may be some stale reshard instances (fixed in
~12.2.11); you can check them via `radosgw-admin reshard stale-instances list`
and then remove them via `reshard stale-instances rm`.



k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread shubjero
Thanks for chiming in Konstantin!

Wouldn't setting this value to 0 disable the sharding?

Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/

rgw override bucket index max shards
Description:Represents the number of shards for the bucket index
object, a value of zero indicates there is no sharding. It is not
recommended to set a value too large (e.g. thousand) as it increases
the cost for bucket listing. This variable should be set in the client
or global sections so that it is automatically applied to
radosgw-admin commands.
Type:Integer
Default:0

rgw dynamic resharding is enabled:
ceph daemon mon.controller1 config show | grep rgw_dynamic_resharding
"rgw_dynamic_resharding": "true",

I'd like to know more about the purpose of our .usage pool and the
'usage_log_pool' in general, as I can't find much about this component
of Ceph.


On Thu, May 23, 2019 at 11:24 PM Konstantin Shalygin  wrote:
>
> in the config.
> ```"rgw_override_bucket_index_max_shards": "8",```. Should this be
> increased?
>
> Should be decreased to default `0`, I think.
>
> Modern Ceph releases resolve large omaps automatically via bucket dynamic 
> resharding:
>
> ```
>
> {
> "option": {
> "name": "rgw_dynamic_resharding",
> "type": "bool",
> "level": "basic",
> "desc": "Enable dynamic resharding",
> "long_desc": "If true, RGW will dynamicall increase the number of 
> shards in buckets that have a high number of objects per shard.",
> "default": true,
> "daemon_default": "",
> "tags": [],
> "services": [
> "rgw"
> ],
> "see_also": [
> "rgw_max_objs_per_shard"
> ],
> "min": "",
> "max": ""
> }
> }
> ```
>
> ```
>
> {
> "option": {
> "name": "rgw_max_objs_per_shard",
> "type": "int64_t",
> "level": "basic",
> "desc": "Max objects per shard for dynamic resharding",
> "long_desc": "This is the max number of objects per bucket index 
> shard that RGW will allow with dynamic resharding. RGW will trigger an 
> automatic reshard operation on the bucket if it exceeds this number.",
> "default": 10,
> "daemon_default": "",
> "tags": [],
> "services": [
> "rgw"
> ],
> "see_also": [
> "rgw_dynamic_resharding"
> ],
> "min": "",
> "max": ""
> }
> }
> ```
>
>
> So when a bucket exceeds 100k objects per shard, RGW will reshard it
> automatically.
>
> Some old buckets may not be sharded, like your ancient ones from Giant. You can
> check fill status like this: `radosgw-admin bucket limit check | jq '.[]'`.
> If some buckets are not resharded you can shard them by hand via `radosgw-admin
> reshard add ...`. Also, there may be some stale reshard instances (fixed in
> ~12.2.11); you can check them via `radosgw-admin reshard stale-instances list`
> and then remove them via `reshard stale-instances rm`.
>
>
>
> k
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] large omap object in usage_log_pool

2019-05-23 Thread Konstantin Shalygin

in the config.
```"rgw_override_bucket_index_max_shards": "8",```. Should this be
increased?


Should be decreased to default `0`, I think.
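
Something like this in ceph.conf (or just delete the override line), then
restart the gateways; a sketch, the section name is an example:

```
# ceph.conf -- back to the default of 0 (no pre-sharding of new buckets)
[client.rgw.gateway1]
    rgw override bucket index max shards = 0
```

This only affects buckets created afterwards; existing buckets keep their
current shard count until they are resharded.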

Modern Ceph releases resolve large omaps automatically via bucket 
dynamic resharding:


```

{
    "option": {
    "name": "rgw_dynamic_resharding",
    "type": "bool",
    "level": "basic",
    "desc": "Enable dynamic resharding",
    "long_desc": "If true, RGW will dynamicall increase the number 
of shards in buckets that have a high number of objects per shard.",

    "default": true,
    "daemon_default": "",
    "tags": [],
    "services": [
    "rgw"
    ],
    "see_also": [
    "rgw_max_objs_per_shard"
    ],
    "min": "",
    "max": ""
    }
}
```

```

{
    "option": {
    "name": "rgw_max_objs_per_shard",
    "type": "int64_t",
    "level": "basic",
    "desc": "Max objects per shard for dynamic resharding",
    "long_desc": "This is the max number of objects per bucket 
index shard that RGW will allow with dynamic resharding. RGW will 
trigger an automatic reshard operation on the bucket if it exceeds this 
number.",

    "default": 10,
    "daemon_default": "",
    "tags": [],
    "services": [
    "rgw"
    ],
    "see_also": [
    "rgw_dynamic_resharding"
    ],
    "min": "",
    "max": ""
    }
}
```


So when a bucket exceeds 100k objects per shard, RGW will reshard it
automatically.
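
You can watch it happen; a quick sketch (the bucket name is a placeholder):

```
# Buckets currently queued for, or undergoing, dynamic resharding
radosgw-admin reshard list

# Reshard state of one bucket
radosgw-admin reshard status --bucket=mybucket
```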


Some old buckets may not be sharded, like your ancient ones from Giant. You
can check fill status like this: `radosgw-admin bucket limit check | jq
'.[]'`. If some buckets are not resharded you can shard them by hand via
`radosgw-admin reshard add ...`. Also, there may be some stale reshard
instances (fixed in ~12.2.11); you can check them via `radosgw-admin
reshard stale-instances list` and then remove them via `reshard
stale-instances rm`.
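
The whole check-and-fix flow, roughly (bucket name and shard count are
placeholders):

```
# Which buckets are close to, or over, the per-shard object limit?
radosgw-admin bucket limit check | jq '.[]'

# Queue a manual reshard for an old, unsharded bucket
radosgw-admin reshard add --bucket=oldbucket --num-shards=16

# Process the queue now instead of waiting for the background thread
radosgw-admin reshard process

# List and remove leftover index instances from old reshards (12.2.11+)
radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm
```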




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com