Re: [ceph-users] Number of OSD map versions

2015-12-01 Thread George Mihaiescu
Thanks Dan,

I'll use these values from Infernalis:


[global]
osd map message max = 100

[osd]
osd map cache size = 200
osd map max advance = 150
osd map share max epochs = 100
osd pg epoch persisted max stale = 150
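
In case it helps anyone else, here's a rough sketch of pushing the same
values to already-running OSDs with injectargs (assuming these options are
accepted at runtime on your release; some of them, osd map cache size in
particular, may only fully apply after an OSD restart):

ceph tell osd.* injectargs '--osd_map_message_max 100'
ceph tell osd.* injectargs '--osd_map_cache_size 200 --osd_map_max_advance 150'
ceph tell osd.* injectargs '--osd_map_share_max_epochs 100 --osd_pg_epoch_persisted_max_stale 150'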


George

On Mon, Nov 30, 2015 at 4:20 PM, Dan van der Ster wrote:

> I wouldn't run with those settings in production. That was a test to
> squeeze too many OSDs into too little RAM.
>
> Check the values from infernalis/master. Those should be safe.
>
> --
> Dan
> On 30 Nov 2015 21:45, "George Mihaiescu"  wrote:
>
>> Hi,
>>
>> I've read the recommendation from CERN about the number of OSD maps (
>> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf,
>> page 3) and I would like to know if there is any negative impact from these
>> changes:
>>
>> [global]
>> osd map message max = 10
>>
>> [osd]
>> osd map cache size = 20
>> osd map max advance = 10
>> osd map share max epochs = 10
>> osd pg epoch persisted max stale = 10
>>
>>
>> We are running Hammer with nowhere close to 7,000 OSDs, but I don't want
>> to waste memory on OSD maps that are not needed.
>>
>> Are there any large production deployments running with these or similar
>> settings?
>>
>> Thank you,
>> George


Re: [ceph-users] Number of OSD map versions

2015-11-30 Thread Dan van der Ster
I wouldn't run with those settings in production. That was a test to
squeeze too many OSDs into too little RAM.

Check the values from infernalis/master. Those should be safe.
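
(For reference: the shipped defaults live in src/common/config_opts.h in the
ceph source tree, and you can dump what a running daemon is actually using
with something like

ceph daemon osd.0 config show | grep osd_map

where osd.0 stands in for any OSD with an admin socket on the local host.)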

--
Dan
On 30 Nov 2015 21:45, "George Mihaiescu"  wrote:

> Hi,
>
> I've read the recommendation from CERN about the number of OSD maps (
> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf, page
> 3) and I would like to know if there is any negative impact from these
> changes:
>
> [global]
> osd map message max = 10
>
> [osd]
> osd map cache size = 20
> osd map max advance = 10
> osd map share max epochs = 10
> osd pg epoch persisted max stale = 10
>
>
> We are running Hammer with nowhere close to 7,000 OSDs, but I don't want
> to waste memory on OSD maps that are not needed.
>
> Are there any large production deployments running with these or similar
> settings?
>
> Thank you,
> George


[ceph-users] Number of OSD map versions

2015-11-30 Thread George Mihaiescu
Hi,

I've read the recommendation from CERN about the number of OSD maps (
https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf, page
3) and I would like to know if there is any negative impact from these
changes:

[global]
osd map message max = 10

[osd]
osd map cache size = 20
osd map max advance = 10
osd map share max epochs = 10
osd pg epoch persisted max stale = 10


We are running Hammer with nowhere close to 7,000 OSDs, but I don't want to
waste memory on OSD maps that are not needed.
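
As a rough back-of-the-envelope, assuming a full map epoch at that scale is
on the order of 1 MB, the Hammer default of osd map cache size = 500 works
out to roughly

500 epochs x ~1 MB/epoch ≈ 500 MB of cached maps per OSD

which is the kind of overhead I'm hoping to trim.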

Are there any large production deployments running with these or similar
settings?

Thank you,
George