Thank you very much!
On 29 Mar 2015 11:25, "Kobi Laredo" <kobi.lar...@dreamhost.com> wrote:

> I'm glad it worked.
> You can set a warning to catch this early next time (1GB)
>
> mon leveldb size warn = 1000000000
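[Editor's note: for reference, the value is in bytes (1000000000 ≈ 1 GB), and the option would typically live in the [mon] section of ceph.conf. The section placement shown here is an assumption about a typical deployment:

```
[mon]
# warn via "ceph health" once a monitor's leveldb store exceeds ~1 GB
mon leveldb size warn = 1000000000
```
]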
>
>
>
> Kobi Laredo
> Cloud Systems Engineer | (408) 409-KOBI
>
> On Fri, Mar 27, 2015 at 5:45 PM, Chu Duc Minh <chu.ducm...@gmail.com>
> wrote:
>
>> @Kobi Laredo: thank you! That was exactly my problem.
>> # du -sh /var/lib/ceph/mon/
>> 2.6G    /var/lib/ceph/mon/
>> # ceph tell mon.a compact
>> compacted leveldb in 10.197506
>> # du -sh /var/lib/ceph/mon/
>> 461M    /var/lib/ceph/mon/
>> Now "ceph -s" returns its result immediately.
>>
>> The monitors' LevelDB store may have grown so big because I pushed 13
>> million files into a bucket (over radosgw).
>> When a bucket holds an extremely large number of files, can the state of
>> the Ceph cluster become unstable? (I'm running Giant.)
>>
>> Regards,
>>
>> On Sat, Mar 28, 2015 at 12:57 AM, Kobi Laredo <kobi.lar...@dreamhost.com>
>> wrote:
>>
>>> What's the current health of the cluster?
>>> It may help to compact the monitors' LevelDB stores if they have grown
>>> in size:
>>> http://www.sebastien-han.fr/blog/2014/10/27/ceph-mon-store-taking-up-a-lot-of-space/
>>> Depending on the size of the mon's store, compaction may take some
>>> time; make sure to do only one monitor at a time.
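[Editor's note: the advice above can be sketched as a small script. This is not from the thread; the store path and monitor IDs are assumptions to adjust for your deployment:

```shell
# Sketch: compact each monitor's store in turn (one at a time, as advised),
# printing the store size before and after each compaction.
compact_mons() {
    store=$1; shift               # e.g. /var/lib/ceph/mon
    for id in "$@"; do            # e.g. a b c -- strictly one at a time
        echo "mon.$id store before: $(du -sh "$store" | cut -f1)"
        ceph tell "mon.$id" compact
        echo "mon.$id store after:  $(du -sh "$store" | cut -f1)"
    done
}

# Usage: compact_mons /var/lib/ceph/mon a b c
```
]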
>>>
>>> Kobi Laredo
>>> Cloud Systems Engineer | (408) 409-KOBI
>>>
>>> On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh <chu.ducm...@gmail.com>
>>> wrote:
>>>
>>>> All my monitors are running.
>>>> But I am deleting the pool .rgw.buckets, which currently holds 13
>>>> million objects (just test data).
>>>> I have to delete this pool because my cluster became unstable:
>>>> sometimes an OSD goes down, and PGs get stuck peering, incomplete, etc.
>>>> Deleting the pool is the only way to re-stabilize the cluster (radosgw
>>>> is too slow at deleting objects once a bucket reaches a few million
>>>> objects).
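[Editor's note: a sketch of the wholesale pool deletion described here, not a command quoted from the thread. Ceph deliberately makes this awkward, requiring the pool name twice plus a confirmation flag:

```shell
# Sketch: delete a whole pool (destructive -- every object in it is lost).
# The repeated pool name and the flag are Ceph's built-in safety guard.
delete_pool() {
    pool=$1
    ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
}

# Usage: delete_pool .rgw.buckets
```
]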
>>>>
>>>> Regards,
>>>>
>>>>
>>>> On Sat, Mar 28, 2015 at 12:23 AM, Gregory Farnum <g...@gregs42.com>
>>>> wrote:
>>>>
>>>>> Are all your monitors running? Usually a temporary hang means that the
>>>>> Ceph client tries to reach a monitor that isn't up, then times out and
>>>>> contacts a different one.
>>>>>
>>>>> I have also seen it just be slow if the monitors are processing so
>>>>> many updates that they're behind, but that's usually on a very unhappy
>>>>> cluster.
>>>>> -Greg
>>>>> On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh <chu.ducm...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> On my Ceph cluster, "ceph -s" returns its result quite slowly.
>>>>>> Sometimes it returns immediately; sometimes it hangs for a few
>>>>>> seconds before returning.
>>>>>>
>>>>>> Do you think this problem (slow "ceph -s") relates only to the
>>>>>> ceph-mon processes, or could it relate to the ceph-osds too?
>>>>>> (I am deleting a big bucket, .rgw.buckets, and ceph-osd disk
>>>>>> utilization is quite high.)
>>>>>>
>>>>>> Regards,
>>>>>> _______________________________________________
>>>>>> ceph-users mailing list
>>>>>> ceph-users@lists.ceph.com
>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>