[ceph-users] It works! Re: ceph-mon is blocked after shutting down and ip address changed

2019-12-11 Thread Chu
It works, after I removed the v2 address in ceph.conf. Hope I see. Thank you! [root@ceph-node1 ceph]# ceph -s cluster: id: e384e8e6-94d5-4812-bfbb-d1b0468bdef5 health: HEALTH_WARN 1 MDSs report slow metadata IOs noout,nobackfill,norecover flag(s) set 9 osds down no
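For reference, a minimal sketch of what the fix amounts to in ceph.conf, assuming a hypothetical monitor address of 192.168.1.10 (the fsid is the one shown above); only the v1 endpoint remains once the v2 entry is removed:

    [global]
        fsid = e384e8e6-94d5-4812-bfbb-d1b0468bdef5
        # hypothetical address; before the fix this line also carried a v2 endpoint,
        # e.g. mon host = [v2:192.168.1.10:3300,v1:192.168.1.10:6789]
        mon host = [v1:192.168.1.10:6789]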

[ceph-users] Deploying a Ceph storage cluster using Warewulf on Centos-7

2015-12-17 Thread Chu Ruilin
Hi all, I don't know which automation tool is best for deploying Ceph and I'd like to find out. I'm comfortable with Warewulf since I've been using it for HPC clusters, and I find it quite convenient for Ceph too. I wrote a set of scripts that can deploy a Ceph cluster quickly. Here is how I did it

Re: [ceph-users] How to observed civetweb. (Kobi Laredo)

2015-10-05 Thread Po Chu
Hi, Kobi, >> We are currently testing tuning civetweb's num_threads >> and request_timeout_ms to improve radosgw performance I wonder how's your testing of radosgw performance going? We hit some limitation of a single radosgw instance, about ~300MB/s. I'm just thinking how num_threads &
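For reference, both tunables mentioned above are passed on the radosgw frontend line in ceph.conf; a sketch, where the section name, port, and values are illustrative rather than taken from the thread:

    [client.rgw.gateway1]
        rgw frontends = civetweb port=7480 num_threads=512 request_timeout_ms=30000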

Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Chu Duc Minh
What ceph version do you use? Regards, On 9 Apr 2015 18:58, Patrik Plank pat...@plank.me wrote: Hi, i have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my kvm pool. These are my settings: ceph osd tier add kvm cache-pool ceph osd tier cache-mode cache-pool writeback
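The quoted setup only adds the tier and sets the cache mode; for eviction to happen the cache pool also needs an overlay, a hit-set, and sizing targets. A sketch of the usual remaining commands, with illustrative values:

    ceph osd tier set-overlay kvm cache-pool
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool target_max_bytes 1099511627776   # ~1 TiB; without a target nothing is flushed or evicted
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8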

Re: [ceph-users] ceph -s slow return result

2015-03-29 Thread Chu Duc Minh
:45 PM, Chu Duc Minh chu.ducm...@gmail.com wrote: @Kobi Laredo: thank you! It's exactly my problem. # du -sh /var/lib/ceph/mon/ *2.6G* /var/lib/ceph/mon/ # ceph tell mon.a compact compacted leveldb in 10.197506 # du -sh /var/lib/ceph/mon/ *461M* /var/lib/ceph/mon/ Now my ceph -s
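Besides the one-off "ceph tell mon.a compact" shown above, the monitor store can also be compacted automatically whenever the daemon starts; a sketch of the ceph.conf setting, assuming the extra startup time is acceptable:

    [mon]
        mon compact on start = true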

Re: [ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
, then times out and contacts a different one. I have also seen it just be slow if the monitors are processing so many updates that they're behind, but that's usually on a very unhappy cluster. -Greg On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh chu.ducm...@gmail.com wrote: On my CEPH cluster

[ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
On my CEPH cluster, ceph -s returns its result quite slowly. Sometimes it returns the result immediately, sometimes it hangs for a few seconds before returning. Do you think this problem (ceph -s being slow) is only related to the ceph-mon processes, or could it be related to the ceph-osd processes too? (I am deleting a big bucket,
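One way to narrow this down is to query each monitor individually; a sketch, where the monitor address and name are placeholders:

    ceph -m 192.168.1.11:6789 -s     # ask one specific monitor for cluster status
    ceph daemon mon.a mon_status     # on the monitor host, read its state via the admin socket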

Re: [ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
at a time. *Kobi Laredo* *Cloud Systems Engineer* | (408) 409-KOBI On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh chu.ducm...@gmail.com wrote: All my monitors are running. But I am deleting pool .rgw.buckets, which now holds 13 million objects (just test data). The reason that I must delete
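For context, deleting the pool wholesale looks like the command below; the removal of the objects it contained then proceeds in the background on the OSDs, which is what keeps the cluster busy after the command returns:

    ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it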

Re: [ceph-users] [SPAM] Changing pg_num = RBD VM down !

2015-03-16 Thread Chu Duc Minh
of 2. So I’d go from 2048 to 4096. I’m not sure if this is the safest way, but it’s worked for me. Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 818-649-7235 From: Chu Duc Minh chu.ducm...@gmail.com Date: Monday, March 16, 2015 at 7:49 AM To: Florent B

Re: [ceph-users] [SPAM] Changing pg_num = RBD VM down !

2015-03-16 Thread Chu Duc Minh
I'm using the latest Giant and have the same issue. When I increase the pg_num of a pool from 2048 to 2148, my VMs are still OK. When I increase it from 2148 to 2400, some VMs die (the qemu-kvm process dies). My physical servers (hosting the VMs) run kernel 3.13 and use librbd. I think it's a bug in librbd with
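A sketch of the incremental approach discussed in this thread, using a hypothetical pool name volumes: raise pg_num and pgp_num in small steps and let the cluster settle between steps:

    ceph osd pool set volumes pg_num 2176
    ceph osd pool set volumes pgp_num 2176
    ceph -s     # wait until peering/backfill has finished before the next increment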

Re: [ceph-users] Running ceph in Deis/Docker

2014-12-21 Thread Jimmy Chu
in the initial members section. And yes, rebooting things all at the same time can be fun; I managed to get into a similar situation to yours once (though that cluster really had only one mon) and it took a hard reset to fix things eventually. Christian On Tue, 16 Dec 2014 08:52:15 +0800 Jimmy Chu wrote
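For reference, the "initial members" setting under discussion lives in ceph.conf; a sketch with placeholder monitor names and addresses:

    [global]
        mon initial members = mon1, mon2, mon3
        mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3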

[ceph-users] Running ceph in Deis/Docker

2014-12-15 Thread Jimmy Chu
-prohibited All private IPs are pingable within the ceph monitor container. What could I do next to troubleshoot this issue? Thanks a lot! - Jimmy Chu
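Two checks that usually come next in this situation (the address and monitor name below are placeholders): verify that the monitor's TCP port answers, not just ICMP, and ask the monitor itself which quorum it believes it is in:

    nc -zv 172.17.0.2 6789            # is the mon TCP port reachable from the other containers/hosts?
    ceph daemon mon.node1 mon_status  # run inside the mon container, via its admin socket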

[ceph-users] Running ceph in Deis/Docker

2014-12-15 Thread Jimmy Chu
Chu

[ceph-users] [URGENT] My CEPH cluster is dying (due to incomplete PG)

2014-11-08 Thread Chu Duc Minh
My ceph cluster has a PG in state incomplete and I can not query it any more. *# ceph pg 6.9d8 query* (hangs forever) All my volumes may lose data because of this PG. # ceph pg dump_stuck inactive ok pg_stat objects mip degr misp unf bytes log disklog state
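When "ceph pg <pgid> query" hangs, these read-only commands usually still answer and at least show which OSDs the PG maps to (the PG id is the one from this thread):

    ceph health detail | grep 6.9d8
    ceph pg map 6.9d8              # prints the up and acting OSD sets for the PG
    ceph pg dump_stuck inactive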

Re: [ceph-users] [URGENT] My CEPH cluster is dying (due to incomplete PG)

2014-11-08 Thread Chu Duc Minh
[] -1 [] -1 0'0 0.000'0 0.00 Do you have any suggestions? Thank you very much indeed! On Sun, Nov 9, 2014 at 12:52 AM, Chu Duc Minh chu.ducm...@gmail.com wrote: My ceph cluster has a PG in state incomplete and I can not query it any more. *# ceph pg

[ceph-users] rbd import so slow

2014-01-09 Thread Chu Duc Minh
When using the RBD backend for OpenStack volumes, I can easily surpass 200MB/s. But when using the rbd import command, e.g.: # rbd import --pool test Debian.raw volume-Debian-1 --new-format --id volumes I can only import at ~30MB/s. I don't know why rbd import is so slow. What can I do to improve import
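For reference, --new-format was later superseded by --image-format 2; an equivalent invocation with the newer flag (pool, file, image, and user names are the ones from the thread) would look like:

    rbd import --pool test --image-format 2 --id volumes Debian.raw volume-Debian-1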

Re: [ceph-users] Speed limit on RadosGW?

2013-10-14 Thread Chu Duc Minh
, that already have millions of files? On Wed, Sep 25, 2013 at 7:24 PM, Mark Nelson mark.nel...@inktank.com wrote: On 09/25/2013 02:49 AM, Chu Duc Minh wrote: I have a CEPH cluster with 9 nodes (6 data nodes, 3 mon/mds nodes) And I set up 4 separate nodes to test the performance of Rados-GW: - 2 nodes run

[ceph-users] Speed limit on RadosGW?

2013-09-25 Thread Chu Duc Minh
I have a CEPH cluster with 9 nodes (6 data nodes, 3 mon/mds nodes) And I set up 4 separate nodes to test the performance of Rados-GW: - 2 nodes run Rados-GW - 2 nodes run multi-process put file to [multi] Rados-GW Result: a) When I use 1 RadosGW node and 1 upload-node, upload speed = 50MB/s per upload-node,
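A minimal sketch of the kind of multi-process upload driver described above; the bucket name, file names, and the choice of s3cmd are assumptions, not details from the thread:

    # push 8 objects to one gateway in parallel from a single upload node
    for i in $(seq 1 8); do
        s3cmd put file$i.bin s3://testbucket/file$i.bin &
    done
    wait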