What Ceph version are you using?
Regards,
On 9 Apr 2015 18:58, "Patrik Plank" wrote:
> Hi,
>
>
> I have built a cache-tier pool (replica 2) with 3 x 512 GB SSDs for my KVM
> pool.
>
> These are my settings:
>
>
> ceph osd tier add kvm cache-pool
>
> ceph osd tier cache-mode cache-pool writeback
>
>
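For reference, a writeback cache tier usually also needs an overlay and a hit-set configuration before it serves I/O. A minimal sketch, using the pool names from the commands above; the threshold values are illustrative assumptions for 3 x 512 GB SSDs at replica 2, not recommendations:

```shell
# Route client I/O for the base pool through the cache tier
ceph osd tier set-overlay kvm cache-pool

# The cache tier needs a hit-set to track object access
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool hit_set_count 1
ceph osd pool set cache-pool hit_set_period 3600

# Illustrative flush/evict thresholds -- tune for your SSD capacity
ceph osd pool set cache-pool target_max_bytes 500000000000
ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
ceph osd pool set cache-pool cache_target_full_ratio 0.8
```

Without the overlay, clients keep talking to the base pool directly and the cache tier sits idle.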
>>> It may help to compact the monitors' LevelDB store if they have grown in
>>> size:
>>> http://www.sebastien-han.fr/blog/2014/10/27/ceph-mon-store-taking-up-a-lot-of-space/
>>> Depending on the size of the mon's store, it may take some time to
>>> compact; make sure to do only one at a time.
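If it helps, compaction can be triggered on a running monitor without restarting it, one monitor at a time as noted above. A sketch, with `mon.a` as a placeholder monitor id:

```shell
# On-demand compaction of a single monitor's store (one mon at a time)
ceph tell mon.a compact

# Alternatively, compact automatically at daemon start: add to ceph.conf
# under [mon] and restart that monitor:
#   mon compact on start = true
```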
>
> *Kobi Laredo*
> *Cloud Systems Engineer* | (*408) 409-KOBI*
>
> On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh
> wrote:
>
>> All my monitors are running.
>> But I am deleting pool .rgw.buckets, which now has 13 million objects.

> It can be slow if the first monitor the client contacts is unresponsive,
> so the connection times out and it contacts a different one.
>
> I have also seen it just be slow if the monitors are processing so many
> updates that they're behind, but that's usually on a very unhappy cluster.
> -Greg
> On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh
> wrote:
>
On my Ceph cluster, "ceph -s" returns its result quite slowly.
Sometimes it returns immediately, sometimes it hangs a few seconds before
returning.
Do you think this problem (slow "ceph -s") relates only to the ceph-mon
processes, or could it relate to the ceph-osds too?
(I am deleting a big bucket, .rgw.buckets.)
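One way to narrow down whether a single monitor is the slow one is to time the status call against each mon address explicitly with `-m`. A sketch; the addresses below are placeholders for your actual monitor addresses:

```shell
# Time "ceph -s" against each monitor individually; a consistently slow
# or hanging mon stands out (addresses are examples)
for mon in 10.0.0.1:6789 10.0.0.2:6789 10.0.0.3:6789; do
    echo "== mon $mon =="
    time ceph -m "$mon" -s
done
```

If every monitor answers quickly in isolation, the slowness is more likely in the client's initial monitor selection than in any one mon.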
So I’d go from 2048 to 4096.
> I’m not sure if this is the safest way, but it’s worked for me.
>
>
>
>
> Michael Kuriger
>
> Sr. Unix Systems Engineer
>
> mk7...@yp.com | 818-649-7235
>
> From: Chu Duc Minh
> Date: Monday,
I'm using the latest Giant and have the same issue. When I increase the
pg_num of a pool from 2048 to 2148, my VMs are still OK. When I increase
from 2148 to 2400, some VMs die (their qemu-kvm processes die).
My physical servers (the VM hosts) run kernel 3.13 and use librbd.
I think it's a bug in librbd with cr
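A common way to soften the impact of a pg_num increase is to raise pg_num (and pgp_num) in small steps and let the cluster settle between steps, rather than jumping straight to the target. A sketch only; the pool name "volumes" and the step values are assumptions:

```shell
# Grow PGs in small increments instead of one 2048 -> 4096 jump;
# pool name and step size are illustrative
for pgs in 2304 2560 2816 3072 3328 3584 3840 4096; do
    ceph osd pool set volumes pg_num  "$pgs"
    ceph osd pool set volumes pgp_num "$pgs"
    # wait until the data movement from this step has finished
    while ceph health | grep -qE 'backfill|recover|creating|peering'; do
        sleep 30
    done
done
```

Each step triggers a smaller burst of peering and backfill, which keeps client I/O latency more predictable than one large jump.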
Do you have any suggestion?
Thank you very much indeed!
On Sun, Nov 9, 2014 at 12:52 AM, Chu Duc Minh wrote:
My Ceph cluster has a PG in state "incomplete" and I cannot query it
any more.
*# ceph pg 6.9d8 query* (hangs forever)
All my volumes may lose data because of this PG.
# ceph pg dump_stuck inactive
ok
pg_stat objects mip degr misp unf bytes log disklog
state state_stamp
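When a `ceph pg <id> query` hangs, the call can still be bounded with a timeout while the rest of the picture is gathered from commands that do not contact the broken PG's OSDs directly. A sketch, using the PG id from the output above:

```shell
# Bound the hanging query instead of waiting forever
timeout 30 ceph pg 6.9d8 query > pg-6.9d8-query.json || echo "pg query timed out"

# Cross-check which OSDs the PG maps to, and the overall stuck-PG view
ceph pg map 6.9d8
ceph pg dump_stuck inactive
ceph health detail
```

`ceph pg map` shows the up/acting OSD sets, which helps identify whether the OSDs holding the authoritative copy are down or out.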
> mail thread with the supporting
> details of that issue?
>
> --
>
> Jason Dillaman
> Red Hat
> dilla...@redhat.com
> http://www.redhat.com
>
>
> --
> *From: *"Chu Duc Minh"
> *To: *ceph-de...@vger.kernel.org, "ceph-users@li
Hi folks, some volumes in my Ceph cluster have a problem and I can NOT delete
them with the rbd command. When I show info or try to delete them, the rbd
command crashes. The commands I used:
*# rbd -p volumes info volume-e110b0a5-5116-46f2-99c7-84bb546f15c2*
*# rbd -p volumes rm volume-e110b0a5-5116-46f2-99c7-84bb546f15c2*
You should run "ceph osd tree" and post the output here.
BR,
On March 28, 2014, at 5:54 AM, Dan Koren wrote:
Just ran into this problem: a week ago I set up a Ceph cluster on 4
systems, with one admin node and 3 mon+osd nodes, then ra
When using the RBD backend for OpenStack volumes, I can easily surpass 200 MB/s.
But when using the "rbd import" command, e.g.:
# rbd import --pool test Debian.raw volume-Debian-1 --new-format --id
volumes
I can only import at ~30 MB/s.
I don't know why rbd import is slow. What can I do to improve import speed?
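To check whether the limit is the cluster or the import client itself, it may help to compare against raw RADOS write bandwidth from the same host. A sketch against the same `test` pool used above:

```shell
# Baseline: 30 s of sequential 4 MB object writes straight into the pool;
# if this is much faster than ~30 MB/s, the bottleneck is likely the
# serial "rbd import" client path rather than the cluster
rados -p test bench 30 write --no-cleanup
rados -p test cleanup
```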
Gbps links on multiple radosgw nodes on a large
> cluster, if I remember correctly, Yehuda has tested it up into the 7Gbps
> range with 10Gbps gear. Could you describe your cluster's hardware and
> connectivity?
>
>
> On Mon, Oct 14, 2013 at 3:34 AM, Chu Duc Minh wrote:
>
>>
into a bucket
that already has millions of files?
On Wed, Sep 25, 2013 at 7:24 PM, Mark Nelson wrote:
> On 09/25/2013 02:49 AM, Chu Duc Minh wrote:
>
I have a Ceph cluster with 9 nodes (6 data nodes & 3 mon/mds nodes),
and I set up 4 separate nodes to test the performance of the RADOS Gateway:
- 2 nodes run radosgw
- 2 nodes run multi-process file uploads to the [multiple] radosgw nodes
Result:
a) When I use 1 radosgw node & 1 upload node, upload speed = 50 MB/s
per upload node