On 06.07.2012 at 05:50, Alexandre DERUMIER <aderum...@odiso.com> wrote:

> Hi, 
> Stefan is on vacation at the moment; I don't know if he can reply to you.
Thanks!

> 
> But I can reply for him on the KVM part (we ran the same tests together in 
> parallel).
> 
> - kvm is 1.1
> - rbd 0.48
> - drive option: 
> rbd:pool/volume:auth_supported=cephx;none;keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=X.X.X.X
> - using writeback
> 
> writeback tuning in ceph.conf on the kvm host
> 
> rbd_cache_size = 33554432 
> rbd_cache_max_age = 2.0 
Correct
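
For reference, a minimal [client] section carrying those settings might look 
like the sketch below; the rbd_cache = true line is an assumption based on 
"using writeback" above, the two tunables are the ones quoted:

    [client]
        rbd_cache = true            # enable writeback caching (assumed)
        rbd_cache_size = 33554432   # 32 MiB cache
        rbd_cache_max_age = 2.0     # flush dirty data after 2 seconds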

> 
> benchmark used in the kvm guest:
> fio --filename=$DISK --direct=1 --rw=randwrite --bs=4k --size=200G 
> --numjobs=50 --runtime=90 --group_reporting --name=file1
> 
> results show a max of 14000 io/s with 1 VM, 7000 io/s per VM with 2 VMs, and 
> so on: it doesn't scale
Correct too

> 
> (the bench uses direct I/O, so maybe the writeback cache doesn't help)
> 
> the ceph hardware is 3 nodes with 4 Intel SSDs each (1 drive can handle 
> 40000 io/s randwrite locally)
30000, but still enough.
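
For comparison, that local baseline can be measured with essentially the same 
fio invocation run directly against the raw device on an OSD node (a sketch; 
the device path and job count are placeholders, and note this overwrites the 
data on the device):

    fio --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
        --numjobs=4 --runtime=90 --group_reporting --name=ssd-baseline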

Stefan

> - Alexandre
> 
> ----- Original Message ----- 
> 
> From: "Gregory Farnum" <g...@inktank.com> 
> To: "Stefan Priebe" <s.pri...@profihost.ag> 
> Cc: ceph-devel@vger.kernel.org, "Sage Weil" <s...@inktank.com> 
> Sent: Thursday, July 5, 2012, 23:33:18 
> Subject: Re: speedup ceph / scaling / find the bottleneck 
> 
> Could you send over the ceph.conf on your KVM host, as well as how 
> you're configuring KVM to use rbd? 
> 
> On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe <s.pri...@profihost.ag> wrote: 
>> I'm sorry, but this is the KVM host machine; there is no ceph running on 
>> it. 
>> 
>> If I change the admin socket to: 
>> admin_socket=/var/run/ceph_$name.sock 
>> 
>> I don't get any socket at all ;-( 
>> 
>> On 03.07.2012 17:31, Sage Weil wrote: 
>> 
>>> On Tue, 3 Jul 2012, Stefan Priebe - Profihost AG wrote: 
>>>> 
>>>> Hello, 
>>>> 
>>>> On 02.07.2012 22:30, Josh Durgin wrote: 
>>>>> 
>>>>> If you add admin_socket=/path/to/admin_socket for your client running 
>>>>> qemu (in that client's ceph.conf section or manually in the qemu 
>>>>> command line) you can check that caching is enabled: 
>>>>> 
>>>>> ceph --admin-daemon /path/to/admin_socket show config | grep rbd_cache 
>>>>> 
>>>>> And see the statistics it generates (look for 'cache') with: 
>>>>> 
>>>>> ceph --admin-daemon /path/to/admin_socket perfcounters_dump 
>>>> 
>>>> 
>>>> This doesn't work for me: 
>>>> ceph --admin-daemon /var/run/ceph.sock show config 
>>>> read only got 0 bytes of 4 expected for response length; invalid 
>>>> command?2012-07-03 09:46:57.931821 7fa75d129700 -1 asok(0x8115a0) 
>>>> AdminSocket: 
>>>> request 'show config' not defined 
>>> 
>>> 
>>> Oh, it's 'config show'. Also, 'help' will list the supported commands. 
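
So, once the socket exists, the working invocations from above are (the 
socket path is the placeholder from Josh's mail):

    ceph --admin-daemon /path/to/admin_socket config show | grep rbd_cache
    ceph --admin-daemon /path/to/admin_socket perfcounters_dump
    ceph --admin-daemon /path/to/admin_socket help
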
>>> 
>>>> Also perfcounters does not show anything: 
>>>> # ceph --admin-daemon /var/run/ceph.sock perfcounters_dump 
>>>> {} 
>>> 
>>> 
>>> There may be another daemon that tried to attach to the same socket file. 
>>> You might want to set 'admin socket = /var/run/ceph/$name.sock' or 
>>> something similar, or whatever else is necessary to make it a unique file. 
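
In ceph.conf terms that suggestion would look something like this (placing it 
in the [client] section is an assumption; $name expands to the client entity 
name, so each client gets its own socket file):

    [client]
        admin_socket = /var/run/ceph/$name.sock
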
>>> 
>>>> ~]# ceph -v 
>>>> ceph version 0.48argonaut-2-gb576faa 
>>>> (commit:b576faa6f24356f4d3ec7205e298d58659e29c68) 
>>> 
>>> 
>>> Out of curiosity, what patches are you applying on top of the release? 
>>> 
>>> sage 
>>> 
>> 
> -- 
> Alexandre Derumier 
> Systems and Network Engineer 
> 
> Phone: 03 20 68 88 85 
> Fax: 03 20 68 90 88 
> 
> 45 Bvd du Général Leclerc 59100 Roubaix 
> 12 rue Marivaux 75002 Paris 