Re: [ceph-users] Monitoring ceph statistics using rados python module

2014-05-13 Thread Kai Zhang
Hi Adrian,

You may be interested in "rados -p pool_name df --format json". Although it's
pool oriented, you could probably add the values together :)
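
For example, a rough sketch of how that could be scripted in Python (here I call plain "rados df", which reports every pool at once; the exact JSON key names differ between Ceph releases, so treat "pools", "num_rd" and "num_wr" below as placeholders to check against your own output):

import json
import subprocess

# Ask rados for per-pool statistics as JSON.
out = subprocess.check_output(["rados", "df", "--format", "json"])
data = json.loads(out)

total_rd = total_wr = 0
for pool in data.get("pools", []):
    # Key names are assumptions -- verify them against your own "rados df" output.
    total_rd += int(pool.get("num_rd", 0))
    total_wr += int(pool.get("num_wr", 0))

print("cluster-wide read ops: %d, write ops: %d" % (total_rd, total_wr))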

Regards,
Kai

On 2014-05-13 08:33:11, "Adrian Banasiak" wrote:

Thanks for the suggestion about the admin daemon, but it looks single-OSD oriented. I
have used perf dump on the mon socket and it outputs some interesting data for
monitoring the whole cluster:
{ "cluster": { "num_mon": 4,
  "num_mon_quorum": 4,
  "num_osd": 29,
  "num_osd_up": 29,
  "num_osd_in": 29,
  "osd_epoch": 1872,
  "osd_kb": 20218112516,
  "osd_kb_used": 5022202696,
  "osd_kb_avail": 15195909820,
  "num_pool": 4,
  "num_pg": 3500,
  "num_pg_active_clean": 3500,
  "num_pg_active": 3500,
  "num_pg_peering": 0,
  "num_object": 400746,
  "num_object_degraded": 0,
  "num_object_unfound": 0,
  "num_bytes": 1678788329609,
  "num_mds_up": 0,
  "num_mds_in": 0,
  "num_mds_failed": 0,
  "mds_epoch": 1},


Unfortunately, cluster-wide IO statistics are still missing.
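
A possible workaround: sample the cumulative per-pool counters twice and turn the difference into a rate. A rough, untested sketch (the conffile path is a placeholder, and num_rd/num_wr only count client operations, so this still does not cover the recovery IO reported by "ceph -s"):

import time
import rados

def pool_io_totals(conn):
    # Sum the cumulative read/write operation counters over all pools.
    rd = wr = 0
    for pool in conn.list_pools():
        ioctx = conn.open_ioctx(pool)
        stats = ioctx.get_stats()
        rd += int(stats['num_rd'])
        wr += int(stats['num_wr'])
        ioctx.close()
    return rd, wr

conn = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
conn.connect()

rd1, wr1 = pool_io_totals(conn)
time.sleep(10)
rd2, wr2 = pool_io_totals(conn)
conn.shutdown()

print("read ops/s: %.1f, write ops/s: %.1f"
      % ((rd2 - rd1) / 10.0, (wr2 - wr1) / 10.0))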



2014-05-13 17:17 GMT+02:00 Haomai Wang :
I'm not sure exactly what you need.

I use "ceph --admin-daemon /var/run/ceph/ceph-osd.x.asok perf dump" to
get the monitor infos. And the result can be parsed by simplejson
easily via python.
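
For instance, a minimal sketch (the socket path is an assumption; point it at whichever daemon you want to query, e.g. the mon socket for the cluster-wide counters shown above):

import json          # simplejson works the same way
import subprocess

SOCKET = "/var/run/ceph/ceph-mon.a.asok"   # placeholder: adjust to your daemon

out = subprocess.check_output(["ceph", "--admin-daemon", SOCKET, "perf", "dump"])
perf = json.loads(out)

# The "cluster" section comes from the mon socket (see the dump above).
cluster = perf.get("cluster", {})
print("osds up/in/total: %s/%s/%s"
      % (cluster.get("num_osd_up"), cluster.get("num_osd_in"), cluster.get("num_osd")))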


On Tue, May 13, 2014 at 10:56 PM, Adrian Banasiak  wrote:
> Hi, I am working with a test Ceph cluster and now I want to implement Zabbix
> monitoring with items such as:
>
> - whole cluster IO (for example ceph -s -> recovery io 143 MB/s, 35
> objects/s)
> - pg statistics
>
> I would like to create a single script in Python to retrieve values using the
> rados python module, but there is only a little information in the documentation
> about module usage. I've created a single function which calculates all pools'
> current read/write statistics, but I can't find out how to add recovery IO
> usage and pg statistics:
>
> read = 0
> write = 0
> stats = {}
> for pool in conn.list_pools():
>     io = conn.open_ioctx(pool)
>     # get_stats() returns cumulative counters (num_rd, num_wr, ...)
>     stats[pool] = io.get_stats()
>     read += int(stats[pool]['num_rd'])
>     write += int(stats[pool]['num_wr'])
>     io.close()
>
> Could someone share their knowledge about the rados module for retrieving Ceph
> statistics?
>
> BTW Ceph is awesome!
>
> --
> Best regards, Adrian Banasiak
> email: adr...@banasiak.it
>




--
Best Regards,

Wheat






--

Best regards, Adrian Banasiak
email: adr...@banasiak.it


Re: [ceph-users] Re: Hey, Where can I find the source code of "class ObjectOperationImpl"?

2014-04-30 Thread Kai Zhang
Hi Peng,
If you are interested in the code path of Ceph, these blogs may help:
How does a Ceph OSD handle a read message ? (in Firefly and up)
How does a Ceph OSD handle a write message ? (up to Emperor)


Here is the note about rados write I took when I read the source code:
| rados put <obj-name> [infile]
L [tools/rados/rados.cc] main()
L rados_tool_common()
L do_put()
L io_ctx.write()
L io_ctx_impl->write()
L [librados/IoCtxImpl.cc] write()
L operate()
L op_submit()
L [osdc/Objecter.cc] op_submit()
L _op_submit()
L recalc_op_target()
L send_op()
L messenger->send_message()
L [msg/SimpleMessenger.cc] _send_message()
L submit_message()
L pipe->_send()
L [msg/Pipe.h] _send()
L [msg/Pipe.cc] writer()
L write_message()
L do_sendmsg()
L sendmsg()
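
For comparison, the same machinery can be driven from the Python binding instead of the rados CLI; a minimal sketch (conffile path and pool name are placeholders) which enters librados and then follows essentially the same IoCtxImpl -> Objecter -> messenger path traced above:

import rados

# Placeholders: adjust conffile and pool name to your cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('data')
# write_full() enters librados and, like "rados put", is turned into an
# ObjectOperation that the Objecter hands to the messenger layer.
ioctx.write_full('hello_object', 'hello from the python binding')
ioctx.close()
cluster.shutdown()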

Hope these help.

Regards,
Kai Zhang



At 2014-04-30 00:04:55,peng  wrote:

In librados.cc, I found the following code:
Step 1.
File: librados.cc
void librados::ObjectWriteOperation::write(uint64_t off, const bufferlist& bl)
{
  ::ObjectOperation *o = (::ObjectOperation *)impl;
  bufferlist c = bl;
  o->write(off, c);
}
Step 2. To find ::ObjectOperation
File: Objecter.h
struct ObjectOperation {
  void write(...) {}     // calls add_data()
  void add_data(...) {}  // calls add_op()
  void add_op(...) {}    // needs an OSDOp
};
Step 3. To find OSDOp
File: osd_types.h
struct OSDOp { ... };
 
But the question is: how is the data transferred to the rados cluster? I
assume there is some socket connection (TCP, etc.) to transfer the data,
but I found nothing about a socket connection.
 
Besides, I found something in IoCtxImpl.cc, and through it I found
ceph_tid_t Objecter::_op_submit(Op *op) in Objecter.cc. It looks like the
real operation happens there.
 
Confused... I'd appreciate any help!


-- Original Message --
From: "John Spray";
Sent: Tuesday, April 29, 2014, 5:59 PM
To: "peng";
Cc: "ceph-users";
Subject: Re: [ceph-users] Hey, Where can I find the source code of
"class ObjectOperationImpl"?


It's not a real class, just a type definition used for the 
ObjectOperation::impl pointer.  The actual object is an ObjectOperation.


src/librados/librados.cc
1797:  impl = (ObjectOperationImpl *)new ::ObjectOperation;


John



On Tue, Apr 29, 2014 at 10:49 AM, peng  wrote:

Hey,
I can find a declaration in librados.hpp, but when I try to find the source
code of ObjectOperationImpl, I find nothing.
 
 
Is it a ghost class??
 
Confused... Appreciate any help.



Re: [ceph-users] Error when building ceph. fatal error: civetweb/civetweb.h: No such file or directory

2014-04-07 Thread Kai Zhang
Hi Thanh,

I think you missed "$ git submodule update --init", which clones all the
submodules required for compilation.

Cheers,
Kai

At 2014-04-07 09:35:32,"Thanh Tran"  wrote:

Hi,


When I build Ceph from the source code I downloaded from
https://github.com/ceph/ceph/tree/v0.78, it fails with the following error:


rgw/rgw_civetweb.cc:4:31: fatal error: civetweb/civetweb.h: No such file or 
directory
compilation terminated.
make[3]: *** [rgw/rgw_civetweb.o] Error 1
make[3]: Leaving directory `/home/thanhtv3/ceph/source/ceph-0.78/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/thanhtv3/ceph/source/ceph-0.78/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/thanhtv3/ceph/source/ceph-0.78/src'
make: *** [all-recursive] Error 1


What did I miss?


Best regards,
Thanh Tran


Re: [ceph-users] windows client

2014-03-13 Thread Kai Zhang
Hi JiaMin,

There is a C++ API for the Ceph storage cluster:
https://github.com/ceph/ceph/blob/27968a74d29998703207705194ec4e0c93a6b42d/src/include/rados/librados.hpp
Maybe you can use that for your development. Here is a Hello World example:
https://github.com/ceph/ceph/blob/27968a74d29998703207705194ec4e0c93a6b42d/examples/librados/hello_world.cc

Regards,
Kai

At 2014-03-11 21:57:23,"ljm李嘉敏"  wrote:


Hi all,

 

Is it possible for Ceph to support a Windows client? Right now I can only use the
RESTful API (Swift-compatible) through the Ceph object gateway, but the languages
that can be used are Java, Python and Ruby, not C# or C++. Is there any good
wrapper for C# or C++? Thanks.

 

Thanks & Regards

Li JiaMin

 

System Cloud Platform

3#4F108



Re: [ceph-users] Ceph Performance

2014-01-09 Thread Kai Zhang
Hi Bradley,
I did a similar benchmark recently, and my results are no better than yours.


My setup:
3 servers (CPU: Intel Xeon E5-2609 0 @ 2.40GHz, RAM: 32GB); I used only 2 SATA
7.2K RPM disks (2 TB) plus a 400 GB SSD for OSDs in total.
Servers are connected with 10 Gbps Ethernet.
Replication level: 2


I launched 3 VMs acting as Ceph clients, then used fio to run 4K random
read/write benchmarks on all VMs at the same time.
For 4k random read: 176 IOPS
For 4k random write: 474 IOPS

I don't know why random read performance was so poor. Can someone help me out?


Here is my fio configuration:


[global]
iodepth=64
runtime=300
ioengine=libaio
direct=1
size=10G
directory=/mnt
filename=bench
ramp_time=40
invalidate=1
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"


[rand-write-4k]
bs=4K
rw=randwrite


[rand-read-4k]
bs=4K
rw=randread


[seq-read-64k]
bs=64K
rw=read


[seq-write-64k]
bs=64K
rw=write


My benchmark script: https://gist.github.com/kazhang/8344180


Regards,
Kai


At 2014-01-09 01:25:17,"Bradley Kite"  wrote:

Hi there,


I am new to Ceph and still learning its performance capabilities, but I would 
like to share my performance results in the hope that they are useful to 
others, and also to see if there is room for improvement in my setup.


Firstly, a little about my setup:


3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB) plus 
a 160GB SSD.


I have mapped a 10GB volume to a 4th server which is acting as a Ceph client.
Due to Ceph's thin provisioning, I used "dd" to write to the entire block
device to ensure that the Ceph volume is fully allocated. dd writes
sequentially at around 95 MB/sec, which shows the network can run at full
capacity.


Each device is connected to a switch by a single 1 Gbps Ethernet link.


I then used "fio" to benchmark the raw block device. The reason for this is 
that I also need to compare ceph against a traditional iscsi SAN and the 
internal "rados bench" tools cannot be used for this.


The replication level for the pool I am testing against is 2.


I have tried two setups with regard to the OSDs: first with the journal
running on a partition on the SSD, and second using "bcache"
(http://bcache.evilpiepirate.org) to provide a write-back cache in front of
the 4TB drives.


In all tests, fio was configured to do direct I/O with 256 parallel I/Os.


With the journal on the SSD:


4K random read: around 1,200 IOPS (~5 MB/s).

4K random write: around 300 IOPS (~1.2 MB/s).


Using bcache for each OSD (journal is just a file on the OSD):
4K random read: around 2,200 IOPS (~9 MB/s).
4K random write: around 300 IOPS (~1.2 MB/s).


By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4,000 read IOPS and ~2,000
write IOPS (but with 15K RPM SAS disks).


What is interesting is that bcache definitely has a positive effect on the read
IOPS, but something else is the bottleneck for the writes.


It looks to me like I have missed something in the configuration which brings
down the write IOPS, since 300 IOPS is very poor. If, however, I turn off
direct I/O in the fio tests, the write performance jumps to around 4,000 IOPS.
It makes no difference to the read performance, which is to be expected.


I have tried increasing the number of threads in each OSD but that has made no 
difference.


I have also tried images with different (smaller) stripe sizes (--order)
instead of the default 4MB, but it doesn't make any difference.


Do these figures look reasonable to others? What kind of IOPS should I be 
expecting?


Additional info is below:


Ceph 0.72.2 running on Centos 6.5 (with custom 3.10.25 kernel for bcache 
support)

3 servers of the following spec:
CPU: Quad Core Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
RAM: 16GB
Disks: 4x 4TB Seagate Constellation (7.2K RPM) plus 1x Intel 160GB DC S3500 SSD


The test pool has 400 placement groups (and placement groups for placement, i.e. pgp_num).


fio configuration - read:
[global]
rw=randread
filename=/dev/rbd1

ioengine=posixaio
iodepth=256
direct=1
runtime=60

ramp_time=30
blocksize=4k
write_bw_log=fio-2-random-read
write_lat_log=fio-2-random-read
write_iops_log=fio-2-random-read


fio configuration - writes:
[global]
rw=randwrite
filename=/dev/rbd1

ioengine=posixaio
iodepth=256
direct=1
runtime=60

ramp_time=30
blocksize=4k
write_bw_log=fio-2-random-write
write_lat_log=fio-2-random-write
write_iops_log=fio-2-random-write


