Re: [ceph-users] full ssd cluster, how many iops can we expect ?

2014-03-16 Thread Stefan Priebe - Profihost AG
Hi Alexandre,

On 17.03.2014 07:03, Alexandre DERUMIER wrote:
> Hello,
> 
> I'm looking to build a full SSD cluster (I have a mainly random IO workload).
> 
> I would like to know how many IOPS we can expect per OSD.
> 
> I have read in the Inktank docs about 4000 IOPS per OSD (that's enough for me).
> Is this true?

I don't get / see these values. But maybe replication of 3 is the
problem here.

Per qemu VM I'm not able to get more than 15 000 IOP/s; no idea what the
limiting factor is. With iSCSI and the same network I get around 12
IOP/s with qemu.

> What is the bottleneck? Is it OSD CPU limited?
At 15 000 IOP/s for one VM I see 100-180% CPU usage per OSD process
on a 3.7 GHz E5 Xeon, so CPU may be the problem.
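
For reference, a per-VM measurement like this is usually taken with a short fio run inside the guest; a minimal sketch, assuming fio is installed and /dev/vdb is a scratch RBD-backed disk that may be overwritten:

# 4k random writes, queue depth 32, 4 jobs, 60 seconds
fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting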

Greets,
Stefan


[ceph-users] why objects are still in .rgw.buckets after being deleted

2014-03-16 Thread ljm李嘉敏
Hi all,

I have a question about the pool .rgw.buckets: when I upload a file (which has
been striped because it is bigger than 4 MB) through the Swift API, it is stored
in .rgw.buckets. If I upload it again, why are the objects in .rgw.buckets not
overwritten? The file is stored again under a different name, and when I delete
the file, none of its objects in .rgw.buckets are deleted, even though I run
radosgw-admin gc process.
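
For reference, the pending garbage-collection queue can be inspected directly; deleted tail objects only become eligible for collection after the configured "rgw gc obj min wait" interval, which is usually why they appear to linger. A minimal sketch of the commands (the pool name is the default .rgw.buckets used above):

# list objects queued for garbage collection (including not-yet-expired entries)
radosgw-admin gc list --include-all

# force a garbage-collection pass
radosgw-admin gc process

# check what remains in the data pool afterwards
rados -p .rgw.buckets ls | head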

I also want to know something about the pools created for the object gateway:
why are they created and what role do they play? If anyone knows about these,
please give me some guidance. Thanks.


Thanks & Regards
Li JiaMin

System Cloud Platform
3#4F108



[ceph-users] full ssd cluster, how many iops can we expect ?

2014-03-16 Thread Alexandre DERUMIER
Hello,

I'm looking to build a full SSD cluster (I have a mainly random IO workload).

I would like to know how many IOPS we can expect per OSD.

I have read in the Inktank docs about 4000 IOPS per OSD (that's enough for me).
Is this true?

What is the bottleneck? Is it OSD CPU limited?

Also, can we hit other limitations on the global cluster side (like a mon
bottleneck)?
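
For a cluster-side baseline (before any qemu/librbd layer is involved), a short rados bench run with a small block size gives a rough IOPS figure; a sketch, assuming a throwaway pool named "testpool":

# 4 KB writes, 64 concurrent ops, 60 seconds; keep the objects for a read pass
rados bench -p testpool 60 write -b 4096 -t 64 --no-cleanup

# sequential 4 KB reads over the objects written above
rados bench -p testpool 60 seq -t 64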


Regards,

Alexandre




Re: [ceph-users] Reply: Re: why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread Gregory Farnum
I'm not sure what's normal when you're sharing the journal and the data on
the same disk drive. You might do better if you partition the drive and put
the journals on an unformatted partition; obviously providing each journal
its own spindle/ssd/whatever would be progressively better.
There was a conversation a few days ago in which someone else saw some
pretty high commit latencies for shared-disk systems, but I believe their
averages were still a lot lower than yours. Disks and RAID/disk controllers
are just wildly variable in how they handle that sort of workload. :(
-Greg

Software Engineer #42 @ http://inktank.com | http://ceph.com
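
For anyone wanting to try the layout Greg describes, a minimal ceph.conf sketch pointing an OSD's journal at a raw (unformatted) partition; the device name is only an example and must match your hardware:

[osd.0]
    host = storage1
    # journal on a dedicated raw partition instead of a file on the data disk
    osd journal = /dev/sdb1

In ceph-disk style deployments the same effect is usually achieved with a symlink at /var/lib/ceph/osd/ceph-<id>/journal pointing at such a partition.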


On Sun, Mar 16, 2014 at 10:21 PM,  wrote:

>
> Hi Gregory,
> The latency is under my ceph test environment. It has only 1 host
> with 16 2TB SATA disks (16 OSDs) and a 10Gb/s NIC;
> journal and data are on the same disk, whose fs type is ext4.
> My cluster config is like this. Is this normal under this
> configuration, or how can I improve performance?
>
> Thanks.
>
> [root@storage1 ~]# ceph -s
> cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
>  health HEALTH_OK
>  monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch
> 1, quorum 0 storage1
>  osdmap e373: 16 osds: 16 up, 16 in
>   pgmap v3515: 1024 pgs, 1 pools, 15635 MB data, 3946 objects
> 87220 MB used, 27764 GB / 29340 GB avail
> 1024 active+clean
>
> [root@storage1 ~]# ceph osd tree
> # id    weight  type name       up/down reweight
> -1  16  root default
> -2  16  host storage1
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> 4   1   osd.4   up  1
> 5   1   osd.5   up  1
> 6   1   osd.6   up  1
> 7   1   osd.7   up  1
> 8   1   osd.8   up  1
> 9   1   osd.9   up  1
> 10  1   osd.10  up  1
> 11  1   osd.11  up  1
> 12  1   osd.12  up  1
> 13  1   osd.13  up  1
> 14  1   osd.14  up  1
> 15  1   osd.15  up  1
>
> [root@storage1 ~]# cat /etc/ceph/ceph.conf
> [global]
> fsid = 3429fd17-4a92-4d3b-a7fa-04adedb0da82
> public network = 193.168.1.0/24
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> filestore xattr use omap = true
>
> [mon]
> debug paxos = 0/5
>
> [mon.storage1]
> host = storage1
> mon addr = 193.168.1.100:6789
>
> [osd.0]
> host = storage1
>
> [osd.1]
> host = storage1
>
> [osd.2]
> host = storage1
>
> [osd.3]
> host = storage1
>
> [osd.4]
> host = storage1
>
> [osd.5]
> host = storage1
>
> [osd.6]
> host = storage1
>
> [osd.7]
> host = storage1
>
> [osd.8]
> host = storage1
>
> [osd.9]
> host = storage1
>
> [osd.10]
> host = storage1
>
> [osd.11]
> host = storage1
>
> [osd.12]
> host = storage1
>
> [osd.13]
> host = storage1
>
> [osd.14]
> host = storage1
>
> [osd.15]
> host = storage1
>
>
>
>
>    Re: [ceph-users] why my "fs_commit_latency" is so high ? is it normal ?
>    Gregory Farnum   To: duan.xufeng
>    2014/03/17 13:08
>    Cc: "ceph-users@lists.ceph.com"
>
>
>
>
>
> That seems a little high; how do you have your system configured? That
> latency is how long it takes for the hard drive to durably write out
> something to the journal.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Sun, Mar 16, 2014 at 9:59 PM,   wrote:
> >
> > [root@storage1 ~]# ceph osd perf
> > osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
> >  0     149                    52
> >  1     201                    61
> >  2     176                    166
> >  3     240                    57
> >  4     219                    49
> >  5     167                    56
> >  6     201                    54
> >  7     175                    188
> >  8     192                    124
> >  9     367                    193
> > 10     343                    160
> > 11     183                    110
> > 12     158                    143
> > 13     267                    147
> > 14     150                    155
> > 15     159                    54
> >

[ceph-users] Reply: Re: why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread duan . xufeng
Hi Gregory,
The latency is under my ceph test environment. It has only 1 host
with 16 2TB SATA disks (16 OSDs) and a 10Gb/s NIC;
journal and data are on the same disk, whose fs type is ext4.
My cluster config is like this. Is this normal under this
configuration, or how can I improve performance?

Thanks.

[root@storage1 ~]# ceph -s
cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
 health HEALTH_OK
 monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 
1, quorum 0 storage1
 osdmap e373: 16 osds: 16 up, 16 in
  pgmap v3515: 1024 pgs, 1 pools, 15635 MB data, 3946 objects
87220 MB used, 27764 GB / 29340 GB avail
1024 active+clean

[root@storage1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1  16  root default
-2  16  host storage1
0   1   osd.0   up  1
1   1   osd.1   up  1
2   1   osd.2   up  1
3   1   osd.3   up  1
4   1   osd.4   up  1
5   1   osd.5   up  1
6   1   osd.6   up  1
7   1   osd.7   up  1
8   1   osd.8   up  1
9   1   osd.9   up  1
10  1   osd.10  up  1
11  1   osd.11  up  1
12  1   osd.12  up  1
13  1   osd.13  up  1
14  1   osd.14  up  1
15  1   osd.15  up  1

[root@storage1 ~]# cat /etc/ceph/ceph.conf 
[global]
fsid = 3429fd17-4a92-4d3b-a7fa-04adedb0da82
public network = 193.168.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true

[mon]
debug paxos = 0/5

[mon.storage1]
host = storage1
mon addr = 193.168.1.100:6789

[osd.0]
host = storage1

[osd.1]
host = storage1

[osd.2]
host = storage1

[osd.3]
host = storage1

[osd.4]
host = storage1

[osd.5]
host = storage1

[osd.6]
host = storage1

[osd.7]
host = storage1

[osd.8]
host = storage1

[osd.9]
host = storage1

[osd.10]
host = storage1

[osd.11]
host = storage1

[osd.12]
host = storage1

[osd.13]
host = storage1

[osd.14]
host = storage1

[osd.15]
host = storage1
 
 





Re: [ceph-users] why my "fs_commit_latency" is so high ? is it normal ?

Gregory Farnum
To: duan.xufeng
2014/03/17 13:08
Cc: "ceph-users@lists.ceph.com"






That seems a little high; how do you have your system configured? That
latency is how long it takes for the hard drive to durably write out
something to the journal.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Mar 16, 2014 at 9:59 PM,   wrote:
>
> [root@storage1 ~]# ceph osd perf
> osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
>  0     149                    52
>  1     201                    61
>  2     176                    166
>  3     240                    57
>  4     219                    49
>  5     167                    56
>  6     201                    54
>  7     175                    188
>  8     192                    124
>  9     367                    193
> 10     343                    160
> 11     183                    110
> 12     158                    143
> 13     267                    147
> 14     150                    155
> 15     159                    54
>
>
>
>




Re: [ceph-users] why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread Gregory Farnum
That seems a little high; how do you have your system configured? That
latency is how long it takes for the hard drive to durably write out
something to the journal.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Mar 16, 2014 at 9:59 PM,   wrote:
>
> [root@storage1 ~]# ceph osd perf
> osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
>  0     149                    52
>  1     201                    61
>  2     176                    166
>  3     240                    57
>  4     219                    49
>  5     167                    56
>  6     201                    54
>  7     175                    188
>  8     192                    124
>  9     367                    193
> 10     343                    160
> 11     183                    110
> 12     158                    143
> 13     267                    147
> 14     150                    155
> 15     159                    54
>
>
>
>


[ceph-users] why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread duan . xufeng
[root@storage1 ~]# ceph osd perf
osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
 0     149                    52
 1     201                    61
 2     176                    166
 3     240                    57
 4     219                    49
 5     167                    56
 6     201                    54
 7     175                    188
 8     192                    124
 9     367                    193
10     343                    160
11     183                    110
12     158                    143
13     267                    147
14     150                    155
15     159                    54



[ceph-users] How to stop recovery process when one osd down

2014-03-16 Thread Ta Ba Tuan

Hi everyone,

I am using replica 2 and 3 for my data pools.
I want to stop the recovery process when a data node goes down. I
think I can either keep the downed OSDs from being marked out, or set
the CRUSH weight of the down OSDs to 0, right?


ceph-data-node-01:
70  1   osd.70  down0
71  1   osd.71  down0


Thank you!
--
TA BA TUAN
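
For reference, the usual way to keep recovery from starting while a node is down is the cluster-wide flags rather than editing the CRUSH map; a sketch (remember to unset the flags once the node is back):

# keep down OSDs from being marked out, so no re-replication is triggered
ceph osd set noout

# optionally also pause recovery/backfill traffic entirely
ceph osd set norecover
ceph osd set nobackfill

# once the node is healthy again
ceph osd unset nobackfill
ceph osd unset norecover
ceph osd unset noout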



Re: [ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Great, thanks!  I'll watch (hope) for an update later this week.  Appreciate 
the rapid response.

-Joe

From: Ian Colle [mailto:ian.co...@inktank.com]
Sent: Sunday, March 16, 2014 7:22 PM
To: Gruher, Joseph R; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] erasure coding testing

Joe,

We're pushing to get 0.78 out this week, which will allow you to play with EC.

Ian R. Colle
Director of Engineering
Inktank
Delivering the Future of Storage
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
Cell: +1.303.601.7713
Email: i...@inktank.com

On 3/16/14, 8:11 PM, "Gruher, Joseph R" <joseph.r.gru...@intel.com> wrote:

Hey all-

Can anyone tell me, if I install the latest development release (looks like it 
is 0.77) can I enable and test erasure coding?  Or do I have to wait for the 
actual Firefly release?  I don't want to deploy anything for production, 
basically I just want to do some lab testing to see what kind of CPU loading 
results from erasure coding.  Also, if anyone has any data along those lines 
already, would love a pointer to it.  Thanks!

-Joe


Re: [ceph-users] erasure coding testing

2014-03-16 Thread Ian Colle
Joe,

We're pushing to get 0.78 out this week, which will allow you to play with
EC.

Ian R. Colle
Director of Engineering
Inktank
Delivering the Future of Storage
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
Cell: +1.303.601.7713
Email: i...@inktank.com

On 3/16/14, 8:11 PM, "Gruher, Joseph R"  wrote:

> Hey all-
>  
> Can anyone tell me, if I install the latest development release (looks like it
> is 0.77) can I enable and test erasure coding?  Or do I have to wait for the
> actual Firefly release?  I don't want to deploy anything for production,
> basically I just want to do some lab testing to see what kind of CPU loading
> results from erasure coding.  Also, if anyone has any data along those lines
> already, would love a pointer to it.  Thanks!
>  
> -Joe




[ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Hey all-

Can anyone tell me, if I install the latest development release (looks like it 
is 0.77) can I enable and test erasure coding?  Or do I have to wait for the 
actual Firefly release?  I don't want to deploy anything for production, 
basically I just want to do some lab testing to see what kind of CPU loading 
results from erasure coding.  Also, if anyone has any data along those lines 
already, would love a pointer to it.  Thanks!

-Joe
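
For reference, once a release with EC support is running, a minimal test setup looks roughly like the following (a sketch based on the Firefly-era syntax; the profile values and pool names are only examples):

# define an erasure-code profile: 4 data chunks + 2 coding chunks
ceph osd erasure-code-profile set ecprofile k=4 m=2

# create an erasure-coded pool using that profile
ceph osd pool create ecpool 128 128 erasure ecprofile

# generate load and watch per-OSD CPU (top, perf) while it runs
rados bench -p ecpool 60 write -t 32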


[ceph-users] ceph block device IO seems slow? Did i got something wrong?

2014-03-16 Thread duan . xufeng
Hi,
I attached one 500G block device to the VM and tested it in the VM with
"dd if=/dev/zero of=myfile bs=1M count=1024".
I got an average IO speed of about 31 MB/s. I thought I should have gotten
around 100 MB/s,
since my VM hypervisor has a 1G NIC and the OSD host has a 10G NIC.
Did I get a wrong result? How can I make it faster?

Your sincerely,

Michael.
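
One note on the measurement itself: dd without O_DIRECT also measures the guest page cache, and a 1 Gb/s hypervisor NIC tops out around 110-120 MB/s before any Ceph overhead. A sketch of tests that separate the layers (device and pool names are only examples):

# inside the VM: bypass the guest page cache
dd if=/dev/zero of=myfile bs=1M count=1024 oflag=direct

# from the hypervisor or an OSD host: raw cluster write throughput, no qemu involved
rados bench -p rbd 30 write -t 16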




[root@storage1 ~]# ceph -w
2014-03-16 17:24:44.596903 mon.0 [INF] pgmap v2245: 1127 pgs: 1127 
active+clean; 8758 MB data, 100 GB used, 27749 GB / 29340 GB avail; 5059 
kB/s wr, 1 op/s
2014-03-16 17:24:45.742589 mon.0 [INF] pgmap v2246: 1127 pgs: 1127 
active+clean; 8826 MB data, 100 GB used, 27749 GB / 29340 GB avail; 21390 
kB/s wr, 7 op/s
2014-03-16 17:24:46.864936 mon.0 [INF] pgmap v2247: 1127 pgs: 1127 
active+clean; 8838 MB data, 100 GB used, 27749 GB / 29340 GB avail; 36789 
kB/s wr, 13 op/s
2014-03-16 17:24:49.578711 mon.0 [INF] pgmap v2248: 1127 pgs: 1127 
active+clean; 8869 MB data, 100 GB used, 27749 GB / 29340 GB avail; 11404 
kB/s wr, 3 op/s
2014-03-16 17:24:50.824619 mon.0 [INF] pgmap v2249: 1127 pgs: 1127 
active+clean; 8928 MB data, 100 GB used, 27749 GB / 29340 GB avail; 22972 
kB/s wr, 7 op/s
2014-03-16 17:24:51.980126 mon.0 [INF] pgmap v2250: 1127 pgs: 1127 
active+clean; 8933 MB data, 100 GB used, 27749 GB / 29340 GB avail; 28408 
kB/s wr, 10 op/s
2014-03-16 17:24:54.603830 mon.0 [INF] pgmap v2251: 1127 pgs: 1127 
active+clean; 8954 MB data, 100 GB used, 27749 GB / 29340 GB avail; 7090 
kB/s wr, 2 op/s
2014-03-16 17:24:55.671644 mon.0 [INF] pgmap v2252: 1127 pgs: 1127 
active+clean; 9034 MB data, 100 GB used, 27749 GB / 29340 GB avail; 27465 
kB/s wr, 9 op/s
2014-03-16 17:24:57.057567 mon.0 [INF] pgmap v2253: 1127 pgs: 1127 
active+clean; 9041 MB data, 100 GB used, 27749 GB / 29340 GB avail; 39638 
kB/s wr, 13 op/s
2014-03-16 17:24:59.603449 mon.0 [INF] pgmap v2254: 1127 pgs: 1127 
active+clean; 9057 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6019 
kB/s wr, 2 op/s
2014-03-16 17:25:00.671065 mon.0 [INF] pgmap v2255: 1127 pgs: 1127 
active+clean; 9138 MB data, 100 GB used, 27749 GB / 29340 GB avail; 25646 
kB/s wr, 9 op/s
2014-03-16 17:25:01.860269 mon.0 [INF] pgmap v2256: 1127 pgs: 1127 
active+clean; 9146 MB data, 100 GB used, 27749 GB / 29340 GB avail; 40427 
kB/s wr, 14 op/s
2014-03-16 17:25:04.561468 mon.0 [INF] pgmap v2257: 1127 pgs: 1127 
active+clean; 9162 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6298 
kB/s wr, 2 op/s
2014-03-16 17:25:05.662565 mon.0 [INF] pgmap v2258: 1127 pgs: 1127 
active+clean; 9274 MB data, 101 GB used, 27748 GB / 29340 GB avail; 34520 
kB/s wr, 12 op/s
2014-03-16 17:25:06.851644 mon.0 [INF] pgmap v2259: 1127 pgs: 1127 
active+clean; 9286 MB data, 101 GB used, 27748 GB / 29340 GB avail; 56598 
kB/s wr, 19 op/s
2014-03-16 17:25:09.597428 mon.0 [INF] pgmap v2260: 1127 pgs: 1127 
active+clean; 9322 MB data, 101 GB used, 27748 GB / 29340 GB avail; 12426 
kB/s wr, 5 op/s
2014-03-16 17:25:10.765610 mon.0 [INF] pgmap v2261: 1127 pgs: 1127 
active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 27569 
kB/s wr, 13 op/s
2014-03-16 17:25:11.943055 mon.0 [INF] pgmap v2262: 1127 pgs: 1127 
active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 31581 
kB/s wr, 16 op/s


[root@storage1 ~]# ceph -s
cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
 health HEALTH_OK
 monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 
1, quorum 0 storage1
 osdmap e245: 16 osds: 16 up, 16 in
  pgmap v2273: 1127 pgs, 4 pools, 9393 MB data, 3607 objects
101 GB used, 27748 GB / 29340 GB avail
1127 active+clean

[root@storage1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1  16  root default
-2  16  host storage1
0   1   osd.0   up  1
1   1   osd.1   up  1
2   1   osd.2   up  1
3   1   osd.3   up  1
4   1   osd.4   up  1
5   1   osd.5   up  1
6   1   osd.6   up  1
7   1   osd.7   up  1
8   1   osd.8   up  1
9   1   osd.9   up  1
10  1   osd.10  up  1
11  1   osd.11  up  1
12  1   osd.12  up  1
13  1   osd.13  up  1
14  1   osd.14  up  1
15  1   osd.15  up  1

