[ceph-users] hardware requirements for metadata server

2019-04-30 Thread Manuel Sopena Ballesteros
Dear ceph users,

I would like to ask, does the metadata server needs much block devices for 
storage? Or does it only needs RAM? How could I calculate the amount of disks 
and/or memory needed?
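
For context, an MDS keeps its working set of inodes/dentries in RAM and stores 
all metadata in the CephFS metadata pool on the OSDs, so the MDS host itself 
needs very little local disk; memory is the main sizing factor. A rough sketch 
of how the cache ceiling is usually set and checked (the 16 GB value is only an 
illustrative assumption; mds_cache_memory_limit goes in ceph.conf under [mds] on 
Luminous, or into the config database on Mimic and later):

ceph config set mds mds_cache_memory_limit 17179869184   # ~16 GB, in bytes
ceph daemon mds.<name> cache status                      # memory the cache is actually using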

Thank you very much


Manuel Sopena Ballesteros

Big Data Engineer | Kinghorn Centre for Clinical Genomics


a: 384 Victoria Street, Darlinghurst NSW 2010
p: +61 2 9355 5760  |  +61 4 12 123 123
e: manuel...@garvan.org.au




[ceph-users] active directory integration with cephfs

2018-07-25 Thread Manuel Sopena Ballesteros
Dear Ceph community,

I am quite new to Ceph but trying to learn as quickly as I can. We are 
deploying our first Ceph production cluster in the next few weeks; we chose 
Luminous and our goal is to run CephFS. One of the questions I have been asked 
by other members of our team is whether it is possible to integrate Ceph 
authentication/authorization with Active Directory. I have seen in the 
documentation that the object gateway can do this, but I am not sure about CephFS.

Does anyone have any idea whether I can integrate CephFS with AD?
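
CephFS itself only authenticates clients with cephx, so AD integration is 
usually done one layer up, for example by exporting CephFS through a Samba 
gateway joined to the domain. A rough smb.conf sketch under that assumption 
(the EXAMPLE realm/workgroup and the 'samba' cephx user are placeholders):

[global]
    security = ads
    realm = EXAMPLE.COM
    workgroup = EXAMPLE

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no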

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au



[ceph-users] fuse vs kernel client

2018-07-09 Thread Manuel Sopena Ballesteros
Dear ceph community,

I just installed Ceph Luminous on a small NVMe cluster for testing, and I 
tested two clients:

Client 1:
VM running centos 7
Ceph client: kernel
# cpus: 4
RAM: 16GB

Fio test

# sudo fio --name=xx --filename=/mnt/mycephfs/test.file3 --filesize=100G 
--iodepth=1 --rw=write --bs=4M --numjobs=2 --group_reporting
xx: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 
4096KiB-4096KiB, ioengine=psync, iodepth=1
...
fio-3.1
Starting 2 processes
xx: Laying out IO file (1 file / 102400MiB)
Jobs: 1 (f=1): [_(1),W(1)][100.0%][r=0KiB/s,w=2325MiB/s][r=0,w=581 IOPS][eta 
00m:00s]
xx: (groupid=0, jobs=2): err= 0: pid=24290: Mon Jul  9 17:54:57 2018
  write: IOPS=550, BW=2203MiB/s (2310MB/s)(200GiB/92946msec)
clat (usec): min=946, max=464990, avg=3519.59, stdev=7031.97
 lat (usec): min=1010, max=465091, avg=3612.53, stdev=7035.85
clat percentiles (usec):
 |  1.00th=[  1188],  5.00th=[  1631], 10.00th=[  2245], 20.00th=[  2409],
 | 30.00th=[  2540], 40.00th=[  2671], 50.00th=[  2802], 60.00th=[  2966],
 | 70.00th=[  3195], 80.00th=[  3654], 90.00th=[  5080], 95.00th=[  6521],
 | 99.00th=[ 11469], 99.50th=[ 16450], 99.90th=[100140], 99.95th=[149947],
 | 99.99th=[291505]
   bw (  MiB/s): min=  224, max= 2064, per=50.01%, avg=1101.97, stdev=205.16, 
samples=369
   iops: min=   56, max=  516, avg=275.27, stdev=51.29, samples=369
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=7.89%, 4=75.24%, 10=15.42%, 20=1.09%, 50=0.22%
  lat (msec)   : 100=0.04%, 250=0.08%, 500=0.02%
  cpu  : usr=2.31%, sys=76.39%, ctx=15743, majf=1, minf=55
  IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued rwt: total=0,51200,0, short=0,0,0, dropped=0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=2203MiB/s (2310MB/s), 2203MiB/s-2203MiB/s (2310MB/s-2310MB/s), 
io=200GiB (215GB), run=92946-92946msec

Client 2:
Physical machine running Ubuntu xenial
Ceph client: FUSE
# cpus: 56
RAM: 512

Fio test

$ sudo fio --name=xx --filename=/mnt/cephfs/test.file2 --filesize=5G 
--iodepth=1 --rw=write --bs=4M --numjobs=1 --group_reporting
xx: (g=0): rw=write, bs=4M-4M/4M-4M/4M-4M, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
xx: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [W(1)] [91.7% done] [0KB/580.0MB/0KB /s] [0/145/0 iops] [eta 
00m:01s]
xx: (groupid=0, jobs=1): err= 0: pid=6065: Mon Jul  9 17:44:02 2018
  write: io=5120.0MB, bw=497144KB/s, iops=121, runt= 10546msec
clat (msec): min=3, max=159, avg= 7.94, stdev= 4.81
 lat (msec): min=3, max=159, avg= 8.08, stdev= 4.82
clat percentiles (msec):
 |  1.00th=[4],  5.00th=[5], 10.00th=[6], 20.00th=[7],
 | 30.00th=[7], 40.00th=[8], 50.00th=[8], 60.00th=[9],
 | 70.00th=[9], 80.00th=[   10], 90.00th=[   11], 95.00th=[   11],
 | 99.00th=[   12], 99.50th=[   13], 99.90th=[   61], 99.95th=[  159],
 | 99.99th=[  159]
bw (KB  /s): min=185448, max=726183, per=97.08%, avg=482611.80, 
stdev=118874.09
lat (msec) : 4=1.64%, 10=88.20%, 20=10.00%, 100=0.08%, 250=0.08%
  cpu  : usr=1.63%, sys=34.44%, ctx=42266, majf=0, minf=1586
  IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued: total=r=0/w=1280/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=497143KB/s, minb=497143KB/s, maxb=497143KB/s, 
mint=10546msec, maxt=10546msec

NOTE: I ran an iperf test from Client 2 to the Ceph nodes and the bandwidth is 
~25 Gb/s

QUESTION:
According to the documentation, FUSE is expected to be slower. I found client 2, 
using FUSE, to be much slower than client 1. Could someone advise whether a gap 
this large is expected?
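
For reference, note that the two fio runs also differ in filesize, numjobs and 
fio version, which may account for part of the gap. A minimal sketch of how the 
two clients are typically mounted, in case the mount options matter for the 
comparison (the monitor address mon1 and the secret file path are placeholder 
assumptions):

# kernel client
sudo mount -t ceph mon1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client
sudo ceph-fuse -m mon1:6789 /mnt/cephfs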

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au


[ceph-users] troubleshooting ceph performance

2018-01-30 Thread Manuel Sopena Ballesteros
194304
Object size:4194304
Bandwidth (MB/sec): 2685.39
Stddev Bandwidth:   70.0286
Max bandwidth (MB/sec): 2752
Min bandwidth (MB/sec): 2512
Average IOPS:   671
Stddev IOPS:17
Max IOPS:   688
Min IOPS:   628
Average Latency(s): 0.023819
Stddev Latency(s):  0.00463709
Max latency(s): 0.0594516
Min latency(s): 0.0138556
[root@zeus-59 ceph-block-device]# rados bench -p scbench 10 seq
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
0   0 0 0 0 0   -   0
1  15  1150  1135   4498.75  4540   0.0146433   0.0131456
2  15  2313  2298   4571.38  4652   0.0144489   0.0131564
3  15  3468  3453   4585.68  4620  0.00975626   0.0131211
4  15  4663  4648   4633.41  4780   0.0163181   0.0130076
5  15  5949  5934   4734.49  5144  0.00944718   0.0127327
Total time run:   5.643929
Total reads made: 6731
Read size:4194304
Object size:  4194304
Bandwidth (MB/sec):   4770.43
Average IOPS: 1192
Stddev IOPS:  59
Max IOPS: 1286
Min IOPS: 1135
Average Latency(s):   0.0126349
Max latency(s):   0.0490061
Min latency(s):   0.00613382
[root@zeus-59 ceph-block-device]# rados bench -p scbench 10 rand
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
0   0 0 0 0 0   -   0
1  15  1197  1182    4726.8  4728   0.0130331    0.012711
2  15  2364  2349   4697.02  4668   0.0105971   0.0128123
3  15  3686  3671   4893.78  5288  0.00906867   0.0123103
4  15  4994  4979   4978.16  5232  0.00946901    0.012104
5  15  6302  6287   5028.83  5232   0.0115159   0.0119879
6  15  7620  7605   5069.28  5272  0.00986636   0.0118935
7  15  8912  8897   5083.31  5168   0.0106201   0.0118648
8  15 10185 10170   5084.34  5092   0.0116891   0.0118632
9  15 11484 11469   5096.68  5196  0.00911787   0.0118354
   10  16 12748 12732   5092.16  5052   0.0111988   0.0118476
Total time run:   10.020135
Total reads made: 12748
Read size:4194304
Object size:  4194304
Bandwidth (MB/sec):   5088.95
Average IOPS: 1272
Stddev IOPS:  55
Max IOPS: 1322
Min IOPS: 1167
Average Latency(s):   0.0118531
Max latency(s):   0.0441046
Min latency(s):   0.00590162
[root@zeus-59 ceph-block-device]# rbd bench-write image01 --pool=rbdbench
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC   OPS   OPS/SEC   BYTES/SEC
1 56159  56180.51  230115361.66
2119975  59998.28  245752967.01
3182956  60990.78  249818235.33
4244195  61054.17  250077889.88
elapsed: 4  ops:   262144  ops/sec: 60006.56  bytes/sec: 245786880.86
[root@zeus-59 ceph-block-device]#


I am far from a ceph/storage expert but my feeling is that the numbers provided 
by rbd bench-write are quite poor considering the hardware I am using (please 
correct me if I am wrong).

I would like to ask for some help from the community in order to dig into this 
issue and find out what is throttling performance (CPU? memory? network 
configuration? not enough data nodes? not enough OSDs per disk? CPU pinning? 
etc.).
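
As a starting point (not a definitive recipe), a few low-risk checks that 
usually narrow down whether the bottleneck is OSD latency, a saturated host or 
the client side:

ceph -s                 # overall health and current client IO
ceph osd perf           # per-OSD commit/apply latency
ceph osd df tree        # data distribution and weights across hosts/OSDs
iostat -x 1             # per-device utilisation on an OSD node while the benchmark runs
top                     # CPU saturation on the OSD node or the client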

Apologies beforehand, as I know this is quite a broad topic and not easy to 
give an exact answer to, but I would like some guidance, and I hope we can turn 
this into an interesting performance-troubleshooting thread for other people 
who are learning distributed storage and Ceph.

Thank you very much

Manuel Sopena Ballesteros | Systems engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au



Re: [ceph-users] rbd: list: (1) Operation not permitted

2017-11-20 Thread Manuel Sopena Ballesteros
OK, I got it working. I got help on the IRC channel; my problem was a typo in 
the command managing the caps for the user.
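
For anyone hitting the same error: the caps quoted below show "rdb_children" 
where "rbd_children" was presumably intended. A hedged sketch of re-applying 
the caps with the corrected object prefix (pool name as in the original post):

ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'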

Manuel

From: Manuel Sopena Ballesteros
Sent: Tuesday, November 21, 2017 2:56 PM
To: ceph-users@lists.ceph.com
Subject: rbd: list: (1) Operation not permitted

Hi all,

I just built a small Ceph cluster for OpenStack, but I am getting a permission 
problem:

[root@zeus-59 ~]# ceph auth list
installed auth entries:
...
client.cinder
key: AQCvaBNawgsXAxAA18S90LWPLIiZ4tCY0Boa/w==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rdb_children, allow rwx 
pool=volumes,
...
client.glance
key: AQCiaBNaTDOCJxAArUEI6cuqLmiF2TqictGAEA==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rdb_children, allow rwx 
pool=images
...

[root@zeus-59 ~]# cat /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQCvaBNawgsXAxAA18S90LWPLIiZ4tCY0Boa/w==

[root@zeus-59 ~]# rbd -p volumes --user cinder ls
rbd: list: (1) Operation not permitted


[root@zeus-59 ~]# rbd -p images --user glance ls
15b87aeb-6482-403d-825b-e7c7bc007679
e972681b-3028-4b44-84c7-3752a93d5518
fc6dd1dc-fe11-4bdd-96f4-28276ecb75c0

I also tried deleting and recreating user and pool but that didn't fix the 
issue.

Ceph looks OK because user glance can list the images pool, but I am not sure 
why user cinder doesn't have permission, as they both have the same caps on 
their respective pools.

Any advice?

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au



[ceph-users] rbd: list: (1) Operation not permitted

2017-11-20 Thread Manuel Sopena Ballesteros
Hi all,

I just built a small Ceph cluster for OpenStack, but I am getting a permission 
problem:

[root@zeus-59 ~]# ceph auth list
installed auth entries:
...
client.cinder
key: AQCvaBNawgsXAxAA18S90LWPLIiZ4tCY0Boa/w==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rdb_children, allow rwx 
pool=volumes,
...
client.glance
key: AQCiaBNaTDOCJxAArUEI6cuqLmiF2TqictGAEA==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rdb_children, allow rwx 
pool=images
...

[root@zeus-59 ~]# cat /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQCvaBNawgsXAxAA18S90LWPLIiZ4tCY0Boa/w==

[root@zeus-59 ~]# rbd -p volumes --user cinder ls
rbd: list: (1) Operation not permitted


[root@zeus-59 ~]# rbd -p images --user glance ls
15b87aeb-6482-403d-825b-e7c7bc007679
e972681b-3028-4b44-84c7-3752a93d5518
fc6dd1dc-fe11-4bdd-96f4-28276ecb75c0

I also tried deleting and recreating user and pool but that didn't fix the 
issue.

Ceph looks OK because user glance can list the images pool, but I am not sure 
why user cinder doesn't have permission, as they both have the same caps on 
their respective pools.

Any advice?
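
In case it helps anyone comparing the two, a quick way to print each user's 
caps side by side and spot a difference (user names as in the listing above):

ceph auth get client.cinder
ceph auth get client.glance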

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au



[ceph-users] documentation

2017-01-01 Thread Manuel Sopena Ballesteros
Hi,

Regarding this doc page --> 
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/

I think the following text needs to be changed from:

rados put {object-name} {file-path} --pool=data

to:

rados put {object-name} {file-path} --pool={poolname}

Thank you
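
For what it's worth, a concrete example of the corrected form (the pool name 
mypool and the local file are placeholders):

rados put test-object ./test.txt --pool=mypool
rados -p mypool ls    # confirm the object landed in the pool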

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au



[ceph-users] linux kernel version for clients

2016-12-30 Thread Manuel Sopena Ballesteros
Hi,

I have several questions regarding the kernel running on client machines:


* Why is kernel 3.10 considered an old kernel for running Ceph clients?

* Which features are missing?

* What would be the impact of running clients on CentOS 7.3 (kernel 3.10) 
compared to using a higher/recommended version? (See the sketch after this list.)
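
As a sketch of how to see what the cluster thinks of its connected clients 
(assumes a Luminous or newer cluster and admin access; not a substitute for the 
kernel changelogs):

ceph features      # release/feature bits reported by connected clients
uname -r           # kernel version on the client in question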

Thank you very much

Manuel



[ceph-users] installation docs

2016-12-30 Thread Manuel Sopena Ballesteros
Hi,

I would just like to point out a couple of issues I had while following the 
INSTALLATION (QUICK) document.


1.   The order to clean up a Ceph deployment is:

a.   ceph-deploy purge {ceph-node} [{ceph-node}]

b.  ceph-deploy purgedata {ceph-node} [{ceph-node}]



2.   I run Ceph Jewel 10.2.5 and "ceph-deploy osd prepare" also activates the 
OSD. This confused me because I could not understand why there are three 
commands (prepare, activate, or create) to do this when only one is needed 
(right now I don't know the difference between prepare and create, as for me 
both do the same).


3.   Right after the installation, the status of the Ceph cluster is 
"HEALTH_WARN too few PGs per OSD (10 < min 30)" and not "active + clean" as the 
documentation says (see the example after this list).



4.   It would be good to add a small troubleshooting guide: how to check 
whether the monitors are working, how to restart the monitor processes, how to 
check communication between the OSD processes and the monitors, how to run 
commands on the local nodes to see in more detail what is failing, etc.



5.   I also spent a lot of time on the IRC channel trying to understand why 
ceph-deploy was failing; the problem was that the disks were already mounted. I 
could not see this by running df -h, only with lsblk, which is also something 
that in my opinion would be good to cover. Special thanks to Ivve and badone, 
who helped me find out what the issue was.



6.   Last thing: it would be good to mention that installing Ceph through 
Ansible is also an option.
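
Regarding point 3, a minimal sketch of how that warning is usually cleared (the 
pool name rbd and pg count 128 are illustrative assumptions; choose a pg_num 
appropriate for your number of OSDs):

ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
ceph -s    # should return to active+clean once peering and backfill finish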


Other than that, congratulations to the community on your effort, and keep it 
going!


Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au
