Re: [ceph-users] Impact add PG

2015-09-04 Thread Wang, Warren
Sadly, this is one of those things that people find out after running their 
first production Ceph cluster. Never run with the defaults. I know the defaults 
(osd_max_backfills and osd_recovery_max_active) were recently reduced to 3 and 1, 
or 1 and 3, I forget which, but I would advocate 1 and 1. Even that will cause a 
tremendous amount of traffic on any reasonably sized cluster.
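
For reference, the persistent form of that advice would look roughly like this 
in ceph.conf (a sketch; these are the two settings the quoted message below 
injects, so tune them to your cluster and release):

[osd]
  osd max backfills = 1
  osd recovery max active = 1

The same values can be pushed into running OSDs with ceph tell osd.* injectargs, 
as shown below.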

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jimmy 
Goffaux
Sent: Friday, September 04, 2015 8:52 AM
To: Ceph users 
Subject: [ceph-users] Impact add PG

English version  :

Hello everyone,

Recently we increased the number of PGs in a pool. We had a big performance 
problem: the whole Ceph cluster dropped to 0 IOPS even though there is 
production running on top of it.

So we did this:

ceph tell osd.* injectargs '--osd_max_backfills 1'
ceph tell osd.* injectargs '--osd_recovery_max_active 1'

This changes the priority of the recovery operations and we got a functional 
cluster back.

Hope it can help you ;)


French version (translated):

Hello everyone,

Recently we increased the number of PGs in a pool. We had a big performance 
problem because the whole Ceph cluster was at 0 IOPS even though there is 
production running on top of it.

So we did this:

ceph tell osd.* injectargs '--osd_max_backfills 1'
ceph tell osd.* injectargs '--osd_recovery_max_active 1'

This changes the priority of the operations and we got a functional cluster 
back.

I hope this can help you ;)

-- 

Jimmy Goffaux
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph performance, empty vs part full

2015-09-03 Thread Wang, Warren
I'm about to change it on a big cluster too. It totals around 30 million objects, 
so I'm a bit nervous about changing it. As far as I understood, it would indeed 
move them around if you can get underneath the threshold, but that may be hard to 
do. These are two more settings that I highly recommend changing on a big prod 
cluster; I'm in favor of bumping both of them up in the defaults.
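
For reference, the settings in question (quoted from Mark further down) and the 
rough split point they imply, going by the formula in the filestore config 
reference; treat the numbers as a sketch and verify against your release:

  filestore merge threshold = 40
  filestore split multiple = 8

  split point per PG subdirectory ~= filestore split multiple * abs(filestore merge threshold) * 16
                                   = 8 * 40 * 16 = 5120 objects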

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark 
Nelson
Sent: Thursday, September 03, 2015 6:04 PM
To: Ben Hines 
Cc: ceph-users 
Subject: Re: [ceph-users] Ceph performance, empty vs part full

Hrm, I think it will follow the merge/split rules if it's out of whack given 
the new settings, but I don't know that I've ever tested it on an existing 
cluster to see that it actually happens.  I guess let it sit for a while and 
then check the OSD PG directories to see if the object counts make sense given 
the new settings? :D

Mark

On 09/03/2015 04:31 PM, Ben Hines wrote:
> Hey Mark,
>
> I've just tweaked these filestore settings for my cluster -- after 
> changing this, is there a way to make ceph move existing objects 
> around to new filestore locations, or will this only apply to newly 
> created objects? (i would assume the latter..)
>
> thanks,
>
> -Ben
>
> On Wed, Jul 8, 2015 at 6:39 AM, Mark Nelson  wrote:
>> Basically for each PG, there's a directory tree where only a certain 
>> number of objects are allowed in a given directory before it splits 
>> into new branches/leaves.  The problem is that this has a fair amount 
>> of overhead and also there's extra associated dentry lookups to get at any 
>> given object.
>>
>> You may want to try something like:
>>
>> "filestore merge threshold = 40"
>> "filestore split multiple = 8"
>>
>> This will dramatically increase the number of objects per directory allowed.
>>
>> Another thing you may want to try is telling the kernel to greatly 
>> favor retaining dentries and inodes in cache:
>>
>> echo 1 | sudo tee /proc/sys/vm/vfs_cache_pressure
>>
>> Mark
>>
>>
>> On 07/08/2015 08:13 AM, MATHIAS, Bryn (Bryn) wrote:
>>>
>>> If I create a new pool it is generally fast for a short amount of time.
>>> Not as fast as if I had a blank cluster, but close to.
>>>
>>> Bryn

 On 8 Jul 2015, at 13:55, Gregory Farnum  wrote:

 I think you're probably running into the internal PG/collection 
 splitting here; try searching for those terms and seeing what your 
 OSD folder structures look like. You could test by creating a new 
 pool and seeing if it's faster or slower than the one you've already 
 filled up.
 -Greg

 On Wed, Jul 8, 2015 at 1:25 PM, MATHIAS, Bryn (Bryn) 
  wrote:
>
> Hi All,
>
>
> I’m perf testing a cluster again,
> This time I have re-built the cluster and am filling it for testing.
>
> on a 10 min run I get the following results from 5 load 
> generators, each writing through 7 iocontexts, with a queue depth of 50 
> async writes.
>
>
> Gen1
> Percentile 100 = 0.729775905609
> Max latencies = 0.729775905609, Min = 0.0320818424225, mean =
> 0.0750389684542
> Total objects writen = 113088 in time 604.259738207s gives 
> 187.151307376/s (748.605229503 MB/s)
>
> Gen2
> Percentile 100 = 0.735981941223
> Max latencies = 0.735981941223, Min = 0.0340068340302, mean =
> 0.0745198070711
> Total objects writen = 113822 in time 604.437897921s gives 
> 188.310495407/s (753.241981627 MB/s)
>
> Gen3
> Percentile 100 = 0.828994989395
> Max latencies = 0.828994989395, Min = 0.0349340438843, mean =
> 0.0745455575197
> Total objects writen = 113670 in time 604.352181911s gives 
> 188.085694736/s (752.342778944 MB/s)
>
> Gen4
> Percentile 100 = 1.06834602356
> Max latencies = 1.06834602356, Min = 0.0333499908447, mean =
> 0.0752239764659
> Total objects writen = 112744 in time 604.408732891s gives 
> 186.536020849/s (746.144083397 MB/s)
>
> Gen5
> Percentile 100 = 0.609658002853
> Max latencies = 0.609658002853, Min = 0.032968044281, mean =
> 0.0744482759499
> Total objects writen = 113918 in time 604.671534061s gives 
> 188.396498897/s (753.585995589 MB/s)
>
> example ceph -w output:
> 2015-07-07 15:50:16.507084 mon.0 [INF] pgmap v1077: 2880 pgs: 2880
> active+clean; 1996 GB data, 2515 GB used, 346 TB / 348 TB avail;
> 2185 MB/s wr, 572 op/s
>
>
> However when the cluster gets over 20% full I see the following 
> results, this gets worse as the cluster fills up:
>
> Gen1
> Percentile 100 = 6.71176099777
> Max latencies = 6.71176099777, Min = 0.0358741283417, mean =
> 0.161760483485
> Total objects writen = 52196 in time 604.488474131s 

Re: [ceph-users] high density machines

2015-09-03 Thread Wang, Warren
In the minority on this one. We have a number of the big SM 72 drive units w/ 
40 GbE. Definitely not as fast as even the 36 drive units, but it isn't awful 
for our average mixed workload. We can exceed all available performance with 
some workloads though.

So while we can't extract all the performance out of the box, as long as we 
don't max out on performance, the cost is very appealing. As for filling a 
unit: I'm not sure how many folks have filled big prod clusters, but you really 
don't want them running much past the 70% range anyway, due to some inevitable 
uneven filling and to leave room for failure.

Also, I'm betting that Ceph will continue to optimize things like the 
messenger, and reduce some of the massive CPU and TCP overhead, so we can claw 
back performance. I would love to see a thread count reduction. These can see 
over 130K threads per box.

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark 
Nelson
Sent: Thursday, September 03, 2015 3:58 PM
To: Gurvinder Singh ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] high density machines



On 09/03/2015 02:49 PM, Gurvinder Singh wrote:
> Thanks everybody for the feedback.
> On 09/03/2015 05:09 PM, Mark Nelson wrote:
>> My take is that you really only want to do these kinds of systems if 
>> you have massive deployments.  At least 10 of them, but probably more 
>> like
>> 20-30+.  You do get massive density with them, but I think if you are
>> considering 5 of these, you'd be better off with 10 of the 36 drive 
>> units.  An even better solution might be ~30-40 of these:
>>
>> http://www.supermicro.com/products/system/1U/6017/SYS-6017R-73THDP_.c
>> fm
>>
> This one does look interesting.
>> An extremely compelling solution would be if they took this system:
>>
>> http://www.supermicro.com/products/system/1U/5018/SSG-5018A-AR12L.cfm
>> ?parts=SHOW
>>
> This one can be a really good solution for archiving purposes with a 
> replaced CPU to get more juice into it.
>>
>> and replaced the C2750 with a Xeon-D 1540 (but keep the same number 
>> of SATA ports).
>>
>> Potentially you could have:
>>
>> - 8x 2.0GHz Xeon Broadwell-DE Cores, 45W TDP
>> - Up to 128GB RAM (32GB probably the sweet spot)
>> - 2x 10GbE
>> - 12x 3.5" spinning disks
>> - single PCIe slot for PCIe SSD/NVMe
> I am wondering whether a single PCIe SSD/NVMe device can support 12 OSD 
> journals and still perform the same as 4 OSDs per SSD?

Basically the limiting factor is how fast the device can do O_DSYNC writes.  
We've seen that some PCIe SSD and NVME devices can do 1-2GB/s depending on the 
capacity which is enough to reasonably support 12-24 OSDs.  Whether or not it's 
good to have a single PCIe card to be a point of failure is a worthwhile topic 
(Probably only high write endurance cards should be considered).  There are 
plenty of other things that can bring the node down too (motherboard, 
ram, cpu, etc), though.  A single node failure will also have less impact if 
there are lots of small nodes vs a couple big ones.

>>
>> The density would be higher than the 36 drive units but lower than 
>> the
>> 72 drive units (though with shorter rack depth afaik).
> You mean the 1U solution with 12 disk is longer in length than 72 disk 
> 4U version ?

Sorry, the other way around I believe.

>
> - Gurvinder
>> Probably more
>> CPU per OSD and far better distribution of OSDs across servers.  
>> Given that the 10GbE and processor are embedded on the motherboard, 
>> there's a decent chance these systems could be priced reasonably and 
>> wouldn't have excessive power/cooling requirements.
>>
>> Mark
>>
>> On 09/03/2015 09:13 AM, Jan Schermer wrote:
>>> It's not exactly a single system
>>>
>>> SSG-F618H-OSD288P*
>>> 4U-FatTwin, 4x 1U 72TB per node, Ceph-OSD-Storage Node
>>>
>>> This could actually be pretty good, it even has decent CPU power.
>>>
>>> I'm not a big fan of blades and blade-like systems - sooner or later 
>>> a backplane will die and you'll need to power off everything, which 
>>> is a huge PITA.
>>> But assuming you get 3 of these it could be pretty cool!
>>> It would be interesting to have a price comparison to a SC216 
>>> chassis or similiar, I'm afraid it won't be much cheaper.
>>>
>>> Jan
>>>
 On 03 Sep 2015, at 16:09, Kris Gillespie  wrote:

 It's funny cause in my mind, such dense servers seem like a bad 
 idea to me for exactly the reason you mention, what if it fails. 
 Losing 400+TB of storage is going to have quite some impact, 40G 
 interfaces or not and no matter what options you tweak.
 Sure it'll be cost effective per TB, but that isn't the only aspect 
 to look at (for production use anyways).

 But I'd also be curious about real world feedback.

 Cheers

 Kris

 The 09/03/2015 16:01, Gurvinder Singh wrote:
> Hi,
>
> I am wondering if anybody in the community is 

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Wang, Warren
Be selective with the SSDs you choose. I personally have tried the Micron M500DC, 
Intel S3500, and some PCIe cards that would all suffice. There are MANY that do 
not work well at all. A shockingly large list, in fact.

The Intel S3500/S3700 are the gold standards.

Warren

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Kenneth Van Alstyne
Sent: Tuesday, September 01, 2015 12:50 PM
To: Robert LeBlanc 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Performance Questions with rbd images access by 
qemu-kvm

Got it — I’ll keep that in mind. That may just be what I need to “get by” for 
now.  Ultimately, we’re looking to buy at least three nodes of servers that can 
hold 40+ OSDs backed by 2TB+ SATA disks,

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 2 / ISO 27001

Notice: This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) and may contain confidential and privileged 
information. Any unauthorized review, copy, use, disclosure, or distribution is 
STRICTLY prohibited. If you are not the intended recipient, please contact the 
sender by reply e-mail and destroy all copies of the original message.

On Sep 1, 2015, at 11:26 AM, Robert LeBlanc wrote:


-BEGIN PGP SIGNED MESSAGE-

Hash: SHA256



Just swapping out spindles for SSD will not give you orders of magnitude 
performance gains as it does in regular cases. This is because Ceph has a lot 
of overhead for each I/O which limits the performance of the SSDs. In my 
testing, two Intel S3500 SSDs with an 8 core Atom (Intel(R) Atom(TM) CPU  C2750 
 @ 2.40GHz) and size=1 and fio with 8 jobs and QD=8 sync,direct 4K read/writes 
produced 2,600 IOPs. Don't get me wrong, it will help, but don't expect 
spectacular results.



- 

Robert LeBlanc

PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1



On Tue, Sep 1, 2015 at 8:01 AM, Kenneth Van Alstyne  wrote:

Thanks for the awesome advice folks.  Until I can go larger scale (50+ SATA 
disks), I’m thinking my best option here is to just swap out these 1TB SATA 
disks with 1TB SSDs.  Am I oversimplifying the short term solution?



Thanks,



- --

Kenneth Van Alstyne

Systems Architect

Knight Point Systems, LLC

Service-Disabled Veteran-Owned Business

1775 Wiehle Avenue Suite 101 | Reston, VA 20190

c: 228-547-8045 f: 571-266-3106

www.knightpoint.com

DHS EAGLE II Prime Contractor: FC1 SDVOSB Track

GSA Schedule 70 SDVOSB: GS-35F-0646S

GSA MOBIS Schedule: GS-10F-0404Y

ISO 2 / ISO 27001



Notice: This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) and may contain confidential and privileged 
information. Any unauthorized review, copy, use, disclosure, or distribution is 
STRICTLY prohibited. If you are not the intended recipient, please contact the 
sender by reply e-mail and destroy all copies of the original message.



On Aug 31, 2015, at 7:29 PM, Christian Balzer  wrote:

Hello,

On Mon, 31 Aug 2015 12:28:15 -0500 Kenneth Van Alstyne wrote:

In addition to the spot on comments by Warren and Quentin, verify this by
watching your nodes with atop, iostat, etc.
The culprit (HDDs) should be plainly visible.

More inline:

> Christian, et al:
>
> Sorry for the lack of information.  I wasn't sure what of our hardware
> specifications or Ceph configuration was useful information at this
> point.  Thanks for the feedback — any feedback, is appreciated at this
> point, as I've been beating my head against a wall trying to figure out
> what's going on.  (If anything.  Maybe the spindle count is indeed our
> upper limit or our SSDs really suck? :-) )

Your SSDs aren't the problem.

> To directly address your questions, see answers below:
>   - CBT is the Ceph Benchmarking Tool.  Since my question was more
> generic rather than with CBT itself, it was probably more useful to post
> in the ceph-users list rather than cbt.
>   - 8 Cores are from 2x quad core Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz

Not your problem either.

>   - The SSDs are indeed Intel S3500s.  I agree — not ideal, but
> supposedly capable of up to 75,000 random 4KB reads/writes.  Throughput
> and longevity is quite low for an SSD, rated at about 400MB/s reads and
> 100MB/s writes, though.  When we added these as journals in front of the
> SATA spindles, both VM performance and rados benchmark numbers were
> relatively unchanged.

The only thing relevant in regards to journal SSDs is the sequential write
speed (SYNC), they don't seek and normally don't get read 

Re: [ceph-users] Moving/Sharding RGW Bucket Index

2015-09-01 Thread Wang, Warren
I added sharding to our busiest RGW sites, but it will not shard existing 
bucket indexes; it only applies to new buckets. Even with that change, I'm still 
considering moving the index pool to SSD. The main factor being the rate of 
writes. We are looking at a project that will have extremely high writes/sec 
through the RGWs. 
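
For new buckets that sharding is just a radosgw config knob; a minimal sketch, 
with the section name and shard count as assumptions to adjust for your gateways:

[client.radosgw.gateway]
  rgw override bucket index max shards = 16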

The other thing worth noting is that at that scale, you also need to change 
filestore merge threshold and filestore split multiple to something 
considerably larger. Props to Michael Kidd @ RH for that tip. There's a 
mathematical formula on the filestore config reference.

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Daniel 
Maraio
Sent: Tuesday, September 01, 2015 10:40 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Moving/Sharding RGW Bucket Index

Hello,

   I have two large buckets in my RGW and I think the performance is being 
impacted by the bucket index. One bucket contains 9 million objects and the 
other one has 22 million. I'd like to shard the bucket index and also change 
the ruleset of the .rgw.buckets.index pool to put it on our SSD root. I could 
not find any documentation on this issue. It looks like the bucket indexes can 
be rebuilt using the radosgw-admin bucket check command but I'm not sure how to 
proceed. We can stop writes or take the cluster down completely if necessary. 
My initial thought was to backup the existing index pool and create a new one. 
I'm not sure if I can change the index_pool of an existing bucket. If that is 
possible I assume I can change that to my new pool and execute a radosgw-admin 
bucket check command to rebuild/shard the index.

   Does anyone have experience in getting sharding running with an existing 
bucket, or even moving the index pool to a different ruleset? 
When I change the crush ruleset for the .rgw.buckets.index pool to my SSD root 
we run into issues, buckets cannot be created or listed, writes cease to work, 
reads seem to work fine though. Thanks for your time!

- Daniel
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Wang, Warren
Hey Kenneth, it looks like you're just down the toll road from me. I'm in
Reston Town Center.

Just as a really rough estimate, I'd say this is your max IOPS:
80 IOPS/spinner * 6 drives / 3 replicas = 160ish max sustained IOPS

It's more complicated than that, since you have a reasonable solid state
journal, lots of memory, etc, but that's a guess, since the backend will
eventually need to keep up. That being said, almost every time I have seen
blocked requests, there is some other underlying issue. I would say start
with implementation checks:

- checking connectivity between OSDs, with and without LACP (overkill for
your purposes)
- ensuring that the OSDs target drives are actually mounted instead of
scribbling to the root drive
- ensuring that the journal is properly implemented
- all OSDs on the same version
- Any OSDs crashing?
- packet fragmentation? We have to stick with 1500 MTU to prevent frags.
Don't assume you can run jumbo
- You're not running much traffic, so a short capture on both sides and
wireshark should reveal any obvious issues

Is there anything in the ceph.log from a mon host? Grep for WRN. Also look
at the individual OSD log. This seems more like an implementation issue.
Happy to help out a local if you need more.
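
A few of the checks above as concrete commands, as a sketch (paths, OSD ids and 
the peer address are assumptions):

grep WRN /var/log/ceph/ceph.log        # slow/blocked request warnings on a mon host
ceph osd tree                          # every OSD up and in
ceph tell osd.* version                # all OSDs on the same version
df -h /var/lib/ceph/osd/ceph-*         # data dirs are real mounts, not the root drive
ping -M do -s 1472 10.0.0.2            # largest unfragmented payload at 1500 MTU (8972 for jumbo)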

-- 
Warren Wang
Comcast Cloud (OpenStack)



On 8/31/15, 1:28 PM, "ceph-users on behalf of Kenneth Van Alstyne"
 wrote:

>Christian, et al:
>
>Sorry for the lack of information.  I wasn't sure what of our hardware
>specifications or Ceph configuration was useful information at this
>point.  Thanks for the feedback — any feedback, is appreciated at this
>point, as I've been beating my head against a wall trying to figure out
>what's going on.  (If anything.  Maybe the spindle count is indeed our
>upper limit or our SSDs really suck? :-) )
>
>To directly address your questions, see answers below:
>   - CBT is the Ceph Benchmarking Tool.  Since my question was more generic
>rather than with CBT itself, it was probably more useful to post in the
>ceph-users list rather than cbt.
>   - 8 Cores are from 2x quad core Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
>   - The SSDs are indeed Intel S3500s.  I agree — not ideal, but supposedly
>capable of up to 75,000 random 4KB reads/writes.  Throughput and
>longevity is quite low for an SSD, rated at about 400MB/s reads and
>100MB/s writes, though.  When we added these as journals in front of the
>SATA spindles, both VM performance and rados benchmark numbers were
>relatively unchanged.
>
>   - Regarding throughput vs iops, indeed — the throughput that I'm seeing
>is nearly worst case scenario, with all I/O being 4KB block size.  With
>RBD cache enabled and the writeback option set in the VM configuration, I
>was hoping more coalescing would occur, increasing the I/O block size.
>
>As an aside, the orchestration layer on top of KVM is OpenNebula if
>that's of any interest.
>
>VM information:
>   - Number = 15
>   - Workload = Mixed (I know, I know — that's as vague of an answer as they
>come)  A handful of VMs are running some MySQL databases and some web
>applications in Apache Tomcat.  One is running a syslog server.
>Everything else is mostly static web page serving for a low number of
>users.
>
>I can duplicate the blocked request issue pretty consistently, just by
>running something simple like a "yum -y update" in one VM.  While that is
>running, ceph -w and ceph -s show the following:
>root@dashboard:~# ceph -s
>cluster f79d8c2a-3c14-49be-942d-83fc5f193a25
> health HEALTH_WARN
>1 requests are blocked > 32 sec
> monmap e3: 3 mons at
>{storage-1=10.0.0.1:6789/0,storage-2=10.0.0.2:6789/0,storage-3=10.0.0.3:67
>89/0}
>election epoch 136, quorum 0,1,2 storage-1,storage-2,storage-3
> osdmap e75590: 6 osds: 6 up, 6 in
>  pgmap v3495103: 224 pgs, 1 pools, 826 GB data, 225 kobjects
>2700 GB used, 2870 GB / 5571 GB avail
> 224 active+clean
>  client io 3292 B/s rd, 2623 kB/s wr, 81 op/s
>
>2015-08-31 16:39:46.490696 mon.0 [INF] pgmap v3495096: 224 pgs: 224
>active+clean; 826 GB data, 2700 GB used, 2870 GB / 5571 GB avail
>2015-08-31 16:39:47.789982 mon.0 [INF] pgmap v3495097: 224 pgs: 224
>active+clean; 826 GB data, 2700 GB used, 2870 GB / 5571 GB avail; 0 B/s
>rd, 517 kB/s wr, 130 op/s
>2015-08-31 16:39:49.239033 mon.0 [INF] pgmap v3495098: 224 pgs: 224
>active+clean; 826 GB data, 2700 GB used, 2870 GB / 5571 GB avail; 0 B/s
>rd, 474 kB/s wr, 128 op/s
>2015-08-31 16:39:51.970679 mon.0 [INF] pgmap v3495099: 224 pgs: 224
>active+clean; 826 GB data, 2700 GB used, 2870 GB / 5571 GB avail; 0 B/s
>rd, 58662 B/s wr, 22 op/s
>2015-08-31 16:39:57.267697 mon.0 [INF] pgmap v3495100: 224 pgs: 224
>active+clean; 826 GB data, 2700 GB used, 2870 GB / 5571 GB avail; 11357
>B/s wr, 5 op/s
>2015-08-31 16:39:58.700312 mon.0 [INF] pgmap v3495101: 224 pgs: 224
>active+clean; 826 GB 

Re: [ceph-users] Storage node refurbishing, a "freeze" OSD feature would be nice

2015-08-31 Thread Wang, Warren
When we know we need to off a node, we weight it down over time. Depending
on your cluster, you may need to do this over days or hours.

In theory, you could do the same when putting OSDs in, by setting noin,
and then setting weight to something very low, and going up over time. I
haven't tried this though.
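
Untested sketch of both directions (OSD ids and step sizes are made up):

# drain: step the crush weight down, letting recovery settle between steps
ceph osd crush reweight osd.12 0.8     # then 0.6, 0.4, 0.2, 0.0 over hours/days

# add: keep new OSDs from taking a full load at once
ceph osd set noin
# ... create/start the OSDs, mark them in with 'ceph osd in osd.N' ...
ceph osd crush reweight osd.20 0.1     # then step upward over time
ceph osd unset noin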

-- 
Warren Wang
Comcast Cloud (OpenStack)



On 8/31/15, 2:57 AM, "ceph-users on behalf of Udo Lembke"

wrote:

>Hi Christian,
>for my setup "b" takes too long - too much data movement and stress to
>all nodes.
>I have simply (with replica 3) "set noout", reinstall one node (with new
>filesystem on the OSDs, but leave them in the
>crushmap) and start all OSDs (at friday night) - takes app. less than one
>day for rebuild (11*4TB 1*8TB).
>Do also stress the other nodes, but less than with weigting to zero.
>
>Udo
>
>On 31.08.2015 06:07, Christian Balzer wrote:
>> 
>> Hello,
>> 
>> I'm about to add another storage node to small firefly cluster here and
>> refurbish 2 existing nodes (more RAM, different OSD disks).
>> 
>> Insert rant about not going to start using ceph-deploy as I would have
>>to
>> set the cluster to no-in since "prepare" also activates things due to
>>the
>> udev magic...
>> 
>> This cluster is quite at the limits of its IOPS capacity (the HW was
>> requested ages ago, but the mills here grind slowly and not particular
>> fine either), so the plan is to:
>> 
>> a) phase in the new node (lets call it C), one OSD at a time (in the
>>dead
>> of night)
>> b) empty out old node A (weight 0), one OSD at a time. When
>> done, refurbish and bring it back in, like above.
>> c) repeat with 2nd old node B.
>> 
>> Looking at this it's obvious where the big optimization in this
>>procedure
>> would be, having the ability to "freeze" the OSDs on node B.
>> That is making them ineligible for any new PGs while preserving their
>> current status. 
>> So that data moves from A to C (which is significantly faster than A or
>>B)
>> and then back to A when it is refurbished, avoiding any heavy lifting
>>by B.
>> 
>> Does that sound like something other people might find useful as well
>>and
>> is it feasible w/o upsetting the CRUSH applecart?
>> 
>> Christian
>> 
>
>___
>ceph-users mailing list
>ceph-users@lists.ceph.com
>http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PCIE-SSD OSD bottom performance issue

2015-08-22 Thread Wang, Warren
Are you running fio against a sparse file, prepopulated file, or a raw device?
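
For reference, fio's rbd engine takes the filesystem out of the picture entirely; 
a sketch of the 8K 70/30 run that way (pool, image and client names are assumptions):

fio --name=rbd-8k-randrw --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=test --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 \
    --runtime=300 --time_based --group_reporting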

Warren

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
scott_tan...@yahoo.com
Sent: Thursday, August 20, 2015 3:48 AM
To: ceph-users ceph-users@lists.ceph.com
Cc: liuxy666 liuxy...@yahoo.com
Subject: [ceph-users] PCIE-SSD OSD bottom performance issue

dear ALL:
I used a PCIe SSD as an OSD disk, but I found its performance is very poor.
I have two hosts, each with one PCIe SSD, so I created two OSDs backed by the PCIe SSDs.

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1   0.35999 root default
-2   0.17999 host tds_node03
0 0.17999  osd.0 up 1.0 1.0
-3   0.17999 host tds_node04
1 0.17999  osd.1 up 1.0 1.0

I created a pool and an RBD device.
I ran an fio 8K randrw (70% read) test against the RBD device; the result is only 
about 10K IOPS, and I have tried many OSD thread parameters with no effect.
But when I tested 8K randrw (70%) against the bare PCIe SSD, it reached about 100K IOPS.

Is there any way to improve the PCIe SSD OSD performance?




scott_tan...@yahoo.commailto:scott_tan...@yahoo.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph tell not persistent through reboots?

2015-08-06 Thread Wang, Warren
Injecting args into the running procs is not meant to be persistent. You'll 
need to modify /etc/ceph/ceph.conf for that.
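
A sketch of the persistent equivalent of the values in the message below 
(hammer-era option names; restart the daemons, or inject as well, for running OSDs):

[osd]
  osd scrub begin hour = 20
  osd scrub end hour = 4
  osd deep scrub interval = 1209600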

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve 
Dainard
Sent: Thursday, August 06, 2015 9:16 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph tell not persistent through reboots?

Hello,

Version 0.94.1

I'm passing settings to the admin socket ie:
ceph tell osd.* injectargs '--osd_deep_scrub_begin_hour 20'
ceph tell osd.* injectargs '--osd_deep_scrub_end_hour 4'
ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'

Then I check to see if they're in the configs now:
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | egrep -i 
'scrub_interval|hour'
osd_scrub_begin_hour: 4,
osd_scrub_end_hour: 20,
osd_deep_scrub_interval: 1.2096e+06,

Then I restart that host and check again and the values have returned to 
default:
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | egrep -i 
'scrub_interval|hour'
osd_scrub_begin_hour: 0,
osd_scrub_end_hour: 24,
osd_deep_scrub_interval: 604800,

If I check on another host the values are correct:
# ceph --admin-daemon /var/run/ceph/ceph-osd.90.asok config show | egrep -i 
'scrub_interval|hour'
osd_scrub_begin_hour: 20,
osd_scrub_end_hour: 4,
osd_deep_scrub_interval: 1.2096e+06,

If I check on a mon the values are default:
# ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok config show | egrep -i 
'scrub_interval|hour'
osd_scrub_begin_hour: 0,
osd_scrub_end_hour: 24,
osd_deep_scrub_interval: 604800,

If I try to pass a config to mon1 via an OSD host it appears to do something:
# ceph tell mon.1 injectargs --osd_deep_scrub_interval 1209600
injectargs:osd_deep_scrub_interval = '1.2096e+06'

And then check on mon1 and its still the default value:
# ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok config show | egrep -i 
scrub_interval
osd_deep_scrub_interval: 604800,


And if I pass a config on mon1 it looks like it's being updated, but the default 
remains:
# ceph tell mon.1 injectargs --osd_deep_scrub_interval 1209600
injectargs:osd_deep_scrub_interval = '1.2096e+06'
# ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok config show | egrep -i 
scrub_interval
osd_deep_scrub_interval: 604800,

I don't know if this is a bug, or if I'm doing something wrong here...
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread Wang, Warren
You'll take a noticeable hit on write latency. Whether or not it's tolerable 
will be up to you and the workload you have to capture. Large file operations 
are throughput efficient without an SSD journal, as long as you have enough 
spindles.

About the Intel P3700, you will only need 1 to keep up with 12 SATA drives. The 
400 GB is probably okay if you keep the journal sizes small, but the 800 is 
probably safer if you plan on leaving these in production for a few years. 
Depends on the turnover of data on the servers.
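
As a rough sizing sketch using the usual journal guideline (the per-disk numbers 
here are assumptions):

  osd journal size ~= 2 * (expected throughput * filestore max sync interval)
                   ~= 2 * (150 MB/s * 5 s) = 1500 MB per journal
  12 journals      ~= 18 GB total on the P3700

so either capacity fits easily; the 800 GB mostly buys write endurance and spare 
area.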

The dual disk failure comment is pointing out that you are more exposed for 
data loss with 2 copies. You do need to understand that there is a possibility 
for 2 drives to fail either simultaneously, or one before the cluster is 
repaired. As usual, this is going to be a decision you need to decide if it's 
acceptable or not. We have many clusters, and some are 2, and others are 3. If 
your data resides nowhere else, then 3 copies is the safe thing to do. That's 
getting harder and harder to justify though, when the price of other storage 
solutions using erasure coding continues to plummet.

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Götz 
Reinicke - IT Koordinator
Sent: Thursday, July 09, 2015 4:47 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Real world benefit from SSD Journals for a more read 
than write cluster

Hi Christian,
On 09.07.15 at 09:36 Christian Balzer wrote:
 
 Hello,
 
 On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote:
 
 Hi again,

 time is passing, so is my budget :-/ and I have to recheck the 
 options for a starter cluster. An expansion next year for may be an 
 openstack installation or more performance if the demands rise is 
 possible. The starter could always be used as test or slow dark archive.

 At the beginning I was at 16SATA OSDs with 4 SSDs for journal per 
 node, but now I'm looking for 12 SATA OSDs without SSD journal. Less 
 performance, less capacity I know. But thats ok!

 Leave the space to upgrade these nodes with SSDs in the future.
 If your cluster grows large enough (more than 20 nodes) even a single
 P3700 might do the trick and will need only a PCIe slot.

If I understand you correctly, the 12-disk setup is not a bad idea; if there 
turns out to be a need for SSD journals I can add the PCIe P3700.

In the 12 OSD setup I should get 2 P3700s, one per 6 OSDs.

Good or bad idea?

 
 There should be 6 nodes, or maybe 8 nodes with 12 OSDs each, with a repl. of 2.

 Danger, Will Robinson.
 This is essentially a RAID5 and you're plain asking for a double disk 
 failure to happen.

Maybe I do not understand that. size = 2 I think is more sort of RAID1 ... ? 
And why am I asking for a double disk failure?

Too few nodes, too few OSDs, or because of the size = 2?

 
 See this recent thread:
 calculating maximum number of disk and node failure that can be 
 handled by cluster with out data loss
 for some discussion and python script which you will need to modify 
 for
 2 disk replication.
 
 With a RAID5 failure calculator you're at 1 data loss event per 3.5 
 years...
 

Thanks for that thread, but I dont get the point out of it for me.

I see that calculating the reliability is some sort of complex math ...

 The workload I expect is more writes of maybe some GB of Office 
 files per day and some TB of larger video files from a few users per week.

 At the end of this year we calculate to have +- 60 to 80 TB of larger 
 video files in that cluster, which are accessed from time to time.

 Any suggestion on the drop of ssd journals?

 You will miss them when the cluster does write, be it from clients or 
 when re-balancing a lost OSD.

I can imagine that I might miss the SSD journal, but if I can add the
P3700 later I feel comfy with it for now. Budget and evaluation related.

Thanks for your helpful input and feedback. /Götz

--
Götz Reinicke
IT-Koordinator

Tel. +49 7141 969 82420
E-Mail goetz.reini...@filmakademie.de

Filmakademie Baden-Württemberg GmbH
Akademiehof 10
71638 Ludwigsburg
www.filmakademie.de

Eintragung Amtsgericht Stuttgart HRB 205016

Vorsitzender des Aufsichtsrats: Jürgen Walter MdL Staatssekretär im Ministerium 
für Wissenschaft, Forschung und Kunst Baden-Württemberg

Geschäftsführer: Prof. Thomas Schadt


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Discuss: New default recovery config settings

2015-06-01 Thread Wang, Warren
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, that advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x replication, with the trade off of better
client response.

-- 
Warren Wang





On 6/1/15, 12:43 PM, Mark Nelson mnel...@redhat.com wrote:

On 05/29/2015 04:47 PM, Samuel Just wrote:
 Many people have reported that they need to lower the osd recovery
config options to minimize the impact of recovery on client io.  We are
talking about changing the defaults as follows:

 osd_max_backfills to 1 (from 10)
 osd_recovery_max_active to 3 (from 15)
 osd_recovery_op_priority to 1 (from 10)
 osd_recovery_max_single_start to 1 (from 5)

 We'd like a bit of feedback first though.  Is anyone happy with the
current configs?  Is anyone using something between these values and the
current defaults?  What kind of workload?  I'd guess that lowering
osd_max_backfills to 1 is probably a good idea, but I wonder whether
lowering osd_recovery_max_active and osd_recovery_max_single_start will
cause small objects to recover unacceptably slowly.

 Thoughts?

We ran recovery tests last year around when firefly was released.  The
basic gist of it was that as you increase client IO, the ratio of
backfill to client IO changes for a given combination of priority
settings.  IE you can tune around 10 15 10 5, or 1 3 1 1, but in each
case the ratio of client to recovery IO appears to scale with the amount
of client IO, even past the super saturation point.  I believe users
will have a hard time finding optimal settings as clusters at the
saturation point will behave differently than those in heavy
super-saturation.


http://nhm.ceph.com/Ceph_3XRep_Backfill_Recovery_Results.pdf
http://nhm.ceph.com/Ceph_62EC_Backfill_Recovery_Results.pdf
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] HDFS on Ceph (RBD)

2015-05-22 Thread Wang, Warren

On 5/21/15, 5:04 AM, Blair Bethwaite blair.bethwa...@gmail.com wrote:

Hi Warren,

On 20 May 2015 at 23:23, Wang, Warren warren_w...@cable.comcast.com
wrote:
 We've contemplated doing something like that, but we also realized that
 it would result in manual work in Ceph every time we lose a drive or
 server,
 and a pretty bad experience for the customer when we have to do
 maintenance.

Yeah I guess you have to delete and recreate the pool, but is that
really so bad?

Or trash the associated volumes. Plus the perceived failure rate from
client perspective would be high, especially when we have to do things
like reboots.


 We also kicked around the idea of leveraging the notion of a Hadoop rack
 to define a set of instances which are Cinder volume backed, and the
rest
 be ephemeral drives (not Ceph backed ephemeral). Using 100% ephemeral
 isn't out of the question either, but we have seen a few instances where
 all the instances in a region were quickly terminated.

What's the implication here - the HDFS instances were terminated and
that would have caused Hadoop data-loss had they been ephemeral?

Yeah. Of course it would be able to tolerate up to 2/3 but 100% would
result in permanent data loss. I see the Intel folks are tackling this
from the object backed approach:

https://wiki.ceph.com/Planning/Blueprints/Infernalis/rgw%3A_Hadoop_FileSyst
em_Interface_for_a_RADOS_Gateway_Caching_Tier

Probably should have chatted with them about that. I totally forgot.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] HDFS on Ceph (RBD)

2015-05-21 Thread Wang, Warren
We've contemplated doing something like that, but we also realized that
it would result in manual work in Ceph every time we lose a drive or
server, 
and a pretty bad experience for the customer when we have to do
maintenance.

We also kicked around the idea of leveraging the notion of a Hadoop rack
to define a set of instances which are Cinder volume backed, and the rest
be ephemeral drives (not Ceph backed ephemeral). Using 100% ephemeral
isn't out of the question either, but we have seen a few instances where
all the instances in a region were quickly terminated.

Our customer has also tried grabbing the Sahara code (Hadoop Swift) and
running it on their own to interface with RGW backed Swift, but ran into
an issue where Sahara code sequentially stats each item within a
container. 
I think there are efforts to multithread this.

-- 
Warren Wang





On 5/20/15, 7:27 PM, Blair Bethwaite blair.bethwa...@gmail.com wrote:

Hi Warren,

Following our brief chat after the Ceph Ops session at the Vancouver
summit today, I added a few more notes to the etherpad
(https://etherpad.openstack.org/p/YVR-ops-ceph).

I wonder whether you'd considered setting up crush layouts so you can
have multiple cinder AZs or volume-types that map to a subset of OSDs
in your cluster. You'd have them in pools with rep=1 (i.e., no
replication). Then have your Hadoop users follow a provisioning
pattern that involves attaching volumes from each crush ruleset and
building HDFS over them in a manner/topology so as to avoid breaking
HDFS for any single underlying OSD failure, assuming regular HDFS
replication is used on top. Maybe a pool per HDFS node is the
obvious/naive starting point, clearly that implies a certain scale to
begin with, but probably works for you...?
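
A rough sketch of that shape, assuming a crush root (here hdfs-node1) has already
been carved out for the node and with pool/pg numbers made up:

ceph osd crush rule create-simple hdfs-node1-rule hdfs-node1 osd
ceph osd pool create hdfs-node1 128 128 replicated hdfs-node1-rule
ceph osd pool set hdfs-node1 size 1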

-- 
Cheers,
~Blairo


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy with --release (--stable) for dumpling?

2014-09-02 Thread Wang, Warren
We've chosen to use the gitbuilder site to make sure we get the same version 
when we rebuild nodes, etc.

http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/

So our sources list looks like:
deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 
precise main

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nigel 
Williams
Sent: Tuesday, August 26, 2014 11:30 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy with --release (--stable) for dumpling?

On Tue, Aug 26, 2014 at 5:10 PM, Konrad Gutkowski konrad.gutkow...@ffs.pl 
wrote:
 Ceph-deploy should set priority for ceph repository, which it doesn't, 
 this usually installs the best available version from any repository.

Thanks Konrad for the tip. It took several goes (notably ceph-deploy purge did 
not, for me at least, seem to be removing librbd1 cleanly) but I managed to get 
0.67.10 to be preferred, basically I did this:

root@ceph12:~# ceph -v
ceph version 0.67.10
root@ceph12:~# cat /etc/apt/preferences
Package: *
Pin: origin ceph.com
Pin-priority: 900

Package: *
Pin: origin ceph.newdream.net
Pin-priority: 900
root@ceph12:~#
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Wang, Warren
Hi Sebastien,

Something I didn't see in the thread so far: did you secure-erase the SSDs 
before they got used? I assume these were probably repurposed for this test. We 
have seen some pretty significant garbage collection issues on various SSDs and 
other forms of solid-state storage, to the point where we are now 
overprovisioning pretty much every solid-state device, by as much as 50%, to 
handle sustained write operations. That is especially important for the 
journals, as we've found.
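
If the drives do need a reset, the usual ATA secure-erase sequence looks roughly 
like this (destructive; device and password are placeholders, and the drive must 
not show up as frozen in hdparm -I):

hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX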

Maybe not an issue on the short fio run below, but certainly evident on longer 
runs or lots of historical data on the drives. The max transaction time looks 
pretty good for your test. Something to consider though.

Warren

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Sebastien Han
Sent: Thursday, August 28, 2014 12:12 PM
To: ceph-users
Cc: Mark Nelson
Subject: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Hey all,

It has been a while since the last performance-related thread on the ML :p I've 
been running some experiments to see how much I can get from an SSD on a Ceph 
cluster.
To achieve that I did something pretty simple:

* Debian wheezy 7.6
* kernel from debian 3.14-0.bpo.2-amd64
* 1 cluster, 3 mons (I'd like to keep this realistic since in a real deployment 
I'll use 3)
* 1 OSD backed by an SSD (journal and osd data on the same device)
* 1 replica count of 1
* partitions are perfectly aligned
* io scheduler is set to noop but deadline was showing the same results
* no updatedb running

About the box:

* 32GB of RAM
* 12 cores with HT @ 2,4 GHz
* WB cache is enabled on the controller
* 10Gbps network (doesn't help here)

The SSD is a 200G Intel DC S3700 and is capable of delivering around 29K iops 
with random 4k writes (my fio results). As a benchmark tool I used fio with the 
rbd engine (thanks deutsche telekom guys!).
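
A minimal rbd-engine job file along those lines (pool and image names are 
assumptions; the image must exist beforehand):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=32
runtime=300
time_based

[rbd-4k-randwrite]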

O_DIRECT and D_SYNC don't seem to be a problem for the SSD:

# dd if=/dev/urandom of=rand.file bs=4k count=65536
65536+0 records in
65536+0 records out
268435456 bytes (268 MB) copied, 29.5477 s, 9.1 MB/s

# du -sh rand.file
256Mrand.file

# dd if=rand.file of=/dev/sdo bs=4k count=65536 oflag=dsync,direct
65536+0 records in
65536+0 records out
268435456 bytes (268 MB) copied, 2.73628 s, 98.1 MB/s

See my ceph.conf:

[global]
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx
  fsid = 857b8609-8c9b-499e-9161-2ea67ba51c97
  osd pool default pg num = 4096
  osd pool default pgp num = 4096
  osd pool default size = 2
  osd crush chooseleaf type = 0

   debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0

[mon]
  mon osd down out interval = 600
  mon osd min down reporters = 13
[mon.ceph-01]
host = ceph-01
mon addr = 172.20.20.171
  [mon.ceph-02]
host = ceph-02
mon addr = 172.20.20.172
  [mon.ceph-03]
host = ceph-03
mon addr = 172.20.20.173

debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0

[osd]
  osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = rw,noatime,logbsize=256k,delaylog
  osd journal size = 20480
  cluster_network = 172.20.20.0/24
  public_network = 172.20.20.0/24
  osd mon heartbeat interval = 30
  # Performance tuning
  filestore merge threshold = 40
  filestore split multiple = 8
  osd op threads = 8
  # Recovery tuning
  osd recovery max active = 1
  osd max backfills = 1
  osd recovery op priority = 1


debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug 

[ceph-users] Washington DC area: Ceph users meetup, 12/18

2013-12-09 Thread Wang, Warren
Hi folks,

I know it's short notice, but we have recently formed a Ceph users meetup group 
in the DC area.  We have our first meetup on 12/18.  We should have more notice 
before the next one, so please join the meetup group, even if you can't make 
this one!

http://www.meetup.com/Ceph-DC/events/154304092/

--
Warren Wang
Comcast
PE Operations, Platform Infrastructure
Office:703-939-8445
Mobile: 703-598-1643


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com