Re: [ceph-users] Enclosure power failure pausing client IO till all connected hosts up

2015-07-09 Thread Tony Harris
Sounds to me like you've put yourself at too much risk - *if* I'm reading your message right about your configuration, you have multiple hosts accessing OSDs that are stored on a single shared box - so if that single shared box (single point of failure for multiple nodes) goes down it's possible fo
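A rough sketch of the kind of CRUSH change that addresses this - assuming each shared enclosure is modeled as a "chassis" bucket so no two replicas of a PG land behind the same box (bucket, host, and rule names are illustrative):

  # put each host under a chassis bucket representing its enclosure
  ceph osd crush add-bucket enclosure1 chassis
  ceph osd crush move enclosure1 root=default
  ceph osd crush move node1 chassis=enclosure1
  # replicated rule that separates replicas by chassis rather than by host
  rule replicated_across_chassis {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type chassis
          step emit
  }

The rule can be compiled into the CRUSH map with crushtool, or created directly with 'ceph osd crush rule create-simple <name> default chassis'.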

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Tony Harris
So with this, will even numbers then be LTS? Since 9.0.0 is following 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, 12.x.x, etc. will be LTS... On Tue, May 5, 2015 at 11:45 AM, Sage Weil wrote: > On Tue, 5 May 2015, Joao Eduardo Luis wrote: > > On 05/04/2015 05:09

[ceph-users] Quick question - version query

2015-05-01 Thread Tony Harris
Hi all, I feel a bit like an idiot at the moment - I know there is a command through ceph to query the monitor and OSD daemons to check their version level, but I can't remember what it is to save my life and I'm having trouble locating it in the docs. I need to make sure the entire cluster is ru
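For the record, the commands being asked about are along these lines (daemon IDs are illustrative):

  ceph --version                 # version of the ceph binaries on the local node
  ceph tell osd.* version        # ask every running OSD for its version
  ceph tell mon.a version        # ask a specific monitor for its version
  ceph daemon osd.0 version      # same thing via the admin socket on the OSD's host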

[ceph-users] Ceph Hammer question..

2015-04-22 Thread Tony Harris
Hi all, I have a cluster currently on Giant - is Hammer stable/ready for production use? -Tony

[ceph-users] Do I have enough pgs?

2015-04-15 Thread Tony Harris
Hi all, I have a cluster of 3 nodes, 18 OSDs. I used the pgcalc to give a suggested number of PGs - here was my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4: 2 rep, 18 OSDs, 5% data, 256 PGs
Group5: 2
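Those numbers line up with the usual pgcalc arithmetic - roughly (target PGs per OSD x number of OSDs x expected %data) / replica size, rounded up to the next power of two. A sketch assuming a target of about 200 PGs per OSD (the exact target and rounding rules pgcalc applies may differ):

  Group1: (200 * 18 * 0.30) / 3 = 360  -> next power of two = 512
  Group4: (200 * 18 * 0.05) / 2 =  90  -> the 256 listed above suggests pgcalc rounds small pools further up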

Re: [ceph-users] Ceph BIG outage : 200+ OSD are down , OSD cannot create thread

2015-03-09 Thread Tony Harris
I know I'm not even close to this type of problem yet with my small cluster (both test and production clusters) - but it would be great if something like that could appear in the cluster HEALTH_WARN, if Ceph could determine the number of processes in use and compare it against the current limit th
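The limits discussed in that thread are the kernel process/thread caps on the OSD hosts; checking and raising them looks roughly like this (the value is illustrative, and per-user limits set by the distro or init scripts can also apply):

  cat /proc/sys/kernel/pid_max        # system-wide limit on process/thread IDs
  cat /proc/sys/kernel/threads-max    # system-wide thread limit
  ulimit -u                           # per-user process limit in the current shell
  sysctl -w kernel.pid_max=4194303    # raise the limit (persist it in /etc/sysctl.conf)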

[ceph-users] Question about rados bench

2015-03-03 Thread Tony Harris
Hi all, In my reading on the net about various implementations of Ceph, I came across this website blog page (really doesn't give a lot of good information but caused me to wonder): http://avengermojo.blogspot.com/2014/12/cubieboard-cluster-ceph-test.html near the bottom, the person did a rados
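For anyone looking up the tool mentioned here, rados bench is invoked roughly like this (pool name and duration are illustrative; writes default to 4MB objects with 16 concurrent ops):

  rados bench -p testpool 60 write --no-cleanup   # 60s write test, keep the objects
  rados bench -p testpool 60 seq                  # sequential-read test against those objects
  rados bench -p testpool 60 rand                 # random-read test
  rados -p testpool cleanup                       # remove the benchmark objects afterwards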

[ceph-users] New SSD Question

2015-03-02 Thread Tony Harris
Hi all, After the previous thread, I'm doing my SSD shopping and I came across an SSD called the Edge Boost Pro w/ Power Fail. It seems to have some impressive specs - decent user reviews in most places, a poor one in one place - and I was wondering if anyone has had any experience with these dri

Re: [ceph-users] SSD selection

2015-03-02 Thread Tony Harris
On Sun, Mar 1, 2015 at 11:19 PM, Christian Balzer wrote: > > > > > > I'll be honest, the pricing on Intel's website is far from reality. I > > haven't been able to find any OEMs, and retail pricing on the 200GB 3610 > > is ~231 (the $300 must have been a different model in the line). > > Althoug

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer wrote: > On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote: > > > On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer wrote: > > > > > > > > Again, penultimately you will need to sit down, compile and compare

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
leave at least 50% free. > Well, I'd like to steer away from the consumer models if possible since they (AFAIK) don't contain caps to finish writes should a power loss occur, unless there is one that does? -Tony

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
the speeds of 240 is a bit better than > 120s). And I've left 50% underprovisioned. I've got 10GB for journals and I am using 4 osds per ssd. > > Andrei
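For context, the 10GB-per-journal setup described above corresponds to something like this in ceph.conf, with each OSD's journal pointed at its own partition on the shared SSD (size and layout are illustrative):

  [osd]
  osd journal size = 10240    # in MB, i.e. ~10GB per journal, four such partitions per SSD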

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
r DC3700 and > it has the best wear per $ ratio out of all the drives. > > Andrei

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
-Tony On Sat, Feb 28, 2015 at 11:21 PM, Christian Balzer wrote: > On Sat, 28 Feb 2015 20:42:35 -0600 Tony Harris wrote: > > > Hi all, > > > > I have a small cluster together and it's running fairly well (3 nodes, 21 > > osds). I'm looking to improve the wr

[ceph-users] SSD selection

2015-02-28 Thread Tony Harris
Hi all, I have a small cluster together and it's running fairly well (3 nodes, 21 osds). I'm looking to improve the write performance a bit though, which I was hoping that using SSDs for journals would do. But, I was wondering what people had as recommendations for SSDs to act as journal drives.
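One test often used to vet a candidate journal SSD is a single-threaded synchronous small-block write with fio - note it writes straight to the device, so only point it at a disk with nothing on it (device path is illustrative):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --runtime=60 --time_based --group_reporting

A drive that holds its IOPS up under this load (and has power-loss protection) is generally a better journal candidate than one with impressive headline sequential numbers.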

[ceph-users] Am I reaching the list now?

2015-02-28 Thread Tony Harris
I was subscribed with a yahoo email address, but it was getting some grief so I decided to try using my gmail address, hopefully this one is working -Tony

[ceph-users] Mail not reaching the list?

2015-02-28 Thread Tony Harris
Hi, I've sent a couple of emails to the list since subscribing, but I've never seen them reach the list; I was just wondering if there was something wrong?

[ceph-users] Ceph - networking question

2015-02-27 Thread Tony Harris
Hi all, I've only been using ceph for a few months now and currently have a small cluster (3 nodes, 18 OSDs).  I get decent performance based upon the configuration. My question is, should I have a larger pipe on the client/public network or on the ceph cluster private network?  I can only have
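For reference, the public/cluster split being asked about is set in ceph.conf: clients talk to the public network, while replication, recovery, and backfill between OSDs go over the cluster network (subnets are illustrative):

  [global]
  public network  = 192.168.1.0/24
  cluster network = 192.168.2.0/24

Since each client write is re-sent to the other replicas over the cluster side, the private network generally carries at least as much traffic as the public one - which is the usual argument for giving it the bigger pipe when you have to choose.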