Re: Small feature request for v0.55 release

2012-11-14 Thread Tren Blackburn
On Wed, Nov 14, 2012 at 1:53 PM, Nick Bartos n...@pistoncloud.com wrote: I see that v0.55 will be the next stable release. Would it be possible to use standard tarball naming conventions for this release? If I download http://ceph.com/download/ceph-0.48.2.tar.bz2, the top level directory is
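
For context, a quick way to see the naming mismatch being asked about. The top-level directory name in the output below is an assumption, inferred from the 0.48.2argonaut version string, not quoted from the thread:

    $ wget -q http://ceph.com/download/ceph-0.48.2.tar.bz2
    $ tar tjf ceph-0.48.2.tar.bz2 | head -n 1
    ceph-0.48.2argonaut/    # standard convention would unpack to ceph-0.48.2/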

Re: Small feature request for v0.55 release

2012-11-14 Thread Tren Blackburn
On Wed, Nov 14, 2012 at 3:40 PM, Jimmy Tang jt...@tchpc.tcd.ie wrote: On 14 Nov 2012, at 16:14, Sage Weil wrote: Appending the codename to the version string is something we did with argonaut (0.48argonaut) just to make it obvious to users which stable version they are on. How do people
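
Illustrative only: with the codename appended, the version output looks roughly like this (commit hash elided):

    $ ceph -v
    ceph version 0.48.2argonaut (commit: ...)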

Re: Ceph journal

2012-10-31 Thread Tren Blackburn
On Wed, Oct 31, 2012 at 2:18 PM, Gandalf Corvotempesta gandalf.corvotempe...@gmail.com wrote: In a multi-replica cluster (for example, replica = 3), is it safe to put the journal on tmpfs? As far as I understand, with the journal enabled all writes go to the journal first and to disk afterwards.
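
A minimal ceph.conf sketch of the setup being asked about; the path and size are illustrative. The crux of the safety question is that a tmpfs-backed journal vanishes on reboot or power loss:

    [osd]
        osd journal = /dev/shm/journal-$id   ; tmpfs-backed journal (lost on reboot)
        osd journal size = 1024              ; journal size in MB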

mds cache size configuration option being ignored

2012-10-03 Thread Tren Blackburn
Hi List; I was advised to use the mds cache size option to limit the memory that the mds process will take. I have it set to 32768. However, the ceph-mds process is now at 50GB and still growing. fern ceph # ps wwaux | grep ceph-mds root 895 4.3 26.6 53269304 52725820 ? Ssl Sep28
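
For reference, the option lives in the [mds] section, and it counts inodes held in cache rather than bytes, so it bounds memory only indirectly:

    [mds]
        mds cache size = 32768   ; maximum number of inodes to cache, not a byte limit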

Re: mds cache size configuration option being ignored

2012-10-03 Thread Tren Blackburn
On Wed, Oct 3, 2012 at 4:15 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Oct 3, 2012 at 3:22 PM, Tren Blackburn t...@eotnetworks.com wrote: Hi List; I was advised to use the mds cache size option to limit the memory that the mds process will take. I have it set to 32768. However

Re: mds cache size configuration option being ignored

2012-10-03 Thread Tren Blackburn
On Wed, Oct 3, 2012 at 4:56 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Oct 3, 2012 at 4:23 PM, Tren Blackburn t...@eotnetworks.com wrote: On Wed, Oct 3, 2012 at 4:15 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Oct 3, 2012 at 3:22 PM, Tren Blackburn t...@eotnetworks.com wrote: Hi

mds stuck in clientreplay state after failover

2012-09-25 Thread Tren Blackburn
Hi List; I'm having an issue where the mds failed over between two nodes, and now is stuck in clientreplay state. It's been like this for several hours. Here are some details about the environment: mds/mon server sap: sap ceph # emerge --info Portage 2.1.10.65 (default/linux/amd64/10.0,
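
A quick way to confirm the state from the monitor side; the epoch and names below are placeholders, and the output shape is approximate:

    sap ceph # ceph mds stat
    e42: 1/1/1 up {0=sap=up:clientreplay}, 1 up:standby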

Re: mds stuck in clientreplay state after failover

2012-09-25 Thread Tren Blackburn
On Tue, Sep 25, 2012 at 2:15 PM, Gregory Farnum g...@inktank.com wrote: Hi Tren, Sorry your last message got dropped — we've all been really busy! No worries! I know you guys are busy, and I appreciate any assistance you're able to provide. On Tue, Sep 25, 2012 at 10:22 AM, Tren Blackburn t

Re: Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-21 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 3:38 PM, Josh Durgin josh.dur...@inktank.com wrote: On 09/18/2012 04:47 PM, Tren Blackburn wrote: On Tue, Sep 18, 2012 at 4:32 PM, Josh Durgin josh.dur...@inktank.com wrote: On 09/18/2012 02:23 PM, Tren Blackburn wrote: On Tue, Sep 18, 2012 at 2:11 PM, Tren

mds stuck in replay state

2012-09-20 Thread Tren Blackburn
Hi List; Still rsyncing the same data as the last ticket. However, the mds is for some reason stuck in replay state. I've tried restarting the mds process to get it to fail over to another node, but regardless of which node is the active mds, it is still in replay state. Not sure how to diagnose
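
A hedged sketch of forcing the failover by hand, assuming rank 0 is the stuck one:

    fern ceph # ceph mds stat   # confirm the rank is sitting in up:replay
    fern ceph # ceph mds fail 0 # mark rank 0 failed so a standby can take over
    fern ceph # ceph -w         # watch the new active mds move through replay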

Re: Memory usage of ceph-mds

2012-09-20 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 4:30 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: On Wed, Sep 19, 2012 at 2:45 PM, Tren Blackburn t...@eotnetworks.com wrote: On Wed, Sep 19, 2012 at 2:33 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn

Re: Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-20 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 3:38 PM, Josh Durgin josh.dur...@inktank.com wrote: On 09/18/2012 04:47 PM, Tren Blackburn wrote: On Tue, Sep 18, 2012 at 4:32 PM, Josh Durgin josh.dur...@inktank.com wrote: On 09/18/2012 02:23 PM, Tren Blackburn wrote: On Tue, Sep 18, 2012 at 2:11 PM, Tren

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 1:52 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Sep 19, 2012 at 1:48 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: Hey List; I'm in the process of rsyncing in about 7TB of data to Ceph across approximately 58565475 files
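
Back-of-the-envelope: 7 TB spread over ~58,565,475 files is an average file size of about 7x10^12 / 5.86x10^7 ≈ 120 KB, i.e. a metadata-heavy workload where the mds has to track tens of millions of inodes.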

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 2:12 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: On Wed, Sep 19, 2012 at 1:52 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Sep 19, 2012 at 1:48 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 2:12 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Sep 19, 2012 at 2:05 PM, Tren Blackburn t...@eotnetworks.com wrote: Greg: It's difficult to tell you that. I'm rsyncing 2 volumes from our filers. Each base directory on each filer mount has approximately 213

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 2:33 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: On Wed, Sep 19, 2012 at 2:12 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Sep 19, 2012 at 2:05 PM, Tren Blackburn t...@eotnetworks.com wrote: Greg: It's difficult to tell

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 2:45 PM, Tren Blackburn t...@eotnetworks.com wrote: On Wed, Sep 19, 2012 at 2:33 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: On Wed, Sep 19, 2012 at 2:12 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Sep 19, 2012 at 2:05 PM

Re: Memory usage of ceph-mds

2012-09-19 Thread Tren Blackburn
On Wed, Sep 19, 2012 at 4:30 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn wrote: On Wed, Sep 19, 2012 at 2:45 PM, Tren Blackburn t...@eotnetworks.com wrote: On Wed, Sep 19, 2012 at 2:33 PM, Sage Weil s...@inktank.com wrote: On Wed, 19 Sep 2012, Tren Blackburn

Re: How are you using Ceph?

2012-09-18 Thread Tren Blackburn
On Mon, Sep 17, 2012 at 7:32 PM, Sage Weil s...@inktank.com wrote: On Mon, 17 Sep 2012, Tren Blackburn wrote: On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian Wiessner f.wiess...@smart-weblications.de wrote: Hi, I use ceph to provide storage via rbd for our

Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-18 Thread Tren Blackburn
Hi List; I've been working with ceph 0.51 lately, and have noticed this for a while now, but it hasn't been a big enough issue for me to report. However, today I'm turning up a 192 OSD cluster, and 30 seconds per OSD adds up pretty quickly. For some reason it's taking 30 seconds between checking the
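
For scale: 192 OSDs x 30 seconds = 5,760 seconds, or about 96 minutes of mkcephfs wall time from that per-OSD delay alone.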

Re: Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-18 Thread Tren Blackburn
On Tue, Sep 18, 2012 at 1:58 PM, Sage Weil s...@inktank.com wrote: On Tue, 18 Sep 2012, Tren Blackburn wrote: Hi List; I've been working with ceph 0.51 lately, and have noticed this for a while now, but it hasn't been a big enough issue for me to report. However today I'm turning up a 192

Re: Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-18 Thread Tren Blackburn
On Tue, Sep 18, 2012 at 2:11 PM, Tren Blackburn t...@eotnetworks.com wrote: On Tue, Sep 18, 2012 at 1:58 PM, Sage Weil s...@inktank.com wrote: On Tue, 18 Sep 2012, Tren Blackburn wrote: Hi List; I've been working with ceph 0.51 lately, and have noticed this for a while now, but it hasn't

Re: Why does mkcephfs take approximately 30 seconds per osd on ceph 0.51?

2012-09-18 Thread Tren Blackburn
On Tue, Sep 18, 2012 at 4:32 PM, Josh Durgin josh.dur...@inktank.com wrote: On 09/18/2012 02:23 PM, Tren Blackburn wrote: On Tue, Sep 18, 2012 at 2:11 PM, Tren Blackburn t...@eotnetworks.com wrote: On Tue, Sep 18, 2012 at 1:58 PM, Sage Weil s...@inktank.com wrote: On Tue, 18 Sep 2012, Tren

Re: How are you using Ceph?

2012-09-17 Thread Tren Blackburn
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian Wiessner f.wiess...@smart-weblications.de wrote: Hi, I use ceph to provide storage via rbd for our virtualization cluster, delivering KVM-based high-availability virtual machines to my customers. I also use it as rbd device

Re: www.ceph.com down?

2012-09-16 Thread Tren Blackburn
On Sun, Sep 16, 2012 at 10:12 AM, Mike Ryan mike.r...@inktank.com wrote: On Sun, Sep 16, 2012 at 09:06:15AM -0700, Tren Blackburn wrote: Has something happened to the ceph.com webserver? It appears to be working from my box. Can you ctrl-shift-R and see if it's working for you? Yup, I can

Re: Integration work

2012-08-28 Thread Tren Blackburn
On Tue, Aug 28, 2012 at 11:51 AM, Dieter Kasper d.kas...@kabelmail.de wrote: Hi Ross, focusing on core stability and feature expansion for RBD was the right approach in the past and I feel you have reached an adequate maturity level here. Performance enhancements - especially to reduce the

How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
Hi List; I am attempting to get a test ceph cluster up and running. I am using ceph-0.50 across all nodes. I want the number of pgs to be around 100 per osd, as per the documentation at: http://ceph.com/docs/master/dev/placement-group/ I have attempted to increase the pgs via
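
A sketch of the creation-time route; the pool name and pg count below are illustrative (pick pg_num so that pgs-per-osd lands near 100 once replication is accounted for):

    ceph osd pool create mypool 1024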

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt jasc...@sandia.gov wrote: On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and create the cluster. This does not work either, as the cluster still comes up with 6 pg bits per osd
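
For reference, the setting under discussion. It only applies to the pools created when the cluster is first built, which is why it has to be in ceph.conf before running mkcephfs; the [global] placement shown is an assumption:

    [global]
        osd pg bits = 7    ; 2^7 = 128 pgs per osd for the initial pools
        osd pgp bits = 7   ; keep pgp counts in step with pg counts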

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:48 PM, Jim Schutt jasc...@sandia.gov wrote: On 08/23/2012 03:26 PM, Tren Blackburn wrote: On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt jasc...@sandia.gov wrote: On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:58 PM, Gregory Farnum g...@inktank.com wrote: On Thu, Aug 23, 2012 at 5:48 PM, Jim Schutt jasc...@sandia.gov wrote: On 08/23/2012 03:26 PM, Tren Blackburn wrote: On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt jasc...@sandia.gov wrote: On 08/23/2012 02:39 PM, Tren