On Wed, 25 Jul 2012, Mandell Degerness wrote:
> When a cluster has been shut down and then re-started, how do the
> monitors know what the cluster fsid is? Is it stored somewhere?
It's embedded in the monmap, currently found at $mon_data/monmap/. Not terribly convenient, sorry!
> I would like t
When a cluster has been shut down and then re-started, how do the
monitors know what the cluster fsid is? Is it stored somewhere?
I would like to be able to verify, before starting a monitor on a
given server, if an existing monitor directory belongs to the current
cluster or to a previous cluste
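The check Mandell asks about can be scripted around the monmap location Sage mentions. A minimal sketch, assuming the monmap sits at $mon_data/monmap/latest (the exact on-disk layout has varied across Ceph versions) and that monmaptool is on the PATH; function names here are illustrative, not part of any Ceph tooling:

```python
import re
import subprocess
import uuid

def parse_fsid(monmap_text):
    """Extract the fsid UUID from `monmaptool --print` output."""
    m = re.search(r"fsid\s+([0-9a-fA-F-]{36})", monmap_text)
    if m is None:
        raise ValueError("no fsid line found in monmap output")
    return uuid.UUID(m.group(1))

def monitor_dir_matches(mon_data, expected_fsid):
    """Return True if the monmap stored under mon_data carries expected_fsid.

    Assumes the $mon_data/monmap/latest layout mentioned in the thread;
    adjust the path for your Ceph version and deployment.
    """
    out = subprocess.run(
        ["monmaptool", "--print", "%s/monmap/latest" % mon_data],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_fsid(out) == uuid.UUID(expected_fsid)
```

The expected fsid would come from the cluster's ceph.conf; if the parsed and expected UUIDs differ, the directory belongs to another (likely earlier) cluster.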
Hi Florian,
On Wed, Jul 25, 2012 at 10:06:04PM +0200, Florian Haas wrote:
> Hi Mehdi,
> For the OSD tests, which OSD filesystem are you testing on? Are you
> using a separate journal device? If y
On Wed, Jul 25, 2012 at 1:25 PM, Gregory Farnum wrote:
> Yeah, an average isn't necessarily very useful here — it's what you
> get because that's easy to implement (with a sum and a counter
> variable, instead of binning). The inclusion of max and min latencies
> is an attempt to cheaply compensat
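The sum-and-counter approach Greg describes can be sketched as follows; this is illustrative Python, not Ceph's actual perf-counter code:

```python
class LatencyStats:
    """Running latency summary kept the cheap way: a sum and a counter
    give the average without binning, and min/max hint at the spread
    that an average alone hides."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.min = float("inf")
        self.max = float("-inf")

    def add(self, latency):
        """Record one latency sample (O(1) time, O(1) memory)."""
        self.count += 1
        self.total += latency
        self.min = min(self.min, latency)
        self.max = max(self.max, latency)

    @property
    def avg(self):
        return self.total / self.count if self.count else 0.0
```

A histogram (binning) would capture the distribution's shape as well, at the cost of choosing bin boundaries and carrying more state per counter.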
On Wed, Jul 25, 2012 at 1:06 PM, Florian Haas wrote:
> Hi Mehdi,
>
> great work! A few questions (for you, Mark, and anyone else watching
> this thread) regarding the content of that wiki page:
>
> For the OSD tests, which OSD filesystem are you testing on? Are you
> using a separate journal devic
Hi Mehdi,
great work! A few questions (for you, Mark, and anyone else watching
this thread) regarding the content of that wiki page:
For the OSD tests, which OSD filesystem are you testing on? Are you
using a separate journal device? If yes, what type?
For the RADOS benchmarks:
# rados bench -p
On Tue, Jul 24, 2012 at 6:19 PM, Tommi Virtanen wrote:
> On Tue, Jul 24, 2012 at 8:55 AM, Mark Nelson wrote:
>> personally I think it's fine to have it on the wiki. I do want to stress
>> that performance is going to be (hopefully!) improving over the next couple
>> of months so we will probably
On Tue, Jul 24, 2012 at 10:55:37AM -0500, Mark Nelson wrote:
> On 07/24/2012 09:43 AM, Mehdi Abaakouk wrote:
>
> Thanks for taking the time to put all of your benchmarking
> procedures into writing! Having this kind of community
>
> ...
>
Thanks for your comments and these tools; they will hel
On 07/25/2012 06:34 AM, Ryan Nicholson wrote:
I'm running a cluster of 4 hosts, each with 3 fast SCSI OSDs and 1
very large SATA OSD, i.e. 12 fast OSDs and 4 slow OSDs in total. I wish to
segregate these into 2 pools that operate independently. The goal is to use
the faster
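The usual way to get this kind of segregation is through the CRUSH map: give each disk class its own root and rule, then point each pool at the matching ruleset. A sketch of the fast tier only; bucket names, ids, and weights are placeholders, and the referenced host buckets would need to be defined elsewhere in the map:

```
# Illustrative crushmap fragment -- names and weights are placeholders.
root fast {
        id -10
        alg straw
        hash 0  # rjenkins1
        item ceph1-fast weight 3.000
        item ceph2-fast weight 3.000
}

rule fast {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take fast
        step chooseleaf firstn 0 type host
        step emit
}
```

A parallel "slow" root and rule would cover the SATA OSDs; a pool is then switched over with something like `ceph osd pool set <pool> crush_ruleset 3`, where the number matches the rule's ruleset.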
Hi,
On a couple of systems I'm using the Debian packages provided on
Ceph.com, but these packages are hosted on a CA-based server.
In the EU that's rather slow, especially when updating multiple servers
and when downloading the debug packages.
As I'm lazy I don't want to maintain my own mir