Re: Ideal hardware spec?

2012-08-23 Thread Wido den Hollander
On 08/22/2012 04:39 PM, Stephen Perkins wrote: Hi all, Is there a place we can set up a group of hardware recipes that people can query and modify over time? It would be good if people could submit and "group modify" the recipes. I would envision "hypothetical" configurations and "deployed/te

Re: SimpleMessenger dispatching: cause of performance problems?

2012-08-23 Thread Andreas Bluemle
Hi, the size of the rbd data objects on the OSD is 4 MByte (default). Best Regards Andreas Samuel Just wrote: What rbd block size were you using? -Sam On Tue, Aug 21, 2012 at 10:29 PM, Andreas Bluemle wrote: Hi, Samuel Just wrote: Was the cluster completely healthy at the time th
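
For context, the 4 MByte default corresponds to an object "order" of 22 (2^22 bytes), which can be set per image at creation time. A hedged sketch, with a placeholder image name, of creating an image with a non-default object size:

    # Create a 10 GB image with 1 MB objects (order 20) instead of the
    # default 4 MB (order 22); "testimg" is a placeholder name.
    rbd create testimg --size 10240 --order 20

    # Confirm the order that was actually used.
    rbd info testimg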

Re: Ideal hardware spec?

2012-08-23 Thread Wido den Hollander
On 08/22/2012 05:46 PM, Jonathan Proulx wrote: On Wed, Aug 22, 2012 at 04:17:23PM +0200, Wido den Hollander wrote: :On 08/22/2012 03:55 PM, Jonathan Proulx wrote: :You can also use the USB sticks[0] from Stec; they have server-grade :onboard USB sticks for these kinds of applications. Those look

[PATCH v2] libceph: Fix sparse warning

2012-08-23 Thread Iulius Curt
From: Iulius Curt Make ceph_monc_do_poolop() static to remove the following sparse warning: * net/ceph/mon_client.c:616:5: warning: symbol 'ceph_monc_do_poolop' was not declared. Should it be static? Also drop the 'ceph_monc_' prefix, since it is now a private function. Signed-off-by: Iulius Curt
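
A minimal, self-contained C sketch of the pattern the patch applies (the function body and names below are hypothetical stand-ins, not the real mon_client.c code):

    #include <stdio.h>

    /* Before the fix, a function defined with external linkage but no
     * prototype in any header makes sparse ask:
     *   "symbol 'do_poolop' was not declared. Should it be static?"
     * Marking it static gives it internal linkage and silences the
     * warning; the module-style prefix can then be dropped because
     * the name is no longer globally visible. */
    static int do_poolop(int op)
    {
        return op * 2;  /* stand-in for the real pool operation */
    }

    int main(void)
    {
        printf("%d\n", do_poolop(21));
        return 0;
    }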

Re: [ceph-commit] How to build ceph upon zfs filesystem.

2012-08-23 Thread Joao Eduardo Luis
On 08/23/2012 11:41 AM, RamuNaidu Eppa wrote: > Hi all, > > I want to build ceph upon the zfs file system; right now I have ceph installed upon > btrfs. > Please help me build ceph upon the zfs filesystem. > > Thanks, > Ramu. Hello Ramu, I suspect the right place for this question would be the ceph-

Re: [ceph-commit] sudo apt-get install radosgw command not working.

2012-08-23 Thread Joao Eduardo Luis
On 08/23/2012 06:02 AM, RamuNaidu Eppa wrote: > Hi all, > > I want to install radosgw. I followed the steps at > "http://ceph.com/docs/master/radosgw/manual-install/", but after I tried > the command "sudo apt-get install radosgw" > it did not work; it shows an error: > Reading package lists... Done
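
A common cause of that failure in 2012-era installs was a missing APT source for the Ceph packages. A hedged sketch of the usual remedy (the key URL and repository line follow that era's documentation and may need adjusting for your distribution):

    # Add the Ceph release key and APT repository, then retry.
    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    echo "deb http://ceph.com/debian/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install radosgw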

PG's

2012-08-23 Thread Ryan Nicholson
All: I have a 16-OSD cluster running 0.48 (Argonaut), built from source. I rebuilt the entire cluster on Sunday evening, 8-19-2012, and started some rados testing. I have a custom CRUSH map that calls for the "rbd" and "metadata" pools and a custom pool called "SCSI" to be pulled from osd.0-11, w
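
For readers following along: pinning a pool to a subset of OSDs is done with a dedicated CRUSH rule. A hedged sketch of what such a rule might look like (the bucket name "scsi-hosts" and the ruleset number are hypothetical; the actual map is not shown here):

    # Hypothetical rule restricting placement to a bucket containing
    # only osd.0 through osd.11; the pool would then be pointed at it
    # with "ceph osd pool set SCSI crush_ruleset 3".
    rule scsi {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take scsi-hosts
        step chooseleaf firstn 0 type host
        step emit
    }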

Re: PG's

2012-08-23 Thread Gregory Farnum
On Thu, Aug 23, 2012 at 2:51 PM, Ryan Nicholson wrote: > All: > > I have a 16-OSD cluster running 0.48 (Argonaut), built from source. > > I rebuilt the entire cluster on Sunday evening, 8-19-2012, and started some > rados testing. > > I have a custom CRUSH map that calls for the "rbd" and "metadata"

How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
Hi List, I am attempting to get a test ceph cluster up and running. I am using ceph-0.50 across all nodes. I am attempting to get the number of pgs to be around 100 per osd as per the documentation at: http://ceph.com/docs/master/dev/placement-group/ I have attempted to increase the pgs via seve
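
For pools created after the cluster is up, the PG count can be passed directly at pool-creation time. A hedged example with a placeholder pool name and count (note this does not change the default pools created by mkcephfs, which are governed by the pg bits settings discussed in the replies below):

    # Create a pool with an explicit PG count; "testpool" and 1024 are
    # placeholders -- aim for roughly 100 PGs per OSD per the docs.
    ceph osd pool create testpool 1024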

kernel oops on 0.47.2, kernel 3.4.4

2012-08-23 Thread Mandell Degerness
I know this is an old build, but I just want to verify that this isn't an unknown bug. For context, the attached log covers the time from when server .15 dropped off the net (we think power failure at this point). OSDs 72, 73, 74, and 75 are on the node which apparently lost power. Ceph version

Re: kernel oops on 0.47.2, kernel 3.4.4

2012-08-23 Thread Sage Weil
On Thu, 23 Aug 2012, Mandell Degerness wrote: > I know this is an old build, but I just want to verify that this isn't > an unknown bug. For context, the attached log covers the time from > when server .15 dropped off the net (we think power failure at this > point). OSDs 72, 73, 74, and 75 are o

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Jim Schutt
On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and create the cluster. This does not work either, as the cluster comes up with 6 pg bits per osd still. re: http://ceph.com/docs/master/config-cluster/osd-config-ref/ I use this regu
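
A sketch of the configuration being described (values are illustrative; these options only affect the pools created when the cluster is first built, so they must be in ceph.conf before running mkcephfs):

    ; Hedged ceph.conf sketch. "osd pg bits" sets the PG count of the
    ; initial pools as a power of two per OSD (7 bits = 128 PGs/OSD);
    ; "osd pgp bits" should normally be kept in step with it.
    [global]
        osd pg bits = 7
        osd pgp bits = 7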

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt wrote: > On 08/23/2012 02:39 PM, Tren Blackburn wrote: >> >> 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and >> create the cluster. This does not work either, as the cluster comes up >> with 6 pg bits per osd still. >> re: http://ce

RE: PG's

2012-08-23 Thread Ryan Nicholson
Thanks, Greg! Here's the link to all of the dumps you asked for: http://pastebin.com/y4bPwSz8 Let me know what you think! Ryan Nicholson

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Jim Schutt
On 08/23/2012 03:26 PM, Tren Blackburn wrote: On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt wrote: On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and create the cluster. This does not work either, as the cluster comes up with 6 pg bit

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:48 PM, Jim Schutt wrote: > On 08/23/2012 03:26 PM, Tren Blackburn wrote: >> >> On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt wrote: >>> >>> On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and c

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Gregory Farnum
On Thu, Aug 23, 2012 at 5:48 PM, Jim Schutt wrote: > On 08/23/2012 03:26 PM, Tren Blackburn wrote: >> >> On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt wrote: >>> >>> On 08/23/2012 02:39 PM, Tren Blackburn wrote: 2) Increase the number of pgs via ceph.conf (osd pg bits = 7) and c

Re: How to increase the number of pgs at pool creation time?

2012-08-23 Thread Tren Blackburn
On Thu, Aug 23, 2012 at 2:58 PM, Gregory Farnum wrote: > On Thu, Aug 23, 2012 at 5:48 PM, Jim Schutt wrote: >> On 08/23/2012 03:26 PM, Tren Blackburn wrote: >>> >>> On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt wrote: On 08/23/2012 02:39 PM, Tren Blackburn wrote: > > > 2) Inc

RE: PG's

2012-08-23 Thread Ryan Nicholson
http://pastebin.com/5bRiUTxf Greg: I've also attached a "ceph osd tree" dump (above). From what I can tell, the tree is correct and lines up with how I desire to weight the cluster(s); however, I do see that the reweight values for the smaller OSDs (SCSI nodes) are less than 1. Perhaps I need to look

Re: PG's

2012-08-23 Thread Gregory Farnum
Wow, that's quite a dynamic range of numbers — I'm not sure you can count on the cluster behaving well with that much happening in the overrides. If you actually have OSDs with varying capacity, you should give them different CRUSH weights (using "ceph osd crush set ...") rather than using the m
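
A hedged illustration of the two commands being contrasted (the OSD id, weight, and location fields are placeholders, in argonaut-era syntax):

    # CRUSH weight: the long-term capacity weight used for placement,
    # e.g. 0.5 for a disk half the size of the others.
    ceph osd crush set 12 osd.12 0.5 pool=default host=scsi-node-1

    # Reweight: a temporary 0..1 override applied on top of the CRUSH
    # weight, meant for corrective rebalancing rather than for
    # expressing differing disk sizes.
    ceph osd reweight 12 1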