Adding new mon to existing cluster in ceph v0.39(+?)

2012-01-10 Thread Sławomir Skowron
I have a problem with adding a new mon to an existing ceph cluster. The cluster now contains 3 mons, but I started with only one, on one machine. Then I added a second and third machine, with new mons and OSDs. Adding a new OSD is quite simple, but adding a new mon is a compilation of some pieces i

.rgw expand number of pg's

2012-01-10 Thread Sławomir Skowron
How do I expand the number of pg's in the rgw pool? -- Regards, Sławek "sZiBis" Skowron

Re: .rgw expand number of pg's

2012-01-10 Thread Samuel Just
At the moment, expanding the number of pgs in a pool is not working. We hope to get it working in the somewhat near future (probably a few months). Are you attempting to expand the number of osds and running out of pgs? -Sam 2012/1/10 Sławomir Skowron : > How to expand number of pg's in rgw pool

[PATCH]: set up rbd snapshot handling

2012-01-10 Thread Gregory Farnum
rbd: wire up snapshot removal and rollback functionality Signed-off-by: Greg Farnum Reviewed-by: Josh Durgin --- block/rbd.c | 32 1 files changed, 32 insertions(+), 0 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index 7a2384c..f52c1ca 100644 --- a/blo
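The diff itself is cut off in the archive. As a rough sketch of what this kind of wiring looks like (an illustration under stated assumptions, not the actual patch from this thread): the QEMU-side function names and the simplified BDRVRBDState below are made up, while rbd_snap_remove() and rbd_snap_rollback() are the real librbd C calls that such handlers delegate to.

/* Hypothetical sketch, not the patch posted in this thread: wiring
 * QEMU block-driver snapshot callbacks to librbd.  BDRVRBDState and
 * the qemu_rbd_* names below are simplified assumptions. */
#include <rbd/librbd.h>

typedef struct BDRVRBDState {
    rbd_image_t image;   /* open librbd image handle */
} BDRVRBDState;

/* Delete a named snapshot of the open image. */
static int qemu_rbd_snap_remove(BDRVRBDState *s, const char *snap_name)
{
    return rbd_snap_remove(s->image, snap_name);    /* real librbd call */
}

/* Roll the image back to a named snapshot. */
static int qemu_rbd_snap_rollback(BDRVRBDState *s, const char *snap_name)
{
    return rbd_snap_rollback(s->image, snap_name);  /* real librbd call */
}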

Re: .rgw expand number of pg's

2012-01-10 Thread Sławomir Skowron
Maybe I misunderstood the problem, but I see something like this. My setup is a 3-node cluster: 78 OSDs and 3 mons. On top of the cluster, radosgw runs on every machine. Every pool has 3 replicas. The default placement policy for replicas is by host in racks, and every machine is in a different rack. When I do a s

Re: .rgw expand number of pg's

2012-01-10 Thread Samuel Just
Could you post your ceph.conf to the list? The output of 'ceph -s' and 'ceph pg dump' would also help. -Sam 2012/1/10 Sławomir Skowron : > Maybe i missunderstood problem, but i see something like this. > > My setup is 3 node cluster. 78 osd and 3 mons. At the top of the > cluster working radosgw
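For the setup described above (78 OSDs, 3 replicas), a commonly used sizing rule of thumb, not something stated in this thread, is to target roughly (OSDs x 100) / replicas placement groups, rounded up to a power of two; since pg_num could not be grown after pool creation at the time, it had to be chosen generously up front. A small sketch of the arithmetic:

/* Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to the
 * next power of two.  A common sizing guideline, not a value
 * prescribed anywhere in this thread. */
#include <stdio.h>

static unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

int main(void)
{
    unsigned osds = 78, replicas = 3;           /* setup from the mail above */
    unsigned target = osds * 100 / replicas;    /* = 2600 */
    printf("suggested pg_num: %u\n", next_pow2(target)); /* prints 4096 */
    return 0;
}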

Re: Adding new mon to existing cluster in ceph v0.39(+?)

2012-01-10 Thread Samuel Just
It looks like in step one you needed to supply either a monmap or the addresses of existing monitors. What errors did you encounter? -Sam 2012/1/10 Sławomir Skowron : > I have some problem with adding a new mon to existing ceph cluster. > > Now cluster contains a 3 mon's, but i started with only one

Re: towards a user-mode diagnostic log mechanism

2012-01-10 Thread Tommi Virtanen
On Thu, Jan 5, 2012 at 20:09, Colin McCabe wrote: > Getting the system time is a surprisingly expensive operation, and > this poses a problem for logging system designers.  You can use the > rdtsc CPU instruction, but unfortunately on some architectures CPU > frequency scaling makes it very inaccu

Re: towards a user-mode diagnostic log mechanism

2012-01-10 Thread Tommi Virtanen
On Tue, Jan 10, 2012 at 16:40, Noah Watkins wrote: > The cost of reading the software clock via gettimeofday is dependent on the > OS implementation, and can vary by as much as 2 orders of magnitude (see > attached slide: 1.6us vs 60ns per clock read). > > Older Linux kernels force a thread into t

Re: towards a user-mode diagnostic log mechanism

2012-01-10 Thread Noah Watkins
On 1/10/12 5:04 PM, Tommi Virtanen wrote: On Tue, Jan 10, 2012 at 16:40, Noah Watkins wrote: Older Linux kernels force a thread into the kernel when calling gettimeofday adding enormous overhead relative to the newer vsyscall implementations that evaluate gettimeofday within userspace. So tu
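A quick way to see which regime a given kernel and libc fall into, whether a clock read costs tens of nanoseconds via the vDSO/vsyscall path or microseconds via a real syscall, is a loop like the one below (a standalone sketch, not code from this thread):

/* Minimal microbenchmark (not from this thread): average cost of
 * gettimeofday() and clock_gettime(CLOCK_MONOTONIC) per call.  On a
 * vsyscall/vDSO-backed clock this is typically tens of ns; if the
 * call falls back to a real syscall it can reach microseconds. */
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

#define ITERS 1000000

static double elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec t0, t1, ts;
    struct timeval tv;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        gettimeofday(&tv, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("gettimeofday:  %.1f ns/call\n", elapsed_ns(t0, t1) / ITERS);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        clock_gettime(CLOCK_MONOTONIC, &ts);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("clock_gettime: %.1f ns/call\n", elapsed_ns(t0, t1) / ITERS);

    return 0;
}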