Re: [ceph-users] cannot remove rbd image, snapshot busy

2014-04-03 Thread YIP Wai Peng
josh.dur...@inktank.com wrote: On 04/03/2014 03:36 PM, Jonathan Gowar wrote: On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote: Dear all, I have an rbd image that I can't delete. It contains a snapshot that is busy. # rbd --pool openstack-images rm 2383ba62-b7ab-4964-a776-fb3f3723aabe
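
A minimal sketch of the usual cleanup, assuming the snapshots are unprotected (image name taken from the thread):

# rbd --pool openstack-images snap ls 2383ba62-b7ab-4964-a776-fb3f3723aabe
# rbd --pool openstack-images snap purge 2383ba62-b7ab-4964-a776-fb3f3723aabe
# rbd --pool openstack-images rm 2383ba62-b7ab-4964-a776-fb3f3723aabe

snap ls shows what is blocking the removal, snap purge deletes every unprotected snapshot, and rm should then go through. Protected snapshots have to be unprotected first (see the "Remove volume" thread below).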

Re: [ceph-users] if partition name changes, will ceph get corrupted?

2014-03-14 Thread YIP Wai Peng
Not sure if this answers your question, but when you start the osd on the renamed device, ceph will not be able to find the correct key and will refuse to use that osd. - WP On Thursday, 13 March 2014, Sidharta Mukerjee smukerje...@gmail.com wrote: If a partition name such as /dev/sdd changes to /dev/sde
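
One hedged way to sidestep device renames is to mount the OSD data partition by a stable identifier instead of /dev/sdX (the osd id, filesystem and mount point below are only illustrative):

# blkid /dev/sdd1

then put the reported UUID into /etc/fstab in place of the device name:

UUID=<uuid-from-blkid>  /var/lib/ceph/osd/ceph-3  xfs  noatime  0 0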

Re: [ceph-users] Remove volume

2014-03-14 Thread YIP Wai Peng
Had the same issue. I restarted glance and tried removing with rbd snap rm image@snap. Some of them are marked protected, in which case you'd need to unprotect them first. - WP On Thursday, 13 March 2014, yalla.gnan.ku...@accenture.com yalla.gnan.ku...@accenture.com wrote: Hi All, Any
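
A short sketch of the unprotect-then-remove sequence (pool, image and snapshot names are placeholders):

# rbd --pool images snap unprotect <image>@<snap>
# rbd --pool images snap rm <image>@<snap>

unprotect will refuse if clones still depend on the snapshot; rbd --pool images children <image>@<snap> lists them.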

[ceph-users] Help, ceph mons all crashed

2014-03-06 Thread YIP Wai Peng
Hi, I am currently facing a horrible situation. All my mons are crashing on startup. Here's a dump of mon.a.log. The last few ops are below. It seems to crash trying to remove a snap? Any ideas? - WP snip -10 2014-03-06 17:04:38.838490 7fb2a541a700 1 -- 192.168.116.24:6789/0 -- osd.9
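
When a mon aborts on startup like this, running it in the foreground with verbose logging usually captures a backtrace worth attaching to a tracker issue (mon id "a" as in the log above; the debug levels are just a common starting point):

# ceph-mon -i a -d --debug-mon 20 --debug-ms 1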

[ceph-users] Help! All ceph mons crashed.

2014-03-06 Thread YIP Wai Peng
Hi, I am currently facing a horrible situation. All my mons are crashing on startup. Here's a dump of mon.a.log. The last few ops are below. It seems to crash trying to remove a snap? Any ideas? - WP snip -10 2014-03-06 17:04:38.838490 7fb2a541a700 1 -- 192.168.116.24:6789/0 -- osd.9

Re: [ceph-users] Help! All ceph mons crashed.

2014-03-06 Thread YIP Wai Peng
when he deleted an image in openstack. I'm now wondering if I can ignore the operation, or the openstack glance pool, and get the mons to start up again. Any help will be greatly appreciated! - WP On Thu, Mar 6, 2014 at 5:33 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: Hi, I am currently

[ceph-users] [Solved]: Help! All ceph mons crashed.

2014-03-06 Thread YIP Wai Peng
I've managed to get joao's assistance in tracking down the issue. I'll be updating bug 7210. Thanks joao and all! - WP On Thu, Mar 6, 2014 at 6:25 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: Ok, I think I got bitten by http://tracker.ceph.com/issues/7210, or rather, the cppool command
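
For reference, the pool copy that seems to have triggered this would have been along these lines (pool names are placeholders); the thread suggests it was the copied pool's snapshot state that later tripped the mons, since rados cppool copies objects rather than the pool's full metadata:

# rados cppool openstack-images openstack-images.new
# ceph osd pool rename openstack-images openstack-images.old
# ceph osd pool rename openstack-images.new openstack-images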

[ceph-users] cannot remove rbd image, snapshot busy

2014-03-03 Thread YIP Wai Peng
Dear all, I have an rbd image that I can't delete. It contains a snapshot that is busy. # rbd --pool openstack-images rm 2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted 2014-03-04 14:02:04.062099 7f340b2d5760 -1 librbd: image has snapshots - not removing Removing image: 0% complete...failed. rbd:

Re: [ceph-users] trying to understand stuck_unclean

2014-01-10 Thread YIP Wai Peng
should be -9 6 datacenter COM1 -6 6 room 02-WIRECEN -4 3 host ceph2 snip -2 3 host ceph1 snip Moving a host away from the bucket and moving it back solved the problem. - WP On Fri, Jan 10, 2014 at 12:22 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: Hi Wido, Thanks for the reply. I've
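
The move-out/move-back workaround maps onto the crush move command; a sketch using the bucket names from the tree above (the intermediate location is only illustrative):

# ceph osd crush move ceph2 datacenter=COM1
# ceph osd crush move ceph2 room=02-WIRECEN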

[ceph-users] trying to understand stuck_unclean

2014-01-09 Thread YIP Wai Peng
Dear all, I have some pgs that are stuck unclean and I'm trying to understand why. Hopefully someone can help me shed some light on it. For example, one of them is # ceph pg dump_stuck unclean 1.fa 0 0 0 0 0 0 0 active+remapped 2014-01-10 11:18:53.147842 0'0 6452:4272 [7] [7,3] 0'0 2014-01-09
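
In that dump_stuck line, [7] should be the up set and [7,3] the acting set, so the PG is active but remapped onto a non-ideal location, likely because CRUSH only managed to map one OSD for it. Querying the PG usually shows why it cannot settle:

# ceph pg 1.fa query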

Re: [ceph-users] trying to understand stuck_unclean

2014-01-09 Thread YIP Wai Peng
, scrubber.waiting_on: 0, scrubber.waiting_on_whom: []}}, { name: Started, enter_time: 2014-01-10 11:18:40.137868}]} On Fri, Jan 10, 2014 at 12:16 PM, Wido den Hollander w...@42on.com wrote: On 01/10/2014 05:13 AM, YIP Wai Peng wrote: Dear all, I have some pgs

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread YIP Wai Peng
On Wednesday, 20 November 2013, Gautam Saxena wrote: Hi Yip, Thanks for the code. With respect to can't grow, I think I can (with some difficulty perhaps?) resize the vm if I needed to, but I'm really just trying to buy myself time till CEPH-FS is production ready. Point #3 scares me, so

[ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread YIP Wai Peng
On Wednesday, 20 November 2013, Dimitri Maziuk wrote: On 11/18/2013 01:19 AM, YIP Wai Peng wrote: Hi Dima, Benchmark FYI. $ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k Version 1.97 --Sequential Create-- Random Create altair -Create-- --Read

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-18 Thread YIP Wai Peng
and performance of this technique? (That is, is there are any reason to believe that it would more/less robust and/or performant than option #3 mentioned in the original thread?) On Fri, Nov 15, 2013 at 1:57 AM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: On Fri, Nov 15, 2013 at 12:08 AM, Gautam

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-18 Thread YIP Wai Peng
Hi Dima, Benchmark FYI. $ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k Version 1.97 --Sequential Create-- Random Create altair -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min/sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-14 Thread YIP Wai Peng
On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena gsax...@i-a-inc.com wrote: 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/ ) We are now running this - basically an intermediate/gateway node that maps ceph rbd images and exports them over NFS.
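
A rough sketch of that gateway setup, following the linked post (image name, size, mount point and export network are all placeholders, and the mapped device is typically /dev/rbd0 on an otherwise idle gateway):

# rbd create rbd/nfsexport --size 102400
# rbd map rbd/nfsexport
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /srv/export
# echo "/srv/export 192.168.0.0/24(rw,no_root_squash,async)" >> /etc/exports
# exportfs -ra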

Re: [ceph-users] Setting nearfull / full ratio

2013-08-02 Thread YIP Wai Peng
Yes, I have changed them on all three monitors. Is reading the nearfull ratio off 'ceph pg dump' the correct way to view it? - WP On Fri, Aug 2, 2013 at 12:04 AM, Joao Eduardo Luis joao.l...@inktank.com wrote: On 08/01/2013 12:53 PM, YIP Wai Peng wrote: Hi all, I am trying to change the mon osd

[ceph-users] Setting nearfull / full ratio

2013-08-01 Thread YIP Wai Peng
Hi all, I am trying to change the mon osd nearfull / full ratio. Currently, my settings are these: # ceph pg dump | head snip full_ratio 0.95 nearfull_ratio 0.85 I edited the ceph.conf file and added the configuration options, following the instructions at
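
A hedged note for Ceph of that era: the mon osd full / nearfull options in ceph.conf only seed the initial PG map, so on a running cluster the ratios were usually changed at runtime, roughly like this, and then checked with the same "ceph pg dump | head":

# ceph pg set_nearfull_ratio 0.85
# ceph pg set_full_ratio 0.95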

Re: [ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-04 Thread YIP Wai Peng
on tunables optimal. # ceph osd crush tunables optimal adjusted tunables profile to optimal What's wrong? - WP On Tue, Jun 4, 2013 at 1:23 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: Hi Sage, Thanks, I noticed after re-reading the documentation. I realized that osd.8 was not in host3. After

Re: [ceph-users] CentOS + qemu-kvm rbd support update

2013-06-03 Thread YIP Wai Peng
Hi Andrei, Have you tried the patched ones at https://objects.dreamhost.com/rpms/qemu/qemu-kvm-0.12.1.2-2.355.el6.2.x86_64.rpm and https://objects.dreamhost.com/rpms/qemu/qemu-img-0.12.1.2-2.355.el6.2.x86_64.rpm? I got the links off the IRC chat; I'm using them now. - WP On Sun, Jun 2, 2013 at
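
After installing the patched packages, one quick check is whether rbd shows up in qemu-img's supported format list (package filenames taken from the URLs above):

# rpm -Uvh qemu-img-0.12.1.2-2.355.el6.2.x86_64.rpm qemu-kvm-0.12.1.2-2.355.el6.2.x86_64.rpm
# qemu-img --help | grep rbd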

[ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-03 Thread YIP Wai Peng
Hi all, I'm running ceph on CentOS6 on 3 hosts, with 3 OSDs each (9 OSDs in total). When I increased one of my pools' rep size from 2 to 3, 6 PGs got stuck in active+clean+degraded mode, but ceph doesn't create new replicas. One of the problematic PGs has the following (snipped for brevity) {
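
For context, the replication change would have been of the form "ceph osd pool set <pool> size 3" (pool name is a placeholder); the degraded PGs can then be listed and inspected one at a time:

# ceph health detail
# ceph pg <pgid> query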

Re: [ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-03 Thread YIP Wai Peng
Hi Sage, It is on optimal tunables already. However, I'm on kernel 2.6.32-358.6.2.el6.x86_64. Will the tunables take effect or do I have to upgrade to something newer? - WP On Tue, Jun 4, 2013 at 11:58 AM, Sage Weil s...@inktank.com wrote: On Tue, 4 Jun 2013, YIP Wai Peng wrote: Hi all

Re: [ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-03 Thread YIP Wai Peng
if this issue goes away. Regards, Wai Peng On Tue, Jun 4, 2013 at 12:01 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote: Hi Sage, It is on optimal tunables already. However, I'm on kernel 2.6.32-358.6.2.el6.x86_64. Will the tunables take effect or do I have to upgrade to something newer? - WP

Re: [ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-03 Thread YIP Wai Peng
, 6], snip Still, nothing is happening. What can be wrong? - WP On Tue, Jun 4, 2013 at 12:26 PM, Sage Weil s...@inktank.com wrote: On Tue, 4 Jun 2013, YIP Wai Peng wrote: Sorry, to set things in context, I had some other problems last weekend. Setting it to optimal tunables helped

[ceph-users] ceph rbd info and resize not supported on centos6.3

2013-05-10 Thread YIP Wai Peng
Dear all, I'm currently running a ceph client on centos6.3. The kernel has been upgraded to kernel-lt-3.0.77-1 from elrepo, which includes the rbd module. I can create and map an rbd image fine. However, info and resize fail. create new image [root@nfs1 ~]# rbd --pool userfs -m ceph1 --id nfs
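
The failing calls were presumably of this shape (client options copied from the create/map line in the message; image name is a placeholder):

# rbd --pool userfs -m ceph1 --id nfs info <image>
# rbd --pool userfs -m ceph1 --id nfs resize --size 20480 <image>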

Re: [ceph-users] ceph rbd info and resize not supported on centos6.3

2013-05-10 Thread YIP Wai Peng
likely that you're seeing https://bugzilla.redhat.com/show_bug.cgi?id=891993 Barry On 10/05/13 08:15, YIP Wai Peng wrote: Dear all, I'm currently running a ceph client on centos6.3. Kernel has been upgraded to kernel-lt-3.0.77-1 from