josh.dur...@inktank.com wrote:
On 04/03/2014 03:36 PM, Jonathan Gowar wrote:
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
Dear all,
I have a rbd image that I can't delete. It contains a snapshot that is
busy
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe
Not sure if this answers your question, but when you start the osd that's
remapped, ceph will not be able to find the correct key and refuse to use
that osd.
- WP
On Thursday, 13 March 2014, Sidharta Mukerjee smukerje...@gmail.com wrote:
If a partition name such as /dev/sdd changes to /dev/sde
Had the same issue.
I restarted glance, and tried removing with rbd snap rm image@snap
Some of them are marked protected, in which you'd need to unprotected them
first.
- WP
On Thursday, 13 March 2014, yalla.gnan.ku...@accenture.com
yalla.gnan.ku...@accenture.com wrote:
Hi All,
Any
Hi,
I am currently facing a horrible situation. All my mons are crashing on
startup.
Here's a dump of mon.a.log. The last few ops are below. It seems to crash
trying to remove a snap? Any ideas?
- WP
snip
-10 2014-03-06 17:04:38.838490 7fb2a541a700 1 --
192.168.116.24:6789/0-- osd.9
Hi,
I am currently facing a horrible situation. All my mons are crashing on
startup.
Here's a dump of mon.a.log. The last few ops are below. It seems to crash
trying to remove a snap? Any ideas?
- WP
snip
-10 2014-03-06 17:04:38.838490 7fb2a541a700 1 -- 192.168.116.24:6789/0 --
osd.9
when he deleted an
image in openstack.
I'm now wondering if I can ignore the operation, or the openstack glance
pool, and get the mons to start up again. Any help will be greatly
appreciated!
- WP
On Thu, Mar 6, 2014 at 5:33 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote:
Hi,
I am currently
I've managed to get joao's assistance in tracking down the issue. I'll be
updating the bug 7210.
Thanks joao and all!
- WP
On Thu, Mar 6, 2014 at 6:25 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote:
Ok, I think I got bitten by http://tracker.ceph.com/issues/7210, or
rather, the cppool command
Dear all,
I have a rbd image that I can't delete. It contains a snapshot that is
busy
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
2014-03-04 14:02:04.062099 7f340b2d5760 -1 librbd: image has snapshots -
not removing
Removing image: 0% complete...failed.
rbd:
should be
-9 6 datacenter COM1
-6 6 room 02-WIRECEN
-4 3 host ceph2
snip
-2 3 host ceph1
snip
Moving a host away from the bucket and moving it back solved the problem.
- WP
On Fri, Jan 10, 2014 at 12:22 PM, YIP Wai Peng yi...@comp.nus.edu.sgwrote:
Hi Wido,
Thanks for the reply. I've
Dear all,
I have some pgs that are stuck_unclean, I'm trying to understand why.
Hopefully someone can help me shed some light on it.
For example, one of them is
# ceph pg dump_stuck unclean
1.fa 0 0 0 0 0 0 0 active+remapped 2014-01-10 11:18:53.147842 0'0 6452:4272
[7] [7,3] 0'0 2014-01-09
,
scrubber.waiting_on: 0,
scrubber.waiting_on_whom: []}},
{ name: Started,
enter_time: 2014-01-10 11:18:40.137868}]}
On Fri, Jan 10, 2014 at 12:16 PM, Wido den Hollander w...@42on.com wrote:
On 01/10/2014 05:13 AM, YIP Wai Peng wrote:
Dear all,
I have some pgs
On Wednesday, 20 November 2013, Gautam Saxena wrote:
Hi Yip,
Thanks for the code. With respect to can't grow, I think I can (with
some difficulty perhaps?) resize the vm if I needed to, but I'm really just
trying to buy myself time till CEPH-FS is production readyPoint #3
scares me, so
On Wednesday, 20 November 2013, Dimitri Maziuk wrote:
On 11/18/2013 01:19 AM, YIP Wai Peng wrote:
Hi Dima,
Benchmark FYI.
$ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
Version 1.97 --Sequential Create-- Random
Create
altair -Create-- --Read
and performance of this technique? (That is, is there are
any reason to believe that it would more/less robust and/or performant than
option #3 mentioned in the original thread?)
On Fri, Nov 15, 2013 at 1:57 AM, YIP Wai Peng yi...@comp.nus.edu.sgwrote:
On Fri, Nov 15, 2013 at 12:08 AM, Gautam
Hi Dima,
Benchmark FYI.
$ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
Version 1.97 --Sequential Create-- Random
Create
altair -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min/sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec
On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena gsax...@i-a-inc.com wrote:
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
)
We are now running this - basically an intermediate/gateway node that
mounts ceph rbd objects and exports them as NFS.
Yes, I have changed them on all monitors (3).
Is reading the nearfull off 'ceph pg dump' the correct way of viewing?
- WP
On Fri, Aug 2, 2013 at 12:04 AM, Joao Eduardo Luis joao.l...@inktank.comwrote:
On 08/01/2013 12:53 PM, YIP Wai Peng wrote:
Hi all,
I am trying to change the mon osd
Hi all,
I am trying to change the mon osd nearfull / full ratio. Currently, my
settings are these:
# ceph pg dump | head
snip
full_ratio 0.95
nearfull_ratio 0.85
I edited ceph.conf file and added the configuration options, following
instructions at
on tunables optimal.
# ceph osd crush tunables optimal
adjusted tunables profile to optimal
What's wrong?
- WP
On Tue, Jun 4, 2013 at 1:23 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote:
Hi Sage,
Thanks, I noticed after re-reading the documentation.
I realized that osd.8 was not in host3. After
Hi Andrel,
Have you tried the patched ones at
https://objects.dreamhost.com/rpms/qemu/qemu-kvm-0.12.1.2-2.355.el6.2.x86_64.rpmand
https://objects.dreamhost.com/rpms/qemu/qemu-img-0.12.1.2-2.355.el6.2.x86_64.rpm?
I got the links off the IRC chat, I'm using them now.
- WP
On Sun, Jun 2, 2013 at
Hi all,
I'm running ceph on CentOS6 on 3 hosts, with 3 OSD each (total 9 OSD).
When I increased one of my pool rep size from 2 to 3, just 6 PGs will get
stuck in active+clean+degraded mode, but it doesn't create new replicas.
One of the problematic PG has the following (snipped for brevity)
{
Hi Sage,
It is on optimal tunables already. However, I'm on kernel
2.6.32-358.6.2.el6.x86_64. Will the tunables take effect or do I have to
upgrade to something newer?
- WP
On Tue, Jun 4, 2013 at 11:58 AM, Sage Weil s...@inktank.com wrote:
On Tue, 4 Jun 2013, YIP Wai Peng wrote:
Hi all
if this issue goes away.
Regards,
Wai Peng
On Tue, Jun 4, 2013 at 12:01 PM, YIP Wai Peng yi...@comp.nus.edu.sg wrote:
Hi Sage,
It is on optimal tunables already. However, I'm on kernel
2.6.32-358.6.2.el6.x86_64. Will the tunables take effect or do I have to
upgrade to something newer?
- WP
,
6],
snip
Still, nothing is happening. What can be wrong?
- WP
On Tue, Jun 4, 2013 at 12:26 PM, Sage Weil s...@inktank.com wrote:
On Tue, 4 Jun 2013, YIP Wai Peng wrote:
Sorry, to set things in context, I had some other problems last weekend.
Setting it to optimal tunables helped
Dear all,
I'm currently running a ceph client on centos6.3. Kernel has been upgraded
to kernel-lt-3.0.77-1 from elrepo which includes the rbd module.
I can create an map a rbd image fine. However, info and resize fails.
create new image
[root@nfs1 ~]# rbd --pool userfs -m ceph1 --id nfs
likely that
you're seeing
https://bugzilla.redhat.com/**show_bug.cgi?id=891993https://bugzilla.redhat.com/show_bug.cgi?id=891993
Barry
On 10/05/13 08:15, YIP Wai Peng wrote:
Dear all,
I'm currently running a ceph client on centos6.3. Kernel has been
upgraded to kernel-lt-3.0.77-1 from
26 matches
Mail list logo