Hi,
Recently I've reduced my test cluster from six to four OSDs at ~60% usage
on a six-node setup, and I removed a bunch of rbd objects during recovery
to avoid overfilling. Right now I'm constantly getting a warning about a
nearfull state on a non-existent OSD:
health HEALTH_WARN 1 near full osd(s)
monmap
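A quick way to see which OSD the warning refers to, and whether that id is
still in the map (standard ceph CLI):

ceph health detail   # names the nearfull osd and its utilization
ceph osd tree        # shows whether that osd id still exists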
On Fri, Jun 22, 2012 at 1:23 PM, John Axel Eriksson j...@insane.se wrote:
I guess this has been asked before; I'm just new to the list and wondered
whether it's possible to do rolling upgrades of mons, osds and radosgw.
We will soon be in the process of migrating from our current
storage
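A rough sketch of what a rolling restart looks like under the sysvinit
scripts (the daemon names here are only illustrative):

# on each node in turn: upgrade the packages, then restart the daemons
# one at a time, waiting for the cluster to settle in between
/etc/init.d/ceph restart mon.a
ceph health                     # wait for HEALTH_OK
/etc/init.d/ceph restart osd.0
ceph health                     # wait for recovery before the next osd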
Hmm, can't reproduce that (phew!). Qemu 1.1 release, 0.47.2, guest/host
mainly Debian wheezy. The only major difference between my setup and
yours is the underlying fs: I'm tired of btrfs's unpredictable load
issues and moved back to xfs.
BTW, you calculate sha1 in the test suite, not sha256 as you mentioned.
On Wed, May 16, 2012 at 12:24 PM, Andrey Korolyov and...@xdel.ru wrote:
This is most likely due to a recently-fixed problem.
The fix is found in this commit, although there were
other changes that led up to it:
32eec68d2f rbd: don't drop
Hi,
For Stefan:
Increasing socket memory gave me a few percent on fio tests inside the
VM (I measured a 'max IOPS until ceph throws a message about delayed
writes' parameter). More importantly, the osd process should, if
possible, be pinned to a dedicated core or two, and all other processes
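For reference, the kind of tuning meant above; the buffer sizes and core
numbers are only illustrative:

# enlarge the kernel socket buffers
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
# pin running ceph-osd processes to two dedicated cores
for pid in $(pidof ceph-osd); do taskset -cp 0,1 $pid; done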
Hi,
I ran into almost the same problem about two months ago, and there are a
couple of corner cases: near-default TCP parameters, a small journal
size, disks that are not backed by a controller with an NVRAM cache, and
high load on the OSDs' CPU caused by other processes. Finally, I was able
to achieve 115Mb/s
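The journal-size corner case above is a ceph.conf setting; 1 GB here is
just an example value:

[osd]
    ; journal size is given in MB; too small a journal throttles writes
    osd journal size = 1024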
josh.dur...@inktank.com wrote:
On 05/15/2012 04:49 AM, Andrey Korolyov wrote:
Hi,
There is a strange bug when I try to map a large number of block devices
inside the pool, like the following:
for vol in $(rbd ls); do rbd map $vol; [some-microsleep]; [some
operation or nothing, I have stubbed a guestfs mount here];
[some-microsleep]; rbd unmap /dev/rbd/rbd/$vol;
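A self-contained version of that loop; the sleep values stand in for the
elided micro-sleeps:

for vol in $(rbd ls); do
    rbd map "$vol"
    sleep 0.1                          # placeholder micro-sleep
    # some operation or nothing; a guestfs mount was stubbed here
    sleep 0.1
    rbd unmap "/dev/rbd/rbd/$vol"
done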
Hello everyone,
I have just tried the ceph collectd fork on wheezy and noticed that the
ceph plugin produces nothing but zeroes (see below) for all node types.
The Python cephtool works just fine. Collectd runs as root, and there
are no obvious errors like socket permissions, and no tips from its
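One way to check whether the counters themselves are live, bypassing
collectd entirely (the socket path is illustrative, and this assumes a
version whose admin socket supports 'perf dump'):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump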
I tested all of them about a week ago; all works fine. Also, it would be
very nice if rbd could list the actual allocated size of every image or
snapshot in the future.
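For what it's worth, on versions that have 'rbd diff', the allocated size
can be approximated by summing the reported extents (the image name is
illustrative):

rbd diff rbd/myimage | awk '{sum += $2} END {print sum " bytes"}'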
On Wed, Apr 18, 2012 at 5:22 PM, Martin Mailand mar...@tuxadero.com wrote:
Hi Wido,
I am trying to do the snapshots via
: Disk 'rbd/vm1:rbd_cache_enabled=1' does not support snapshotting
Maybe the rbd_cache option is the problem?
-martin
On 18.04.2012 16:39, Andrey Korolyov wrote:
I tested all of them about a week ago; all works fine. Also, it would be
very nice if rbd could list the actual allocated size
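If the inline option is indeed what trips up the snapshot path, one
workaround may be to move the cache setting into ceph.conf instead of the
disk string (assumes a version with the 'rbd cache' option):

[client]
    rbd cache = true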
feel it's up to the sysadmin to mount / symlink the correct storage devices
on the correct paths - ceph should not be concerned that some volumes might
need to sit together.
Rgds,
Bernard
On 05 Apr 2012, at 09:12, Andrey Korolyov wrote:
Right, but we probably need journal separation
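One sketch of such separation, done purely in ceph.conf with the $id
metavariable (the device path is illustrative):

[osd]
    ; put journals on a separate device, one partition per osd
    osd journal = /dev/disk/by-partlabel/journal-$id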
Hi,
# virsh blkdeviotune Test vdb --write_iops_sec 50 //file block device
# virsh blkdeviotune Test vda --write_iops_sec 50 //rbd block device
error: Unable to change block I/O throttle
error: invalid argument: No device found for specified path
2012-04-03 07:38:49.170+0000: 30171: debug :
But I am able to set static limits in the config for rbd :) All I want
is to change them on the fly.
It is NOT the cgroups mechanism; it is completely qemu-driven.
On Tue, Apr 3, 2012 at 12:21 PM, Wido den Hollander w...@widodh.nl wrote:
Hi,
On 3-4-2012 10:02, Andrey Korolyov wrote:
Hi,
# virsh
On 3-4-2012 10:28, Andrey Korolyov wrote:
But I am able to set static limits in the config for rbd :) All I want
is to change them on the fly.
It is NOT the cgroups mechanism; it is completely qemu-driven.
Are you sure about that?
http://libvirt.org/formatdomain.html#elementsBlockTuning
Browsing through
The suggested hack works; it seems the libvirt devs did not remove the
block limitation because they consider this feature experimental, or they
forgot about it.
On Tue, Apr 3, 2012 at 12:55 PM, Andrey Korolyov and...@xdel.ru wrote:
At least, the elements under the iotune block apply to rbd, and you can
test
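For the record, the iotune element in question looks like this inside the
disk definition of the domain XML (the value is illustrative):

<disk type='network' device='disk'>
  ...
  <iotune>
    <write_iops_sec>50</write_iops_sec>
  </iotune>
</disk>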
On Fri, Mar 23, 2012 at 5:25 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi Sam,
Can you please suggest where to start profiling the osd? If the
bottleneck were related to something as simple as direct I/O speed, I'm
sure I would have caught it long ago, even just by cross-checking
results
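As a baseline for the direct-I/O part, a raw synchronous write test
against the journal device is a quick sanity check (path and sizes are
illustrative):

dd if=/dev/zero of=/srv/osd0/journal-test bs=4k count=10000 oflag=direct,dsync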
Sorry for my previous question about rbd chunks, it was really stupid :)
On Mon, Mar 19, 2012 at 10:40 PM, Josh Durgin josh.dur...@dreamhost.com wrote:
On 03/19/2012 11:13 AM, Andrey Korolyov wrote:
Nope, I'm using KVM for rbd guests. Surely I've noticed that Sage
with? If you run a rados bench from both
machines, what do the results look like?
Also, can you do the ceph osd bench on each of your OSDs, please?
(http://ceph.newdream.net/wiki/Troubleshooting#OSD_performance)
-Greg
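The two benchmarks referred to, spelled out (pool name and osd id are
illustrative):

rados -p rbd bench 60 write     # 60-second write benchmark from a client
ceph tell osd.0 bench           # per-osd backend benchmark
                                # (older releases: ceph osd tell 0 bench)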
On Monday, March 19, 2012 at 6:46 AM, Andrey Korolyov wrote:
More strangely