What do I need to do to get ceph to recognise the osds? (again
without ceph-deploy)
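Roughly the sequence I would expect to need (a sketch only, untested here,
and assuming the mons are already up; the directory layout, caps and weight
are just examples):

# allocate an id for the new osd
OSD_ID=$(ceph osd create)
# create and initialise its data directory (assumes the disk is already mounted there)
mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
ceph-osd -i $OSD_ID --mkfs --mkkey
# register its key and put it into the crush map so it can receive data
ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow rwx' \
  -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
ceph osd crush add osd.$OSD_ID 1.0 root=default host=$(hostname -s)
service ceph start osd.$OSD_ID

(I believe you also need a matching [osd.N] section in ceph.conf for the init
script to pick it up.)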
--
Alex Bligh
imised.
Try:
dd if=ddbenchfile bs=8K | dd of=/dev/null bs=8K
--
Alex Bligh
Going to an even number of MON devices is wasteful (it does not increase the
number of failures the quorum can survive) and arguably
increases the chance of failure (as now we need k devices of n+1 to fail, as
opposed to k devices of n).
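To put numbers on it, quorum is a strict majority (floor(n/2)+1), so the
number of failures survivable is n minus that:

# quorum = floor(n/2) + 1; failures survivable before quorum is lost = n - quorum
for n in 3 4 5 6; do
  echo "$n mons: quorum $((n/2 + 1)), survives $((n - (n/2 + 1))) failures"
done

which gives 1 failure for both 3 and 4 mons, and 2 for both 5 and 6, i.e. the
extra even-numbered mon buys you nothing but one more thing that can break.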
--
Alex Bligh
nt.
>
> Probably the only thing to do is to white list the address and put up with
> the spam.
>
> James
>
>> -Original Message-
>> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>> boun...@lists.ceph.com] On Behalf Of Alex Bligh
page, you can change various delivery options such
> as your email address and whether you get digests or not. As a
> reminder, your membership password is
>
>[REDACTED]
>
> If you have any questions or problems, you can contact the list owner
> at
>
>ceph-users
.5 users are strongly recommended to upgrade.
Was this bug also in 0.61.4?
--
Alex Bligh
on
the client (either in qemu or in librbd), the former being something
I'm toying with. Being persistent it can complete flush/fua
type operations before they are actually written to ceph.
It wasn't intended for this use case but it might be interesting.
--
Alex Bligh
a lot of contention
(multiple readers and writers of files or file metadata). You may need to
forward port some of the more modern tools to your distro.
--
Alex Bligh
qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
I don't think he did. As I read it he wants his VMs to all access the same
filing system, and doesn't want to use cephfs.
OCFS2 on RBD I suppose is a reasonable choice for that.
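A rough sketch of what that looks like (image, pool and mountpoint names are
made up, and it assumes the o2cb cluster stack is already configured on every
node; the mkfs is run on one node only):

# map the shared RBD image with the kernel client (device name will vary)
rbd map data/shared
# once, on one node: create the ocfs2 filesystem with enough node slots
mkfs.ocfs2 -N 8 -L shared /dev/rbd0
# on every node: mount it
mount -t ocfs2 /dev/rbd0 /mnt/shared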
--
Alex Bligh
pg_num: The number of placement groups.
Perhaps worth demystifying for those hard of understanding such as
myself.
I'm still not quite sure how that relates to pgp_num.
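My (possibly wrong) understanding is that pg_num is how many PGs the pool's
objects are split into, while pgp_num is how many of those are actually used
when placing data onto OSDs, so normally you want them equal, e.g. (pool name
is just an example):

# create a pool with 128 PGs, all 128 of them used for placement
ceph osd pool create mypool 128 128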
--
Alex Bligh
how to increase that number (whether it's experimental
or not) after a pool has been created.
Also, they say the default number of PGs is 8, but "When you create a pool, set
the number of placement groups to a reasonable value (e.g., 100)." If so,
perhaps a different default would make sense.
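As far as I can tell, the knobs for increasing it after creation (pool name
again an example; pg_num has to be raised before pgp_num, and it is the
pgp_num change that actually moves data) are:

# split the existing PGs
ceph osd pool set mypool pg_num 128
# then start placing data across the new PGs
ceph osd pool set mypool pgp_num 128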
oblems comes from the kvm emulator, but we are
> not sure, can you give us some advice to improve our vm's disk performance in
> the aspect of writing speed?)
Are you using cache=writeback on your kvm command line? What about librbd
caching? What versions of kvm
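For reference, the sort of command line I mean is (image name is an example;
cache=writeback turns on qemu's writeback semantics and rbd_cache=true the
librbd cache, assuming a qemu/librbd combination new enough to support both):

qemu -m 1024 -drive format=raw,file=rbd:data/vmimage:rbd_cache=true,if=virtio,cache=writeback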
monitors?
Once you have got to a stable 3 mon config, you can go back up
to 5.
--
Alex Bligh
sh).
I've also backported this to the Ubuntu Precise packaging of qemu-kvm,
(again note the branch is v1.0-rbd-add-async-flush) at
https://github.com/flexiant/qemu-kvm-1.0-noroms/tree/v1.0-rbd-add-async-flush
THESE PATCHES ARE VERY LIGHTLY TESTED. USE AT YOUR OWN RISK.
en there is some difficulty starting mon services. Once everything
is up and running, it doesn't happen (at least for me). I never worked out
quite what it was, but I think it was something like the init script starts
them, but doesn't kill them under every circumstance where starting a
y.
We're using format 2 images, if that's relevant.
--
Alex Bligh
on (1.4.0+dfsg-1expubuntu4)
contains this (unchecked as yet).
--
Alex Bligh
me time, I can share the packages
> with you. drop me a line if you're interested.
Information as to what the important fixes are would be appreciated!
--
Alex Bligh
On 21 May 2013, at 07:17, Dan Mick wrote:
> Yes, with the proviso that you really mean "kill the osd" when clean.
> Marking out is step 1.
Thanks
--
Alex Bligh
Dan,
On 21 May 2013, at 00:52, Dan Mick wrote:
> On 05/20/2013 01:33 PM, Alex Bligh wrote:
>> If I want to remove an osd, I use 'ceph osd out' before taking it down, i.e.
>> stopping the OSD process, and removing the disk.
>>
>> How do I (preferably programa
(a)
if I want to do it programmatically, or (b) if there are other problems in the
cluster so ceph was not reporting HEALTH_OK to start with.
Is there a better way?
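The best I have come up with (a sketch only, not well tested; osd.0 is an
example and the grep is deliberately crude) is to mark it out and poll until
nothing is left unclean before killing it:

ceph osd out 0
# wait for data to drain off the osd: poll until nothing is degraded or moving
while ceph pg stat | grep -Eq 'degraded|recover|backfill|remapped|peering|stale'; do
    sleep 30
done
# then it should be safe to stop the daemon and remove it from the cluster
service ceph stop osd.0     # or the upstart equivalent
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0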
--
Alex Bligh
ceph osd crush reweight osd.0 2
?
--
Alex Bligh
ceph docs. But (unless I am being stupid
which is quite possible), setting the weight (either to 0.0001 or
to 2) appears to have no effect per a ceph osd dump.
--
Alex Bligh
root@kvm:~# ceph osd dump
epoch 12
fsid ed0e2e56-bc17-4ef2-a1db-b030c77a8d45
created 2013-05-20 14:58:02.250461
modif
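For what it's worth, I don't think the crush weight appears in ceph osd dump
at all; the weight shown there is the in/out reweight. Something like the
following, assuming I have this right, shows whether the crush change took:

ceph osd crush reweight osd.0 2
# crush weights show up in the tree and in the decompiled crush map,
# not in 'ceph osd dump'
ceph osd tree
ceph osd getcrushmap -o /tmp/crushmap && crushtool -d /tmp/crushmap -o /tmp/crushmap.txt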
On 18 May 2013, at 18:20, Alex Bligh wrote:
> I want to discover what happens if I move an OSD from one host to another,
> simulating the effect of moving a working harddrive from a dead host to a
> live host, which I believe should work. So I stopped osd.0 on one host, and
> c
ceph6...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0
/var/lib/ceph/osd/ceph-0/journal
...
root@ceph6:~# ceph health
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; 1/2 in osds are down
osd.0 was not running on the new host, due to the abort as set out below (from
the log file). Sho