[ceph-users] Adding OSDs without ceph-deploy

2014-07-30 Thread Alex Bligh
ed to do to get ceph to recognise the osds? (again without ceph-deploy) -- Alex Bligh ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Alex Bligh
imised. Try: dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K -- Alex Bligh
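A note on the piped dd form quoted above: GNU dd treats `of=-` and `if=-` as a literal file named `-`, not as stdout/stdin, so the pipe as written would not do what is intended. A minimal sketch of the working form (assuming `ddbenchfile` is the test file from earlier in the thread; here a small one is created for illustration):

```shell
# Create a small stand-in for the thread's ddbenchfile.
dd if=/dev/zero of=ddbenchfile bs=8K count=1024 2>/dev/null

# Omit of=/if= so each dd uses stdout/stdin, letting the pipe carry the data
# in 8K blocks rather than writing a file literally named "-".
dd if=ddbenchfile bs=8K 2>/dev/null | dd of=/dev/null bs=8K 2>/dev/null

rm -f ddbenchfile
echo done
```

The second dd's throughput summary (on stderr, suppressed above) is what the benchmark would read.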

Re: [ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-06 Thread Alex Bligh
ers of MON devices wasteful (does not increase quorum) and arguably increases the chance of failure (as now we need k devices of n+1 to fail, as opposed to k devices of n). -- Alex Bligh
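The quorum arithmetic behind this can be checked directly: a majority of n monitors is floor(n/2)+1, so moving from 3 to 4 monitors raises the quorum requirement without raising the number of failures tolerated. A quick sketch:

```shell
# Majority quorum for n monitors is floor(n/2)+1;
# failures tolerated while keeping quorum is n minus that.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n mons: quorum=$quorum, tolerates $(( n - quorum )) failure(s)"
done
```

This prints that 3 and 4 monitors both tolerate only one failure, which is why odd monitor counts are the usual recommendation.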

Re: [ceph-users] [list admin] - membership disabled due to bounces

2013-08-11 Thread Alex Bligh
nt. > > Probably the only thing to do is to white list the address and put up with > the spam. > > James > >> -Original Message- >> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users- >> boun...@lists.ceph.com] On Behalf Of Alex Bligh &

[ceph-users] [list admin] - membership disabled due to bounces

2013-08-11 Thread Alex Bligh
page, you can change various delivery options such > as your email address and whether you get digests or not. As a > reminder, your membership password is > >[REDACTED] > > If you have any questions or problems, you can contact the list owner > at > >ceph-users

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-24 Thread Alex Bligh
.5 users are strongly recommended to upgrade. Was this bug also in 0.61.4? -- Alex Bligh

Re: [ceph-users] Location of MONs

2013-07-23 Thread Alex Bligh
n the client (either in qemu or in librbd), the former being something I'm toying with. Being persistent it can complete flush/fua type operations before they are actually written to ceph. It wasn't intended for this use case but it might be interesting. -- Alex Bligh

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-12 Thread Alex Bligh
t of contention (multiple readers and writers of files or file metadata). You may need to forward port some of the more modern tools to your distro. -- Alex Bligh

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Alex Bligh
qemu -m 1024 -drive format=raw,file=rbd:data/squeeze I don't think he did. As I read it he wants his VMs to all access the same filing system, and doesn't want to use cephfs. OCFS2 on RBD I suppose is a reasonable choice for that. -- Alex Bligh

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Alex Bligh
pg_num: The number of placement groups. Perhaps worth demystifying for those hard of understanding such as myself. I'm still not quite sure how that relates to pgp_num. -- Alex Bligh

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Alex Bligh
ow to increase that number (whether it's experimental or not) after a pool has been created. Also, they say the default number of PGs is 8, but "When you create a pool, set the number of placement groups to a reasonable value (e.g., 100)." If so, perhaps a different defau
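For context on the pg_num/pgp_num question running through this thread: the Ceph documentation of the era suggested roughly (number of OSDs × 100) / replica count, rounded up to a power of two, with pgp_num (the placement count) normally raised to match pg_num. A sketch of that arithmetic, with made-up OSD and replica counts and a hypothetical pool name:

```shell
# Rough PG-count heuristic from the Ceph docs:
# (OSDs * 100) / replicas, rounded up to the next power of two.
osds=9
replicas=3
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num/pgp_num: $pg"

# Applying it would look something like (pool name 'data' is illustrative):
#   ceph osd pool set data pg_num  $pg
#   ceph osd pool set data pgp_num $pg
```

For 9 OSDs and 3 replicas this suggests 512 PGs, well above the default of 8 the thread complains about.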

ceph-users@lists.ceph.com

2013-06-28 Thread Alex Bligh
oblems comes from the kvm emulator, but we are > not sure, can you give us some advice to improve our vm's disk performance in > the aspect of writing speed?) Are you using cache=writeback on your kvm command line? What about librbd caching? What versions of kvm

Re: [ceph-users] monitor removal and re-add

2013-06-24 Thread Alex Bligh
itors? Once you have got to a stable 3 mon config, you can go back up to 5. -- Alex Bligh

[ceph-users] Backport of modern qemu rbd driver to qemu 1.0 + Precise packaging

2013-06-21 Thread Alex Bligh
sh). I've also backported this to the Ubuntu Precise packaging of qemu-kvm, (again note the branch is v1.0-rbd-add-async-flush) at https://github.com/flexiant/qemu-kvm-1.0-noroms/tree/v1.0-rbd-add-async-flush THESE PATCHES ARE VERY LIGHTLY TESTED. USE AT YOUR OWN RISK.

Re: [ceph-users] why so many ceph-create-keys processes?

2013-06-19 Thread Alex Bligh
en there is some difficulty starting mon services. Once everything is up and running, it doesn't happen (at least for me). I never worked out quite what it was, but I think it was something like the init script starts them, but doesn't kill them under every circumstance where starting a

[ceph-users] Recommended versions of Qemu/KVM to run Ceph Cuttlefish

2013-06-18 Thread Alex Bligh
y. We're using format 2 images, if that's relevant. -- Alex Bligh

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-29 Thread Alex Bligh
on (1.4.0+dfsg-1expubuntu4) contains this (unchecked as yet). -- Alex Bligh

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-28 Thread Alex Bligh
me time, I can share the packages > with you. drop me a line if you're interested. Information as to what the important fixes are would be appreciated! -- Alex Bligh

Re: [ceph-users] Determining when an 'out' OSD is actually unused

2013-05-21 Thread Alex Bligh
On 21 May 2013, at 07:17, Dan Mick wrote: > Yes, with the proviso that you really mean "kill the osd" when clean. > Marking out is step 1. Thanks -- Alex Bligh

Re: [ceph-users] Determining when an 'out' OSD is actually unused

2013-05-20 Thread Alex Bligh
Dan, On 21 May 2013, at 00:52, Dan Mick wrote: > On 05/20/2013 01:33 PM, Alex Bligh wrote: >> If I want to remove an osd, I use 'ceph out' before taking it down, i.e. >> stopping the OSD process, and removing the disk. >> >> How do I (preferably programa

[ceph-users] Determining when an 'out' OSD is actually unused

2013-05-20 Thread Alex Bligh
(a) if I want to do it programmatically, or (b) if there are other problems in the cluster so ceph was not reporting HEALTH_OK to start with. Is there a better way? -- Alex Bligh

Re: [ceph-users] Setting OSD weight

2013-05-20 Thread Alex Bligh
osd crush reweight osd.0 2 ? -- Alex Bligh

[ceph-users] Setting OSD weight

2013-05-20 Thread Alex Bligh
eph docs. But (unless I am being stupid which is quite possible), setting the weight (either to 0.0001 or to 2) appears to have no effect per a ceph osd dump. -- Alex Bligh root@kvm:~# ceph osd dump epoch 12 fsid ed0e2e56-bc17-4ef2-a1db-b030c77a8d45 created 2013-05-20 14:58:02.250461 modif

Re: [ceph-users] Abort on moving OSD

2013-05-18 Thread Alex Bligh
On 18 May 2013, at 18:20, Alex Bligh wrote: > I want to discover what happens if I move an OSD from one host to another, > simulating the effect of moving a working harddrive from a dead host to a > live host, which I believe should work. So I stopped osd.0 on one host, and > c

[ceph-users] Abort on moving OSD

2013-05-18 Thread Alex Bligh
ceph6... starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal ... root@ceph6:~# ceph health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; 1/2 in osds are down osd.0 was not running on the new host, due to the abort as set out below (from the log file). Sho