Re: [ceph-users] problem with delete or rename a pool

2013-11-29 Thread You, RongX
Hi, Thank you. It's very helpful. ceph osd pool rename -- "-help" aaa; that works well. Best regards! RONG -Original Message- From: Daniel Schwager [mailto:daniel.schwa...@dtnet.de] Sent: Friday, November 29, 2013 2:56 PM To: 'malm...@gmail.com'; You, RongX Cc: 'cep
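For context, a minimal sketch of the trick being discussed: the "--" argument stops option parsing, so a pool whose name begins with a dash can be passed as a plain positional argument (pool names here are illustrative):

    # rename a pool literally named "-help" to "aaa"
    ceph osd pool rename -- "-help" aaa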

[ceph-users] ceph-deploy Platform is not supported: debian

2013-11-29 Thread James Harper
When I do gatherkeys, ceph-deploy tells me: UnsupportedPlatform: Platform is not supported: debian Given that I downloaded ceph-deploy from the ceph.com debian repository, I'm hoping that Debian is supported and that I have something screwy somewhere. Any suggestions? Thanks James
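A hedged diagnostic, assuming ceph-deploy of this era detects the distro through Python's platform module: if the call below prints an empty or unexpected distribution name, the UnsupportedPlatform error would follow.

    # should print something like ('debian', '7.2', '') on Wheezy
    python -c 'import platform; print(platform.linux_distribution())'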

[ceph-users] Basic cephx configuration

2013-11-29 Thread nicolasc
Hello everyone, I just ran a fresh install of the Emperor release on an empty cluster, and I am left clueless trying to troubleshoot cephx. After ceph-deploy created the keys, I used ceph-authtool to generate the client.admin keyring and the monitor keyring, as indicated in the doc. The configura
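For reference, a sketch of the keyring generation steps along the lines of the manual-deployment docs of this era (paths and caps follow the documentation, not the truncated post):

    # generate the client.admin keyring with full capabilities
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    # generate the monitor keyring
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'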

Re: [ceph-users] ceph-deploy Platform is not supported: debian

2013-11-29 Thread James Harper
> > When I do gatherkeys, ceph-deploy tells me: > > UnsupportedPlatform: Platform is not supported: debian > > Given that I downloaded ceph-deploy from the ceph.com debian repository, > I'm hoping that Debian is supported and that I have something screwy > somewhere. > > Any suggestions? > I

Re: [ceph-users] radosgw setting puplic ACLs fails.

2013-11-29 Thread Micha Krause
Hi, >> So how does AWS S3 handle Public access to objects? You have to explicitly set a public ACL on each object. OK, but this also does not work with radosgw + s3cmd: "s3cmd setacl -P s3://test/fstab" returns ERROR: S3 error: 403 (AccessDenied). Micha Krause
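For comparison, the s3cmd invocations that should be equivalent (-P is an alias for --acl-public), reusing the bucket and object names from the post:

    s3cmd setacl --acl-public s3://test/fstab          # single object
    s3cmd setacl --acl-public --recursive s3://test/   # every object in the bucket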

[ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
Hello all. I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to me at the moment (cluster is running CentOS 6.4 with stock kernel). I intend to maintain a full replica of an active ZFS dataset on the Ceph infrastructure by installing an OpenSolaris KVM guest using rbd-fuse to ex

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread John Spray
The trouble with using ZFS copies on top of RBD is that both copies of any particular block might end up on the same OSD. If you have disabled replication in Ceph, then this would mean a single OSD failure could cause data loss. For that reason, it seems it would be better to do the replication i
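A sketch of the division of labour John describes, with hypothetical pool and dataset names: keep redundancy at the RADOS layer and avoid duplicating blocks inside the image.

    ceph osd pool set rbd size 2    # two replicas per object in the backing pool
    zfs set copies=1 tank/data      # don't also store multiple copies inside ZFS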

[ceph-users] Impact of fancy striping

2013-11-29 Thread nicolasc
Hi everyone, I am currently testing a use-case with large rbd images (several TB), each containing an XFS filesystem, which I mount on local clients. I have been testing the throughput when writing to a single file in the XFS mount, using "dd oflag=direct", for various block sizes. With a defaul
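For anyone reproducing the test, a hedged sketch of creating an image with non-default striping (format 2 is required for the striping options; names and sizes are illustrative):

    # 64 KB stripe unit spread across 16 objects per stripe
    rbd create testimg --size 2097152 --image-format 2 \
        --stripe-unit 65536 --stripe-count 16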

Re: [ceph-users] Impact of fancy striping

2013-11-29 Thread James Pearce
Did you try moving the journals to separate SSDs? It was recently discovered that, due to a kernel bug/design, journal writes are translated into device cache flush commands; thinking about that, I also wonder whether there would be a performance improvement in the case that journal and OSD
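A hedged outline of moving one OSD's journal to an SSD partition with the tooling of this era (device paths and OSD id are hypothetical):

    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal
    ln -sf /dev/disk/by-partlabel/ssd-journal-12 /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal
    service ceph start osd.12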

Re: [ceph-users] One SSD Per Drive

2013-11-29 Thread James Pearce
Looking at this review, it seems the SSD and spinning disk are in a flat LBA address space: http://www.techwarelabs.com/western-digital-black%C2%B2-dual-drive-review-two-drives-in-one/2/ So I guess that the Windows utility just submits SET MAX ADDRESS to reveal the hidden (spinning) area of th
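On Linux the equivalent inspection can be done with hdparm; a hedged example (device name hypothetical):

    # show current and native max sector counts; a difference means a
    # Host Protected Area is hiding part of the drive
    hdparm -N /dev/sdb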

Re: [ceph-users] Docker

2013-11-29 Thread Sebastien Han
Hi guys! Some experiment here: http://www.sebastien-han.fr/blog/2013/09/19/how-I-barely-got-my-first-ceph-mon-running-in-docker/ Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, ru

Re: [ceph-users] Impact of fancy striping

2013-11-29 Thread nicolasc
Hi James, Unfortunately, SSDs are out of budget. Currently there are 2 SAS disks in RAID0 on each node, split into 9 partitions: one for each OSD journal on the node. I benchmarked the RAID0 volumes at around 500MB/s in sequential sustained write, so that's not bad — maybe access latency is a

Re: [ceph-users] Basic cephx configuration

2013-11-29 Thread nicolasc
An update on this issue: Explicitly setting the "keyring" parameter to its default value, in the client section, like this:

    [client.admin]
    keyring = /etc/ceph/ceph.client.admin.keyring

solves the problem in the particular case when ONLY "auth_cluster_required" is set to "cephx", and the two
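A quick way to confirm the workaround, assuming the paths above: authenticate explicitly as client.admin with that keyring.

    ceph -n client.admin -k /etc/ceph/ceph.client.admin.keyring health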

Re: [ceph-users] Impact of fancy striping

2013-11-29 Thread James Pearce
I will try to look into this issue of device cache flush. Do you have a tracker link for the bug? How I wish this were a forum! But here is a link: http://www.spinics.net/lists/ceph-users/msg05966.html And this: https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
Thanks for the input, John. So I should leave ZFS checksumming on, disable ZFS replicas, and rely on Ceph RBD replicas. Is it even sane to use rbd-fuse for this? On a related note, is there any discard/trim support in rbd-fuse? Otherwise I won't ever be able to thin out the RBD image once it is allocate

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread James Pearce
On a related note, is there any discard/trim support in rbd-fuse? Apparently so (but not in the kernel module, unfortunately).

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread Sebastian
Hi, thanks for the hint. I tried this again and noticed that the timeout message does seem to be unrelated. Here is the log file for a stalling request with debug turned on: http://pastebin.com/DcQuc9wP I really cannot find an actual "error" in the log. The download stalls at about 500kb

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Dan van der Ster
On Fri, Nov 29, 2013 at 12:13 PM, Charles 'Boyo wrote: > That's because qemu-kvm > in CentOS 6.4 doesn't support librbd. RedHat just added RBD support in qemu-kvm-rhev in RHEV 6.5. I don't know if that will trickle down to CentOS but you can probably recompile it yourself like we did. https://rh
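Two hedged checks for whether an installed qemu-kvm was built with RBD support (the binary path is CentOS's):

    ldd /usr/libexec/qemu-kvm | grep -i rbd    # is librbd linked in?
    qemu-img --help | grep -i rbd              # does rbd appear among supported formats?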

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread Artem Silenkov
Good day! We've noticed such things recently during OSD recovery activity such as scrubbing. Restarting the OSD did the trick. We even had 404 errors until deep scrubbing ended. Any noise in ceph -w? Regards, Artem S. On 29 Nov 2013 at 22:28, "Sebastian" wrote: > > Hi, > > thanks fo
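To test the scrub correlation, a hedged sketch: scrubbing can be paused cluster-wide while watching whether the stalls stop (assuming your release supports these flags):

    ceph osd set noscrub && ceph osd set nodeep-scrub     # pause scrubbing
    ceph -w                                               # watch for stalls / slow requests
    ceph osd unset noscrub && ceph osd unset nodeep-scrub # re-enable afterwards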

[ceph-users] does ceph-deploy adding of osds automatically update ceph.conf? It seems no...

2013-11-29 Thread Gautam Saxena
I've got ceph up and running on a 3-node CentOS 6.4 cluster. However, after I a) set the cluster to noout as follows: ceph osd set noout b) rebooted 1 node c) logged into that node, I tried to do: service ceph start osd.12 but it returned the error message: /etc/init.d/ceph: osd.12 not found (
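The sysvinit script of this era only starts daemons it finds listed in ceph.conf, so a stanza like the following (hostname hypothetical) is what it is looking for; ceph-deploy does not add these automatically:

    [osd.12]
    host = node3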

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread Sebastian
Hi, our ceph -w is clean:

  cluster e54d66c5-5191-4296-a217-f818e1f92830
   health HEALTH_OK
   monmap e1: 4 mons at {a=5.9.67.9:6789/0,b=5.9.67.8:6789/0,c=5.9.67.7:6789/0,d=5.9.67.6:6789/0}, election epoch 19724, quorum 0,1,2,3 a,b,c,d
   osdmap e1629: 4 osds: 4 up, 4 in
   pgmap v4896303: 18

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread Yehuda Sadeh
It's interesting: the responses are received, but it seems they aren't being handled (hence the following pings). There are a few things that you could look at. First, try to connect to the admin socket and see if you get any useful information from there. This could include in-flight requests, lo
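A hedged example of the admin-socket query being suggested (the socket path depends on your rgw configuration):

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok help
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok objecter_requests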

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
>> On a related note, is there any discard/trim support in rbd-fuse? > > Apparently so (but not in the kernel module unfortunately). > OK, so librbd (which is used by the qemu alternative) supports discard, but the rbd kernel module does not. Neither of these is available to me right now. Is rbd-f

Re: [ceph-users] ZFS on Ceph (rbd-fuse)

2013-11-29 Thread Charles 'Boyo
>> That's because qemu-kvm >> in CentOS 6.4 doesn't support librbd. > > RedHat just added RBD support in qemu-kvm-rhev in RHEV 6.5. I don't > know if that will trickle down to CentOS but you can probably > recompile it yourself like we did. > > https://rhn.redhat.com/errata/RHSA-2013-1754.html > (h

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread Sebastian
Hi Yehuda, > It's interesting, the responses are received but seems that they > aren't being handled (hence the following pings). There are a few > things that you could look at. First, try to connect to the admin > socket and see if you get any useful information from there. This > could include

[ceph-users] adding another mon failed

2013-11-29 Thread German Anders
Hi, I'm having issues while trying to add another monitor to my cluster:

ceph@ceph-deploy01:~/ceph-cluster$ ceph-deploy mon create ceph-node02
[ceph_deploy.cli][INFO ] Invoked (1.3.3): /usr/bin/ceph-deploy mon create ceph-node02
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ce

Re: [ceph-users] adding another mon failed

2013-11-29 Thread Michael
This previous thread looks like it might be the same error, could be helpful. http://www.spinics.net/lists/ceph-users/msg05295.html -Michael On 29/11/2013 19:24, German Anders wrote: Hi, i'm having issues while trying to add another monitor to my cluster: ceph@ceph-deploy01:~/ceph-cluster$

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread German Anders
Thanks a lot Sebastian, I'm going to try that. I'm also having an issue while trying to test rbd image creation; I've installed the ceph client on the deploy server: ceph@ceph-deploy01:/etc/ceph$ sudo rbd -n client.ceph-test -k /home/ceph/ceph-cluster/ceph.client.admin.keyring create --size 10240 c
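One likely fix, as a hedged sketch: the user passed with -n must match the keyring passed with -k, so with client.admin's keyring the command would authenticate as client.admin (image name hypothetical):

    sudo rbd -n client.admin -k /home/ceph/ceph-cluster/ceph.client.admin.keyring \
        create --size 10240 test-image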

[ceph-users] ceph-deploy and config file

2013-11-29 Thread James Harper
Aside from the messages about ceph-deploy saying debian is not supported on two of my nodes, I'm having some other problems moving to ceph-deploy. I'm running with 2 OSDs on each node, and I'm using a numbering sequence of osd.<node><disk>, so node 7 has osd.70 and osd.71. This way it's immediately obviou