Re: [ceph-users] WSGI file for ceph-rest-api

2014-05-28 Thread Wido den Hollander
On 05/27/2014 02:20 PM, Wido den Hollander wrote: Hi, I'm trying to run the ceph_rest_api module as a WSGI application behind Apache with mod_wsgi, but I'm struggling a bit. Not having used WSGI much, I'm stuck on the .wsgi file. Has anybody done this before? I've been reading the Flask
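
For anyone hitting the same wall: mod_wsgi only needs the .wsgi file to expose a module-level WSGI callable named application, and a Flask app is such a callable. Below is a minimal sketch; the generate_app() call and its argument list are an assumption modeled on how the ceph-rest-api wrapper script drives the same module, so check the script shipped with your Ceph version before copying this:

    # ceph-rest-api.wsgi -- hedged sketch, not a tested recipe.
    # Assumption: ceph_rest_api.generate_app() exists with the argument
    # list (conf, cluster, clientname, clientid, args), mirroring the
    # ceph-rest-api executable.
    import ceph_rest_api

    application = ceph_rest_api.generate_app(
        None,              # conf: path to ceph.conf, None = default search
        'ceph',            # cluster name
        'client.restapi',  # client name used for cephx authentication
        None,              # client id
        (),                # remaining command-line style args
    )

The returned Flask object is itself a WSGI application, so mod_wsgi can serve it directly once WSGIScriptAlias points at this file.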

[ceph-users] someone using btrfs with ceph

2014-05-28 Thread VELARTIS Philipp Dürhammer
Is anyone using btrfs in production? I know people say it's still not stable. But do we use that many features with ceph? And Facebook uses it in production as well. It would be a big speed gain.

Re: [ceph-users] someone using btrfs with ceph

2014-05-28 Thread Wido den Hollander
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote: Is anyone using btrfs in production? I know people say it's still not stable. But do we use that many features with ceph? And Facebook uses it in production as well. It would be a big speed gain. As far as I know the main problem is still

Re: [ceph-users] someone using btrfs with ceph

2014-05-28 Thread Stefan Priebe - Profihost AG
On 28.05.2014 16:13, Wido den Hollander wrote: On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote: Is anyone using btrfs in production? I know people say it's still not stable. But do we use that many features with ceph? And Facebook uses it in production as well. It would be a big speed

Re: [ceph-users] someone using btrfs with ceph

2014-05-28 Thread Cedric Lemarchand
On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote: On 28.05.2014 16:13, Wido den Hollander wrote: On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote: Is anyone using btrfs in production? I know people say it's still not stable. But do we use that many features with ceph? And

Re: [ceph-users] Is there a way to repair placement groups? [Offtopic - ZFS]

2014-05-28 Thread Scott Laird
IMHO, you were probably either benchmarking the wrong thing or had a really unusual use profile. RAIDZ* always does full-stripe reads so it can verify checksums, so even small reads hit all of the devices in the vdev. That means you get zero parallelism on small reads, unlike most other RAID5+

Re: [ceph-users] Is there a way to repair placement groups? [Offtopic - ZFS]

2014-05-28 Thread Christian Balzer
I was about to write something similar yesterday, but work interfered. ^o^ For bandwidth a RAID(Z*/6, don't even think about RAID5 or equivalent) is indeed very nice, but for IOPS it will be worse than a RAID10. Of course a controller with a large writeback cache can pretty much alleviate or at
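
To make the IOPS argument concrete, here is a back-of-the-envelope comparison with made-up numbers (a sketch of the reasoning above, not a benchmark):

    # Hypothetical: 6 spinning disks, ~100 small-read IOPS each.
    disk_iops = 100
    disks = 6

    # RAIDZ/RAIDZ2: every small read is a full-stripe read (checksum
    # verification), so the whole vdev behaves like a single disk.
    raidz_read_iops = disk_iops * 1          # ~100

    # RAID10: each small read is served by one disk, and either member
    # of a mirror pair can serve it, so reads scale with the disk count.
    raid10_read_iops = disk_iops * disks     # ~600

    print(raidz_read_iops, raid10_read_iops)

Bandwidth is a different story: full-stripe transfers are exactly what RAIDZ is good at, which is why it looks fine in sequential benchmarks.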

Re: [ceph-users] RBD clone for OpenStack Nova ephemeral volumes

2014-05-28 Thread Jens-Christian Fischer
We are currently starting to set up a new Icehouse/Ceph-based cluster and will help get this patch into shape as well. I am currently collecting the information needed to allow us to patch Nova, and I have this: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my

[ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Travis Rhoden
Hi folks, Does anybody know if there are any issues running Ceph with multiple L2 LAN segments? I'm picturing a large multi-rack/multi-row deployment where you may give each rack (or row) its own L2 segment, then connect them all with L3/ECMP in a leaf-spine architecture. I'm wondering how

Re: [ceph-users] RBD clone for OpenStack Nova ephemeral volumes

2014-05-28 Thread Dmitry Borodaenko
The rbd-ephemeral-clone-stable-icehouse branch has everything I've got so far for Icehouse. There were minor changes to these commits on the Juno version of the branch (rbd-ephemeral-clone) in response to code review comments; once code review is done and the commits are merged, I plan to re-backport

Re: [ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Sage Weil
On Wed, 28 May 2014, Travis Rhoden wrote: Hi folks, Does anybody know if there are any issues running Ceph with multiple L2 LAN segments? I'm picturing a large multi-rack/multi-row deployment where you may give each rack (or row) its own L2 segment, then connect them all with L3/ECMP

Re: [ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Wido den Hollander
On 05/28/2014 07:01 PM, Travis Rhoden wrote: Hi folks, Does anybody know if there are any issues running Ceph with multiple L2 LAN segments? I'm picturing a large multi-rack/multi-row deployment where you may give each rack (or row) its own L2 segment, then connect them all with L3/ECMP in a

Re: [ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Mike Dawson
Travis, We run a routed ECMP spine-leaf network architecture with Ceph and have no issues on the network side whatsoever. Each leaf switch has an L2 CIDR block inside a common L3 supernet. We do not currently split cluster_network and public_network. If we did, we'd likely build a separate
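
For reference, the ceph.conf side of such a routed design could look like the sketch below; the comma-separated multi-subnet syntax for public network / cluster network is my reading of the Ceph network configuration docs, so verify it on your release (and note the deployment described above does not split the two networks at all). All addresses here are hypothetical:

    [global]
    # One routed subnet per rack/leaf, all inside a common supernet;
    # Ceph daemons only need IP reachability, not a shared L2 domain.
    public network = 10.10.1.0/24, 10.10.2.0/24, 10.10.3.0/24

    # Optional: carry replication traffic on its own subnets.
    cluster network = 10.20.1.0/24, 10.20.2.0/24, 10.20.3.0/24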

Re: [ceph-users] How to implement a rados plugin to encode/decode data while r/w

2014-05-28 Thread Craig Lewis
On 5/27/14 19:44, Plato wrote: For a certain security issue, I need to make sure the data finally saved to disk is encrypted. So I'm trying to write a rados class that would be hooked into the read and write process. That is, before data is written, the encrypting method of the class will be
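
A proper rados class is C++ code loaded by the OSDs, but if the goal is simply that ciphertext hits the disk, a simpler alternative (a different technique than the rados class Plato describes) is to encrypt client-side before the write ever reaches RADOS. A minimal sketch using the python-rados bindings and the third-party cryptography package; the pool name, object name, and key handling are hypothetical:

    import rados
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in reality, load from secure key storage
    cipher = Fernet(key)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')  # hypothetical pool name
    try:
        # Encrypt before the data leaves the client, so whatever the OSD
        # writes to its filesystem is already ciphertext.
        ioctx.write_full('myobject', cipher.encrypt(b'sensitive payload'))

        # Read back and decrypt on the client.
        plaintext = cipher.decrypt(ioctx.read('myobject'))
    finally:
        ioctx.close()
        cluster.shutdown()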

Re: [ceph-users] Is there a way to repair placement groups? [Offtopic - ZFS]

2014-05-28 Thread Craig Lewis
On 5/28/14 09:45, Dimitri Maziuk wrote: On 05/28/2014 09:32 AM, Christian Balzer wrote: I was about to write something similar yesterday, but work interfered. ^o^ For bandwidth a RAID(Z*/6, don't even think about RAID5 or equivalent) is indeed very nice, but for IOPS it will be worse than a

Re: [ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Travis Rhoden
Thanks to you all! You confirmed everything I thought I knew, but it is nice to be sure! On Wed, May 28, 2014 at 1:37 PM, Mike Dawson mike.daw...@cloudapt.com wrote: Travis, We run a routed ECMP spine-leaf network architecture with Ceph and have no issues on the network side whatsoever.

Re: [ceph-users] someone using btrfs with ceph

2014-05-28 Thread Craig Lewis
On 5/28/14 07:19, Cedric Lemarchand wrote: On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote: On 28.05.2014 16:13, Wido den Hollander wrote: On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote: Is anyone using btrfs in production? I know people say it's still not stable. But

Re: [ceph-users] someone using btrfs with ceph

2014-05-28 Thread Mark Nelson
On 05/28/2014 09:19 AM, Cedric Lemarchand wrote: On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote: On 28.05.2014 16:13, Wido den Hollander wrote: On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote: Is anyone using btrfs in production? I know people say it's still not

Re: [ceph-users] CephFS MDS Setup

2014-05-28 Thread Scottix
Looks like we are going to put CephFS on hold and use RBD until it is fully supported. Which brings me to my next question: I am trying to remove MDS completely and seem to be having issues. I disabled all mounts, disabled all the startup scripts, and cleaned the mdsmap: ceph mds newfs 0 1

Re: [ceph-users] Inter-region data replication through radosgw

2014-05-28 Thread Craig Lewis
On 5/21/14 22:55, wsnote wrote: Hi, Lewis! With your approach there will be a contradiction because of the limits of the secondary zone. In a secondary zone, one can't do any file operations. Let me give an example; I'll define the symbols first. The instances of cluster 1: M1: master zone of cluster 1