On 05/27/2014 02:20 PM, Wido den Hollander wrote:
Hi,
I'm trying to run the ceph_rest_api module as a WSGI application behind
Apache with mod_wsgi, but I'm struggling a bit.
Not having used WSGI that much, I'm stuck on the .wsgi file. Has anybody
done this before?
I've been reading the Flask
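For reference, the usual mod_wsgi pattern for a Flask app is a .wsgi file
that exposes the Flask object under the name "application". A minimal,
untested sketch; the exact entry point in ceph_rest_api (assumed here to be
a module-level Flask object called "app") varies between Ceph versions, so
inspect the module first:

    # ceph-rest-api.wsgi -- minimal mod_wsgi entry point (sketch)
    # mod_wsgi imports this file and looks for a callable named "application".
    # Assumption: ceph_rest_api exposes its Flask object at module level as
    # "app"; some versions build it through a setup/generate function instead,
    # so check dir(ceph_rest_api) and adapt the import accordingly.
    from ceph_rest_api import app as application

In the Apache virtual host, WSGIScriptAlias would then point at this file,
and the WSGI daemon process needs to run as a user that can read the Ceph
keyring it authenticates with.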
Is anyone using btrfs in production?
I know people say it's still not stable, but do we really use that many of its
features with Ceph? Facebook also uses it in production. It would be a big
speed gain.
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not stable, but do we really use that many of its
features with Ceph? Facebook also uses it in production. It would be a big
speed gain.
As far as I know, the main problem is still
On 28.05.2014 16:13, Wido den Hollander wrote:
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not stable, but do we really use that many of its
features with Ceph? Facebook also uses it in production. It would be a big speed
On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
On 28.05.2014 16:13, Wido den Hollander wrote:
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not stable, but do we really use that many of its
features with Ceph? And
IMHO, you were probably either benchmarking the wrong thing or working with a
really unusual use profile. RAIDZ* always does full-stripe reads so it can
verify checksums, so even small reads hit all of the devices in the vdev. That
means you get zero parallelism on small reads, unlike most other RAID5+
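A quick back-of-envelope illustration of what that means for small random
reads (the per-disk IOPS figure below is an assumption, not a measurement):

    # Rough small-random-read IOPS model (illustrative assumptions only).
    # RAIDZ: every read touches the whole vdev to verify the checksum, so a
    # vdev delivers roughly the random-read IOPS of a single member disk.
    # RAID10: each disk can serve reads independently, so random-read IOPS
    # scale with the number of disks.
    DISK_IOPS = 150   # assumed ~7200 rpm spinner
    DISKS = 6

    raidz_read_iops = DISK_IOPS           # ~one disk's worth per vdev
    raid10_read_iops = DISK_IOPS * DISKS  # all disks serve reads in parallel

    print("RAIDZ vdev of %d disks : ~%d random-read IOPS" % (DISKS, raidz_read_iops))
    print("RAID10 of %d disks     : ~%d random-read IOPS" % (DISKS, raid10_read_iops))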
I was about to write something similar yesterday, but work interfered. ^o^
For bandwidth, a RAIDZ*/RAID6 (don't even think about RAID5 or equivalents) is
indeed very nice, but for IOPS it will be worse than a RAID10.
Of course, a controller with a large writeback cache can pretty much alleviate
or at
We are currently starting to set up a new Icehouse/Ceph-based cluster and will
help to get this patch in shape as well.
I am currently collecting the information needed to patch Nova,
and I have this:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my
Hi folks,
Does anybody know if there are any issues running Ceph with multiple L2 LAN
segments? I'm picturing a large multi-rack/multi-row deployment where you
may give each rack (or row) its own L2 segment, then connect them all with
L3/ECMP in a leaf-spine architecture.
I'm wondering how
The rbd-ephemeral-clone-stable-icehouse branch has everything I've got
so far for Icehouse. There were minor changes to these commits on the
Juno version of the branch (rbd-ephemeral-clone) in response to code
review comments; once code review is done and the commits are merged, I
plan to re-backport
On Wed, 28 May 2014, Travis Rhoden wrote:
Hi folks,
Does anybody know if there are any issues running Ceph with multiple L2
LAN segments? I'm picturing a large multi-rack/multi-row deployment
where you may give each rack (or row) its own L2 segment, then connect
them all with L3/ECMP
On 05/28/2014 07:01 PM, Travis Rhoden wrote:
Hi folks,
Does anybody know if there are any issues running Ceph with multiple L2
LAN segments? I'm picturing a large multi-rack/multi-row deployment
where you may give each rack (or row) its own L2 segment, then connect
them all with L3/ECMP in a
Travis,
We run a routed ECMP spine-leaf network architecture with Ceph and have
no issues on the network side whatsoever. Each leaf switch has an L2
CIDR block inside a common L3 supernet.
We do not currently split cluster_network and public_network. If we did,
we'd likely build a separate
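For anyone who does want to split them, the relevant ceph.conf options are
"public network" and "cluster network"; a minimal sketch (the subnets below
are made-up examples, not a recommendation):

    [global]
    # client and monitor traffic
    public network = 10.10.0.0/16
    # OSD replication and recovery traffic
    cluster network = 10.20.0.0/16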
On 5/27/14 19:44, Plato wrote:
For a certain security requirement, I need to make sure the data finally saved
to disk is encrypted.
So I'm trying to write a rados class which would be triggered by the
reading and writing process.
That is, before data is written, the encrypting method of the class will
be
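A rados object class would have to be written in C/C++ against the objclass
(cls) API, so no sketch of that here; as a rough point of comparison, the same
goal can also be reached client-side. A hedged sketch using the Python rados
bindings plus a symmetric cipher, where the pool name, object name and key
handling are made-up placeholders:

    # Client-side encrypt-before-write sketch (not a rados class).
    # Assumes python-rados and the "cryptography" package are installed.
    import rados
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in reality, load this from a key store
    f = Fernet(key)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')   # placeholder pool name
    try:
        # ciphertext is what actually reaches the OSDs and the disks
        ioctx.write_full('secret-object', f.encrypt(b'plaintext payload'))
        data = f.decrypt(ioctx.read('secret-object'))
    finally:
        ioctx.close()
        cluster.shutdown()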
On 5/28/14 09:45, Dimitri Maziuk wrote:
On 05/28/2014 09:32 AM, Christian Balzer wrote:
I was about to write something similar yesterday, but work interfered. ^o^
For bandwidth, a RAIDZ*/RAID6 (don't even think about RAID5 or equivalents) is
indeed very nice, but for IOPS it will be worse than a
Thanks to you all! You confirmed everything I thought I knew, but it is
nice to be sure!
On Wed, May 28, 2014 at 1:37 PM, Mike Dawson mike.daw...@cloudapt.com wrote:
Travis,
We run a routed ECMP spine-leaf network architecture with Ceph and have no
issues on the network side whatsoever.
On 5/28/14 07:19, Cedric Lemarchand wrote:
On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
On 28.05.2014 16:13, Wido den Hollander wrote:
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not stable, but
On 05/28/2014 09:19 AM, Cedric Lemarchand wrote:
On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
On 28.05.2014 16:13, Wido den Hollander wrote:
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not
Looks like we are going to put a hold on CephFS and use RBD till it is
fully supported.
Which brings me to my next question.
I am trying to remove MDS completely and seem to be having issues.
I disabled all mounts,
disabled all the startup scripts,
and cleaned the mdsmap:
ceph mds newfs 0 1
On 5/21/14 22:55, wsnote wrote:
Hi, Lewis!
With your approach, there will be a contradiction because of the limits of
the secondary zone.
In the secondary zone, one can't do any file operations.
Let me give an example. I'll define the symbols first.
The instances of cluster 1:
M1: master zone of cluster 1