We've had a similar idea in mind for a while now. The thought was to
add key-value support that leverages omaps and expose it through the
RESTful rados gateway. Having a real-world use for it will certainly
help in understanding the requirements.
Yehuda
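For readers unfamiliar with omaps, here is a minimal sketch of what
omap-backed key/value access looks like through the python-rados
bindings. The API names (WriteOpCtx, set_omap, get_omap_vals) come from
later python-rados releases, and the pool and object names are made up
for illustration; this is not the radosgw feature Yehuda describes,
just the underlying primitive it would build on.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('kvpool')  # assumed pool name

    # PUT: store key/value pairs in the omap of a bucket-like object.
    with rados.WriteOpCtx() as op:
        ioctx.set_omap(op, ('user:1234',), (b'{"name": "alice"}',))
        ioctx.operate_write_op(op, 'kv_bucket')

    # GET: iterate the stored pairs back out (up to 100 of them).
    with rados.ReadOpCtx() as op:
        pairs, ret = ioctx.get_omap_vals(op, '', '', 100)
        ioctx.operate_read_op(op, 'kv_bucket')
        for key, value in pairs:
            print(key, value)

A RESTful layer in radosgw would presumably map HTTP GET/PUT on a key
resource onto operations like these.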
On Mon, Sep 17, 2012 at 10:44 PM,
Well, I've used Btrfs on and off for two years now I think, though in
less critical situations (at home, on testing equipment at work, and
on easily rebuildable systems). I've been bitten several times before,
so I know there have been serious problems with it.
With Kernel 3.5 I had a pretty good
On Tue, 18 Sep 2012 01:26:03 +0200
John Axel Eriksson j...@insane.se wrote:
another distributed storage solution that had failed us more than once,
and we lost data. Since the old system had an http interface (not S3
compatible though)
Can you say a bit more about this? Failure stories are
On 18.09.2012 04:32, Sage Weil wrote:
On Mon, 17 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use Ceph to provide storage via RBD for our virtualization cluster
delivering KVM
I actually opted not to name the product we had problems with, since
there have been lots of changes and fixes to it which we unfortunately
were unable to make use of (you'll know why later). But I guess it's
interesting enough to go into a little more detail so... before moving to
Thanks a lot for the detailed writeup, I found it quite useful.
The list of contestants is similar to the list I made when researching
(and I also had Luwak); while I also think Ceph is very promising and
probably deserves to dominate in the future, I'm focusing on OpenStack
Swift for now. FWIW
Agreed, this was a really interesting writeup! Thanks John!
Dieter, do you mind if I ask what was compelling for you in choosing
Swift over the other options you've looked at, including Ceph?
Thanks,
Mark
On 09/18/2012 09:51 AM, Plaetinck, Dieter wrote:
Thanks a lot for the detailed writeup, I
I don't mind.
Ultimately it came down to Ceph vs Swift for us.
Nothing is cast in stone yet, but we chose Swift for our new
not-yet-production cluster, because Swift has been around longer and
has more production deployments, and hence a bigger/more experienced
community, better
Hi Dieter,
It sounds like some of those things will come with time (a more
experienced community, docs, deployments, papers, etc.). Are there other
things we could be doing that would make Ceph feel less risky for people
doing similar comparisons?
Thanks,
Mark
On 09/18/2012 10:19 AM,
Right, it just takes time to grow these things.
Maybe the process could be accelerated by being more out there, but what
do I know about marketing... not much :)
Dieter
On Tue, 18 Sep 2012 10:27:52 -0500
Mark Nelson mark.nel...@inktank.com wrote:
Hi Dieter,
It sounds like some of those
I am using Ceph mainly for its KVM and OpenStack integration, and
also RBD. I also needed to provide shared storage to clusters of
nodes, and thus far I haven't needed the highest possible performance.
Thus, I create RBDs, format them with ext4, and re-export them with
NFS. Clients do both NFS
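For concreteness, here is a sketch of that RBD-format-export workflow,
written as a small Python wrapper around the usual CLI tools. The
pool/image names, mount point, and export subnet are all assumptions,
and the mapped device path can vary by setup.

    import subprocess

    def run(cmd):
        # Run a shell command, raising if it fails.
        subprocess.check_call(cmd, shell=True)

    # Create a 100 GB RBD image and map it on the NFS server host.
    run('rbd create rbd/shared0 --size 102400')  # size is in MB
    run('rbd map rbd/shared0')                   # typically shows up as /dev/rbd0

    # Put ext4 on it and mount it locally.
    run('mkfs.ext4 /dev/rbd0')
    run('mkdir -p /srv/shared0')
    run('mount /dev/rbd0 /srv/shared0')

    # Re-export the mount over NFS to the client subnet.
    with open('/etc/exports', 'a') as exports:
        exports.write('/srv/shared0 10.0.0.0/24(rw,sync,no_subtree_check)\n')
    run('exportfs -ra')

The usual caveat with this pattern: ext4 is not a cluster filesystem,
so each image should be mounted and exported from exactly one host at
a time.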
Hi Matt,
On Mon, 17 Sep 2012, Matt W. Benjamin wrote:
Hi
Just FYI, on the NFS integration front: a pNFS files-layout (RFC 5661)-capable
NFSv4 re-exporter for Ceph has been committed to the Ganesha NFSv4
server development branch. We're continuing to enhance and elaborate
on this. We have had on
On Tue, 18 Sep 2012, Smart Weblications GmbH - Florian Wiessner wrote:
On 18.09.2012 04:32, Sage Weil wrote:
On Mon, 17 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use
Hi Sage,
- Sage Weil s...@inktank.com wrote:
Hi Matt,
On Mon, 17 Sep 2012, Matt W. Benjamin wrote:
Hi
Just FYI, on the NFS integration front: a pNFS files-layout
(RFC 5661)-capable
NFSv4 re-exporter for Ceph has been committed to the Ganesha NFSv4
server development branch. We're
Excellent write-up. We are in exactly the same mess with
Riak Luwak, a decision that was made before I took over the
project. I thought we were the only ones :)
We are still paying the price for it: after over
a month of migrating the data from Riak to Ceph, we have barely
moved 30% of the data.
On Mon, Sep 17, 2012 at 7:32 PM, Sage Weil s...@inktank.com wrote:
On Mon, 17 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use Ceph to provide storage via RBD for our
On Tue, 18 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 7:32 PM, Sage Weil s...@inktank.com wrote:
On Mon, 17 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use
Hey Xiaopong (is that your first or last name, by the way? - sorry for
my ignorance),
I feel your pain, believe me :-). We've had many sleepless nights
salvaging data. We've actually completely
migrated off Riak/Luwak by now and are pretty happy about it. As you
say - we've watched the cluster go
Hi, all!
One of the most important parts of Inktank's mission is to spread the
word about Ceph. We want everyone to know what it is and how to use
it.
In order to tell a better story to potential new users, I'm trying to
get a sense for today's deployments. We've spent the last few months
My use of Ceph is probably pretty unique in some aspects of where/how
I'm using it. I run an IT department for a medium-sized engineering firm. One
of my goals is to make the best possible use of the hardware we're
deploying to users' desktops. Oftentimes users cannot get by
Hi Nick,
All I have to say is: that is totally awesome and scary at the same time. :)
Glad to hear that it recovers well when people shut their desktops off!
Mark
On 09/17/2012 05:47 PM, Nick Couchman wrote:
My use of Ceph is probably pretty unique in some aspects of where/how
I'm
Our use of Ceph started pretty recently (this summer). We only use
rados together with the radosgw. We moved from another distributed
storage solution that had failed us more than once, and we lost data.
Since the old system had an http interface (not S3 compatible though)
we looked around for
We actually ask people not to shut off their desktops, so it doesn't happen
very often :-). Also, I run the MDS and MON systems inside my datacenter, so
only the OSDs are out there on the desktops.
-Nick
Mark Nelson 09/17/12 4:53 PM
Hi Nick,
All I have to say, is that is totally awesome
John,
I'd be really interested to hear how Btrfs goes over time. I tried it out a
few kernel versions ago and regretted it - I lost some data after using it.
Hopefully the stability is better than it was before, and inline compression
is always great!
-Nick
John Axel Eriksson 09/17/12 5:26
Hi,
I use Ceph to provide storage via RBD for our virtualization cluster,
delivering KVM-based high-availability virtual machines to my customers. I
also use it as an RBD device with OCFS2 on top of it for a 4-node webserver
cluster as shared storage - I do this because, unfortunately, CephFS is not
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use Ceph to provide storage via RBD for our virtualization cluster
delivering KVM-based high-availability virtual machines to my customers.
I also use it as an RBD device
On Mon, 17 Sep 2012, Tren Blackburn wrote:
On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
Hi,
I use Ceph to provide storage via RBD for our virtualization cluster
delivering KVM-based high-availability Virtual
Hi
Just FYI, on the NFS integration front: a pNFS files-layout (RFC 5661)-capable
NFSv4 re-exporter for Ceph has been committed to the Ganesha NFSv4 server
development branch. We're continuing to enhance and elaborate on this. We have
had on our (full) plates for a while to return Ceph client library
I'm looking at building an HBase/Bigtable-style key-value store on top
of Ceph's omap abstraction over LevelDB. The plan is to use this for log
storage at first. Writes use libradospp, with individual log lines
serialized via MessagePack and then stored as omap values. Omap keys
are strings which
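As a rough illustration of that design, here is a sketch of the write
path using the python-rados bindings and the msgpack library instead of
libradospp. The pool name, the object-per-log layout, and the
zero-padded key scheme are all my assumptions, not the poster's actual
code.

    import msgpack
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('logs')  # assumed pool name

    def append_line(log_obj, seq, line):
        # Zero-padded sequence numbers keep omap's lexicographic key
        # order identical to chronological order, which makes ordered
        # range scans over a log cheap.
        key = '%020d' % seq
        value = msgpack.packb({'line': line})
        with rados.WriteOpCtx() as op:
            ioctx.set_omap(op, (key,), (value,))
            ioctx.operate_write_op(op, log_obj)

    append_line('app.log', 1, 'service started')
    append_line('app.log', 2, 'first request served')

Since omap is sorted by key, a reader can then page through a log in
order with get_omap_vals(), passing the last key seen as the
start_after cursor.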