erasure coding (sorry)

2013-04-18 Thread Plaetinck, Dieter
Sorry to bring this up again; googling revealed some people don't like the subject [anymore]. But I'm working on a new ~3 PB cluster for storage of immutable files, and it would be either all cold data, or mostly cold. 150 MB avg file size, max size 5 GB (for now). For this use case, my impression
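
(For scale, the storage-overhead trade-off behind the question can be sketched with back-of-the-envelope arithmetic; the k=10/m=4 erasure-coding profile below is purely an assumed example, not a figure from the thread.)

# Rough raw-capacity comparison for ~3 PB of usable, mostly-cold data:
# triple replication vs. an assumed erasure-coding profile of k data
# chunks + m coding chunks. Illustrative numbers only.
usable_pb = 3.0

replication_factor = 3            # three full copies -> 3.0x overhead
k, m = 10, 4                      # assumed EC profile (not from the thread)
ec_overhead = (k + m) / k         # 1.4x raw bytes per usable byte

print(f"3x replication: {usable_pb * replication_factor:.1f} PB raw")
print(f"EC k={k} m={m}:  {usable_pb * ec_overhead:.1f} PB raw")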

Re: erasure coding (sorry)

2013-04-18 Thread Plaetinck, Dieter
On Thu, 18 Apr 2013 16:09:52 -0500 Mark Nelson wrote: > On 04/18/2013 04:08 PM, Josh Durgin wrote: > > On 04/18/2013 01:47 PM, Sage Weil wrote: > >> On Thu, 18 Apr 2013, Plaetinck, Dieter wrote: > >>> sorry to bring this up again, googling revealed some peopl

Re: Ceph benchmarks

2012-08-28 Thread Plaetinck, Dieter
Sébastien Han wrote: > Just as a reminder the system maintains 2 cache facilities: > * disk write cache > * page cache The page cache is the one commonly referred to as the block cache, right (i.e. in the block layer, below the filesystem layer in the kernel)? What do you mean by disk write cache
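
(A minimal sketch of the usual way to take the page cache out of the picture before a benchmark run, assuming Linux and root privileges; the drive-level write cache is a separate thing, typically toggled with hdparm -W, and is not touched here.)

# Flush dirty pages, then drop the clean page cache so the next read
# really hits the disks. Linux-specific; needs root. Does nothing about
# the disk's own on-board write cache.
import os

os.sync()                                    # flush dirty pages to stable storage
with open('/proc/sys/vm/drop_caches', 'w') as f:
    f.write('3\n')                           # 3 = page cache + dentries + inodes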

Re: Integration work

2012-08-28 Thread Plaetinck, Dieter
On Tue, 28 Aug 2012 11:12:16 -0700 Ross Turk wrote: > > Hi, ceph-devel! It's me, your friendly community guy. > > Inktank has an engineering team dedicated to Ceph, and we want to work > on the right stuff. From time to time, I'd like to check in with you to > make sure that we are. > > Over

Re: How are you using Ceph?

2012-09-18 Thread Plaetinck, Dieter
On Tue, 18 Sep 2012 01:26:03 +0200 John Axel Eriksson wrote: > another distributed > storage solution that had failed us more than once and we lost data. > Since the old system had an http interface (not S3 compatible though) Can you say a bit more about this? Failure stories are very interestin

Re: How are you using Ceph?

2012-09-18 Thread Plaetinck, Dieter
> > To sum it up: we made, in retrospect, a bad choice - not because Riak > itself doesn't work or isn't any good for the things it's good at (it > really is!) but because the add-on Luwak was misrepresented and not a > good fit for us. > > I really have high hopes f

Re: How are you using Ceph?

2012-09-18 Thread Plaetinck, Dieter
te: > Agreed, this was a really interesting writeup! Thanks John! > > Dieter, do you mind if I ask what is compelling for you in choosing > swift vs the other options you've looked at including Ceph? > > Thanks, > Mark > > On 09/18/2012 09:51 AM, Plaetinck, D

Re: How are you using Ceph?

2012-09-18 Thread Plaetinck, Dieter
come with time (more > experienced community, docs, deployments, papers, etc). Are there other > things we could be doing that would make Ceph feel less risky for people > doing similar comparisons? > > Thanks, > Mark > > On 09/18/2012 10:19 AM, Plaetinck, Dieter wrote: >

platform requirements / centos 6.2

2012-03-21 Thread Plaetinck, Dieter
Hello, Ceph/Rados looks very well designed and engineered. I would like to build a cluster to test the rados distributed object storage (not the distributed FS or block devices). I've seen the list of dependencies on the wiki, but it doesn't mention specific versions for the libraries and tools, n
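
(A minimal smoke test of the RADOS object store itself, bypassing the filesystem and block layers, assuming the python-rados bindings, a readable /etc/ceph/ceph.conf, and a pool named "data"; adjust names to taste.)

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')                      # assumed pool name
    try:
        ioctx.write_full('hello-object', b'hello rados')    # store an object
        print(ioctx.read('hello-object'))                   # read it back
        ioctx.remove_object('hello-object')                 # clean up
    finally:
        ioctx.close()
finally:
    cluster.shutdown()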

non-btrfs and checksumming, rados tutorial, planning wrt documentation

2012-03-21 Thread Plaetinck, Dieter
1) If you don't use btrfs (i.e. xfs), will checksumming still be done, by the OSDs? Does the checksumming happen on every data read, or do underused OSDs also checksum their data in the background (like OpenStack Swift does)? If there is a checksum mismatch, what does the client see, and what actions ar
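
(Not an answer about Ceph's internal scrubbing; just a sketch of what a client could do itself on a non-btrfs backend: store its own checksum as an object xattr and verify it on read. Assumes python-rados and a pool named "data"; the xattr key and object names are illustrative.)

import hashlib
import rados

def write_with_checksum(ioctx, name, data):
    """Store an object plus a client-side SHA-1 in an xattr."""
    ioctx.write_full(name, data)
    ioctx.set_xattr(name, 'user.sha1', hashlib.sha1(data).hexdigest().encode())

def read_and_verify(ioctx, name, length):
    """Read an object back and fail loudly on a checksum mismatch."""
    data = ioctx.read(name, length)
    stored = ioctx.get_xattr(name, 'user.sha1').decode()
    if hashlib.sha1(data).hexdigest() != stored:
        raise IOError("checksum mismatch for object %s" % name)
    return data

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')                          # assumed pool name
write_with_checksum(ioctx, 'demo', b'some immutable payload')
print(read_and_verify(ioctx, 'demo', length=1024))
ioctx.close()
cluster.shutdown()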