Not a lot of people are publicly discussing their sizes on things like
that, unfortunately. I believe DreamHost is still the most open. They
have an (RGW-based) object storage service which is backed by ~800
OSDs and are currently beta-testing a compute service using RBD, which
you can see
Nothing prevents me from offering a service based directly on the RADOS
API, if S3 compatibility is not needed, right?
Correct. That is librados.
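For illustration, a minimal sketch of what that direct access looks like
through the Python librados binding (the pool name 'data' and the object
name are assumptions for the example, not anything from this thread):

    import rados

    # Connect using the local cluster configuration (path is an assumption).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Read and write plain RADOS objects in a pool; no S3/RGW layer involved.
    ioctx = cluster.open_ioctx('data')
    ioctx.write_full('hello-object', b'stored via librados')
    print(ioctx.read('hello-object'))

    ioctx.close()
    cluster.shutdown()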
What I don't understand is how I can access a single file from RGW. If
librbd and RGW are 'gateways' to a RADOS store, I'll have access to a
block
In this case, can a single block device (for example a huge virtual
machine image) be striped across many OSDs to achieve better
performance in reading?
An image striped across 3 disks should get 3*IOPS when reading.
Yes, but network (and many other issues) must be considered.
Another
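For what it's worth, RBD already stripes every image across many RADOS
objects (4 MB each by default), so large reads fan out over many OSDs. A
sketch with the Python rbd binding, assuming a pool named 'rbd' and an
image name made up for the example:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # order=22 means 2^22-byte (4 MB) objects, so a 10 GB image is
    # split into ~2560 objects that CRUSH spreads over the OSDs.
    rbd.RBD().create(ioctx, 'vm-image', 10 * 1024 ** 3, order=22)

    ioctx.close()
    cluster.shutdown()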
On 30.10.2012 14:45, Gregory Farnum wrote:
But there's still the problem of slow random write IOP/s. At least I haven't
seen any good benchmarks.
It's not magic — I haven't done extensive testing but I believe people
see aggregate IOPs of about what you can calculate:
(number of storage
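The quoted message is cut off, but the usual back-of-the-envelope version
of that calculation (my assumption of how it continues) divides the disks'
aggregate IOPS by the replica count for writes:

    # Made-up example cluster: 24 spinning disks, ~100 IOPS each, 2 replicas.
    disks = 24
    iops_per_disk = 100
    replicas = 2

    read_iops = disks * iops_per_disk               # reads are served by one replica
    write_iops = disks * iops_per_disk // replicas  # every write hits all replicas

    print(read_iops, write_iops)  # 2400 1200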
On 10/30/2012 07:59 AM, Gandalf Corvotempesta wrote:
2012/10/30 袁冬 yuandong1...@gmail.com:
Yes, but network (and many other issues) must be considered.
Obviously
3 is suggested.
Any contraindication to running a mon on the same OSD server?
Generally that's considered OK. ceph-mon
On Tue, 30 Oct 2012, Gandalf Corvotempesta wrote:
2012/10/30 Dan Mick dan.m...@inktank.com:
Generally that's considered OK. ceph-mon doesn't use very much disk or CPU
or network bandwidth.
In this case, should I reserve some space for ceph-mon (a partition or
a dedicated disk) or ceph-mon
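A minimal ceph.conf sketch of such a colocated monitor (the host name and
path are made up for illustration; a small dedicated partition for the mon
data simply keeps the monitor's store from competing with, or filling up,
the OSD disks):

    [mon.a]
        # same box as the OSDs
        host = storage-node-1
        # ideally its own small partition
        mon data = /var/lib/ceph/mon/ceph-a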
On 10/26/2012 02:52 PM, Gandalf Corvotempesta wrote:
Hi all, I'm new to ceph.
Are RBD and REST API production ready?
There are sites using them in production now.
Do you have any use case to share? We are looking for a distributed
block storage for an HP C7000 blade with 16 dual processor
That's great, but if you go the way of having for example 8x OSDs per
server with 8 single disks - how can I ensure that ceph splits my files
across the correct servers for redundancy?
I believe this is handled by CRUSH:
http://ceph.com/wiki/Custom_data_placement_with_CRUSH
Regards,
Tim
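Concretely, the part of a CRUSH rule that forces replicas onto different
servers is the chooseleaf step over the 'host' bucket type. A sketch of a
decompiled crushmap rule (the names are the stock defaults, shown here
only for illustration):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick each replica from a different host, not just a different disk
            step chooseleaf firstn 0 type host
            step emit
    }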
Hi,
I'm going to build a rados block cluster for my kvm hypervisors.
Is it already production ready? (stable, no crashes)
I have read about some btrfs bugs on this mailing list, so I'm a bit scared...
Also, what performance could I expect?
I'm trying to build a fast cluster, with fast ssd disks.
each node
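As a reference point for the KVM side, a guest disk on RBD is typically
attached through qemu's rbd protocol driver; a sketch (pool and image
names invented for the example):

    qemu-system-x86_64 -m 2048 \
        -drive format=raw,file=rbd:rbd/vm-image,cache=writeback

cache=writeback enables client-side write caching, which can help with
small random writes.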
2012/5/18 Alexandre DERUMIER aderum...@odiso.com:
Hi,
I'm going to build a rados block cluster for my kvm hypervisors.
Is it already production ready? (stable, no crashes)
We are using 0.45 in production. Recent ceph versions are quite stable
(although we had some troubles with excessive logging and a full log
partition lately which caused our cluster to halt).
2012/5/18 Alexandre DERUMIER aderum...@odiso.com:
Hi Christian,
thanks for your response.
Thanks Christian for doing an awesome job, you answered some of the
questions better than I personally could have ;)
On Thu, May 17, 2012 at 11:08 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
About network, does the rados protocol support some kind of multipathing? Or
do I need to use