Sounds interesting and definitely worth looking into. You would not have
the advantage of snapshots like you do with an LVM-type solution, but it
may pay off in some instances from a performance perspective.
I've never used InfiniBand, but I think you've just convinced me to go
buy a few cheap adaptors and have a little play.
On 19/04/11 23:39, Henrik Andersson wrote:
Now that VastSky has been reported to be on hiatus, at least for now, I'd
like to propose GlusterFS as a candidate. It is a well-tested, actively
developed and maintained project. I'm personally really interested in the
"RDMA version": it should provide very low latencies, and since 40Gbit
InfiniBand is a bargain compared to 10GbE, there should be more than
enough throughput available.
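As a rough sketch of what I have in mind (the hostnames, brick paths and
volume name are just placeholders, and the exact syntax depends on the
GlusterFS version), a two-node replicated volume over RDMA would be
created and mounted roughly like this:

    # on one of the storage servers: create and start a replicated
    # volume using the RDMA transport
    gluster volume create xcpvol replica 2 transport rdma \
        storage1:/export/brick1 storage2:/export/brick1
    gluster volume start xcpvol

    # on the XCP host: mount the volume with the native GlusterFS client
    mount -t glusterfs storage1:/xcpvol /mnt/xcpvol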
This would require IB support in XCP, but my thinking is that it would be
beneficial in many other ways; for example, I would imagine RDMA could
also be used for live migration.
-Henrik Andersson
On 20 April 2011 01:10, Tim Titley <[email protected]> wrote:
Has anyone considered a replacement for the VastSky storage
backend now that the project is officially dead (at least for now)?
I have been looking at Ceph (http://ceph.newdream.net/). A
suggestion, to someone so inclined to do something about it, may be
to use the RADOS block device (RBD) and put an LVM volume group
on it, which would require modification of the current LVM storage
manager code - I assume similar to LVMOISCSI.
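Very roughly, and purely as a sketch (the image name, size and volume
group name below are just illustrative, and the exact rbd commands
depend on the Ceph version in use), the manual equivalent of what such
a storage manager driver would do is something like:

    # create an RBD image and map it via the in-kernel client (2.6.37+)
    rbd create xcp-sr --size 102400      # size in MB, i.e. ~100 GB
    rbd map xcp-sr                       # shows up as e.g. /dev/rbd0

    # put an LVM volume group on the mapped block device
    pvcreate /dev/rbd0
    vgcreate VG_XenStorage-example /dev/rbd0

The storage manager would then carve individual VDIs out of that volume
group with lvcreate, much as I assume the iSCSI-backed SR does today.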
This would provide scalable, redundant storage at what I assume
would be reasonable performance since the data can be striped
across many storage nodes.
Development seems reasonably active, and although the project is
not officially production quality yet, it is part of the Linux
kernel, which looks promising, as does the news that they will be
providing commercial support.
The only downside is that RBD requires a 2.6.37 kernel. For those
"in the know" - how long will it be before this kernel makes it to
XCP - considering that this vanilla kernel supposedly works in
dom0 (I have yet to get it working)?
Any thoughts?
Regards,
Tim
_______________________________________________
xen-api mailing list
[email protected]
http://lists.xensource.com/mailman/listinfo/xen-api