On 02/03/2018 13:27, Federico Lucifredi wrote:

On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins <m...@phoenixweb.it> wrote:



    Hi Federico,

        Hi Max,

            On Feb 28, 2018, at 10:06 AM, Max Cuttins
            <m...@phoenixweb.it> wrote:

            This is true, but having something that just works,
            providing minimum compatibility so that users can start to
            retire old disks, is something you should think about.
            You will have plenty of time to improve and get better
            performance later. But you should let users cut off old
            solutions as soon as possible while waiting for a better
            implementation.

        I like your thinking, but I wonder why a locally-mounted kRBD
        volume doesn't meet this need? It seems easier than iSCSI, and
        I would venture it would show twice the performance, at least
        in some cases.


    Simply because it's not possible.
    XenServer is closed: you cannot add RPMs (and so cannot install
    Ceph) without hacking the distribution by removing the
    restrictions placed on YUM.
    And that is what we do here:
    https://github.com/rposudnevskiy/RBDSR
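
For context, the YUM limitation is that XenServer ships the stock CentOS repositories disabled, so extra RPMs will not install out of the box. A rough sketch of the kind of workaround involved follows; the repo file name and package name are assumptions from memory, not RBDSR's exact procedure:

import subprocess

# Re-enable the CentOS repos that XenServer ships with enabled=0
# (the file name is an assumption; check /etc/yum.repos.d on the host).
repo = "/etc/yum.repos.d/CentOS-Base.repo"
with open(repo) as f:
    text = f.read()
with open(repo, "w") as f:
    f.write(text.replace("enabled=0", "enabled=1"))

# Pull in the Ceph client packages; depending on the release, an upstream
# Ceph repository may also be needed before this will resolve.
subprocess.check_call(["yum", "install", "-y", "ceph-common"])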


Understood. Thanks Max, I did not realize you were also speaking about Xen; I thought you meant to find an arbitrary non-virtual disk replacement strategy ("start to dismiss old disks").
I need to find an arbitrary non-virtual disk replacement strategy... compatible with Xen.



We do speak to the Xen team every once in a while, and while there is interest in adding Ceph support on their side, I think we are somewhat far down their list of priorities.

Thanks -F

They seem more interested in raising the limitations than in improving their hypervisor.
XenServer 7.3 is *exactly* XenServer 7.2 with new limitations and no added features.
It's a shame.