Hello,

I'm running Proxmox VE with GlusterFS storage over InfiniBand and have noticed 
that the way Proxmox attaches GlusterFS drives to virtual machines (-drive 
file=gluster://...) causes QEMU and GlusterFS to communicate over TCP. On an 
RDMA-capable interconnect this is not only slower but also more CPU intensive.
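For reference, and assuming I'm reading the QEMU gluster block driver
documentation correctly, QEMU can already take the transport as part of the
URI scheme, so the generated command line would end up looking roughly like
this (server, volume and image paths are just examples):

    # current behaviour, TCP transport
    -drive file=gluster://server/volname/images/100/vm-100-disk-1.raw

    # with the proposed option set to rdma
    -drive file=gluster+rdma://server/volname/images/100/vm-100-disk-1.raw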

I was thinking of adding a "transport" option to storage.cfg (I have in fact 
already implemented it on my servers). In my implementation the option is 
optional and, if not specified, defaults to the current behaviour. Possible 
values are tcp, rdma or unix (I haven't tested the latter, though).
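To illustrate, a storage.cfg entry with my patch applied would look roughly
like this (storage name, server address and volume name are of course just
placeholders):

    glusterfs: gluster-rdma
            server 10.10.10.1
            volume vmstore
            content images
            transport rdma

With transport omitted, the entry behaves exactly as it does today.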

So my question is: are you (the Proxmox team) interested in this feature? I 
still need to clean up my patch a bit, as it is currently not based on the 
git version.

Regards,
Stoyan
