Re: [Qemu-devel] [PATCH 1/1] block/gluster: add support for multiple gluster backup volfile servers

2015-09-09 Thread Raghavendra Talur
> >> uri[0]=gluster[+transport-type]://server1:24007/testvol/a.img,
> >> uri[1]=gluster[+transport-type]://server2:24008/testvol/a.img,
> >> uri[2]=gluster[+transport-type]://server3:24009/testvol/a.img
>
> What's the adv
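The quoted proposal spells each backup volfile server as a full gluster URI. As a rough illustration of how such a URI decomposes into transport, host, port, volume, and image path, here is a small Python sketch; it is not QEMU's actual parser, and the default port 24007 (GlusterD's management port) is the only assumption beyond the URIs quoted above.

```python
from urllib.parse import urlparse

def parse_gluster_uri(uri):
    """Split a gluster[+transport]://host:port/volume/image URI into
    its components. Illustrative only, not QEMU's real parser."""
    parsed = urlparse(uri)
    scheme = parsed.scheme  # e.g. "gluster" or "gluster+tcp"
    transport = scheme.split("+", 1)[1] if "+" in scheme else "tcp"
    # path is "/volume/image..."; the first component is the volume name
    volume, _, image = parsed.path.lstrip("/").partition("/")
    return {
        "transport": transport,
        "host": parsed.hostname,
        "port": parsed.port or 24007,  # GlusterD default management port
        "volume": volume,
        "image": image,
    }

servers = [
    "gluster+tcp://server1:24007/testvol/a.img",
    "gluster+tcp://server2:24008/testvol/a.img",
    "gluster+tcp://server3:24009/testvol/a.img",
]
for uri in servers:
    print(parse_gluster_uri(uri))
```

All three URIs name the same volume and image; only the server endpoint differs, which is what makes them usable as fallbacks for fetching the volfile.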

Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport

2016-07-18 Thread Raghavendra Talur
On Mon, Jul 18, 2016 at 5:48 PM, Markus Armbruster wrote:
> Prasanna Kalever writes:
> > On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster wrote:
> >> Prasanna Kumar Kalever writes:
> >>> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> >>> support volfile

Re: [Qemu-devel] [PATCH v2 1/1] block/gluster: memory usage: use one glfs instance per volume

2016-10-27 Thread Raghavendra Talur
On Thu, Oct 27, 2016 at 8:54 PM, Prasanna Kumar Kalever <prasanna.kale...@redhat.com> wrote:
> Currently, for every drive accessed via gfapi we create a new glfs
> instance (call glfs_new() followed by glfs_init()) which could consume
> memory in few 100 MB's, from the table below it looks like f

Re: [Qemu-devel] [Qemu-block] [PATCH for-2.6 2/2] block/gluster: prevent data loss after i/o error

2016-05-08 Thread Raghavendra Talur
ync failure is not default behavior?

> 1. No ways to identify whether setting of option succeeded in gfapi:

I've added Poornima and Raghavendra Talur who work on gfapi to assist on this.
There is currently no such feature in gfapi. We could think of two possible so