On Sun, Apr 20, 2014 at 11:57 PM, Deepak Shetty wrote:
> This also tells us that the gfapi-based validation/QE testcases need to
> take this scenario into account
> so that in future this can be caught sooner :)
>
> Bharata,
> Does the existing QEMU testcase for gfapi cover this?
>
If are
On Fri, Apr 18, 2014 at 8:23 PM, Soumya Koduri wrote:
> Posted my comments in the bug link.
>
Thanks.
>
> " glfs_init" cannot be called before as it checks for
> cmds_args->volfile_server which is initialized only in
> "glfs_set_volfile_server".
> As Deepak had mentioned, we should either defi
On Thu, Apr 17, 2014 at 7:56 PM, Deepak Shetty wrote:
>
> The glfs_lock indeed seems to work only when glfs_init is successful!
> We can call glfs_unset_volfile_server for the error case of
> glfs_set_volfile_server as a good practice.
> But it does look like we need an opposite of glfs_new (maybe
Hi,
In QEMU, we initialize gfapi in the following manner:
glfs = glfs_new(volname);
if (!glfs)
    goto out;
if (glfs_set_volfile_server(glfs, ...) < 0)
    goto out;
if (glfs_set_logging(glfs, ...) < 0)
    goto out;
if (glfs_init(glfs))
    goto out;
...
out:
    if (glfs)
        glfs_fini(glfs);
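For reference, a minimal self-contained sketch of this flow with the
error-path caveat from this thread spelled out; the "tcp" transport,
port 24007, and the log target/level are assumptions, not values from
the original mail:

#include <glusterfs/api/glfs.h>

static struct glfs *gfapi_setup(const char *volname, const char *server)
{
    struct glfs *glfs = glfs_new(volname);

    if (!glfs)
        return NULL;
    if (glfs_set_volfile_server(glfs, "tcp", server, 24007) < 0)
        goto err;
    if (glfs_set_logging(glfs, "/dev/stderr", 7) < 0)
        goto err;
    if (glfs_init(glfs) != 0)
        goto err;
    return glfs;

err:
    /* The crux of this thread: glfs_fini() is only known to be safe
     * after a successful glfs_init(); no dedicated "undo glfs_new()"
     * API existed at the time. */
    glfs_fini(glfs);
    return NULL;
}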
On Sat, Jan 25, 2014 at 4:17 PM, Harshavardhana
wrote:
> > Not for this specific change, but maybe it's time to add a test case into
> > gluster to test QEMU requirements. The last effort in that direction was
> > made here:
> > http://lists.nongnu.org/archive/html/gluster-devel/2013-02/msg0001
On Sat, Jan 25, 2014 at 4:04 PM, Harshavardhana
wrote:
> > "In essence, every time you make a change to the library and
> > release it, the C:R:A should change. A new library should start
> > with 0:0:0. Each time you change the public interface
> > (i.e., your installed header files),
On Sat, Jan 25, 2014 at 3:41 PM, Anand Avati wrote:
>
>
> This needs to be sorted out. I'm not (yet) sure if we needed the X.Y.Z
> libtool versioning. Gfapi has been following the "do not change an
> interface once published" model, and only new APIs are added along with a
> bump in the correspon
On Sat, Jan 25, 2014 at 3:28 PM, Harshavardhana
wrote:
> > QEMU uses pkg-config to get the version of libgfapi and accordingly enable
> > features. Currently, it checks for version 3 for base libgfapi support,
> > version 5 for discard API support and version 6 for zerofill API support.
> >
> >
Hi,
QEMU uses pkg-config to get the version of libgfapi and accordingly enable
features. Currently, it checks for version 3 for base libgfapi support,
version 5 for discard API support and version 6 for zerofill API support.
However, with recent commit c2b09dc87, the libgfapi version changed from 6 to
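For context, a sketch of what that version gating looks like on the QEMU
side once configure has run its pkg-config checks; the CONFIG_* macro
names are assumptions modeled on QEMU's block/gluster.c of that era:

#include <stdio.h>

int main(void)
{
    /* configure turns "glusterfs-api >= N" pkg-config checks into
     * compile-time defines; the exact macro names are assumed here. */
#ifdef CONFIG_GLUSTERFS_DISCARD
    puts("discard supported (glusterfs-api >= 5)");
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
    puts("zerofill supported (glusterfs-api >= 6)");
#endif
    puts("base gluster support (glusterfs-api >= 3)");
    return 0;
}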
Hi,
Now that QEMU-GlusterFS integration via libgfapi is supported in
GlusterFS-3.4 release, I documented a few failure scenarios to help
the potential users of this feature.
http://raobharata.wordpress.com/2013/08/21/troubleshooting-qemu-glusterfs/
Hope that users will find this useful when sett
On Wed, Aug 14, 2013 at 1:37 PM, Anand Avati wrote:
>
> Actually this *wasn't* what we discussed. glusterfs-api was supposed to
> depend on glusterfs-libs *ONLY*. This is because it has a linking (hard)
> relationship with glusterfs-libs, and glusterfs.rpm is only a run-time
> dependency - everythi
Hi Avati, Brian,
During the recently held gluster meetup, Shishir mentioned a
potential problem (related to fd migration etc.) in the zerofill
implementation (http://review.gluster.org/#/c/5327/) and also
mentioned that the same or similar issues are present with the
fallocate and discard implementation
On Fri, Jul 26, 2013 at 10:56 PM, Anand Avati wrote:
>
> Looking forward to hearing from you!
Hi Avati,
As many said on the thread, frequent releases will be good.
Another aspect we felt could be improved is timely review of
patches. Though this is the responsibility of the commu
On Mon, Jul 29, 2013 at 12:18 AM, Anand Avati wrote:
>
> Another model can be:
>
> 0. glusterfs-libs.rpm - libglusterfs.so libgfrpc.so libgfxdr.so
> 1. glusterfs (depends on glusterfs-libs) - glusterfsd binary, glusterfs
> symlink, all common xlators
> 2. glusterfs-rdma (depends on glusterfs) - rd
On Mon, Jun 3, 2013 at 9:12 PM, Anand Avati wrote:
> This looks more like a compile time feature check than runtime. The
> PKG_CONFIG() api number which had the initial set of QEMU requirements was 3
> (i.e, PKG_CONFIG(..,glusterfs-api>=3,..). The new updates for Samba
> requirements has api numbe
On Tue, Jun 11, 2013 at 12:18 AM, Anand Avati wrote:
> It would be nice to make the fop match the writesame() semantics (@buf,
> @len, @offset, @repeat) and just use it for zerofill as a specific use case,
> by providing a 1 byte buffer storing a 0, repeated @len times. Thoughts?
You mean "repeat
Hi,
We are planning to add a new FOP to GlusterFS to exploit the WRITE
SAME capability of the underlying block device in case of BD xlator.
Linux has recently added support for WRITE SAME by means of a new
ioctl (BLKZEROOUT) that can be used to zero-out a range of blocks. We
are proposing the foll
Hi,
What is the best way to determine if I am running (or linked to) a
particular version of libgfapi/glusterfs?
Requirement: If I want to support the use of glfs_discard() from QEMU,
I should be doing it only if QEMU is linked to a version of libgfapi
that supports glfs_discard().
Regards,
Bharata.
On Tue, May 14, 2013 at 8:09 PM, John Mark Walker wrote:
>
>
>
> - Original Message -
> > It looks as though the qemu package is built without Gluster support.
> > So I added the --with-glusterfs flag and tried to rebuild the RPM. I
>
> Did you do ./configure --enable-glusterfs?
--enable
> 1. fallocate()
> 2. discard()
> 3. zerofill()
> 4. splice()
>
> This set of FOPs should probably be sufficient to overlay most of the
> functionality in this general area. It will be nice to have them exposed
> through GFAPI and integrated in the QEMU plugin.
>
> Avati
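For reference, this is roughly how those FOPs eventually surfaced in
gfapi; the signatures below follow later installed glfs.h headers and
should be checked against your tree (glfs_zerofill in particular landed
well after glfs_discard):

#include <glusterfs/api/glfs.h>

static int offload_demo(glfs_fd_t *fd)
{
    if (glfs_fallocate(fd, 0, 0, 8192))    /* preallocate 8K, mode 0 */
        return -1;
    if (glfs_discard(fd, 0, 4096))         /* UNMAP/BLKDISCARD path  */
        return -1;
    return glfs_zerofill(fd, 4096, 4096);  /* WRITE SAME zero path   */
}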
Hi,
As support for SCSI offloads like WRITE SAME and UNMAP is being made
available on Linux via ioctls (BLKZEROOUT, BLKDISCARD), the same can be
exploited from GlusterFS for the virtualization use case if GlusterFS
also supported ioctl as an FOP.
Is there any historical reason for GlusterFS not support
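For illustration, a minimal sketch of the two Linux block ioctls
mentioned above; both take a {start, length} pair in bytes on a
block-device fd:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKZEROOUT, BLKDISCARD */

static int blk_zeroout(int fd, uint64_t start, uint64_t len)
{
    uint64_t range[2] = { start, len };   /* WRITE SAME offload */
    return ioctl(fd, BLKZEROOUT, range);
}

static int blk_discard(int fd, uint64_t start, uint64_t len)
{
    uint64_t range[2] = { start, len };   /* UNMAP offload */
    return ioctl(fd, BLKDISCARD, range);
}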
On Wed, Mar 6, 2013 at 5:24 PM, Bharata B Rao wrote:
> On Tue, Mar 5, 2013 at 9:41 PM, Anand Avati wrote:
>>
>> Do you have a comparison of the %cpu util with and without zero-copy? cpu util
>> is probably an important parameter of comparison.
>
> Right, I don
On Tue, Mar 5, 2013 at 9:41 PM, Anand Avati wrote:
>
> Do you have a comparison of the %cpu util with and without zero-copy? cpu util
> is probably an important parameter of comparison.
Right, I don't have the numbers. BTW, I am seeing that the results are
a bit fragile at the moment and I am tr
A hack to support zero copy readv, only supports glfs_preadv_async() now.
From: Bharata B Rao
---
 api/src/glfs-fops.c          | 38 +++
 api/src/glfs.h               |  2
 libglusterfs/src/call-stub.c | 63
 libglusterfs
Hi,
Here is a highly experimental patch to support zero-copy readv in
GlusterFS. An attempt is made to copy the read data in the socket to
client-supplied buffers (iovecs) directly, thereby eliminating one
memory copy for each readv request. Currently I have support for
zero-copy readv only in glfs
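For reference, a sketch of the call path the patch targets; the
glfs_preadv_async() and callback signatures follow the gfapi headers of
that period and should be verified against your tree:

#include <sys/types.h>
#include <sys/uio.h>
#include <glusterfs/api/glfs.h>

static void read_done(glfs_fd_t *fd, ssize_t ret, void *data)
{
    (void)fd;
    *(ssize_t *)data = ret;   /* bytes read, or -1 on error */
}

/* With the zero-copy patch, the transport reads straight into the
 * caller's iovec instead of an intermediate iobuf. */
static int issue_read(glfs_fd_t *fd, void *buf, size_t len, off_t off,
                      ssize_t *out)
{
    struct iovec iov = { buf, len };
    return glfs_preadv_async(fd, &iov, 1, off, 0, read_done, out);
}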
On Wed, Feb 13, 2013 at 6:26 PM, Thomas Oulevey wrote:
> Does anyone know of a qemu-kvm build supporting QEMU 1.3 for testing on
> el6?
> Any advice on rebuilding qemu for testing the gluster backend? Is
> --enable-glusterfs --enable-kvm enough for basic testing?
GlusterFS backend support w
On Mon, Feb 11, 2013 at 11:09 PM, Whit Blauvelt
wrote:
> On Mon, Feb 11, 2013 at 12:18:38PM -0500, John Mark Walker wrote:
> > Here's a write-up on the newly-released GlusterFS 3.4 alpha:
> >
> > http://www.gluster.org/2013/02/new-release-glusterfs-3-4alpha/
>
> If we want to test the QEMU stuff,
On Thu, Jan 10, 2013 at 11:55 AM, Anand Avati wrote:
> - On the read side things are a little more complicated. In
> rpc-transport/socket, there is a call to iobuf_get() to create a new iobuf
> for reading in the readv reply data from the server. We will need
> framework changes where, if the re
I see similar failures when installing a distribution (and not
necessarily running FIO) on a VM image present on the BD backend too...
On Mon, Oct 29, 2012 at 3:16 PM, Bharata B Rao wrote:
> Hi,
>
> With GlusterFS block backend in QEMU, running FIO from inside VM
> causes a segfau
Hi,
With GlusterFS block backend in QEMU, running FIO from inside VM
causes a segfault like this...
(gdb) bt
#0 0x75ff03e8 in __memcpy_ssse3_back () from /lib64/libc.so.6
#1 0x7fffeb95cd5e in iov_unload (buf=0x7fffeaeec000 "",
vector=0x7fff24001610, count=1)
at ../../../../libgl
On Fri, Oct 26, 2012 at 10:27 PM, Harsh Prateek Bora
wrote:
> @@ -1624,7 +1626,15 @@
> [quoted docs diff; markup stripped by the archive — the existing entry
> reads "one of the sheepdog servers (default is localhost:7000)", "zero
> or one", and the patch adds a "gluster" entry described as "a server
> running glusterd daemon"]
Its
On Wed, Sep 26, 2012 at 9:50 AM, Anand Avati wrote:
>
> I merged the patch after your mail. Unlikely?
I had applied this commit on top of latest git.
Regards,
Bharata.
On Wed, Sep 26, 2012 at 9:45 AM, Anand Avati wrote:
> I suspect this is because of the RPC portmap reconnection delay. Can you
> check with the latest git head which has review.gluster.com/3994?
I started git bisect with dae3ae397e1a as HEAD, so the problem exists
even with that.
Regards,
Bharata.
Hi,
With recent git tree, glusterd takes a long time to start.
Earlier:
[root@bharata glusterfs]# time glusterd
real    0m0.007s
user    0m0.001s
sys     0m0.006s
Now:
[root@bharata glusterfs]# time glusterd
real    0m4.152s
user    0m0.003s
sys     0m0.009s
git bisect points to this commit:
On Tue, Sep 25, 2012 at 8:50 PM, Amar Tumballi wrote:
> On 09/25/2012 02:12 PM, Bharata B Rao wrote:
>>
>> Hi,
>>
>> With latest git tree, glfs_open() returns -EINVAL for valid files on
>> gluster volumes.
>>
>
> Can you check if http://review.gl
Hi,
With latest git tree, glfs_open() returns -EINVAL for valid files on
gluster volumes.
This is the offending commit:
[root@bharata glusterfs]# git bisect bad
8f9e94c65516662ff49926203a73b3a0166c087e is the first bad commit
commit 8f9e94c65516662ff49926203a73b3a0166c087e
Author: Anand Avati
D
符永涛,
Fwd'ing your mail to gluster-devel to bring this to the attention of
gluster developers.
-- Forwarded message --
From: 符永涛
Date: Tue, Sep 11, 2012 at 6:18 AM
Subject: Re: [Gluster-users] QEMU-GlusterFS native integration demo video
To: Bharata B Rao
Hi Bharata,
On Wed, Sep 5, 2012 at 8:45 PM, Eric Blake wrote:
> On 09/05/2012 09:08 AM, Bharata B Rao wrote:
>> On Wed, Sep 5, 2012 at 7:03 PM, Jiri Denemark wrote:
>>>>
On Wed, Sep 5, 2012 at 7:03 PM, Jiri Denemark wrote:
>> @@ -1042,6 +1043,13 @@
>> [quoted libvirt schema diff; XML markup stripped by the archive — it
>> adds "socket" and "unix" as transport values]
Hi,
If you are interested and/or curious to know how QEMU can be used to
create and boot VMs from a GlusterFS volume, take a look at the demo
video I have created at:
www.youtube.com/watch?v=JG3kF_djclg
Regards,
Bharata.
--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
On Tue, Aug 21, 2012 at 8:53 AM, Yin Yin wrote:
> Hi, Bharata B Rao:
>    I've tested your V5 qemu-gluster
> patch: http://lists.nongnu.org/archive/html/qemu-devel/2012-08/msg01023.html
>    The result is impressive!
Nice to know that you were able to use my patches.
>bu
On Wed, Aug 15, 2012 at 12:14 PM, Yin Yin wrote:
> Hi, Bharata B Rao:
>    I have tried your patch, but hit a problem. I found that both
> glusterfs and qemu have a function uuid_is_null.
> glusterfs uses contrib/uuid/isnull.c
> qemu uses block/vdi.c
>
>
> I test the api/exa
On Wed, Aug 8, 2012 at 11:50 PM, John Mark Walker wrote:
>
> - Original Message -
>>
>> Or change your perspective. Do you NEED to write to the VM image?
>>
>> I write to FUSE-mounted GlusterFS volumes from within my VMs. The VM
>> image is just for the OS and application. With the data on
On Tue, Aug 7, 2012 at 12:01 PM, Anand Avati wrote:
> Bharata,
> Can you try this with 'gluster volume set
> performance.write-behind off' and see if you still face the problem?
Excellent! That fixes the problem.
Regards,
Bharata.
Hi,
With latest QEMU and latest gluster git, I observe guest root
filesystem corruption when the VM boots.
QEMU command line I am using is this:
qemu-system-x86_64 --enable-kvm --nographic -m 1024 -smp 4 -drive
file=/mnt/F17,if=virtio -net nic,model=virtio -net user -redir
tcp:2000::22
Gluster v
On Mon, Aug 6, 2012 at 1:31 PM, Bharata B Rao wrote:
> Hi,
>
> I see a segmentation fault with syncop_lookup() when trying with
> latest git (87d453f7211d3a38113aea895947143ea8bf7d68)
>
> Things work with 66205114267ec659b4ad8084c7e9497009529c61.
>
> [root@bharata qemu]#
Hi,
I see a segmentation fault with syncop_lookup() when trying with
latest git (87d453f7211d3a38113aea895947143ea8bf7d68)
Things work with 66205114267ec659b4ad8084c7e9497009529c61.
[root@bharata qemu]# gdb ./qemu-img
(gdb) r create gluster://bharata/test/4 1k
Starting program: qemu-img create g
On Fri, Aug 3, 2012 at 1:58 AM, Anand Avati wrote:
> review.gluster.com/3771 should fix this.
No. Can you test with the sample program and check if it works for you?
(gdb)
57 ret = glfs_init(fs);
(gdb)
[New Thread 0x75206700 (LWP 9796)]
[New Thread 0x73fb9700 (LWP 9797)]
58
On Thu, Aug 2, 2012 at 8:16 AM, Anand Avati wrote:
> I think this was fixed in http://review.gluster.com/#change,3732. Can you
> confirm?
That commit doesn't fix this issue. I still see glfs_init() returning
-1 with errno=0 when an invalid server is specified.
Regards,
Bharata.
Hi Avati,
I hope this mail didn't miss your attention.
QEMU depends on errno being set appropriately, and hence any small
typo by the user in the server name can result in a crash!
Regards,
Bharata.
On Wed, Jul 25, 2012 at 2:38 PM, Bharata B Rao wrote:
> Hi,
>
> If an inv
On Mon, Jul 30, 2012 at 11:50 AM, Vijay Bellur wrote:
> On 07/26/2012 10:28 AM, Bharata B Rao wrote:
>
>>
>> It's already possible to specify a custom extension to the volume name,
>> and glusterd picks the appropriate volume file. E.g., for a volname
>> of tes
On Thu, Jul 26, 2012 at 11:35 AM, Anand Avati wrote:
> Fixed in my branch. Will be available upstream in next push. Thanks for
> reporting.
Not sure if this fix is part of the 4 commits from you I see in gerrit
now. With those 4 commits, I get EADDRINUSE. I would have expected
EINVAL.
Regards,
B
On Thu, Jul 26, 2012 at 12:14 PM, Anand Avati wrote:
> ret of 1 is intentional to indicate that glfs_init() could not complete yet.
> 0 indicates success and you can start issuing fops right away. -1 is
> definitive failure. When ret is positive, it means initialization could not
> complete, but g
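A small sketch of handling the three outcomes described above (0, -1,
and positive); the wrapper name is an invention for illustration:

#include <errno.h>
#include <glusterfs/api/glfs.h>

/* Returns 0 when fops may be issued, -errno on definitive failure,
 * and +1 when initialization has not completed yet. */
static int check_init(struct glfs *glfs)
{
    int ret = glfs_init(glfs);
    if (ret == 0)
        return 0;
    if (ret < 0)
        return -errno;
    return 1;
}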
On Wed, Jul 25, 2012 at 2:08 PM, Bharata B Rao wrote:
> glfs_init() is supposed to return 0 on success and -1 on failure.
>
> When I specify a volume that's not yet "started" from gluster CLI,
> glfs_init() returns 1 with errno 98.
Client volfile
-
vo
Hi,
I have discussed this briefly with some gluster developers over IRC,
but thought of bringing this up here since I would like to propose
this as a feature for 3.4 once we have consensus on this.
With QEMU supporting GlusterFS backend natively, it is good to
- have a direct IO path from QEMU
Hi,
I am trying to access a local volume using libgfapi. The port number
passed to glfs_set_volfile_server() doesn't seem to make any
difference. I tried -1, 0, 1, 10, 100, 1 but everything works just
fine.
Am I missing something here ?
Regards,
Bharata.
--
http://bharata.sulekha.com/blog/
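For reference, the call in question; "tcp" and glusterd's default port
24007 are assumptions here (early trees also accepted "socket" as the
transport name):

#include <glusterfs/api/glfs.h>

static int point_at_server(struct glfs *glfs, const char *host)
{
    /* passing 0 as the port falls back to the default */
    return glfs_set_volfile_server(glfs, "tcp", host, 24007);
}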
Hi,
If an invalid server is specified with glfs_set_volfile_server(),
glfs_init() returns -1 but errno is still 0.
Regards,
Bharata.
--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
If I specify a non-existent volume name with glfs_new(), glfs_init() just hangs.
(gdb) bt
#0 0x00365020bae5 in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1 0x77dd7dcb in glfs_init_wait (fs=0x622150) at glfs.c:462
#2 0x00400aa2 in main (argc=3, argv=0x7fff
glfs_init() is supposed to return 0 on success and -1 on failure.
When I specify a volume that's not yet "started" from gluster CLI,
glfs_init() returns 1 with errno 98 (EADDRINUSE).
Regards,
Bharata.
--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
On Thu, Jul 19, 2012 at 9:59 AM, Anand Avati wrote:
>
>
> On Wed, Jul 18, 2012 at 9:21 PM, Bharata B Rao
> wrote:
>>
>> On Wed, Jul 18, 2012 at 10:35 PM, Anand Avati
>> wrote:
>> > Why do you want to re-init an already existing object?
>>
>> As
On Wed, Jul 18, 2012 at 10:35 PM, Anand Avati wrote:
> Why do you want to re-init an already existing object?
As you note below, I don't want to re-init but rather do fini and init :)
> Rather, why do you
> want to fini an object which you know you will init again? There is a cost
> to establish
Hi,
I see the following hang when I reissue glfs_init() to reinitialize
the GlusterFS connection.
(gdb) bt
#0 0x763a2ae5 in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1 0x77284dcb in glfs_init_wait (fs=0x569d1f80) at glfs.c:462
I assume that a proper glf
Hi Avati,
In QEMU, this is how I am planning to use libgfapi to set up the GlusterFS backend.

struct glfs *glfs;
int ret = 0;

glfs = glfs_new(volname);
if (!glfs) {
    ret = -errno;
    goto out_glfs_new_failed;
}

ret = glfs_set_volfile_server(glfs, "socket", server, port);
if (ret < 0) {
    ret
On Tue, Jul 17, 2012 at 8:31 PM, Bharata B Rao wrote:
> Hi Avati,
>
> I am trying to boot QEMU with GlusterFS backend in RPC bypass mode. I
> am using a custom volume file that just has storage/posix xlator.
> Currently there is no way in gluster CLI to generate such custom
> vo
Hi Avati,
I am trying to boot QEMU with GlusterFS backend in RPC bypass mode. I
am using a custom volume file that just has storage/posix xlator.
Currently there is no way in the gluster CLI to generate such custom
volfiles, and hence I added such a custom volfile into the directory
where glusterd look
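For illustration, a minimal posix-only volfile of the kind described;
the volume name and the /export brick directory are assumptions:

# single-xlator client volfile: storage/posix only
volume posix
    type storage/posix
    option directory /export
end-volume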
On Sat, Jul 14, 2012 at 11:07 PM, Anand Avati wrote:
> Thanks for reporting! Let me know if this fixes the problem --
>
> snip---
> diff --git a/api/src/glfs-fops.c b/api/src/glfs-fops.c
> index 7523ed2..b62942b 100644
> --- a/api/src/glfs-fops.c
> +++ b/api/src
Hi Avati,
I am seeing a failure when using glfs_pwritev_async() with iovec count
greater than 1 from QEMU. I have verified from a sample application
that it works correctly for iovec counts of 1 and 2. However, when QEMU
issues writes with higher iovec counts, I see a problem whose log is
pasted bel
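For reference, a sketch of an iovcnt > 1 submission like the failing
case; the glfs_pwritev_async() and callback signatures follow the gfapi
headers of that period and should be verified against your tree:

#include <sys/types.h>
#include <sys/uio.h>
#include <glusterfs/api/glfs.h>

static void write_done(glfs_fd_t *fd, ssize_t ret, void *data)
{
    (void)fd;
    *(ssize_t *)data = ret;   /* bytes written, or -1 on error */
}

static int submit_two_iovecs(glfs_fd_t *fd, char *a, char *b, size_t len,
                             off_t off, ssize_t *out)
{
    struct iovec iov[2] = { { a, len }, { b, len } };
    return glfs_pwritev_async(fd, iov, 2, off, 0, write_done, out);
}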
On Tue, Jun 26, 2012 at 8:28 AM, Vijay Bellur wrote:
> On 06/20/2012 04:43 AM, Bharata B Rao wrote:
>>
>>
>>
>> Using this arbitrary delay of 1s in the QEMU backend driver is a hack
>> and I would like to know if there is a better and cleaner way to solve
>>
Hi,
I was experimenting to see if I can use distribute+replicate kind of
volume using my QEMU-GlusterFS integration patches. It appears that
dht xlator depends on a routine (glusterfs_rebalance_event_notify)
that happens to be part of the glusterfs binary. Because of this
dependency I am not able to l
Hi,
With the patches that add GlusterFS backend support to QEMU
(http://lists.nongnu.org/archive/html/qemu-devel/2012-06/msg01748.html),
I am seeing a problem where lookup and other routines fail since QEMU
ends up issuing posix calls (lookup, read etc) even before the
client-server connection is
On Tue, Jun 19, 2012 at 1:37 AM, Jeff Darcy wrote:
> On Mon, 2012-06-18 at 09:33 +0530, Bharata B Rao wrote:
>> Hi,
>>
>> I recently posted patches to integrate GlusterFS with QEMU.
>> (http://lists.nongnu.org/archive/html/qemu-devel/2012-06/msg01745.html).
>>
Hi,
I recently posted patches to integrate GlusterFS with QEMU.
(http://lists.nongnu.org/archive/html/qemu-devel/2012-06/msg01745.html).
While updating those patches to latest gluster git, I am seeing a
problem and I tracked that down to this commit:
e8eb0a9cb6539a7607d4c134daf331400a93d136 (Opti