Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Bharata B Rao
On Tue, Sep 06, 2016 at 10:56:39AM +0200, Niels de Vos wrote:
> On Tue, Sep 06, 2016 at 09:01:18AM +0530, Bharata B Rao wrote:
> > On Mon, Sep 05, 2016 at 05:57:55PM +0200, Niels de Vos wrote:
> > > On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
> > > > hi Bharata,
> > > >What tests are run before the release of glusterfs so that we 
> > > > make
> > > > sure this integration is stable? Could you add that information here so
> > > > that I can update it at
> > > > https://public.pad.fsfe.org/p/gluster-component-release-checklist
> > > 
> > > I normally run some qemu-img commands to create/copy/... VM-images. When
> > > I have sufficient time, I start a VM based on a gluster:// URL on the
> > > commandline (through libvirt XML files), similar to this:
> > >   
> > > http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html
> > 
> > I did reply with the testcases I used to run normally. Guess the reply
> > didn't make it to the list.
> 
> Thanks, now it did. One of our admins accepted the challenge ;-)
> 
> > > In case Bharata is not actively working (or interested) in QEMU and its
> > > Gluster driver, Prasanna and I should probably replace or get added in
> > > the MAINTAINERS file, both of us get requests from the QEMU maintainers
> > > directly.
> > 
> > Makes sense as Prasanna is actively contributing to the Gluster driver now.
> 
> Do you still want to be listed, or shall we move you to the 'thank you'
> section?

Please move to 'thank you' section.

Regards,
Bharata.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for lock migration patches

2016-09-06 Thread Susant Palai
Gentle reminder for reviews.

Thanks,
Susant

- Original Message -
> From: "Susant Palai" 
> To: "Raghavendra Gowdappa" , "Pranith Kumar Karampuri" 
> 
> Cc: "gluster-devel" 
> Sent: Tuesday, 30 August, 2016 3:19:13 PM
> Subject: [Gluster-devel] Review request for lock migration patches
> 
> Hi,
> 
> There are a few patches targeted for lock migration. Requesting review.
> 1. http://review.gluster.org/#/c/13901/
> 2. http://review.gluster.org/#/c/14286/
> 3. http://review.gluster.org/#/c/14492/
> 4. http://review.gluster.org/#/c/15076/
> 
> 
> Thanks,
> Susant~
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Checklist for gluster-swift for upstream release

2016-09-06 Thread Prashanth Pai


> > > hi,
> > > Did you get a chance to decide on the gluster-swift integration
> > > tests that need to be run before doing an upstream gluster release? Could
> > > you let me know who will be providing the list?
> >
> > The tests (unit tests and functional tests) can be run before doing an
> > upstream release. These tests reside in the gluster-swift repo.
> >
> > I can run those tests (manually as of now) whenever required.
> >
> 
> Do you think long term it makes sense to add it as part of a job, so that
> it is simply a matter of launching this job before release?

It used to be a job run on every patchset submitted to the gluster-swift repo,
but gluster-swift CI was disabled during the Jenkins migration.

I'll work with Nigel later to set it up again in the new infra.
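
For reference, a rough sketch of what such a job body could run (the tox
environment and the functional-test script path below are assumptions; the
job should invoke whatever the gluster-swift repo actually ships):

  git clone https://github.com/gluster/gluster-swift.git
  cd gluster-swift
  tox -e py27                      # unit tests (assumes a tox.ini with a py27 env)
  bash tools/functional_tests.sh   # functional tests, against a running gluster volume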

> 
> 
> >
> > >
> > > I can update it at https://public.pad.fsfe.org/p/
> > > gluster-component-release-checklist
> > > 
> > >
> > > --
> > > Pranith
> > >
> >
> 
> 
> 
> --
> Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Pranith Kumar Karampuri
On Wed, Sep 7, 2016 at 6:07 AM, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Wed, Sep 7, 2016 at 5:10 AM, Pranith Kumar Karampuri
>  wrote:
>
> >  Do you think it makes sense to do a post-mortem of features that
> > didn't make it to 3.9.0? We have some features that missed deadlines
> > twice, i.e. planned for 3.8.0 and didn't make it, then planned for 3.9.0
> > and didn't make it. So maybe we are adding features to the roadmap
> > without thinking things through? Basically it leads to frustration among
> > community members who are waiting for these components as they keep
> > moving to the next release.
>
> Doing a post-mortem to understand the pieces which went well (so that
> we can continue doing them), which didn't go well (so that we can
> learn from those) and which were impediments (so that we can address
> the topics and remove them) is a useful exercise.
>

Ah, that makes more sense. We should also do this for the features that
went well.


>
> > Please let me know your thoughts. The goal is to get better at planning
> > and at delivering features as planned, as much as possible. Native
> > subdirectory mounts is in the same situation, and that was one I was
> > supposed to deliver.
> >
> > These are the questions I think we need to ask ourselves, IMO:
>
> Incident-based post-mortems require a timeline. However, while a timeline
> might be unnecessary here, the questions are perhaps too specific. Also,
> it would be good to set the expectation for the exercise - what would all
> the inputs lead to?
>

A timeline is a good idea, but I am not sure what would be a good time. I
think it is better to concentrate on getting the 3.9.0 release out, so maybe
in the last week of this month we can start this exercise in full flow. At
the moment we want to collect this information so that we acknowledge the
good things we did for the release and the things we need to avoid in future
releases. Like I was mentioning, the main goal, at least in my mind, is to
prevent these slips as much as possible in the future. At the moment the
roadmap is more like a backlog, or at least that is how it seems to me: we
keep pushing features to the next release based on whether we get time or
not. Instead it should be a proper roadmap where we are confident we will
deliver the features for the release.


>
> > 1) Did we have an approved design before we committed the feature
> > upstream for 3.9?
> > 2) Did we allocate time for execution of this feature upstream?
> > 3) Was the execution derailed by customer issues/important work in your
> > organization?
> > 4) Did developers focus on something that is not a priority, which could
> > have derailed the feature's delivery?
> > 5) Did others in the team suspect the developers were not focusing on
> > things that are a priority, but not communicate it?
> > 6) Were there any infra issues that delayed delivery of this feature
> > (regression failures etc.)?
> > 7) Were there any big delays in reviews of patches?
> >
> > Do let us know if you think we should ask more questions here.
> >
> > --
> > Aravinda & Pranith
>
>
>
> --
> sankarshan mukhopadhyay
> 
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Sankarshan Mukhopadhyay
On Wed, Sep 7, 2016 at 5:10 AM, Pranith Kumar Karampuri
 wrote:

>  Do you think it makes sense to do a post-mortem of features that didn't
> make it to 3.9.0? We have some features that missed deadlines twice,
> i.e. planned for 3.8.0 and didn't make it, then planned for 3.9.0 and
> didn't make it. So maybe we are adding features to the roadmap without
> thinking things through? Basically it leads to frustration among community
> members who are waiting for these components as they keep moving to the
> next release.

Doing a post-mortem to understand the pieces which went well (so that
we can continue doing them), which didn't go well (so that we can
learn from those) and which were impediments (so that we can address
the topics and remove them) is a useful exercise.

> Please let me know your thoughts. The goal is to get better at planning
> and at delivering features as planned, as much as possible. Native
> subdirectory mounts is in the same situation, and that was one I was
> supposed to deliver.
>
> These are the questions I think we need to ask ourselves, IMO:

Incident-based post-mortems require a timeline. However, while a timeline
might be unnecessary here, the questions are perhaps too specific. Also,
it would be good to set the expectation for the exercise - what would all
the inputs lead to?

> 1) Did we have an approved design before we committed the feature
> upstream for 3.9?
> 2) Did we allocate time for execution of this feature upstream?
> 3) Was the execution derailed by customer issues/important work in your
> organization?
> 4) Did developers focus on something that is not a priority, which could
> have derailed the feature's delivery?
> 5) Did others in the team suspect the developers were not focusing on
> things that are a priority, but not communicate it?
> 6) Were there any infra issues that delayed delivery of this feature
> (regression failures etc.)?
> 7) Were there any big delays in reviews of patches?
>
> Do let us know if you think we should ask more questions here.
>
> --
> Aravinda & Pranith



-- 
sankarshan mukhopadhyay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Pranith Kumar Karampuri
hi,
 Do you think it makes sense to do a post-mortem of features that didn't
make it to 3.9.0? We have some features that missed deadlines twice,
i.e. planned for 3.8.0 and didn't make it, then planned for 3.9.0 and
didn't make it. So maybe we are adding features to the roadmap without
thinking things through? Basically it leads to frustration among community
members who are waiting for these components as they keep moving to the
next release.

Please let me know your thoughts. The goal is to get better at planning
and at delivering features as planned, as much as possible. Native
subdirectory mounts is in the same situation, and that was one I was
supposed to deliver.

These are the questions I think we need to ask ourselves, IMO:
1) Did we have an approved design before we committed the feature upstream
for 3.9?
2) Did we allocate time for execution of this feature upstream?
3) Was the execution derailed by customer issues/important work in your
organization?
4) Did developers focus on something that is not a priority, which could
have derailed the feature's delivery?
5) Did others in the team suspect the developers were not focusing on
things that are a priority, but not communicate it?
6) Were there any infra issues that delayed delivery of this feature
(regression failures etc.)?
7) Were there any big delays in reviews of patches?

Do let us know if you think we should ask more questions here.

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] md-cache changes and impact on tiering

2016-09-06 Thread Dan Lambright


- Original Message -
> From: "Dan Lambright" 
> To: "Poornima Gurusiddaiah" 
> Cc: "Nithya Balachandran" , "Gluster Devel" 
> 
> Sent: Sunday, August 28, 2016 10:01:36 AM
> Subject: Re: md-cache changes and impact on tiering
> 
> 
> 
> - Original Message -
> > From: "Poornima Gurusiddaiah" 
> > To: "Dan Lambright" , "Nithya Balachandran"
> > 
> > Cc: "Gluster Devel" 
> > Sent: Tuesday, August 23, 2016 12:56:38 AM
> > Subject: md-cache changes and impact on tiering
> > 
> > Hi,
> > 
> > The basic patches for md-cache and integrating it with cache-invalidation
> > is
> > merged in master. You could try master build and enable the following
> > settings, to see if there is any impact on tiering performance at all:
> > 
> > # gluster volume set  performance.stat-prefetch on
> > # gluster volume set  features.cache-invalidation on
> > # gluster volume set  performance.cache-samba-metadata on
> > # gluster volume set  performance.md-cache-timeout 600
> > # gluster volume set  features.cache-invalidation-timeout 600
> 
> On the tests I run, this cut the number of LOOKUPs by about three orders of
> magnitude. Each saved lookup reduces a round trip over the network.
> 
> I'm running a "small file" performance test. It creates 16K 64-byte files in
> a seven-level directory tree. It then reads each file twice.
> 
> Configuration is HOT: 2 x 2 ramdisk COLD: 2 x (8 + 4) disk, network is
> 1Mb/s 9000 mtu. The number of lookups is a factor of the number of
> directories and subvolumes. On each I/O the file is re-opened and each
> directory is laboriously rechecked for existence/permission.
> 
> Without using md-cache, these lookups used to be further propagated across
> each subvolume by DHT to obtain the entire layout. So it would be something
> like order of 16K*7*26 round trips across the network.
> 
> The counts are all visible with gluster profile.

I'm going to have to retract the above comments. The optimization does not work 
well for me yet. 

If I follow the traces, something odd happens when the client sends a LOOKUP.
The server will send an invalidation from the upcall translator's lookup fop
callback. At that point any future LOOKUPs for that entry are passed right
through again to the server. This defeats the purpose of using md-cache.
Can you explain the reasoning behind that?
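
For anyone reproducing the measurement, the LOOKUP counts come from the
io-stats profile; roughly (the volume name below is a placeholder):

  gluster volume profile <volname> start
  # run the small-file workload on the client mount, then:
  gluster volume profile <volname> info | grep -w LOOKUP
  gluster volume profile <volname> stop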

> 
> 
> > 
> > Note: It has to be executed in the same order.
> > 
> > Tracker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1211863
> > Patches:
> > http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1211863
> > 
> > Thanks,
> > Poornima
> > 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Profiling GlusterFS FUSE client with Valgrind's Massif tool

2016-09-06 Thread Oleksandr Natalenko
Created BZ for it [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373630
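
For reference, the memcheck run mentioned below was presumably the same
invocation as the massif one quoted at the bottom of this mail, just with the
tool switched; a sketch:

===
valgrind --tool=memcheck --leak-check=full --trace-children=yes \
    /usr/sbin/glusterfs -N --volfile-server=server.example.com \
    --volfile-id=test /mnt/net/glusterfs/test
===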

On Tuesday, 6 September 2016, 23:32:51 EEST, Pranith Kumar Karampuri wrote:
> I included you on a thread on users, let us see if he can help you out.
> 
> On Mon, Aug 29, 2016 at 4:02 PM, Oleksandr Natalenko <
> 
> oleksa...@natalenko.name> wrote:
> > More info here.
> > 
> > Massif puts the following warning on volume unmount:
> > 
> > ===
> > valgrind: m_mallocfree.c:304 (get_bszB_as_is): Assertion 'bszB_lo ==
> > bszB_hi' failed.
> > valgrind: Heap block lo/hi size mismatch: lo = 1, hi = 0.
> > This is probably caused by your program erroneously writing past the
> > end of a heap block and corrupting heap metadata.  If you fix any
> > invalid writes reported by Memcheck, this assertion failure will
> > probably go away.  Please try that before reporting this as a bug.
> > ...
> > Thread 1: status = VgTs_Runnable
> > ==30590==at 0x4C29037: free (in /usr/lib64/valgrind/vgpreload_
> > massif-amd64-linux.so)
> > ==30590==by 0x67CE63B: __libc_freeres (in /usr/lib64/libc-2.17.so)
> > ==30590==by 0x4A246B4: _vgnU_freeres (in
> > /usr/lib64/valgrind/vgpreload_
> > core-amd64-linux.so)
> > ==30590==by 0x66A2E2A: __run_exit_handlers (in /usr/lib64/libc-2.17.so
> > )
> > ==30590==by 0x66A2EB4: exit (in /usr/lib64/libc-2.17.so)
> > ==30590==by 0x1117E9: cleanup_and_exit (glusterfsd.c:1308)
> > ==30590==by 0x669F66F: ??? (in /usr/lib64/libc-2.17.so)
> > ==30590==by 0x606EEF4: pthread_join (in /usr/lib64/libpthread-2.17.so)
> > ==30590==by 0x4EC2687: event_dispatch_epoll (event-epoll.c:762)
> > ==30590==by 0x10E876: main (glusterfsd.c:2370)
> > ...
> > ===
> > 
> > I rechecked mount/ls/unmount with memcheck tool as suggested and got the
> > following:
> > 
> > ===
> > ...
> > ==30315== Thread 8:
> > ==30315== Syscall param writev(vector[...]) points to uninitialised
> > byte(s)
> > ==30315==at 0x675FEA0: writev (in /usr/lib64/libc-2.17.so)
> > ==30315==by 0xE664795: send_fuse_iov (fuse-bridge.c:158)
> > ==30315==by 0xE6649B9: send_fuse_data (fuse-bridge.c:197)
> > ==30315==by 0xE666F7A: fuse_attr_cbk (fuse-bridge.c:753)
> > ==30315==by 0xE6671A6: fuse_root_lookup_cbk (fuse-bridge.c:783)
> > ==30315==by 0x14519937: io_stats_lookup_cbk (io-stats.c:1512)
> > ==30315==by 0x14300B3E: mdc_lookup_cbk (md-cache.c:867)
> > ==30315==by 0x13EE9226: qr_lookup_cbk (quick-read.c:446)
> > ==30315==by 0x13CD8B66: ioc_lookup_cbk (io-cache.c:260)
> > ==30315==by 0x1346405D: dht_revalidate_cbk (dht-common.c:985)
> > ==30315==by 0x1320EC60: afr_discover_done (afr-common.c:2316)
> > ==30315==by 0x1320EC60: afr_discover_cbk (afr-common.c:2361)
> > ==30315==by 0x12F9EE91: client3_3_lookup_cbk (client-rpc-fops.c:2981)
> > ==30315==  Address 0x170b238c is on thread 8's stack
> > ==30315==  in frame #3, created by fuse_attr_cbk (fuse-bridge.c:723)
> > ...
> > ==30315== Warning: invalid file descriptor -1 in syscall close()
> > ==30315== Thread 1:
> > ==30315== Invalid free() / delete / delete[] / realloc()
> > ==30315==at 0x4C2AD17: free (in /usr/lib64/valgrind/vgpreload_
> > memcheck-amd64-linux.so)
> > ==30315==by 0x67D663B: __libc_freeres (in /usr/lib64/libc-2.17.so)
> > ==30315==by 0x4A246B4: _vgnU_freeres (in
> > /usr/lib64/valgrind/vgpreload_
> > core-amd64-linux.so)
> > ==30315==by 0x66AAE2A: __run_exit_handlers (in /usr/lib64/libc-2.17.so
> > )
> > ==30315==by 0x66AAEB4: exit (in /usr/lib64/libc-2.17.so)
> > ==30315==by 0x1117E9: cleanup_and_exit (glusterfsd.c:1308)
> > ==30315==by 0x66A766F: ??? (in /usr/lib64/libc-2.17.so)
> > ==30315==by 0x6076EF4: pthread_join (in /usr/lib64/libpthread-2.17.so)
> > ==30315==by 0x4ECA687: event_dispatch_epoll (event-epoll.c:762)
> > ==30315==by 0x10E876: main (glusterfsd.c:2370)
> > ==30315==  Address 0x6a2d3d0 is 0 bytes inside data symbol
> > "noai6ai_cached"
> > ===
> > 
> > It seems Massif crashes (?) because of invalid memory access in glusterfs
> > process cleanup stage.
> > 
> > Pranith? Nithya?
> > 
> > 29.08.2016 13:14, Oleksandr Natalenko wrote:
> >> ===
> >> valgrind --tool=massif --trace-children=yes /usr/sbin/glusterfs -N
> >> --volfile-server=server.example.com --volfile-id=test
> >> /mnt/net/glusterfs/test
> >> ===
> > 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for gluster-swift for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
On Tue, Sep 6, 2016 at 11:23 AM, Prashanth Pai  wrote:

>
>
> - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: tdasi...@redhat.com, "Prashanth Pai" 
> > Cc: "Gluster Devel" 
> > Sent: Saturday, 3 September, 2016 12:58:41 AM
> > Subject: Checklist for gluster-swift for upstream release
> >
> > hi,
> > Did you get a chance to decide on the gluster-swift integration
> > tests that need to be run before doing an upstream gluster release? Could
> > you let me know who will be providing the list?
>
> The tests (unit tests and functional tests) can be run before doing an
> upstream release. These tests reside in the gluster-swift repo.
>
> I can run those tests (manually as of now) whenever required.
>

Do you think long term it makes sense to add it as part of a job, so that
it is simply a matter of launching this job before release?


>
> >
> > I can update it at https://public.pad.fsfe.org/p/
> > gluster-component-release-checklist
> > 
> >
> > --
> > Pranith
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for snapshot component for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
Thanks Avra, added the list to etherpad.
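
For reference, the automated part of that list can be run straight from a
glusterfs source tree; a rough sketch (using prove(1) as the runner is an
assumption, the in-tree run-tests.sh wrapper works as well):

  cd glusterfs                      # source tree, as root on a test machine
  prove -v tests/basic/volume-snapshot.t \
           tests/basic/volume-snapshot-clone.t \
           tests/basic/volume-snapshot-xml.t
  prove -v tests/bugs/snapshot/*.t  # everything under the snapshot bug tests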

On Tue, Sep 6, 2016 at 9:30 AM, Avra Sengupta  wrote:

> Hi Pranith,
>
> The following set of automated and manual tests need to pass before doing
> a release for snapshot component:
> 1. The entire snapshot regression suite present in the source repository,
> which as of now consist of:
>
> a. ./basic/volume-snapshot.t
> b. ./basic/volume-snapshot-clone.t
> c. ./basic/volume-snapshot-xml.t
> d. All tests present in ./bugs/snapshot
>
> 2. Manual test of using snapshot scheduler.
> 3. Till the eventing test framework is integrated with the regression
> suite, manual test of all 28 snapshot events.
>
> Regards,
> Avra
>
>
> On 09/03/2016 12:26 AM, Pranith Kumar Karampuri wrote:
>
> hi,
> Did you get a chance to decide on the tests that need to be done
> before doing a release for snapshot component? Could you let me know who
> will be providing the list?
>
> I can update it at https://public.pad.fsfe.org/p/
> gluster-component-release-checklist
>
> --
> Aravinda & Pranith
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for gfapi for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
On Mon, Sep 5, 2016 at 8:54 PM, Niels de Vos  wrote:

> On Sat, Sep 03, 2016 at 12:10:41AM +0530, Pranith Kumar Karampuri wrote:
> > hi,
> > I think most of this testing will be covered in nfsv4, smb
> testing.
> > But I could be wrong. Could you let me know who will be providing the
> > list if you think there are more tests that need to be run?
> >
> > I can update it at https://public.pad.fsfe.org/p/
> > gluster-component-release-checklist
>
> I've added this to the etherpad:
>
> > test known applications, run their test-suites:
> > glusterfs-coreutils (has test suite in repo)
> > libgfapi-python (has test suite in repo)
> > nfs-ganesha (pynfs and cthon04 tests)
> > Samba (test?)
> > QEMU (run qemu binary and qemu-img with gluster:// URL,
> > possibly/somehow run the Avocado suite)
>

I think we should also add add-brick/replace-brick tests with gfapi. Thoughts?
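
Something along these lines, run while a libgfapi client (for example
qemu-img against a gluster:// URL) keeps doing I/O, would cover it; a rough
sketch with placeholder host/brick names:

  # while the gfapi client keeps reading/writing:
  gluster volume add-brick <volname> <newhost>:/bricks/newbrick
  gluster volume rebalance <volname> start
  gluster volume replace-brick <volname> <oldhost>:/bricks/b1 \
      <newhost>:/bricks/b2 commit force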


>
> Niels
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for GlusterFS Hadoop HCFS plugin for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
Thanks for this information. We will follow up on this one.

On Sun, Sep 4, 2016 at 3:30 AM, Jay Vyas  wrote:

> Hi pranith;
>
> The Bigtop smoke tests are a good way to go. You can run them against
> Pig, Hive and so on.
>
> In general running a simple mapreduce job like wordcount is a good first
> pass start.
>
> Many other communities like orangefs and so on run Hadoop tests on
> alternative file systems, you can collaborate with them.
>
> There is an hcfs wiki page you can contribute to on Hadoop.apache.org
>  where we detail Hadoop interoperability
>
>
>
> On Sep 2, 2016, at 3:33 PM, Pranith Kumar Karampuri 
> wrote:
>
> hi Jay,
>   Are there any tests that are done before releasing glusterfs
> upstream to make sure the plugin is stable? Could you let us know the
> process, so that we can add it to https://public.pad.fsfe.org/p/
> gluster-component-release-checklist
>
> --
> Pranith
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Mistake in regression-test-burn-in

2016-09-06 Thread Vijay Bellur
On Tue, Sep 6, 2016 at 7:07 AM, Nigel Babu  wrote:
> Hello folks,
>
> I made a mistake in configuring regression-test-burn-in jobs. They've been
> running from the same revision ever since I converted them to JJB. I've fixed
> this up today, so it shouldn't happen again. I'm adding a post-mortem to bug
> 1373454 if you're curious about what went wrong.
>

Thank you, Nigel! This clears up a mystery that puzzled me for the
past several days :-).

-Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/06/2016 08:03 AM, Emmanuel Dreyfus wrote:

On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote:

Mac OS X doesn't build at the present time because its sed utility (used in
the xdrgen/rpcgen part of the build) doesn't support the (linux compatible)
'-r' command line option. (NetBSD and FreeBSD do.)

(There's an easy fix)


Easy fix, replace sed -r by $SED_R and
SED_R="sed -r" on Linux vs SED_R="sed -E" on BSDs, including OSX.



Even easier is don't use an extended regex, then you won't need `sed -r` 
or `sed -E`.


See the regex I used in 
http://review.gluster.org/#/c/14085/14/rpc/xdr/src/Makefile.am  (line 48)


--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Emmanuel Dreyfus
On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote:
> Mac OS X doesn't build at the present time because its sed utility (used in
> the xdrgen/rpcgen part of the build) doesn't support the (linux compatible)
> '-r' command line option. (NetBSD and FreeBSD do.)
> 
> (There's an easy fix)

Easy fix, replace sed -r by $SED_R and
SED_R="sed -r" on Linux vs SED_R="sed -E" on BSDs, including OSX. 
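
A minimal sketch of that substitution for the build scripts (variable name as
suggested above; the detection itself is an assumption):

  # pick whichever extended-regex flag the local sed understands
  if echo x | sed -r 's/x/y/' >/dev/null 2>&1; then
      SED_R='sed -r'    # GNU sed on Linux
  else
      SED_R='sed -E'    # BSD sed on NetBSD/FreeBSD/OS X
  fi
  # the Makefile rule then uses $(SED_R) instead of a hard-coded 'sed -r'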

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Triage meeting

2016-09-06 Thread Ankit Raj
Hi Gluster team,

The weekly Gluster bug triage is about to take place in 26 min.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor



Regards,
Ankit Raj
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/02/2016 03:49 PM, Pranith Kumar Karampuri wrote:

hi,
 As per MAINTAINERS file this port doesn't have maintainer. If you
want to take up the responsibility of maintaining the port please let us
know how you want to go about doing it and what should be the checklist
of things that should be done before every release upstream. It is
extremely healthy to have more than one maintainer for the port. Even if
multiple people already responded and you still want to be part of it,
don't feel shy to respond. More the merrier.


Mac OS X doesn't build at the present time because its sed utility (used 
in the xdrgen/rpcgen part of the build) doesn't support the (linux 
compatible) '-r' command line option. (NetBSD and FreeBSD do.)


(There's an easy fix)

--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Checklist for glusterfs packaging/build for upstream release

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/02/2016 03:40 PM, Pranith Kumar Karampuri wrote:

hi,
  In the past we have had issues where some of the functionality
didn't work on Debian/Ubuntu because the 'glfsheal' binary was not packaged.


It was? That seems strange to me because our Debian packaging is less 
"selective" than our RPM.


Less selective in that it wildcards pretty much everything that gets 
installed, unlike the Fedora/RHEL/CentOS packaging.


But perhaps I'm just not remembering this particular incident.


What do you guys as packaging/build maintainers on different distros
suggest that we do to make sure we catch such mistakes before the
releases are made?


Short of a trial build of Debian packages before the release, coupled 
with some kind of audit of what's in them, and compare that to what's in 
the RPMs?


And for the record, I'm trying not to be the packaging maintainer for so 
many different distributions.




Please suggest them here so that we can add them at
https://public.pad.fsfe.org/p/gluster-component-release-checklist after
the discussion is complete


And BTW, at some point we should compare our current Debian/Ubuntu 
package files with Patrick's and get them back in sync again if they 
have diverged.


--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Mistake in regression-test-burn-in

2016-09-06 Thread Nigel Babu
Hello folks,

I made a mistake in configuring regression-test-burn-in jobs. They've been
running from the same revision ever since I converted them to JJB. I've fixed
this up today, so it shouldn't happen again. I'm adding a post-mortem to bug
1373454 if you're curious about what went wrong.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] GlusterFs upstream bugzilla components Fine graining

2016-09-06 Thread Kaushal M
On Tue, Sep 6, 2016 at 2:30 PM, Atin Mukherjee  wrote:
>
>
> On Tue, Sep 6, 2016 at 12:42 PM, Muthu Vigneshwaran 
> wrote:
>>
>> Hi,
>>
>>   Actually the current component list in Bugzilla appears to be just an
>> alphabetical order of all components and sub-components, as a flattened
>> list.
>>
>> Planning to better organize the component list. So the bugs can be
>> reported on the components( mostly matching different git repositories) and
>> sub-components( mostly matching different components in the git repository,
>> or functionality ) in the list respectively which will help in easy access
>> for the reporter of the bug and as well as the assignee.
>>
>> Along with these changes we will have only major version number(3.6,
>> 3.7..) (as mentioned in an earlier email from Kaleb - check that :) ) unlike
>> previously we had major version with minor version. Reporter has to mention
>> the minor version in the description (the request for the exact version is
>> already part of the template)
>>
>> In order to do so we require the maintainers to list their top-level
>> component and sub-components to be listed along with the version for
>> each.You should include the version for glusterfs (3.6,3.7,3.8,3.9,mainline
>> ) and the sub-components as far as you have them ready. Also give examples
>> of other components and their versions (gdeploy etc). It makes a huge
>> difference for people to amend something that has bits missing; starting
>> from scratch without examples is difficult ;-)
>
>
> This is the tree structure for cli, glusterd & glusterd2 sub components.
> Although glusterd2 is currently maintained as a separate github project
> under gluster, going forward the same would be integrated in the main repo
> and hence there is no point to have this maintained as a different component
> in bugzilla IMHO. @Kaushal - let us know if you think otherwise.

Are you saying that we don't need to have a glusterd2 component in bugzilla?
When glusterd2 moves into the glusterfs tree, I'd like it to just be
called glusterd.
So yeah there isn't a need for a glusterd2 component as yet.
But let's see what we decide to call it when we move it.

We're going to continue using Github issues for glusterd2 for now.

>
> |
> |
> - glusterfs
> | |
> | |- cli
> | |- glusterd
> | |- glusterd2
> |
>
>>
>> Thanks and regards,
>> Muthu Vigneshwaran and Niels
>>
>>
>>
>
>
>
>
> --
>
> --Atin
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Bugs with incorrect status

2016-09-06 Thread Niels de Vos
1349723 (mainline) MODIFIED: Added libraries to get server_brick dictionaries
  [master] I904612 distaf: adding libraries to get server_brick dictionaries 
(MERGED)
  ** akhak...@redhat.com: Bug 1349723 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1342298 (mainline) MODIFIED: reading file with size less than 512 fails with 
odirect read
  [master] I097418 features/shard: Don't modify readv size (MERGED)
  ** b...@gluster.org: Bug 1342298 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1362397 (mainline) MODIFIED: Mem leak in meta_default_readv in meta xlators
  [master] Ieb4132 meta: fix memory leak in meta xlators (MERGED)
  ** b...@gluster.org: Bug 1362397 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1279747 (mainline) MODIFIED: spec: add CFLAGS=-DUSE_INSECURE_OPENSSL to 
configure command-line for RHEL-5 only
  ** mchan...@redhat.com: No change posted, but bug 1279747 is in MODIFIED **

1339181 (mainline) MODIFIED: Full heal of a sub-directory does not clean up 
name-indices when granular-entry-heal is enabled.
  [master] Ief71cc cluster/afr: Attempt name-index purge even on full-heal of 
directory (MERGED)
  ** kdhan...@redhat.com: Bug 1339181 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1332073 (mainline) MODIFIED: EINVAL errors while aggregating the directory size 
by quotad
  [master] Iaa quotad: fix potential buffer overflows (NEW)
  [master] If8a267 quotad: fix potential buffer overflows (NEW)
  [master] If8a267 quotad: fix potential buffer overflows (MERGED)
  ** mselv...@redhat.com: Bug 1332073 should be in POST, change If8a267 under 
review **

1336612 (mainline) MODIFIED: one of vm goes to paused state when network goes 
down and comes up back
  [master] Ife1ce4 cluster/afr: Fix warning about unused variable (MERGED)
  [master] I5c50b6 cluster/afr: Refresh inode for inode-write fops in need 
(MERGED)
  [master] I571d0c cluster/afr: Refresh inode for inode-write fops in need 
(ABANDONED)
  [master] If6479e cluster/afr: Refresh inode for inode-write fops in need 
(ABANDONED)
  [master] Iabd91c cluster/afr: If possible give errno received from lower 
xlators (MERGED)
  ** pkara...@redhat.com: Bug 1336612 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1343286 (mainline) MODIFIED: enabling glusternfs with nfs.rpc-auth-allow to 
many hosts failed
  [master] Ibbabad nfs: build exportlist with multiple groupnodes (MERGED)
  [master] I9d04ea xdr/nfs: free complete groupnode structure (MERGED)
  ** bku...@redhat.com: Bug 1343286 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1349284 (mainline) MODIFIED: [tiering]: Files of size greater than that of high 
watermark level should not be promoted
  [master] Ice0457 cluster/tier: dont promote if estimated block consumption > 
hi watermark (MERGED)
  ** mchan...@redhat.com: Bug 1349284 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1358936 (mainline) MODIFIED: coverity: iobuf_get_page_aligned calling 
iobuf_get2 should check the return pointer
  [master] I3aa5b0 core: coverity, NULL potinter check (MERGED)
  ** johnzzpcrys...@gmail.com: Bug 1358936 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1202717 (mainline) MODIFIED: quota: re-factor quota cli and glusterd changes 
and remove code duplication
  ** mselv...@redhat.com: No change posted, but bug 1202717 is in MODIFIED **

1334285 (mainline) MODIFIED: Under high read load, sometimes the message "XDR 
decoding failed" appears in the logs and read fails
  [master] I4db1f4 socket: Fix incorrect handling of partial reads (MERGED)
  ** xhernan...@datalab.es: Bug 1334285 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1153964 (mainline) MODIFIED: quota: rename of "dir" fails in case of quota 
space availability is around 1GB
  [master] Iaad907 quota: No need for quota-limit check if rename is under same 
parent (ABANDONED)
  [master] I2c8140 quota: For a link operation, do quota_check_limit only till 
the common ancestor of src and dst file (MERGED)
  [master] Ia1e536 quota: For a rename operation, do quota_check_limit only 
till the common ancestor of src and dst file (MERGED)
  ** mselv...@redhat.com: Bug 1153964 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1340488 (mainline) MODIFIED: copy-export-ganesha.sh does not have a correct 
shebang
  [master] I22061a ganesha: fix the shebang for the copy-export script (MERGED)
  ** nde...@redhat.com: Bug 1340488 should be ON_QA, use v3.9rc0 for 
verification of the fix **

1008839 (mainline) POST: Certain blocked entry lock info not retained after the 
lock is granted
  [master] Ie37837 features/locks : Certain blocked entry lock info not 
retained after the lock is granted (ABANDONED)
  ** ata...@redhat.com: Bug 1008839 is in POST, but all changes have been 
abandoned **

1370862 (mainline) MODIFIED: dht: fix the broken build
  [master] I81e1e6 dht: define GF_IPC_TARGET_UPCALL (MERGED)
  ** b...@gluster.org: 

Re: [Gluster-devel] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri

CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply.

Thanks,
Soumya

On 09/06/2016 12:19 PM, Soumya Koduri wrote:


On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote:

hi,
Did you get a chance to decide on the nfs-ganesha integration
tests that need to be run before doing an upstream gluster release?
Could you let me know who will be providing the list?



I have added few basic test cases for NFS-Ganesha FSAL and Upcall
component in the etherpad shared. Please check and update the tests
which you recommend.

Thanks,
Soumya


I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist


--
Aravinda & Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] GlusterFs upstream bugzilla components Fine graining

2016-09-06 Thread Manikandan Selvaganesh
Hi,

This is the tree structure for quota and marker(3.6, 3.7, 3.8, 3.9 and
mainline).

|
|
- glusterfs
| |
| |- quota
| |- marker
|

On Tue, Sep 6, 2016 at 2:30 PM, Atin Mukherjee  wrote:

>
>
> On Tue, Sep 6, 2016 at 12:42 PM, Muthu Vigneshwaran 
> wrote:
>
>> Hi,
>>
>>   Actually the current component list in Bugzilla appears to be just an
>> alphabetical order of all components and sub-components, as a flattened
>> list.
>>
>> Planning to better organize the component list. So the bugs can be
>> reported on the components( mostly matching different git repositories) and
>> sub-components( mostly matching different components in the git repository,
>> or functionality ) in the list respectively which will help in easy access
>> for the reporter of the bug and as well as the assignee.
>>
>> Along with these changes we will have only major version number(3.6,
>> 3.7..) (as mentioned in an earlier email from Kaleb - check that :) )
>> unlike previously we had major version with minor version. Reporter has to
>> mention the minor version in the description (the request for the exact
>> version is already part of the template)
>>
>> In order to do so we require the maintainers to list their top-level
>> component and sub-components to be listed along with the version for
>> each.You should include the version for glusterfs (3.6,3.7,3.8,3.9,mainline
>> ) and the sub-components as far as you have them ready. Also give examples
>> of other components and their versions (gdeploy etc). It makes a huge
>> difference for people to amend something that has bits missing; starting
>> from scratch without examples is difficult ;-)
>>
>
> This is the tree structure for cli, glusterd & glusterd2 sub components.
> Although glusterd2 is currently maintained as a separate github project
> under gluster, going forward the same would be integrated in the main repo
> and hence there is no point to have this maintained as a different
> component in bugzilla IMHO. @Kaushal - let us know if you think otherwise.
>
> |
> |
> - glusterfs
> | |
> | |- cli
> | |- glusterd
> | |- glusterd2
> |
>
>
>> Thanks and regards,
>> Muthu Vigneshwaran and Niels
>>
>>
>>
>>
>
>
>
> --
>
> --Atin
>
>
>


-- 
Regards,
Manikandan Selvaganesh.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-06 Thread Atin Mukherjee
On Tue, Sep 6, 2016 at 12:42 PM, Muthu Vigneshwaran 
wrote:

> Hi,
>
>   Actually the current component list in Bugzilla appears to be just an
> alphabetical order of all components and sub-components, as a flattened
> list.
>
> Planning to better organize the component list. So the bugs can be
> reported on the components( mostly matching different git repositories) and
> sub-components( mostly matching different components in the git repository,
> or functionality ) in the list respectively which will help in easy access
> for the reporter of the bug and as well as the assignee.
>
> Along with these changes we will have only major version number(3.6,
> 3.7..) (as mentioned in an earlier email from Kaleb - check that :) )
> unlike previously we had major version with minor version. Reporter has to
> mention the minor version in the description (the request for the exact
> version is already part of the template)
>
> In order to do so we require the maintainers to list their top-level
> component and sub-components to be listed along with the version for
> each.You should include the version for glusterfs (3.6,3.7,3.8,3.9,mainline
> ) and the sub-components as far as you have them ready. Also give examples
> of other components and their versions (gdeploy etc). It makes a huge
> difference for people to amend something that has bits missing; starting
> from scratch without examples is difficult ;-)
>

This is the tree structure for cli, glusterd & glusterd2 sub components.
Although glusterd2 is currently maintained as a separate github project
under gluster, going forward the same would be integrated in the main repo
and hence there is no point to have this maintained as a different
component in bugzilla IMHO. @Kaushal - let us know if you think otherwise.

|
|
- glusterfs
| |
| |- cli
| |- glusterd
| |- glusterd2
|


> Thanks and regards,
> Muthu Vigneshwaran and Niels
>
>
>
>



-- 

--Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Niels de Vos
On Tue, Sep 06, 2016 at 09:01:18AM +0530, Bharata B Rao wrote:
> On Mon, Sep 05, 2016 at 05:57:55PM +0200, Niels de Vos wrote:
> > On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
> > > hi Bharata,
> > >What tests are run before the release of glusterfs so that we make
> > > sure this integration is stable? Could you add that information here so
> > > that I can update it at
> > > https://public.pad.fsfe.org/p/gluster-component-release-checklist
> > 
> > I normally run some qemu-img commands to create/copy/... VM-images. When
> > I have sufficient time, I start a VM based on a gluster:// URL on the
> > commandline (through libvirt XML files), similar to this:
> >   http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html
> 
> I did reply with the testcases I used to run normally. Guess the reply
> didn't make it to the list.

Thanks, now it did. One of our admins accepted the challenge ;-)

> > In case Bharata is not actively working (or interested) in QEMU and its
> > Gluster driver, Prasanna and I should probably replace or get added in
> > the MAINTAINERS file, both of us get requests from the QEMU maintainers
> > directly.
> 
> Makes sense as Prasanna is actively contributing to the Gluster driver now.

Do you still want to be listed, or shall we move you to the 'thank you'
section?

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Bharata B Rao
On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
> hi Bharata,
>What tests are run before the release of glusterfs so that we make
> sure this integration is stable? Could you add that information here so
> that I can update it at
> https://public.pad.fsfe.org/p/gluster-component-release-checklist

Not sure how to edit that, but this is what you should ensure minimally...

- Create a VM image on gluster backend
  qemu-img create -f qcow2 gluster://...
- Install a distro on the created image using qemu
  qemu ... gluster://...
- Maybe run a few IO benchmarks/stress tests like dbench, fio etc. on the guest
  disk which is backed by gluster.
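
Expanded into concrete commands, that minimal checklist looks roughly like
this (host, volume and image names are placeholders):

  qemu-img create -f qcow2 gluster://server1/testvol/distro.qcow2 20G
  qemu-img info gluster://server1/testvol/distro.qcow2
  qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=gluster://server1/testvol/distro.qcow2,if=virtio \
      -cdrom distro-install.iso -boot d
  # once the guest is installed and booted, run dbench/fio against the
  # virtio disk that is backed by the gluster volume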

Regards,
Bharata.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Bharata B Rao
On Mon, Sep 05, 2016 at 05:57:55PM +0200, Niels de Vos wrote:
> On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
> > hi Bharata,
> >What tests are run before the release of glusterfs so that we make
> > sure this integration is stable? Could you add that information here so
> > that I can update it at
> > https://public.pad.fsfe.org/p/gluster-component-release-checklist
> 
> I normally run some qemu-img commands to create/copy/... VM-images. When
> I have sufficient time, I start a VM based on a gluster:// URL on the
> commandline (through libvirt XML files), similar to this:
>   http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html

I did reply with the testcases I used to run normally. Guess the reply
didn't make it to the list.
 
> In case Bharata is not actively working (or interested) in QEMU and its
> Gluster driver, Prasanna and I should probably replace or get added in
> the MAINTAINERS file, both of us get requests from the QEMU maintainers
> directly.

Makes sense as Prasanna is actively contributing to the Gluster driver now.

Regards,
Bharata.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Niels de Vos
On Tue, Sep 06, 2016 at 12:41:31PM +0530, Prasanna Kalever wrote:
> On Mon, Sep 5, 2016 at 9:27 PM, Niels de Vos  wrote:
> > On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
> >> hi Bharata,
> >>What tests are run before the release of glusterfs so that we make
> >> sure this integration is stable? Could you add that information here so
> >> that I can update it at
> >> https://public.pad.fsfe.org/p/gluster-component-release-checklist
> >
> > I normally run some qemu-img commands to create/copy/... VM-images. When
> > I have sufficient time, I start a VM based on a gluster:// URL on the
> > commandline (through libvirt XML files), similar to this:
> >   http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html
> 
> Certainly this is a good way of testing, but unfortunately it is not enough.

Yes, I only recently noticed that different image formats use different
functions in the storage drivers. I'm planning to run the whole Avocado
suite in the CentOS CI with the nightly builds.

> With the recent changes to support multiple volfile servers in qemu, I feel
> we need more tests in that area (e.g. switching volfile servers both at
> initial client selection time and at run-time)?
> 
> Niels,
> 
> Why don't we add some testcases/scripts for this?
> I shall create a repository for this in my free time and we can keep
> adding the test cases there, to be run once per release. (Let me
> know if you are in favor of adding them to the gluster repo itself.)

Yes, that would be good. Either Glusto test-cases or Avocado should do.
We probably should have some coverage in Glusto anyway, and it lends
itself better for the multi-host testing than Avocado, I guess.

> And I also feel we should take responsibility for some libvirt
> compatibility checks; testing with virsh commands would be super cool.

Indeed! I only test with libvirt because it is easier than writing a
QEMU command by hand ;-) Getting it included in Glusto should be our
aim. We can contribute Gluster testing to the libvirt tests too, that
makes sure the integration keeps working from both ways.

> > In case Bharata is not actively working (or interested) in QEMU and its
> > Gluster driver, Prasanna and I should probably replace or get added in
> > the MAINTAINERS file, both of us get requests from the QEMU maintainers
> > directly.
> 
> I am happy to take this responsibility.

The final responsibility lies with Jeff Cody and other QEMU maintainers,
it'll be our task to make sure new features in Gluster get exposed
through libgfapi and used by QEMU/gluster. We should also watch out for
new features added to the block-layer in QEMU, and consider extending
Gluster to provide support for them.

I'll send a patch for the MAINTAINERS file later.

Thanks,
Niels

> Thanks,
> --
> Prasanna
> 
> >
> > Niels
> >


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-06 Thread Ravishankar N

On 09/06/2016 12:27 PM, Keiviw wrote:
Could you please tell me your glusterfs version and the mount command
that you have used? My GlusterFS version is 3.3.0; different versions
may yield different results.


I tried it on the master branch, on Fedora 22 virtual machines (kernel 
version: 4.1.6-200.fc22.x86_64 ). By the way 3.3 is a rather old 
version, you might want to use the latest 3.8.x release.
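
For anyone repeating the caching experiment described further down the
thread, a rough sketch of the steps (volume and mount point names are
placeholders):

  # disable the gluster performance xlators so only the kernel cache matters
  gluster volume set testvol performance.stat-prefetch off
  gluster volume set testvol performance.quick-read off
  gluster volume set testvol performance.io-cache off
  gluster volume set testvol performance.read-ahead off

  # mount on a non-brick node with long attribute/entry timeouts
  mount -t glusterfs -o attribute-timeout=3600,entry-timeout=3600 \
      server:/testvol /mnt/glusterfs

  free -m                                   # baseline
  find /mnt/glusterfs | xargs stat > /dev/null
  free -m                                   # dentry/inode cache growth shows here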

At 2016-09-06 12:35:19, "Ravishankar N"  wrote:

That is strange. I tried the experiment on a volume with a million
files. The client node's memory usage did grow, as I observed from
the output of free(1) http://paste.fedoraproject.org/422551/ when
I did a `ls`.
-Ravi

On 09/02/2016 07:31 AM, Keiviw wrote:

Exactly, I mounted the volume on a no-brick node (nodeB), and
nodeA was the server. I have set different timeouts, but when I
executed "ls /mnt/glusterfs" (about 3 million small files, in other
words, about 3 million dentries), the result was the same:
memory usage on nodeB didn't change at all while nodeA's memory
usage grew by about 4GB!

Sent from NetEase Mail Master
On 09/02/2016 09:45, Ravishankar N
 wrote:

On 09/02/2016 05:42 AM, Keiviw wrote:

Even if I set the attribute-timeout and entry-timeout to
3600s (1h), on nodeB it didn't cache any metadata,
because the memory usage didn't change. So I was confused
about why the client did not cache dentries and inodes.


If you only want to test fuse's caching, I would try mounting
the volume on a separate machine (not on the brick node
itself), disable all gluster performance xlators, do a
find.|xargs stat on the mount 2 times in succession and see
what free(1) reports the 1st and 2nd time. You could do this
experiment with various attr/entry timeout values. Make sure
your volume has a lot of small files.
-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted by FUSE)
didn't cache metadata like dentries and inodes. I have
installed GlusterFS 3.6.0 on nodeA and nodeB; brick1 and
brick2 were on nodeA, and on nodeB I mounted the volume to
/mnt/glusterfs by FUSE. In my test, I executed 'ls
/mnt/glusterfs' on nodeB, and found that memory usage didn't
change at all. Here are my questions:
1. In the fuse kernel module, the author set some attributes
to control the timeout for dentries and inodes; in other
words, the fuse kernel supports a metadata cache. But in my
test, dentries and inodes were not cached. WHY?
2. Are there some options when mounting GlusterFS locally to
enable the metadata cache in the fuse kernel?



You can tweak the attribute-timeout and entry-timeout
seconds while mounting the volume. Default is 1 second
for both.  `man mount.glusterfs` lists various mount
options.
-Ravi




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Prasanna Kalever
On Mon, Sep 5, 2016 at 9:27 PM, Niels de Vos  wrote:
> On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
>> hi Bharata,
>>What tests are run before the release of glusterfs so that we make
>> sure this integration is stable? Could you add that information here so
>> that I can update it at
>> https://public.pad.fsfe.org/p/gluster-component-release-checklist
>
> I normally run some qemu-img commands to create/copy/... VM-images. When
> I have sufficient time, I start a VM based on a gluster:// URL on the
> commandline (through libvirt XML files), similar to this:
>   http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html

Certainly this is a good way of testing, but unfortunately it is not enough.

With the recent changes to support multiple volfile servers in qemu, I feel
we need more tests in that area (e.g. switching volfile servers both at
initial client selection time and at run-time)?

Niels,

Why don't we add some testcases/scripts for this?
I shall create a repository for this in my free time and we can keep
adding the test cases there, to be run once per release. (Let me
know if you are in favor of adding them to the gluster repo itself.)

And I also feel we should take responsibility for some libvirt
compatibility checks; testing with virsh commands would be super cool.
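
A minimal libvirt round-trip could be scripted too; a rough sketch (the
domain XML file name is a placeholder, with its <disk> element pointing at a
gluster-backed image):

  virsh define vm-gluster.xml      # domain whose disk uses the gluster protocol
  virsh start vm-gluster
  virsh domblklist vm-gluster      # confirm the gluster-backed disk is attached
  virsh shutdown vm-gluster
  virsh undefine vm-gluster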

>
> In case Bharata is not actively working (or interested) in QEMU and its
> Gluster driver, Prasanna and I should probably replace or get added in
> the MAINTAINERS file, both of us get requests from the QEMU maintainers
> directly.

I am happy to take this responsibility.

Thanks,
--
Prasanna

>
> Niels
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gerrit Access Control

2016-09-06 Thread Nigel Babu
On Thu, Sep 01, 2016 at 12:43:06PM +0530, Nigel Babu wrote:
> > > Just need a clarification. Does a "commit in the last 90 days" mean
> > > a maintainer merging a patch sent by someone else, or the maintainer
> > > sending a patch to be merged?
> >
>
> Your email needs to either be in Reviewed-By or Author in git log. So you
> either need to send patches or review patches. Ideally, I'm looking for
> activity on Gerrit and this is the easiest way to figure that out. Yes, I'm
> checking across all active branches.
>
> As an additional bonus, this will also give us a list of people who should be
> on the maintainers team, but aren't.
>
> > Interesting question. I was wondering about something similar as well.
> > What about commits/permissions for the different repositories we host on
> > Gerrit? Does each repository has its own maintainers, or is it one group
> > of maintainers that has merge permissions for all repos?
> >
>
> Each repo on Gerrit seems to mostly have its own permissions. That's
> a sensible way to go about it. Some of them are unused; a clean-up is
> coming along, but that's later.

I've answered everyone's concerns on this thread. If nobody is opposed to the
idea, shall I go ahead with this?
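
For the record, the activity check boils down to something like this (the
address and window below are placeholders):

  ADDR='someone@example.com'
  git log --all --since='90 days ago' \
      | grep -iE '^(Author:| +Reviewed-by:)' \
      | grep -ic "$ADDR"     # non-zero means active as author or reviewer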


--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-06 Thread Keiviw
Could you please tell me your glusterfs version and the mount command that you
have used? My GlusterFS version is 3.3.0; different versions may yield
different results.






At 2016-09-06 12:35:19, "Ravishankar N"  wrote:

That is strange. I tried the experiment on a volume with a million files. The 
client node's memory usage did grow, as I observed from the output of free(1)  
http://paste.fedoraproject.org/422551/ when I did a `ls`.
-Ravi
 
On 09/02/2016 07:31 AM, Keiviw wrote:

Exactly, I mounted the volume on a no-brick node (nodeB), and nodeA was the
server. I have set different timeouts, but when I executed "ls
/mnt/glusterfs" (about 3 million small files, in other words, about 3 million
dentries), the result was the same: memory usage on nodeB didn't change at all
while nodeA's memory usage grew by about 4GB!


Sent from NetEase Mail Master
On 09/02/2016 09:45, Ravishankar N wrote:
On 09/02/2016 05:42 AM, Keiviw wrote:

Even if I set the attribute-timeout and entry-timeout to 3600s (1h), on
nodeB it didn't cache any metadata, because the memory usage didn't change. So
I was confused about why the client did not cache dentries and inodes.


If you only want to test fuse's caching, I would try mounting the volume on a 
separate machine (not on the brick node itself), disable all gluster 
performance xlators, do a find.|xargs stat on the mount 2 times in succession 
and see what free(1) reports the 1st and 2nd time. You could do this experiment 
with various attr/entry timeout values. Make sure your volume has a lot of 
small files.
-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted by FUSE) didn't cache metadata
like dentries and inodes. I have installed GlusterFS 3.6.0 on nodeA and nodeB;
brick1 and brick2 were on nodeA, and on nodeB I mounted the volume to
/mnt/glusterfs by FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB,
and found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel module, the author set some attributes to control the
timeout for dentries and inodes; in other words, the fuse kernel supports a
metadata cache. But in my test, dentries and inodes were not cached. WHY?
2. Are there some options when mounting GlusterFS locally to enable the
metadata cache in the fuse kernel?


You can tweak the attribute-timeout and entry-timeout seconds while mounting 
the volume. Default is 1 second for both.  `man mount.glusterfs` lists various 
mount options.
-Ravi


 




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Spurious termination of fuse invalidation notifier thread

2016-09-06 Thread Xavier Hernandez

Hi Raghavendra,

On 06/09/16 06:11, Raghavendra Gowdappa wrote:



- Original Message -

From: "Xavier Hernandez" 
To: "Raghavendra Gowdappa" , "Kaleb Keithley" , 
"Pranith Kumar Karampuri"

Cc: "Csaba Henk" , "Gluster Devel" 
Sent: Monday, September 5, 2016 12:46:43 PM
Subject: Re: Spurious termination of fuse invalidation notifier thread

Hi Raghavendra,

On 03/09/16 05:42, Raghavendra Gowdappa wrote:

Hi Xavi/Kaleb/Pranith,

During few of our older conversations (like [1], but not only one), some of
you had reported that the thread which writes invalidation notifications
(of inodes, entries) to /dev/fuse terminates spuriously. Csaba tried to
reproduce the issue, but without success. It would be helpful if you
provide any information on reproducer and/or possible reasons for the
behavior.


I didn't found what really caused the problem. I only saw the
termination message on a production server after some days working but
hadn't had the opportunity to debug it.

Looking at the code, the only conclusion I got is that the result from
the write to /dev/fuse was unexpected. The patch solves this and I
haven't seen the problem again.

The old code only handles the ENOENT error. It exits the thread for any
other error. I guess that in some situations a write to /dev/fuse can
return other "non-fatal" errors.


Thanks Xavi. Now I remember the changes. Since you have not seen spurious 
termination after the changes, I assume the issue is fixed.


Yes, I haven't seen the issue again since the patch was applied.





As a guess, I think it may be a failure in an entry invalidation.
Looking at the code of fuse, it may return ENOTDIR if parent of the
entry is not a directory and some race happens doing rm/create while
sending invalidations in the background. Another possibility is
ENOTEMPTY if the entry references a non empty directory (again probably
caused by races between user mode operations and background
invalidations). Anyway this is only a guess, I have no more information.

Xavi



[1]
http://review.gluster.org/#/c/13274/1/xlators/mount/fuse/src/fuse-bridge.c

regards,
Raghavendra




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel