Re: [Gluster-users] nfs-ganesha/pnfs read/write path on EC volume

2016-02-09 Thread Serkan Çoban
Hi Jiffin,

Any update about the write path?

Serkan

On Sun, Jan 31, 2016 at 5:00 PM, Jiffin Tony Thottan wrote:
>
>
> On 31/01/16 16:19, Serkan Çoban wrote:
>>
>> Hi,
>> I am testing nfs-ganesha with pNFS on EC volume and I want to ask some
>> questions.
>> Assume we have two clients: c1,c2
>> and six servers with one 4+2 EC volume constructed as below:
>>
>> gluster volume create vol1 disperse 6 redundancy 2 \
>>   server{1..6}:/brick/b1 \
>>   server{1..6}:/brick/b2 \
>>   server{1..6}:/brick/b3 \
>>   server{1..6}:/brick/b4 \
>>   server{1..6}:/brick/b5 \
>>   server{1..6}:/brick/b6
>> vol1 is mounted on both clients as server1:/vol1
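>>
>> For reference, both clients use a plain NFS mount with version 4.1 (which
>> pNFS requires); roughly the following, where the mount point is only an
>> example:
>>
>>   mount -t nfs -o vers=4.1 server1:/vol1 /mnt/vol1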
>>
>> Here is the first question: when I write file1 from client1 and file2 from
>> client2, which servers get the files? In my opinion server1 gets file1
>> and server2 gets file2, and they do the EC calculations and distribute the
>> chunks to the other servers. Am I right?
>>
>> Can anyone explain detailed read/write path with pNFS and EC volumes?
>
>
> I have never tried pNFS with an EC volume; I will try it myself and reply to
> your question as soon as possible.
> --
> Jiffin
>
>> Thanks,
>> Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Sparse files and heal full bug fix backport to 3.6.x

2016-02-09 Thread Ravishankar N

Hi Steve,
The patch already went in for 3.6.3 
(https://bugzilla.redhat.com/show_bug.cgi?id=1187547). What version are 
you using? If it is 3.6.3 or newer, can you share the logs if this 
happens again (or possibly try to reproduce the issue on your setup)?

Thanks,
Ravi

On 02/10/2016 02:25 AM, FNU Raghavendra Manjunath wrote:


Adding Pranith, maintainer of the replicate feature.


Regards,
Raghavendra


On Tue, Feb 9, 2016 at 3:33 PM, Steve Dainard wrote:


There is a thread from 2014 mentioning that the heal process on a
replica volume was de-sparsing sparse files.(1)

I've been experiencing the same issue on Gluster 3.6.x. I see there is
a bug closed for a fix on Gluster 3.7 (2) and I'm wondering if this
fix can be back-ported to Gluster 3.6.x?

My experience has been:
Replica 3 volume
1 brick went offline
Brought brick back online
Heal full on volume
My 500G vm-storage volume went from ~280G used to >400G used.

I've experienced this a couple times previously, and used fallocate to
re-sparse files but this is cumbersome at best, and lack of proper
heal support on sparse files could be disastrous if I didn't have
enough free space and ended up crashing my VM's when my storage domain
ran out of space.

Seeing as 3.6 is still a supported release, and 3.7 feels too bleeding
edge for production systems, I think it makes sense to back-port this
fix if possible.

Thanks,
Steve



1.
https://www.gluster.org/pipermail/gluster-users/2014-November/019512.html
2. https://bugzilla.redhat.com/show_bug.cgi?id=1166020



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] samba-vfs-glusterfs deleting only metadata and not the actual folders/files

2016-02-09 Thread Jason Chang
I'm on glusterfs 3.7.6 running samba 4.1.6 with samba-vfs-glusterfs 3.7
from monotek.
I have three nodes running in distributed mode.

When going through the CIFS share provided by the module, I can create
folders/files normally, but when they are deleted through the CIFS share from
Windows, the files disappear from the view but still exist in the storage.

When mounting the volume using the glusterfs client, file creation and
deletion both work correctly.

Currently the volume is being shared to the rest of the users through the
mount point, avoiding the vfs module completely.
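
For reference, the share definition looks more or less like the following
(share name, volume name and paths here are placeholders, not my exact config):

[gvol]
    path = /
    read only = no
    kernel share modes = no
    vfs objects = glusterfs
    glusterfs:volfile_server = localhost
    glusterfs:volume = gvol
    glusterfs:logfile = /var/log/samba/glusterfs-gvol.%M.log
    glusterfs:loglevel = 7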

I'm not sure what to look for in the logs since they are very cryptic. Please
let me know how I can provide the info you require.

Thanks so much!

-JGC
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Sparse files and heal full bug fix backport to 3.6.x

2016-02-09 Thread FNU Raghavendra Manjunath
Adding Pranith, maintainer of the replicate feature.


Regards,
Raghavendra


On Tue, Feb 9, 2016 at 3:33 PM, Steve Dainard  wrote:

> There is a thread from 2014 mentioning that the heal process on a
> replica volume was de-sparsing sparse files.(1)
>
> I've been experiencing the same issue on Gluster 3.6.x. I see there is
> a bug closed for a fix on Gluster 3.7 (2) and I'm wondering if this
> fix can be back-ported to Gluster 3.6.x?
>
> My experience has been:
> Replica 3 volume
> 1 brick went offline
> Brought brick back online
> Heal full on volume
> My 500G vm-storage volume went from ~280G used to >400G used.
>
> I've experienced this a couple times previously, and used fallocate to
> re-sparse files but this is cumbersome at best, and lack of proper
> heal support on sparse files could be disastrous if I didn't have
> enough free space and ended up crashing my VM's when my storage domain
> ran out of space.
>
> Seeing as 3.6 is still a supported release, and 3.7 feels too bleeding
> edge for production systems, I think it makes sense to back-port this
> fix if possible.
>
> Thanks,
> Steve
>
>
>
> 1.
> https://www.gluster.org/pipermail/gluster-users/2014-November/019512.html
> 2. https://bugzilla.redhat.com/show_bug.cgi?id=1166020
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Sparse files and heal full bug fix backport to 3.6.x

2016-02-09 Thread Steve Dainard
There is a thread from 2014 mentioning that the heal process on a
replica volume was de-sparsing sparse files.(1)

I've been experiencing the same issue on Gluster 3.6.x. I see there is
a bug closed for a fix on Gluster 3.7 (2) and I'm wondering if this
fix can be back-ported to Gluster 3.6.x?

My experience has been:
Replica 3 volume
1 brick went offline
Brought brick back online
Heal full on volume
My 500G vm-storage volume went from ~280G used to >400G used.
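
The "Heal full" step above was just the standard CLI command; assuming the
volume is simply named vm-storage, that is:

  gluster volume heal vm-storage full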

I've experienced this a couple of times previously, and used fallocate to
re-sparse files, but this is cumbersome at best, and the lack of proper
heal support for sparse files could be disastrous if I didn't have
enough free space and ended up crashing my VMs when my storage domain
ran out of space.
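
For reference, the re-sparsing was done with something along these lines (the
path is only an example, and it assumes a util-linux fallocate that supports
--dig-holes):

  find /path/to/vm-images -type f -exec fallocate --dig-holes {} \;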

Seeing as 3.6 is still a supported release, and 3.7 feels too bleeding
edge for production systems, I think it makes sense to back-port this
fix if possible.

Thanks,
Steve



1. https://www.gluster.org/pipermail/gluster-users/2014-November/019512.html
2. https://bugzilla.redhat.com/show_bug.cgi?id=1166020
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Building 3.7.8 rpms

2016-02-09 Thread Serkan Çoban
I checked out the v3.7.8 tag and rebuilt the RPMs. Thanks for the answer.
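
For anyone else wanting to do the same, the steps were roughly the usual
in-tree RPM build (build dependencies assumed to be installed):

  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  git checkout v3.7.8
  ./autogen.sh && ./configure
  cd extras/LinuxRPM
  make glusterrpms    # builds the (S)RPMs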

On Tue, Feb 9, 2016 at 7:43 PM, Atin Mukherjee wrote:
> Depends on whether the head matches or not.  IIRC, Pranith has already
> tagged 3.7.8, so just wait for a day or two till he announces the release,
> otherwise match the head of your source and 3.7.8 tag.
>
> -Atin
> Sent from one plus one
>
> On 09-Feb-2016 11:06 pm, "Serkan Çoban"  wrote:
>>
>> Hi,
>>
>> I just build 3.7.8 RPMs from git. Is is ok to install them to
>> production? Will they be same as official 3.7.8 packages?
>>
>> Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Building 3.7.8 rpms

2016-02-09 Thread Atin Mukherjee
Depends on whether the head matches or not. IIRC, Pranith has already
tagged 3.7.8, so just wait for a day or two till he announces the release;
otherwise, make sure the head of your source matches the 3.7.8 tag.
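
One quick way to check that, from inside your glusterfs checkout:

  git describe --tags   # prints exactly "v3.7.8" when HEAD sits on the tag,
                        # "v3.7.8-N-g<hash>" when extra commits are on top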

-Atin
Sent from one plus one
On 09-Feb-2016 11:06 pm, "Serkan Çoban"  wrote:

> Hi,
>
> I just build 3.7.8 RPMs from git. Is is ok to install them to
> production? Will they be same as official 3.7.8 packages?
>
> Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Building 3.7.8 rpms

2016-02-09 Thread Serkan Çoban
Hi,

I just built 3.7.8 RPMs from git. Is it OK to install them in
production? Will they be the same as the official 3.7.8 packages?

Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Fail of one brick lead to crash VMs

2016-02-09 Thread FNU Raghavendra Manjunath
Hi Dominique,

Thanks for the logs. I will go through the logs. I have also CCed Pranith
who is the maintainer of the replicate feature.


Regards,
Raghavendra


On Tue, Feb 9, 2016 at 11:45 AM, Dominique Roux wrote:

> Logs are attached
>
> For claryfication:
> vmhost1-cluster1 -> Brick 1
> vmhost2-cluster2 -> Brick 2
> entrance -> Peer
>
> Time of testing (31.01.2016 16:13)
>
> Thanks for your help
>
> Regards,
> Dominique
>
>
> Werde Teil des modernen Arbeitens im Glarnerland auf www.digitalglarus.ch!
> Lese Neuigkeiten auf Twitter: www.twitter.com/DigitalGlarus
> Diskutiere mit auf Facebook:  www.facebook.com/digitalglarus
>
> On 02/08/2016 04:40 PM, FNU Raghavendra Manjunath wrote:
> > + Pranith
> >
> > In the meantime, can you please provide the logs of all the gluster
> > server machines  and the client machines?
> >
> > Logs can be found in /var/log/glusterfs directory.
> >
> > Regards,
> > Raghavendra
> >
> > On Mon, Feb 8, 2016 at 9:20 AM, Dominique Roux
> > <dominique.r...@ungleich.ch> wrote:
> >
> > Hi guys,
> >
> > I faced a problem a week ago.
> > In our environment we have three servers in a quorum. The gluster
> volume
> > is spreaded over two bricks and has the type replicated.
> >
> > We now, for simulating a fail of one brick, isolated one of the two
> > bricks with iptables, so that communication to the other two peers
> > wasn't possible anymore.
> > After that VMs (opennebula) which had I/O in this time crashed.
> > We stopped the glusterfsd hard (kill -9) and restarted it, what made
> > things work again (Certainly we also had to restart the failed VMs).
> But
> > I think this shouldn't happen. Since quorum was not reached (2/3
> hosts
> > were still up and connected).
> >
> > Here some infos of our system:
> > OS: CentOS Linux release 7.1.1503
> > Glusterfs version: glusterfs 3.7.3
> >
> > gluster volume info:
> >
> > Volume Name: cluster1
> > Type: Replicate
> > Volume ID:
> > Status: Started
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: srv01:/home/gluster
> > Brick2: srv02:/home/gluster
> > Options Reconfigured:
> > cluster.self-heal-daemon: enable
> > cluster.server-quorum-type: server
> > network.remote-dio: enable
> > cluster.eager-lock: enable
> > performance.stat-prefetch: on
> > performance.io-cache: off
> > performance.read-ahead: off
> > performance.quick-read: off
> > server.allow-insecure: on
> > nfs.disable: 1
> >
> > Hope you can help us.
> >
> > Thanks a lot.
> >
> > Best regards
> > Dominique
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fail of one brick lead to crash VMs

2016-02-09 Thread Dominique Roux
Logs are attached

For clarification:
vmhost1-cluster1 -> Brick 1
vmhost2-cluster2 -> Brick 2
entrance -> Peer

Time of testing (31.01.2016 16:13)

Thanks for your help

Regards,
Dominique


Werde Teil des modernen Arbeitens im Glarnerland auf www.digitalglarus.ch!
Lese Neuigkeiten auf Twitter: www.twitter.com/DigitalGlarus
Diskutiere mit auf Facebook:  www.facebook.com/digitalglarus

On 02/08/2016 04:40 PM, FNU Raghavendra Manjunath wrote:
> + Pranith
> 
> In the meantime, can you please provide the logs of all the gluster
> server machines  and the client machines?
> 
> Logs can be found in /var/log/glusterfs directory.
> 
> Regards,
> Raghavendra
> 
> On Mon, Feb 8, 2016 at 9:20 AM, Dominique Roux
> <dominique.r...@ungleich.ch> wrote:
> 
> Hi guys,
> 
> I faced a problem a week ago.
> In our environment we have three servers in a quorum. The gluster volume
> is spread across two bricks and is of type replicate.
> 
> To simulate the failure of one brick, we isolated one of the two
> bricks with iptables, so that communication with the other two peers
> was no longer possible.
> After that, VMs (OpenNebula) which had I/O going on at that time crashed.
> We stopped glusterfsd hard (kill -9) and restarted it, which made
> things work again (we also had to restart the failed VMs). But
> I think this shouldn't happen, since quorum was not lost (2/3 hosts
> were still up and connected).
> 
> Here some infos of our system:
> OS: CentOS Linux release 7.1.1503
> Glusterfs version: glusterfs 3.7.3
> 
> gluster volume info:
> 
> Volume Name: cluster1
> Type: Replicate
> Volume ID:
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: srv01:/home/gluster
> Brick2: srv02:/home/gluster
> Options Reconfigured:
> cluster.self-heal-daemon: enable
> cluster.server-quorum-type: server
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: on
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> server.allow-insecure: on
> nfs.disable: 1
> 
> Hope you can help us.
> 
> Thanks a lot.
> 
> Best regards
> Dominique


logs_glusterlist.tar.xz
Description: application/xz
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime activated

2016-02-09 Thread Simon Turcotte-Langevin
Good day to you Pranith,

Once again, thank you for your time. Our use case does include a lot of small 
files, and read performance must not be impacted by a RELATIME-based 
solution. Even though this option could fix the RELATIME behavior on GlusterFS, 
it looks like the performance impact could be too great for us. 
Therefore, we will test the solution, but we will also consider alternative 
ways to detect usage of the files we serve.

Simon

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: 8 February 2016 00:26
To: Simon Turcotte-Langevin ; 
gluster-users@gluster.org
Cc: UPS_Development 
Subject: Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime 
activated


On 02/06/2016 12:19 AM, Simon Turcotte-Langevin wrote:
Good day to you Pranith,

Thank you for your answer, it was exactly this. However, we still have an issue 
with RELATIME on GlusterFS.

Stating the file does not modify atime anymore, with quick-read disabled, 
however cat-ing the file does not replicate the atime.
This is because of the open-behind feature. Disable open-behind with "gluster 
volume set <volname> open-behind off". I believe you will see the atime 
behavior you want with it. This will reduce the performance of small 
file reads (< 64KB): instead of one lookup over the network, it will now do 
lookup + open (this is sent to both replica bricks, which updates atime) 
+ read (only one of the bricks). Let me know if you want any more information.
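
For example, assuming a volume named vol1 mounted at /mnt/vol1 and a replica
brick at /brick/b1 on each server, something like the following should show
the atime being updated on both bricks after the read:

  gluster volume set vol1 quick-read off
  gluster volume set vol1 open-behind off
  cat /mnt/vol1/file1
  stat -c '%x %n' /brick/b1/file1   # run on each brick server and compare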

Pranith



If I touch manually the file, the atime (or utimes) is replicated correctly.

So to sum it up:


- [node1] touch -a file1
  --> Access time is right on [node1], [node2] and [node3]
- [node1] cat file1
  --> Access time is right on [node1]
  --> Access time is wrong on [node2] and [node3]

Would you have any idea what is going on behind the curtain, and if there is 
any way to fix that behavior?

Thank you,
Simon

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: 5 February 2016 00:55
To: Simon Turcotte-Langevin; gluster-users@gluster.org
Cc: UPS_Development
Subject: Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime 
activated


On 02/03/2016 10:12 PM, Simon Turcotte-Langevin wrote:
Hi, we have multiple clusters of GlusterFS which are mostly alike. The typical 
setup is as such:


- Cluster of 3 nodes
- Replication factor of 3
- Each node has 1 brick, mounted on XFS with RELATIME and NODIRATIME
- Each node has 8 disks in RAID 0 hardware

The main problem we are facing is that observation of the access time of a file 
on the volume will update the access time.

The steps to reproduce the problem are:


- Create a file (echo 'some data' > /mnt/gv0/file)
- Touch its mtime and atime to some past date (touch -d 19700101 /mnt/gv0/file)
- Touch its mtime to the current timestamp (touch -m /mnt/gv0/file)
- Stat the file until atime is updated (stat /mnt/gv0/file)
  - Sometimes it's instant, sometimes it takes running the above command a
    couple of times
atime changes on open call.

The quick-read xlator opens the file and reads the content on 'lookup', which 
gets triggered by stat. It does that to serve reads from memory and reduce the 
number of network round trips for small files. Could you disable that xlator 
and try the experiment? On my machine the atime didn't change after I disabled 
that feature using:

"gluster volume set <volname> quick-read off"

Pranith





On the IRC channel, I spoke to a developer (nickname ndevos) who said that it 
might be a getxattr() syscall that could be called when stat() is called on a 
replicated volume.



Anybody can reproduce this issue? Is it a bug, or is it working as intended? Is 
there any workaround?



Thank you,

Simon







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime activated

2016-02-09 Thread Simon Turcotte-Langevin
Good day to you Pranith,

Thank you for your answer, that was exactly it. However, we still have an issue 
with RELATIME on GlusterFS.

With quick-read disabled, stat'ing the file no longer modifies atime; however, 
cat'ing the file does not replicate the atime.

If I manually touch the file, the atime (via utimes) is replicated correctly.

So to sum it up:


- [node1] touch -a file1
  --> Access time is right on [node1], [node2] and [node3]
- [node1] cat file1
  --> Access time is right on [node1]
  --> Access time is wrong on [node2] and [node3]

Would you have any idea what is going on behind the curtain, and if there is 
any way to fix that behavior?

Thank you,
Simon

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: 5 February 2016 00:55
To: Simon Turcotte-Langevin ; 
gluster-users@gluster.org
Cc: UPS_Development 
Subject: Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime 
activated


On 02/03/2016 10:12 PM, Simon Turcotte-Langevin wrote:
Hi, we have multiple clusters of GlusterFS which are mostly alike. The typical 
setup is as such:


- Cluster of 3 nodes
- Replication factor of 3
- Each node has 1 brick, mounted on XFS with RELATIME and NODIRATIME
- Each node has 8 disks in RAID 0 hardware

The main problem we are facing is that observation of the access time of a file 
on the volume will update the access time.

The steps to reproduce the problem are:


- Create a file (echo 'some data' > /mnt/gv0/file)
- Touch its mtime and atime to some past date (touch -d 19700101 /mnt/gv0/file)
- Touch its mtime to the current timestamp (touch -m /mnt/gv0/file)
- Stat the file until atime is updated (stat /mnt/gv0/file)
  - Sometimes it's instant, sometimes it takes running the above command a
    couple of times
atime changes on open call.

The quick-read xlator opens the file and reads the content on 'lookup', which 
gets triggered by stat. It does that to serve reads from memory and reduce the 
number of network round trips for small files. Could you disable that xlator 
and try the experiment? On my machine the atime didn't change after I disabled 
that feature using:

"gluster volume set <volname> quick-read off"

Pranith




On the IRC channel, I spoke to a developer (nickname ndevos) who said that it 
might be a getxattr() syscall that could be called when stat() is called on a 
replicated volume.



Anybody can reproduce this issue? Is it a bug, or is it working as intended? Is 
there any workaround?



Thank you,

Simon





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes of today's Gluster Community Bug Triage Meeting

2016-02-09 Thread Manikandan Selvaganesh
Hi all,

Thanks everyone for attending the Gluster Community Bug Triage meeting today
and here are the minutes of the meeting:

Meeting summary
---
* agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (Manikandan,
  12:00:41)
* Roll Call  (Manikandan, 12:00:48)

* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state  (Manikandan, 12:05:23)
  * ACTION: kkeithley_ will come up with a proposal to reduce the number
of bugs against "mainline" in NEW state  (Manikandan, 12:06:15)

* msvbhat  and ndevos need to think about and decide how to provide/use
  debug builds  (Manikandan, 12:06:35)
  * ACTION: msvbhat  and ndevos need to think about and decide how to
provide/use debug build  (Manikandan, 12:07:34)

* Group Triage  (Manikandan, 12:08:38)
  * LINK:
http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
(Manikandan, 12:09:07)

* Open Floor  (Manikandan, 12:26:23)

Meeting ended at 12:29:30 UTC.


Action Items

* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* msvbhat  and ndevos need to think about and decide how to provide/use
  debug build.
* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* hagarth start/sync email on regular (nightly) automated tests
* msvbhat will look into using nightly builds for automated testing,
  and will report issues/success to the mailinglist
* msvbhat will look into lalatenduM's automated Coverity setup in Jenkins,
  which needs assistance from an admin with more permissions
* msvbhat  and ndevos need to think about and decide how to provide/use
  debug builds
* msvbhat will provide a simple step/walk-through on how to provide testcases
  for the nightly rpm tests
* ndevos to propose some test-cases for minimal libgfapi test
* Manikandan and Nandaja will keep updating on the bug automation workflow.

People Present (lines said)
---
* Manikandan (43)
* ndevos (6)
* hgowtham (5)
* skoduri (4)
* jiffin (3)
* zodbot (3)
* Saravanakmr (3)

See you all next week :-)

--
Thanks & Regards,
Manikandan Selvaganesh.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 60 minutes)

2016-02-09 Thread Manikandan Selvaganesh
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
 (https://webchat.freenode.net/?channels=gluster-meeting)
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you :-)

--
Regards,
Manikandan Selvaganesh.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users