Re: [Gluster-devel] tests/basic/ec/quota.t failure

2016-02-23 Thread Xavier Hernandez

This seems to be the same problem solved by these patches:

For master: http://review.gluster.org/13446
For 3.7: http://review.gluster.org/13447

Xavi

On 24/02/16 06:09, Atin Mukherjee wrote:

The above test failed in one of the regression runs [1]. Mind having a
look? I've filed a bug [2] for the same.

[1]
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18454/consoleFull
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1311368

~Atin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Proposal of WORM/Retention Translator

2016-02-23 Thread Karthik Subrahmanya
Hi all,

I'm Karthik Subrahmanya, working as an intern in Red Hat, Bangalore.

I, along with Joseph Fernandes (Mentor), will be implementing File-Level 
WORM-Retention feature for GlusterFS.

What is WORM-Retention FS?
1. WORM/Retention FS is a file system that supports immutable (read-only) and 
undeletable files.
2. Each file will have its own WORM/retention attributes, such as retention 
period, retention time/date, WORM/retention state, etc.
3. It stores data in a tamper-proof and secure way and enforces 
data-accessibility policies.

What is already implemented in Gluster?
The existing WORM implementation in GlusterFS works at the volume level as a 
switch: when it is turned on, all files in the volume become read-only, which 
may be desirable or applicable for data-compliance usage [1].

Our idea is to implement file-level WORM, which stores the WORM/retention 
attributes for each file in the volume.
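
To make the idea concrete, here is a minimal user-space sketch of what
"per-file WORM/retention state" could look like if it were kept in an
extended attribute. This is only an illustration of the semantics, not the
xlator code, and the xattr name used below ("user.worm.retain_until") is a
made-up placeholder rather than the attribute the feature will actually use.

/* Illustrative sketch only: stamp a hypothetical retention deadline on a
 * file via an xattr and then transition it to a read-only (WORM) state. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    const char *path, *until;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <retain-until-epoch>\n", argv[0]);
        return 1;
    }
    path  = argv[1];
    until = argv[2];

    /* Record the retention deadline on the file itself ... */
    if (setxattr(path, "user.worm.retain_until", until, strlen(until), 0) < 0) {
        perror("setxattr");
        return 1;
    }

    /* ... and move the file into WORM (read-only, undeletable-by-policy) state. */
    if (chmod(path, S_IRUSR | S_IRGRP | S_IROTH) < 0) {
        perror("chmod");
        return 1;
    }

    printf("%s is now WORM until epoch %s\n", path, until);
    return 0;
}

In the actual feature this state would of course be maintained by the WORM
xlator on the bricks, not by a client-side helper like this.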


Please refer [2] for more on design.

Our approach will be step by step:

1. Implement WORM-Retention semantics in GlusterFS using the WORM xlator
2. Integrate WORM-Retention with BitRot for data validation
3. Help in implementing control of atime, mtime, ctime [3], as it's a 
requirement for us
4. Tiering based on compliance (stretch goal)

We have done a POC of the WORM-Retention semantics [4] and are working on 
making it production-ready.

Vijai (vmall...@redhat.com) and Raghavendra Talur (rta...@redhat.com) have 
helped us implement the POC.

Your valuable suggestions are most welcome, and we look forward to your 
support in the future work!

Thanks & Regards,
Karthik Subrahmanya

[1] https://en.wikipedia.org/wiki/Regulatory_compliance
[2] 
http://www.gluster.org/community/documentation/index.php/Features/gluster_compliance_archive
[3] 
http://nongnu.13855.n7.nabble.com/distributed-files-directories-and-cm-time-updates-td207822.html
[4] http://review.gluster.org/13429
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/basic/ec/quota.t failure

2016-02-23 Thread Mohammed Rafi K C
One more similar failure, in
http://build.gluster.org/job/rackspace-regression-2GB-triggered/18434/

Rafi KC

On 02/24/2016 11:10 AM, Vijaikumar Mallikarjuna wrote:
>
> We will look into the issue.
>
> Thanks,
> Vijay
>
On Feb 24, 2016 10:39 AM, "Atin Mukherjee" wrote:
>
> The above test failed in one of the regression runs [1]. Mind having a
> look? I've filed a bug [2] for the same.
>
> [1]
> 
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18454/consoleFull
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1311368
>
> ~Atin
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] tests/basic/ec/quota.t failure

2016-02-23 Thread Vijaikumar Mallikarjuna
We will look into the issue.

Thanks,
Vijay
On Feb 24, 2016 10:39 AM, "Atin Mukherjee"  wrote:

> The above test failed in one of the regression runs [1]. Mind having a
> look? I've filed a bug [2] for the same.
>
> [1]
>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18454/consoleFull
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1311368
>
> ~Atin
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] tests/basic/ec/quota.t failure

2016-02-23 Thread Atin Mukherjee
The above test failed in one of the regression runs [1]. Mind having a
look? I've filed a bug [2] for the same.

[1]
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18454/consoleFull
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1311368

~Atin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] readdir-ahead questions

2016-02-23 Thread 静蒋
1. In the code, readdir-ahead does not combine readdir requests into one
bigger request; it only buffers up the dentries. If the buffered dentries'
size is greater than the request size, the bigger result is returned to the
client, isn't it?
2. When the requests from the readdir-ahead xlator wind down to the next
xlator, are they sent to a single server or broadcast to all the servers?
3. As you have said, while the preload is in progress, a readdir from the
application waits for its completion. If I increase the buffer (request)
size, will the application wait for a long time? Could it instead be a
stream, where the readdir from the application fetches dentries from the
buffer while the readdir-ahead xlator keeps pre-fetching dentries from the
servers? (A toy model of this follows below.)
4. When can it use a larger buffer, like io-cache, which caches the
data/dentries read before? As you know, ls is very slow.
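
To make question 3 concrete, below is a small user-space toy that models the
streaming behaviour I have in mind. It is purely illustrative; it is not the
readdir-ahead xlator, and the chunk/watermark sizes are made-up numbers. The
consumer pulls one dentry at a time while the buffer is refilled in chunks
before it runs dry, so a larger prefetch size never makes a single call wait
for the whole fill.

/* Toy model of a streaming directory prefetch: not xlator code. */
#include <dirent.h>
#include <stdio.h>

#define CHUNK          32   /* dentries fetched per refill (made-up tunable) */
#define LOW_WATERMARK   8   /* refill once fewer than this are buffered      */
#define SLOTS (CHUNK * 2)   /* ring-buffer capacity                          */

struct dstream {
    DIR *dir;
    char names[SLOTS][256];
    int  head, tail, eof;   /* tail - head == number of buffered entries     */
};

static void refill(struct dstream *s)
{
    struct dirent *e;

    /* Top the buffer up in a bounded chunk instead of draining the whole
     * directory, so the caller never waits for one huge preload. */
    while (!s->eof && s->tail - s->head < CHUNK) {
        e = readdir(s->dir);
        if (!e) {
            s->eof = 1;
            break;
        }
        snprintf(s->names[s->tail % SLOTS], sizeof(s->names[0]), "%s", e->d_name);
        s->tail++;
    }
}

static const char *next_name(struct dstream *s)
{
    if (s->tail - s->head < LOW_WATERMARK)
        refill(s);                       /* prefetch before the buffer drains */
    if (s->head == s->tail)
        return NULL;                     /* end of directory */
    return s->names[s->head++ % SLOTS];
}

int main(int argc, char **argv)
{
    struct dstream s = { .dir = opendir(argc > 1 ? argv[1] : ".") };
    const char *name;

    if (!s.dir) {
        perror("opendir");
        return 1;
    }
    while ((name = next_name(&s)))
        puts(name);
    closedir(s.dir);
    return 0;
}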
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-23 Thread Raghavendra Gowdappa


- Original Message -
> From: "Krutika Dhananjay" 
> To: "Raghavendra G" 
> Cc: "Gluster Devel" 
> Sent: Tuesday, February 23, 2016 9:23:33 PM
> Subject: Re: [Gluster-devel] 3.6.8 crashing a lot in production
> 
> Raghavendra,
> 
> The crash was due to bug(s) in clear-locks command implementation. Joe and I
> had had an offline discussion about this.

oh! Thanks for the update :).

> 
> -Krutika
> 
> 
> 
> From: "Raghavendra G" 
> To: "Krutika Dhananjay" 
> Cc: "Joe Julian" , "Gluster Devel"
> 
> Sent: Tuesday, February 23, 2016 8:54:01 PM
> Subject: Re: [Gluster-devel] 3.6.8 crashing a lot in production
> 
> Came across a glibc bug which could've caused some corruptions. While googling
> about possible problems, we found an issue (
> https://bugzilla.redhat.com/show_bug.cgi?id=1305406 ) fixed in
> glibc-2.17-121.el7. The bug gives the following check to determine whether the
> installed glibc is buggy, and we ran it on the local setup:
>
>     # objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | \
>       grep -C 10 cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | \
>       (grep '%r' && echo "Your libc is likely buggy." || echo "Your libc looks OK.")
>     7cc36: 48 85 c9 test %rcx,%rcx
>     Your libc is likely buggy.
>
> Could you check if the above command on your setup gives the same output,
> i.e. "Your libc is likely buggy."?
>
> regards,
> 
> On Sat, Feb 13, 2016 at 7:46 AM, Krutika Dhananjay < kdhan...@redhat.com >
> wrote:
> 
> 
> 
> Taking a look. Give me some time.
> 
> -Krutika
> 
> 
> 
> 
> From: "Joe Julian" < j...@julianfamily.org >
> To: "Krutika Dhananjay" < kdhan...@redhat.com >, gluster-devel@gluster.org
> Sent: Saturday, February 13, 2016 6:02:13 AM
> Subject: Fwd: [Gluster-devel] 3.6.8 crashing a lot in production
> 
> 
> Could this be a regression from http://review.gluster.org/7981 ?
> 
>  Forwarded Message 
> Subject: [Gluster-devel] 3.6.8 crashing a lot in production
> Date: Fri, 12 Feb 2016 16:20:59 -0800
> From: Joe Julian 
> 
> To:   gluster-us...@gluster.org , gluster-devel@gluster.org
> 
> 
> I have multiple bricks crashing in production. Any help would be greatly
> appreciated.
> 
> The crash log is in this bug report:
> https://bugzilla.redhat.com/show_bug.cgi?id=1307146 Looks like it's crashing
> in pl_inodelk_client_cleanup
> ___
> Gluster-devel mailing list Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> --
> Raghavendra G
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-23 Thread Joe Julian
To satisfy this curiosity, however, Ubuntu 12.04's libc6-2.15-0ubuntu10.5 
produces:

    objdump -r -d /lib/x86_64-linux-gnu/libc.so.6 | grep -C 20 _int_free | \
      grep -C 10 cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | \
      (grep '%r' && echo "Your libc is likely buggy." || echo "Your libc looks OK.")

    Your libc looks OK.

On 02/23/2016 07:53 AM, Krutika Dhananjay wrote:

Raghavendra,

The crash was due to bug(s) in clear-locks command implementation. Joe 
and I had had an offline discussion about this.


-Krutika


*From: *"Raghavendra G" 
*To: *"Krutika Dhananjay" 
*Cc: *"Joe Julian" , "Gluster Devel"

*Sent: *Tuesday, February 23, 2016 8:54:01 PM
*Subject: *Re: [Gluster-devel] 3.6.8 crashing a lot in production

Came across a glibc bug which could've caused some corruptions. While
googling about possible problems, we found an issue
(https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in
glibc-2.17-121.el7. The bug gives the following check to determine
whether the installed glibc is buggy, and we ran it on the local setup:

    # objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | \
      grep -C 10 cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | \
      (grep '%r' && echo "Your libc is likely buggy." || echo "Your libc looks OK.")
    7cc36: 48 85 c9 test %rcx,%rcx
    Your libc is likely buggy.

Could you check if the above command on your setup gives the same
output, i.e. "Your libc is likely buggy."?

regards,

On Sat, Feb 13, 2016 at 7:46 AM, Krutika Dhananjay <kdhan...@redhat.com> wrote:

Taking a look. Give me some time.

-Krutika



*From: *"Joe Julian" mailto:j...@julianfamily.org>>
*To: *"Krutika Dhananjay" mailto:kdhan...@redhat.com>>, gluster-devel@gluster.org

*Sent: *Saturday, February 13, 2016 6:02:13 AM
*Subject: *Fwd: [Gluster-devel] 3.6.8 crashing a lot in
production


Could this be a regression from
http://review.gluster.org/7981 ?

 Forwarded Message 
Subject:[Gluster-devel] 3.6.8 crashing a lot in production
Date:   Fri, 12 Feb 2016 16:20:59 -0800
From:   Joe Julian
To:     gluster-us...@gluster.org, gluster-devel@gluster.org



I have multiple bricks crashing in production. Any help would be 
greatly
appreciated.

The crash log is in this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1307146

Looks like it's crashing in pl_inodelk_client_cleanup
___
Gluster-devel mailing list
Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel





___
Gluster-devel mailing list
Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel




-- 
Raghavendra G





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-23 Thread Krutika Dhananjay
Raghavendra, 

The crash was due to bug(s) in clear-locks command implementation. Joe and I 
had had an offline discussion about this. 

-Krutika 
- Original Message -

> From: "Raghavendra G" 
> To: "Krutika Dhananjay" 
> Cc: "Joe Julian" , "Gluster Devel"
> 
> Sent: Tuesday, February 23, 2016 8:54:01 PM
> Subject: Re: [Gluster-devel] 3.6.8 crashing a lot in production

> Came across a glibc bug which could've caused some corruptions. While googling
> about possible problems, we found an issue (
> https://bugzilla.redhat.com/show_bug.cgi?id=1305406 ) fixed in
> glibc-2.17-121.el7. The bug gives the following check to determine whether the
> installed glibc is buggy, and we ran it on the local setup:
>
>     # objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | \
>       grep -C 10 cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | \
>       (grep '%r' && echo "Your libc is likely buggy." || echo "Your libc looks OK.")
>     7cc36: 48 85 c9 test %rcx,%rcx
>     Your libc is likely buggy.
>
> Could you check if the above command on your setup gives the same output,
> i.e. "Your libc is likely buggy."?
>
> regards,

> On Sat, Feb 13, 2016 at 7:46 AM, Krutika Dhananjay < kdhan...@redhat.com >
> wrote:

> > Taking a look. Give me some time.
> 

> > -Krutika
> 

> > > From: "Joe Julian" < j...@julianfamily.org >
> > 
> 
> > > To: "Krutika Dhananjay" < kdhan...@redhat.com >,
> > > gluster-devel@gluster.org
> > 
> 
> > > Sent: Saturday, February 13, 2016 6:02:13 AM
> > 
> 
> > > Subject: Fwd: [Gluster-devel] 3.6.8 crashing a lot in production
> > 
> 

> > > Could this be a regression from http://review.gluster.org/7981 ?
> > 
> 

> > >  Forwarded Message 
> > > Subject: [Gluster-devel] 3.6.8 crashing a lot in production
> > 
> 
> > > Date: Fri, 12 Feb 2016 16:20:59 -0800
> > 
> 
> > > From: Joe Julian 
> > 
> 

> > > To:   gluster-us...@gluster.org , gluster-devel@gluster.org
> > 
> 

> > > I have multiple bricks crashing in production. Any help would be greatly
> > 
> 
> > > appreciated.
> > 
> 

> > > The crash log is in this bug report:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1307146 Looks like it's
> > > crashing
> > > in pl_inodelk_client_cleanup
> > 
> 
> > > ___
> > 
> 
> > > Gluster-devel mailing list Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> 

> > ___
> 
> > Gluster-devel mailing list
> 
> > Gluster-devel@gluster.org
> 
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 

> --
> Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-23 Thread Raghavendra G
Came across a glibc bug which could've caused some corruptions. While googling
about possible problems, we found an issue (
https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in
glibc-2.17-121.el7. The bug gives the following check to determine whether the
installed glibc is buggy, and we ran it on the local setup:

    # objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | \
      grep -C 10 cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | \
      (grep '%r' && echo "Your libc is likely buggy." || echo "Your libc looks OK.")
    7cc36: 48 85 c9 test %rcx,%rcx
    Your libc is likely buggy.

Could you check if the above command on your setup gives the same output,
i.e. "Your libc is likely buggy."?

regards,

On Sat, Feb 13, 2016 at 7:46 AM, Krutika Dhananjay 
wrote:

> Taking a look. Give me some time.
>
> -Krutika
>
> --
>
> *From: *"Joe Julian" 
> *To: *"Krutika Dhananjay" , gluster-devel@gluster.org
> *Sent: *Saturday, February 13, 2016 6:02:13 AM
> *Subject: *Fwd: [Gluster-devel] 3.6.8 crashing a lot in production
>
>
> Could this be a regression from http://review.gluster.org/7981 ?
>
>  Forwarded Message 
> Subject: [Gluster-devel] 3.6.8 crashing a lot in production
> Date: Fri, 12 Feb 2016 16:20:59 -0800
> From: Joe Julian  
> To: gluster-us...@gluster.org, gluster-devel@gluster.org
>
>
> I have multiple bricks crashing in production. Any help would be greatly
> appreciated.
>
> The crash log is in this bug report: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1307146
>
> Looks like it's crashing in pl_inodelk_client_cleanup
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-23 Thread Soumya Koduri



On 02/23/2016 05:02 PM, Jeff Darcy wrote:

Recently while doing some tests (which involved lots of inode_forget()),
I have noticed that my log file got flooded with below messages -

[2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->
/usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d]
(-->
/usr/local/lib/libglusterfs.so.0(default_forget+0x44)[0x7fd00f6cda2b]
(--> /usr/local/lib/libglusterfs.so.0(+0x39706)[0x7fd00f64b706] (-->
/usr/local/lib/libglusterfs.so.0(+0x397d2)[0x7fd00f64b7d2] (-->
/usr/local/lib/libglusterfs.so.0(+0x3be08)[0x7fd00f64de08] )
0-gfapi: xlator does not implement forget_cbk

From the code, it looks like we throw a warning in default-tmpl.c if any
xlator hasn't implemented forget(), releasedir() and release().

Though I agree it warns us about possible leaks which may happen if
these fops are not supported, it is annoying to have these messages
flood the log file, which grew to >1GB within a few minutes.

Could you please confirm whether it was intentional to throw this warning
so that all xlators implement these fops, or whether we can change
the log level to DEBUG?


It is intentional, and I would prefer that it be resolved by having
translators implement these calls, but it doesn't need to be a warning.
DEBUG would be fine.


Thanks for the confirmation. I have posted below patches -

http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1311124

Thanks,
Soumya




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Minutes of today's Gluster Community Bug Triage Meeting.

2016-02-23 Thread Manikandan Selvaganesh
Hi all,

We had a good number of participants in today's Gluster Bug Triage meeting.
Thanks to everyone who attended. Here are the minutes of today's meeting.

Meeting minutes:

Meeting ended Tue Feb 23 12:48:44 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .
Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2016-02-23/gluster_bug_triage.2016-02-23-12.00.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2016-02-23/gluster_bug_triage.2016-02-23-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2016-02-23/gluster_bug_triage.2016-02-23-12.00.log.html

Meeting summary
---
* agenda: https://public.pad.fsfe.org/p/gluster-bug-triage  (Manikandan,
  12:00:29)
* Roll Call  (Manikandan, 12:00:38)

* Group Triage  (Manikandan, 12:06:38)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(Manikandan, 12:06:48)
  * LINK:
http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
(Manikandan, 12:07:00)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(Manikandan, 12:12:06)

* Open Floor  (Manikandan, 12:33:40)
  * We have reduced the backlogs to ~18 bugs  (Manikandan, 12:34:40)
  * Remember to setup bugzilla notifications:

https://github.com/gluster/glusterdocs/blob/master/Developer-guide/Bugzilla%20Notifications.md
(ndevos, 12:43:50)
  * ACTION: Scheduling moderators for Gluster Community Bug Triage
meeting for a month  (Manikandan, 12:47:58)

Meeting ended at 12:48:44 UTC.

Action Items

* Scheduling moderators for Gluster Community Bug Triage meeting for a
  month
* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* msvbhat  and ndevos need to think about and decide how to provide/use
  debug build.
* hagarth start/sync email on regular (nightly) automated tests
* msvbhat will look into using nightly builds for automated testing,
  and will report issues/success to the mailinglist
* msvbhat will look into lalatenduM's automated Coverity setup in Jenkins,
  which needs assistance from an admin with more permissions
* msvbhat  and ndevos need to think about and decide how to provide/use
  debug builds
* msvbhat  provide a simple step/walk-through on how to provide testcases
  for the nightly rpm tests
* ndevos to propose some test-cases for minimal libgfapi test
* Manikandan and Nandaja will keep updating on the bug automation workflow.


People Present (lines said)
---
* Manikandan (59)
* jiffin (38)
* rafi (33)
* ggarg (21)
* ndevos (15)
* glusterbot (5)
* zodbot (4)
* kkeithley_ (4)
* hgowtham_ (3)
* skoduri (2)

--
Thanks & Regards,
Manikandan Selvaganesh.

- Original Message -
> Hi all,
> 
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
> 
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
>  (https://webchat.freenode.net/?channels=gluster-meeting  )
> - date: every Tuesday
> - time: 12:00 UTC
>  (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
> 
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
> 
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.
> 
> Appreciate your participation.
> 
> Thank you :-)
> 
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Restricting add-brick when volume is stopped.

2016-02-23 Thread Anuradha Talur


- Original Message -
> From: "Kaushal M" 
> To: "Anuradha Talur" 
> Cc: "Gluster Devel" 
> Sent: Tuesday, February 23, 2016 1:04:32 PM
> Subject: Re: [Gluster-devel] Restricting add-brick when volume is stopped.
> 
> On Tue, Feb 23, 2016 at 12:57 PM, Anuradha Talur  wrote:
> > Hi,
> >
> > AFR has a requirement that when replica count is changed while adding
> > bricks to a volume, e.g., converting a replica 2 to replica 3, afr pending
> > xattrs are marked to indicate this change. (To prevent potential
> > data-loss)
> >
> > This is possible only when the volume is not stopped, which is a deviation
> > from the present behaviour that allows add-brick even when the volume is
> > stopped. I sent a patch : http://review.gluster.org/#/c/12451/ , if this
> > change is included, only such add-brick operations that change replica
> > count will be forbidden when the volume is stopped. I would like to know
> > if there are any objections to this.
> 
> This should be okay. But I'd like to know if other solutions are possible.
> 
> (I'm not an AFR guy, so the below is based on my (mis)understanding of
> how it works. Please correct me if I'm wrong.)
> Is there no way for AFR to detect that the 3rd brick is an empty brick?
> When AFR requests the bricks for the pending xattrs, the new brick
> wouldn't return any. AFR could then do a full heal to the new brick in
> this case.
> 

The new brick wouldn't have any pending xattrs if all the operations after
adding it succeeded on the other bricks. Meanwhile, if any operations
succeeded on the new brick but not on the old bricks, it could also result in
reverse healing and loss of data.
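
Just to make the discussion concrete, here is a rough standalone sketch of
what "checking a brick for AFR pending xattrs" boils down to. It is
illustrative only: trusted.afr.* is the conventional prefix for AFR's pending
changelog xattrs, but everything else (and, as explained above, the idea of
concluding anything from their mere absence) is not AFR code. It has to be run
as root directly on a path inside a brick, since trusted.* xattrs are not
visible to unprivileged processes.

/* Illustrative sketch: list any trusted.afr.* xattrs on a file on a brick. */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    char list[4096];            /* enough for a sketch; real code would retry on ERANGE */
    ssize_t len, off = 0;
    int found = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <path-on-brick>\n", argv[0]);
        return 1;
    }

    len = llistxattr(argv[1], list, sizeof(list));
    if (len < 0) {
        perror("llistxattr");
        return 1;
    }

    /* The xattr list is a sequence of NUL-terminated names. */
    while (off < len) {
        const char *name = list + off;

        if (strncmp(name, "trusted.afr.", strlen("trusted.afr.")) == 0) {
            printf("pending xattr present: %s\n", name);
            found = 1;
        }
        off += strlen(name) + 1;
    }

    if (!found)
        printf("no trusted.afr.* xattrs on %s (which, as noted above, does not "
               "by itself prove the bricks are in sync)\n", argv[1]);
    return 0;
}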

> I don't know how complex it would be to do such a check, but would
> like to know if its possible.
> 
> In anycase, I'm okay with the suggested volume online check in
> GlusterD. I'll review the change and let you know if anything else is
> needed.
> 
> ~kaushal
> >
> > --
> > Thanks,
> > Anuradha.
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 

-- 
Thanks,
Anuradha.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-23 Thread Jeff Darcy
> Recently while doing some tests (which involved lots of inode_forget()),
> I have noticed that my log file got flooded with below messages -
> 
> [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->
> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d]
> (-->
> /usr/local/lib/libglusterfs.so.0(default_forget+0x44)[0x7fd00f6cda2b]
> (--> /usr/local/lib/libglusterfs.so.0(+0x39706)[0x7fd00f64b706] (-->
> /usr/local/lib/libglusterfs.so.0(+0x397d2)[0x7fd00f64b7d2] (-->
> /usr/local/lib/libglusterfs.so.0(+0x3be08)[0x7fd00f64de08] )
> 0-gfapi: xlator does not implement forget_cbk
> 
> From the code, it looks like we throw a warning in default-tmpl.c if any
> xlator hasn't implemented forget(), releasedir() and release().
> 
> Though I agree it warns us about possible leaks which may happen if
> these fops are not supported, it is annoying to have these messages
> flood the log file, which grew to >1GB within a few minutes.
> 
> Could you please confirm whether it was intentional to throw this warning
> so that all xlators implement these fops, or whether we can change
> the log level to DEBUG?

It is intentional, and I would prefer that it be resolved by having
translators implement these calls, but it doesn't need to be a warning.
DEBUG would be fine.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 60 minutes)

2016-02-23 Thread Manikandan Selvaganesh
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
 (https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you :-)

--
Thanks & Regards,
Manikandan Selvaganesh.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gerrit search bookmarks that work in firefox too

2016-02-23 Thread Raghavendra Talur
Hi All,

I use these two Gerrit bookmarks for easy searching; I just wanted to share
them on the mailing list since many were interested in them.


*Patches that have passed all tests:*
http://review.gluster.org/#/q/status:open+label:CentOS-regression%2B1+AND+label:NetBSD-regression%2B1+AND+label:smoke%2B1


*Patches that have passed all tests and are waiting for my review:*
http://review.gluster.org/#/q/status:open+label:CentOS-regression%2B1+AND+label:NetBSD-regression%2B1+AND+label:smoke%2B1+AND+reviewer:self

Thanks,
Raghavendra Talur
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel