Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-18 Thread Pranith Kumar Karampuri
Sent the following patch to remove the special treatment of ENOTSUP here: 
http://review.gluster.org/7788

Pranith
- Original Message -
 From: Kaleb KEITHLEY kkeit...@redhat.com
 To: gluster-devel@gluster.org
 Sent: Tuesday, May 13, 2014 8:01:53 PM
 Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for   
 setxattr
 
 On 05/13/2014 08:00 AM, Nagaprasad Sathyanarayana wrote:
  On 05/07/2014 03:44 PM, Pranith Kumar Karampuri wrote:
 
  - Original Message -
  From: Raghavendra Gowdappa rgowd...@redhat.com
  To: Pranith Kumar Karampuri pkara...@redhat.com
  Cc: Vijay Bellur vbel...@redhat.com, gluster-devel@gluster.org,
  Anand Avati aav...@redhat.com
  Sent: Wednesday, May 7, 2014 3:42:16 PM
  Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP
  for setxattr
 
  I think with the repetitive-log-message suppression patch merged, we
  don't really need gf_log_occasionally (except for messages logged at
  DEBUG or TRACE levels).
  That definitely helps. But still, setxattr calls are not supposed to
  fail with ENOTSUP on filesystems where we support gluster. If there are
  special keys which fail with ENOTSUP, could we conditionally log setxattr
  failures only when the key is something new?
 
 I know this is about EOPNOTSUPP (a.k.a. ENOTSUP) returned by
 setxattr(2) for legitimate attrs.
 
 But I can't help wondering if this isn't related to other bugs we've
 had with, e.g., lgetxattr(2) being called on invalid xattrs.
 
 E.g. see https://bugzilla.redhat.com/show_bug.cgi?id=765202. We have a
 hack where xlators communicate with each other by getting (and setting?)
 invalid xattrs; the posix xlator has logic to filter out invalid
 xattrs, but due to bugs this hasn't always worked perfectly.
 
 It would be interesting to know which xattrs are getting errors and on
 which fs types.
 
 FWIW, in a quick perusal of a fairly recent (3.14.3) kernel: in xfs
 there are only six places where EOPNOTSUPP is returned, none of them
 related to xattrs. In ext[34], EOPNOTSUPP can be returned if the
 user_xattr mount option is not enabled (it is enabled by default in
 ext4). And in the higher-level VFS xattr code there are many places
 where EOPNOTSUPP _might_ be returned, primarily when the subordinate
 functions that would clear the default or return a different error are
 not invoked.
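 For reference, here is one quick way to check what a given filesystem
 actually returns for a user xattr (an illustrative sketch, not gluster
 code; it assumes the attr package's setfattr utility is installed, and
 "user.probe" is an arbitrary key chosen for the demo):

```shell
#!/bin/sh
# probe_user_xattr <dir>: create a scratch file under <dir> and try to set
# a user xattr on it. Returns 0 if user xattrs work, 1 if the filesystem
# reports "Operation not supported" (ENOTSUP/EOPNOTSUPP), 2 otherwise.
probe_user_xattr () {
    _p="$1/.xattr_probe.$$"
    _e="$1/.xattr_err.$$"
    touch "$_p"
    if setfattr -n user.probe -v 1 "$_p" 2>"$_e"; then
        _rc=0
    elif grep -qi "not supported" "$_e"; then
        _rc=1
    else
        _rc=2          # some other error (permissions, missing setfattr, ...)
    fi
    rm -f "$_p" "$_e"
    return $_rc
}

probe_user_xattr . ; echo "probe returned $?"
```

 Running this on each brick filesystem would show directly which fs types
 hand back ENOTSUP for legitimate user attrs.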
 
 --
 
 Kaleb
 
 
 
 
 
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
 


Re: [Gluster-devel] Regression tests: Should we test non-XFS too?

2014-05-18 Thread Dan Mons
On 15 May 2014 14:35, Ric Wheeler rwhee...@redhat.com wrote:

 it is up to those developers and users to test their preferred combination.


Not sure if this was quoting me or someone else.  BtrFS is in-tree for
most distros these days, and RHEL is including it as a technology
preview in 7, which likely means it'll be supported in a point
release down the road.  My question was merely whether that's
going to be a bigger testing emphasis for Gluster.org folks going
forward, or whether XFS is going to remain the default/recommended
filesystem for a lot longer yet.

If the answer is "it depends on our customers' needs", then put me
down as one who needs something better than XFS.  I'll happily put in
the hard yards to test BtrFS with GlusterFS, but at the same time I'm
keen to know whether that's a wise use of my time or a complete waste,
if I'm deviating too far from what Red Hat/Gluster.org is planning
on blessing in the future.


 The reason to look at either ZFS or btrfs is not really performance driven
 in most cases.


"Performance" means different things to different people.  For me,
part of XFS's production performance is how frequently I need to
xfs_repair my 40TB bricks.  BtrFS/ZFS drastically reduce this sort of
thing thanks to checksumming properties not native to other
current filesystems.

When I average my MB/s over 6 months in a 24x7 business, a weekend-long
outage required to xfs_repair my entire cluster has as much impact
(potentially even more) as a filesystem with slower file IO
performance.

XFS is great when it works.  When it doesn't, there are tears and
tantrums.  Over the course of a production year, that all impacts
performance when the resolution of my Munin graphs is that low.

-Dan


Re: [Gluster-devel] Changes to Regression script

2014-05-18 Thread Pranith Kumar Karampuri


- Original Message -
 From: Vijay Bellur vbel...@redhat.com
 To: Pranith Kumar Karampuri pkara...@redhat.com
 Cc: gluster-infra gluster-in...@gluster.org, gluster-devel@gluster.org
 Sent: Saturday, 17 May, 2014 2:52:03 PM
 Subject: Re: [Gluster-devel] Changes to Regression script
 
 On 05/17/2014 02:10 PM, Pranith Kumar Karampuri wrote:
 
 
  - Original Message -
  From: Vijay Bellur vbel...@redhat.com
  To: gluster-infra gluster-in...@gluster.org
  Cc: gluster-devel@gluster.org
  Sent: Tuesday, May 13, 2014 4:13:02 PM
  Subject: [Gluster-devel] Changes to Regression script
 
  Hi All,
 
   Kaushal and I have made the following changes to regression.sh on
   build.gluster.org:
 
   1. If a regression run produces a core, that run will be flagged as a
   failure even if all tests pass. Previously, a run was marked as a
   failure only when the core caused test failures.
 
   2. Cores from a particular test run are now archived and are available
   at /d/archived_builds/. This also removes the need for manual
   intervention to manage cores.
 
  3. Logs from failed regression runs are now archived and are available
  at /d/logs/glusterfs-timestamp.tgz
 
  Do let us know if you have any comments on these changes.
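
   For illustration, the core rule in item 1 could look roughly like this
   inside a regression wrapper (a hedged sketch, not the actual
   regression.sh; the directory scanned and the core* name pattern are
   assumptions):

```shell
#!/bin/sh
# check_for_cores <dir>: return 1 (failure) if any core files exist under
# <dir>, even when every test passed; the run is then flagged as failed.
check_for_cores () {
    cores=$(find "$1" -maxdepth 1 -type f -name 'core*' 2>/dev/null)
    if [ -n "$cores" ]; then
        echo "Core files found; marking run as failure:"
        echo "$cores"
        return 1
    fi
    return 0
}

# Hypothetical usage at the end of a run:
#   run_all_tests; tests_rc=$?
#   check_for_cores /  || exit 1
#   exit $tests_rc
```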
 
   This is already proving to be useful :-). I was able to debug one of the
   spurious failures in crypt.t. The only problem is that I was not able to
   copy out the logs; I had to take Avati's help to get the log files. Would
   it be possible to give access to these files so that anyone can download
   them?
 
 
 Good to know!
 
 You can access the .tgz files from:
 
 http://build.gluster.org:443/logs/

I was able to access these yesterday. But now it gives 404.

Pranith
 
 -Vijay
 
 


Re: [Gluster-devel] Changes to Regression script

2014-05-18 Thread Vijay Bellur

On 05/19/2014 09:41 AM, Pranith Kumar Karampuri wrote:



I was able to access these yesterday. But now it gives 404.



Fixed.

-Vijay



Re: [Gluster-devel] Spurious failures because of nfs and snapshots

2014-05-18 Thread Pranith Kumar Karampuri


- Original Message -
 From: Justin Clift jus...@gluster.org
 To: Pranith Kumar Karampuri pkara...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org
 Sent: Monday, 19 May, 2014 10:26:04 AM
 Subject: Re: [Gluster-devel] Spurious failures because of nfs and snapshots
 
 On 16/05/2014, at 1:49 AM, Pranith Kumar Karampuri wrote:
  hi,
 The latest build I fired for review.gluster.com/7766
 (http://build.gluster.org/job/regression/4443/console) failed because
 of a spurious failure: the script doesn't wait for the nfs export to be
 available. I fixed that, but interestingly I found quite a few scripts
 with the same problem. Some of the scripts rely on 'sleep 5', which
 could also lead to spurious failures if the export is not available
 within 5 seconds.
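
 The fix amounts to polling for the condition instead of sleeping a
 fixed 5 seconds. A generic helper of that shape might look like this
 (an illustrative sketch, not the merged patch; $H0 and $V0 in the
 comment are the host/volume variables commonly used by the test
 harness, assumed here):

```shell
#!/bin/sh
# wait_until <seconds> <command ...>: re-run <command> once per second
# until it succeeds or <seconds> elapse. Returns 0 on success, 1 on
# timeout.
wait_until () {
    _t="$1"; shift
    while [ "$_t" -gt 0 ]; do
        if "$@"; then
            return 0
        fi
        sleep 1
        _t=$((_t - 1))
    done
    return 1
}

# In a .t script, instead of "sleep 5" one could then write e.g.:
#   wait_until 20 sh -c "showmount -e $H0 | grep -q $V0"
```

 This bounds the wait (so a dead export still fails the test) while
 returning as soon as the export actually shows up.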
 
 Cool.  Fixing this NFS problem across all of the tests would be really
 welcome.  That specific failed test (bug-1087198.t) is the most common
 one I've seen over the last few weeks, causing about half of all
 failures in master.
 
 Eliminating this class of regression failure would be really helpful. :)

This particular class has been eliminated :-). The patch was merged on Friday.

Pranith
 
 + Justin
 
 --
 Open Source and Standards @ Red Hat
 
 twitter.com/realjustinclift
 
 