Re: [Gluster-devel] On knowledge transfer of some of the components

2017-08-20 Thread Raghavendra Gowdappa
+gluster-devel

I am thinking of covering the following topics:

* Resolution of path to inodes 
  - named vs nameless lookups, 
  - fresh vs revalidate lookups, 
  - what kinds of lookups can happen on the various interfaces like fuse, aux-gfid 
mounts and gfapi, and a high-level discussion of how they need to be handled 
(e.g., in dht)
  - anonymous fds, 
  - gfid/entry resolution in fuse/brick, 
  - state in fd/inode,
  - inode table management - fuse, bricks (with brick-multiplexing), gfapi
  - impact of graph switches, 
  - healing.

* performance xlators (read-caching) - read-ahead, io-cache, quick-read, 
open-behind
* performance xlators (write-caching) - write-behind
* performance xlators (dentry prefetching) - readdir-ahead
* rpc/transport. I had already given a talk on this. Will go through the 
recording to check whether I've missed anything, and will schedule a talk on 
this if new content needs to be added.
* Different ways concurrency/parallelism is introduced in GlusterFS - 
stack-wind/unwind, multithreads, synctasks (a minimal wind/unwind sketch 
follows this list)
  - A brief overview of the control flow of a fop across various threads like the 
/dev/fuse reader thread, event-threads, io-threads, etc.
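
To make the stack-wind/unwind point concrete, below is a minimal sketch of a
pass-through xlator handling the open fop. The header path and exact
prototypes here follow the 3.x tree and may differ slightly across releases,
so please treat it as an illustration rather than a drop-in translator:

    /* Illustrative pass-through handling of the open() fop, showing the
     * wind/unwind continuation style. Sketch only; not a complete xlator. */
    #include <glusterfs/xlator.h>   /* assumed installed header location */

    int32_t
    example_open_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno, fd_t *fd,
                      dict_t *xdata)
    {
            /* The child's reply lands here, often on a different thread
             * (event thread, io-thread, ...); unwind passes it back up. */
            STACK_UNWIND_STRICT (open, frame, op_ret, op_errno, fd, xdata);
            return 0;
    }

    int32_t
    example_open (call_frame_t *frame, xlator_t *this, loc_t *loc,
                  int32_t flags, fd_t *fd, dict_t *xdata)
    {
            /* Wind the call down to the next xlator and return immediately;
             * example_open_cbk is the continuation. */
            STACK_WIND (frame, example_open_cbk, FIRST_CHILD (this),
                        FIRST_CHILD (this)->fops->open, loc, flags, fd,
                        xdata);
            return 0;
    }

The point the session will elaborate on is that STACK_WIND returns
immediately and the callback can run on a different thread, which is where
much of the concurrency in the fop path comes from.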

If I've missed out any topics, please let me know and, if it is in my area of 
expertise, I will add it to the list. If any of you would like to chime in on 
different topics you are welcome :) and we can work out a schedule.

I am thinking of scheduling a talk of one to one and a half hours every 
Tuesday at 7:30 pm IST. I will send out the actual dates/schedule if this time 
works for the majority. I am open to alternative time slots too.

regards,
Raghavendra
- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Rafi" , "Nithya Balachandran" 
> , "Kotresh Hiremath Ravishankar"
> , "Sanoj Unnikrishnan" , "Milind 
> Changire" , "Csaba
> Henk" , "Mohit Agrawal" , "Krutika 
> Dhananjay" 
> Cc: "GlusterFS Maintainers" 
> Sent: Tuesday, July 25, 2017 10:13:41 AM
> Subject: On knowledge transfer of some of the components
> 
> Hi all,
> 
> Each one of you has been mentioned as a peer for one or more components I am
> a maintainer of [1] and is relatively new to that component. So, we need to
> come up with ways to do effective knowledge transfer, to enable you
> to take independent decisions on the issues concerned. Some of the ways I can
> think of are:
> 
> From me,
> * giving a high level architectural overview
> * giving a code walk-through explaining idiosyncrasies
> 
> From you,
> * scouring through bugzilla/mailing lists for issues on individual
> components and finding RCAs and fixes. I can help you prioritize and with
> discussions to the best of my capacity.
> * reading/changing/thinking about code and architecture. If the component is
> active, code reviewing is definitely a good way to start and to keep
> informed about the component.
> * identifying weak points and suggesting improvements to the component. IOW,
> charting a roadmap.
> 
> I would like to hear from you on how to go about this exercise. Suggestions
> and help with logistics of organizing talks/sessions (if necessary) are
> welcome.
> 
> [1] https://review.gluster.org/17583
> 
> regards,
> Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How commonly applications make use of fadvise?

2017-08-20 Thread Raghavendra G
On Sat, Aug 19, 2017 at 4:27 PM, Csaba Henk  wrote:

> Hi Niels,
>
> On Fri, Aug 11, 2017 at 2:33 PM, Niels de Vos  wrote:
> > On Fri, Aug 11, 2017 at 05:50:47PM +0530, Ravishankar N wrote:
> [...]
> >> To me it looks like fadvise (mm/fadvise.c) affects only the Linux page
> >> cache behavior and is decoupled from the filesystem itself. What this
> >> means for fuse is that the 'advice' applies only to the content that the
> >> fuse kernel module has stored in that machine's page cache. Exposing it
> >> as a FOP would likely involve adding a new fop to struct file_operations
> >> that is common across the entire VFS and likely won't fly with the
> >> kernel folks. I could be wrong in understanding all of this. :-)
> >
> > Thanks for checking! If that is the case, we need a good use-case to add
> > a fadvise function pointer to the file_operations. It is not impossible
> > to convince the Linux VFS developers, but it would not be as trivial as
> > adding it to FUSE only (but that requires the VFS infrastructure to be
> > there).
>
> Well, the question is: do the strategies of the caching xlators map well to
> the POSIX_FADV_* hint set? Would an application that might run on
> a GlusterFS storage backend use fadvise(2) anyway, or would fadvise calls
> be added particularly to optimize the GlusterFS-backed scenario?
>

+Pranith, +ravi.

If I am not wrong, afr too has strategies like eager-locking that apply when
writes are sequential. Wondering whether afr can benefit from a feature like this.
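
For reference, the hint set in question is the standard posix_fadvise(2)
interface. A minimal, self-contained sketch (plain POSIX, nothing
GlusterFS-specific) of an application declaring sequential access -- the kind
of pattern read-ahead or afr's eager-locking could in principle exploit if
the hint were propagated down the stack as a fop -- would look like:

    /* Sketch: advise the kernel that a file will be read sequentially.
     * posix_fadvise() returns an errno value on failure (it does not set
     * errno). */
    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main (int argc, char *argv[])
    {
            if (argc < 2) {
                    fprintf (stderr, "usage: %s <file>\n", argv[0]);
                    return 1;
            }

            int fd = open (argv[1], O_RDONLY);
            if (fd < 0) {
                    perror ("open");
                    return 1;
            }

            /* offset 0, len 0 => the advice applies to the whole file. */
            int err = posix_fadvise (fd, 0, 0, POSIX_FADV_SEQUENTIAL);
            if (err != 0)
                    fprintf (stderr, "posix_fadvise: %s\n", strerror (err));

            /* ... sequential reads of the file would follow here ... */

            close (fd);
            return 0;
    }

Today such a hint only influences the kernel page cache on the client, as
noted above; the open question is whether carrying it into the xlator stack
would pay off.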


> Because if usage of fadvise were specifically to address the GlusterFS
> backend -- either because of specific semantics or specific behavior -- then
> I don't see much point in force-fitting this kind of tuning into the fadvise
> syscall. We could just as well operate via xattrs then.
>
> Csaba
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Weekly Untriaged Bugs

2017-08-20 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1476653 / core: cassandra fails on gluster-block 
with both replicate and ec volumes
https://bugzilla.redhat.com/1478411 / core: Directory listings on fuse mount 
are very slow due to small number of getdents() entries
https://bugzilla.redhat.com/1477404 / core: eager-lock should be off for 
cassandra to work at the moment
https://bugzilla.redhat.com/1477405 / core: eager-lock should be off for 
cassandra to work at the moment
https://bugzilla.redhat.com/1476654 / core: gluster-block default shard-size 
should be 64MB
https://bugzilla.redhat.com/1477014 / encryption-xlator: Using encryption, 
write and read a small file, then mess code returns
https://bugzilla.redhat.com/1474798 / fuse: File Corruption Occurs with 
upgraded Client
https://bugzilla.redhat.com/1476992 / fuse: inode table lru list leak with 
glusterfs fuse mount
https://bugzilla.redhat.com/1482877 / geo-replication: Extended attributes not 
supported by the backend storage
https://bugzilla.redhat.com/1480516 / glusterd: Gluster Bricks are not coming 
up after pod restart when bmux is ON
https://bugzilla.redhat.com/1483058 / glusterd: [quorum]: Replace brick is 
happened  when Quorum not met.
https://bugzilla.redhat.com/1475635 / io-cache: [Scale] : Client logs flooded 
with "inode context is NULL" error messages
https://bugzilla.redhat.com/1475637 / io-cache: [Scale] : Client logs flooded 
with "inode context is NULL" error messages
https://bugzilla.redhat.com/1475638 / io-cache: [Scale] : Client logs flooded 
with "inode context is NULL" error messages
https://bugzilla.redhat.com/1476295 / md-cache: md-cache uses incorrect xattr 
keynames for GF_POSIX_ACL keys
https://bugzilla.redhat.com/1476324 / md-cache: md-cache: xattr values should 
not be checked with string functions
https://bugzilla.redhat.com/1480653 / object-storage: Worm File Level policy 
won't work with Gluster-SWIFT
https://bugzilla.redhat.com/1480507 / project-infrastructure: tests: 
Pre-requisite setup to run geo-rep test case on regression machines.
https://bugzilla.redhat.com/1479608 / rpc: rpc_transport_inet_options_build() 
does not support IPv6
https://bugzilla.redhat.com/1476957 / tests: peer-parsing.t fails on NetBSD
https://bugzilla.redhat.com/1480788 / unclassified: File-level WORM allows mv 
over read-only files
[...truncated 2 lines...]

build.log
Description: Binary data
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] gluster --version when built from git

2017-08-20 Thread mabi
Hi,

I have manually compiled GlusterFS 3.8.14 from git 
(https://github.com/gluster/glusterfs/archive/v3.8.14.tar.gz) on my Raspberry 
Pi and noticed that when I run "gluster --version" I do not see 3.8.14 as the 
version number but the build date instead, as you can see below:

$ gluster --version
glusterfs  built on Aug  4 2017 11:00:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

How can I build GlusterFS so that running "gluster --version" shows the 
version number "3.8.14" instead of the build date? Is there a configure 
option for that maybe?

Regards,
Mabi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel