[Gluster-devel] update on experimental branch rebase

2018-01-28 Thread Amar Tumballi
As the release-4.0 version branching is now done, I thought it is a good time
to refresh the experimental branch too.

Below is the difference between master and experimental right now.

---
[atumball@local glusterfs]$ git log origin/master... --oneline
44adf5c protocol: utilize the version 4 xdr
b87ecc0 rio: Added (s)setattr and (f)truncate support
76c1160 rio/posix2: Cleanup unused files
19973fa rio: Add iatt cleanup for iatt returned from DS
528f830 rio: Add ability to handle dirty inodes in MDS FOPs
a3221ff rio: Reorganize RIO server code for DS operations
ff8c878 rio/posix2: Add DS FOP support
403f4cd posix2: Reorganize posix2 in preparation for DS FOPs
d4f1f8d rio: Add layout search functionality for DS
9db5c42 rio: Add ability to handle remote inodes in lookup
ddc3e26 rio: Add mkdir FOP
c38a583 rio: Added client FOP generation code and create code
c6b021e posix2: fix ./tests/basic/0symbol-check.t
1e0f86b rio/posix2: Some generic fixes as the code is excercised
0112ee4 rio/posix2: Include posix inode/fd ops, implement entry ops
2788033 experimental/rio: Script to generate rio volfiles
d67dafb experimental/rio: RIO initialization and layout abstraction
d83cef5 snapshot/snapview-client : redefine options for GD2
a85213c Add new fields to translator options(quota and marker) for GD2
---

Let me know if you have any questions.

Regards,
Amar

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] update on experimental branch rebase

2018-01-28 Thread Shyam Ranganathan
On 01/28/2018 08:29 AM, Amar Tumballi wrote:
> As the release-4.0 version branching is now done, I thought it is a good
> time to refresh the experimental branch too.

On Friday I discovered that RIO in experimental was broken; this is
fixed up in this patch: https://review.gluster.org/#/c/19354/1

So, if this is merged before rebasing to master, it would help us (if
not, it is fine, we can resubmit).

Otherwise, no issues from the RIO perspective; things should merge fine this
time around, as we got the POSIX reorganization changes into master as well.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-28 Thread Pranith Kumar Karampuri
+Ravi, +Raghavendra G

On 25 Jan 2018 8:49 am, "Pranith Kumar Karampuri" wrote:

>
>
> On 25 Jan 2018 8:43 am, "Lian, George (NSB - CN/Hangzhou)" <george.l...@nokia-sbell.com> wrote:
>
> Hi,
>
> I suppose the zero-filled attributes are a performance consideration for NFS,
> but for FUSE they lead to issues such as with the hard LINK FOP.
> So I suggest we add two attribute fields at the end of "struct iatt {", such
> as ia_fuse_nlink and ia_fuse_ctime.
> In gf_zero_fill_stat(), save ia_nlink and ia_ctime into ia_fuse_nlink and
> ia_fuse_ctime before setting them to zero, and restore the saved nlink and
> ctime in gf_fuse_stat2attr(), so that the kernel gets the correct nlink and
> ctime.
>
> Is this a workable solution? Are there any risks?
>
> Please share your comments, thanks in advance!
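
A minimal sketch of the save/restore idea described above (hypothetical struct
and helper names only; the real struct iatt and gf_fuse_stat2attr in
libglusterfs have more fields and different signatures):

#include <stdint.h>

/* Sketch only: preserve nlink/ctime in extra fields when zeroing the stat,
 * and put them back on the FUSE reply path so the kernel sees real values. */
struct iatt_sketch {
        uint32_t ia_nlink;
        int64_t  ia_ctime;
        /* ... other existing iatt fields ... */
        uint32_t ia_fuse_nlink;   /* proposed: saved copy for FUSE */
        int64_t  ia_fuse_ctime;   /* proposed: saved copy for FUSE */
};

static void
zero_fill_stat_sketch (struct iatt_sketch *buf)
{
        buf->ia_fuse_nlink = buf->ia_nlink;   /* save before zeroing */
        buf->ia_fuse_ctime = buf->ia_ctime;
        buf->ia_nlink = 0;
        buf->ia_ctime = 0;
}

static void
fuse_stat2attr_sketch (struct iatt_sketch *buf)
{
        /* restore the saved values so the kernel gets correct nlink/ctime */
        if (buf->ia_nlink == 0 && buf->ia_ctime == 0) {
                buf->ia_nlink = buf->ia_fuse_nlink;
                buf->ia_ctime = buf->ia_fuse_ctime;
        }
}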
>
>
> Adding csaba for helping with this.
>
>
> Best Regards,
> George
>
> -----Original Message-----
> From: gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@gluster.org] On Behalf Of Niels de Vos
> Sent: Wednesday, January 24, 2018 7:43 PM
> To: Pranith Kumar Karampuri
> Cc: Lian, George (NSB - CN/Hangzhou); Zhou, Cynthia (NSB - CN/Hangzhou); Li, Deqian (NSB - CN/Hangzhou); Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou)
> Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix "Don't let NFS cache stat after writes"
>
> On Wed, Jan 24, 2018 at 02:24:06PM +0530, Pranith Kumar Karampuri wrote:
> > hi,
> >In the same commit you mentioned earlier, there was this code
> > earlier:
> > -/* Returns 1 if the stat seems to be filled with zeroes. */
> > -int
> > -nfs_zero_filled_stat (struct iatt *buf)
> > -{
> > -        if (!buf)
> > -                return 1;
> > -
> > -        /* Do not use st_dev because it is transformed to store the xlator id
> > -         * in place of the device number. Do not use st_ino because by this time
> > -         * we've already mapped the root ino to 1 so it is not guaranteed to be
> > -         * 0.
> > -         */
> > -        if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> > -                return 1;
> > -
> > -        return 0;
> > -}
> > -
> > -
> >
> > I moved this to a common library function that can be used in afr as
> well.
> > Why was it there in NFS? +Niels for answering that question.
>
> Sorry, I don't know why that was done. It was introduced with the initial
> gNFS implementation, long before I started to work with Gluster. The only
> reference I have is this from
> xlators/nfs/server/src/nfs3-helpers.c:nfs3_stat_to_post_op_attr()
>
>  371         /* Some performance translators return zero-filled stats when they
>  372          * do not have up-to-date attributes. Need to handle this by not
>  373          * returning these zeroed out attrs.
>  374          */
>
> This may not be true for the current situation anymore.
>
> HTH,
> Niels
>
>
> >
> > If I give you a patch which will assert the error condition, would it
> > be possible for you to figure out the first xlator which is unwinding
> > the iatt with nlink count as zero but ctime as non-zero?
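
A minimal sketch of what such an assertion could look like (hypothetical
stand-in types and helper only; an actual patch would hook into the real
iatt unwind paths of each xlator):

#include <assert.h>
#include <stdio.h>
#include <stdint.h>

/* Stand-in for struct iatt: only the two fields relevant to this check. */
struct iatt_dbg {
        uint32_t ia_nlink;
        int64_t  ia_ctime;
};

/* Call from the unwind path of each suspect xlator: abort when a stat is
 * unwound with nlink == 0 but a non-zero ctime, so the backtrace shows the
 * first xlator producing the bad iatt. */
static void
check_unwound_iatt (const char *xlator_name, const struct iatt_dbg *buf)
{
        if (buf && buf->ia_nlink == 0 && buf->ia_ctime != 0) {
                fprintf (stderr, "bad iatt from %s: nlink=0 ctime=%lld\n",
                         xlator_name, (long long) buf->ia_ctime);
                assert (0);
        }
}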
> >
> > On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
> > george.l...@nokia-sbell.com> wrote:
> >
> > > Hi,  Pranith Kumar,
> > >
> > >
> > >
> > > Can you tell me why buf->ia_nlink needs to be set to “0” in
> > > gf_zero_fill_stat(), and which API or application depends on it?
> > >
> > > If I remove this line and also update gf_is_zero_filled_stat()
> > > correspondingly, the issue seems to be gone, but I can’t confirm whether
> > > this will lead to other issues.
> > >
> > >
> > >
> > > So could you please double check it and give your comments?
> > >
> > >
> > >
> > > My change is as below:
> > >
> > > gf_boolean_t
> > > gf_is_zero_filled_stat (struct iatt *buf)
> > > {
> > >         if (!buf)
> > >                 return 1;
> > >
> > >         /* Do not use st_dev because it is transformed to store the xlator id
> > >          * in place of the device number. Do not use st_ino because by this time
> > >          * we've already mapped the root ino to 1 so it is not guaranteed to be
> > >          * 0.
> > >          */
> > >         //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> > >         if (buf->ia_ctime == 0)
> > >                 return 1;
> > >
> > >         return 0;
> > > }
> > >
> > > void
> > > gf_zero_fill_stat (struct iatt *buf)
> > > {
> > >         //buf->ia_nlink = 0;
> > >         buf->ia_ctime = 0;
> > > }
> > >
> > >
> > >
> > > Thanks & Best Regards
> > >
> > > George
> > >
> > > *From:* Lian, George (NSB - CN/Hangzhou)
> > > *Sent:* Friday, January 19, 2018 10:03 AM
> > > *To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB - CN/Hangzhou)
> > > *Cc:* Li, Deqian (NSB - CN/Hangzhou) ; Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Han

[Gluster-devel] Weekly Untriaged Bugs

2018-01-28 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1533046 / access-control: ACLs - permission denied
https://bugzilla.redhat.com/1531131 / access-control: Connexion refused with port 22
https://bugzilla.redhat.com/1536908 / build: gluster-block build as "fatal error: api/glfs.h: No such file or directory"
https://bugzilla.redhat.com/1531987 / build: increment of a boolean expression warning
https://bugzilla.redhat.com/1535495 / cli: Add option -h and --help to gluster cli
https://bugzilla.redhat.com/1535511 / cli: Gluster CLI shouldn't stop if log file couldn't be opened
https://bugzilla.redhat.com/1535528 / cli: Gluster cli show no help message in prompt
https://bugzilla.redhat.com/1535522 / cli: "gluster volume" cli message is not very helpful
https://bugzilla.redhat.com/1536913 / cli: tests/bugs/cli/bug-822830.t fails on Centos 7 and locally
https://bugzilla.redhat.com/1531407 / core: dict data type mismatches in the logs
https://bugzilla.redhat.com/1532192 / core: memory leak in glusterfsd process
https://bugzilla.redhat.com/1537602 / geo-replication: Georeplication tests intermittently fail
https://bugzilla.redhat.com/1535526 / glusterfind: glusterfind : wrong results while retrieving incremental list of files modified after last run
https://bugzilla.redhat.com/1534452 / libgfapi: Reading over than the file size on dispersed volume
https://bugzilla.redhat.com/1534453 / libgfapi: Reading over than the file size on dispersed volume
https://bugzilla.redhat.com/1534403 / libgfapi: Severe filesystem corruption with virtio and sharding. 100% reproducible
https://bugzilla.redhat.com/1536952 / project-infrastructure: build: add libcurl package to regression machines
https://bugzilla.redhat.com/1530111 / project-infrastructure: Logs unavailable if a regression run was aborted
https://bugzilla.redhat.com/1535776 / project-infrastructure: Regression job went missing
https://bugzilla.redhat.com/1538900 / protocol: Found a missing unref in rpc_clnt_reconnect
https://bugzilla.redhat.com/1536257 / replicate: Take full lock on files in 3 way replication
https://bugzilla.redhat.com/1538978 / rpc: rpcsvc_request_handler thread should be made multithreaded
https://bugzilla.redhat.com/1532868 / sharding: gluster upgrade causes vm disk errors
https://bugzilla.redhat.com/1533815 / tests: Mark ./tests/basic/ec/heal-info.t as bad
https://bugzilla.redhat.com/1532112 / tests: Test case ./tests/bugs/bug-1371806_1.t is failing
https://bugzilla.redhat.com/1534327 / tiering: After reboot a node which has hot brick, it's Tier Daemon TCP port will lost.
https://bugzilla.redhat.com/1536733 / unclassified: Add support for archive xlator.
[...truncated 2 lines...]

build.log
Description: Binary data
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Rafi KC attending DevConf and FOSDEM

2018-01-28 Thread Raghavendra G
On Fri, Jan 26, 2018 at 7:27 PM, Niels de Vos  wrote:

> On Fri, Jan 26, 2018 at 06:24:36PM +0530, Mohammed Rafi K C wrote:
> > Hi All,
> >
> > I'm attending both DevConf (25-28) and Fosdem (3-4). If any of you are
> > attending the conferences and would like to talk about gluster, please
> > feel free to ping me through irc nick rafi on freenode or message me on
> > +436649795838
>
> In addition to that at FOSDEM, there is a Gluster stand (+Ceph, and next
> to oVirt) on level 1 (ground floor) of the K building[0]. We'll try to
> have some of the developers and other contributors to the project around
> at all times. Come and talk to us about your use-cases, questions and
> words of encouragement ;-)
>
> There are several talks related to Gluster too! On Saturday there is
> "Optimizing Software Defined Storage for the Age of Flash" [1],



Thanks, Niels, for that.

Manoj, Krutika, and I will be at FOSDEM '18 (3rd and 4th Feb 2018). We would
be happy to chat with you about anything related to glusterfs :). Hopefully
we'll have some interesting results to share with you in the talk! Please
do plan to attend it if possible.

> and on
> Sunday the Software Defined Storage DevRoom has scheduled many more.
>
> Hope to see you there!
> Niels
>
>
> 0. https://fosdem.org/2018/schedule/buildings/#k
> 1. https://fosdem.org/2018/schedule/event/optimizing_sds/
> 2. https://fosdem.org/2018/schedule/track/software_defined_storage/
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel