Re: [Gluster-devel] query about why glustershd can not afr_selfheal_recreate_entry because of "afr: Prevent null gfids in self-heal entry re-creation"

2018-01-15 Thread Ravishankar N

+ gluster-devel


On 01/15/2018 01:41 PM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote:

Hi glusterfs experts,
    Good day,
    While doing some tests of glusterfs self-heal I see the following
prints, which show that when a dir/file's type gets corrupted it cannot be self-healed.
Could you help check whether this is expected behavior? I find that the
code change https://review.gluster.org/#/c/17981/ adds a
check for iatt->ia_type, so what if a file's ia_type gets
corrupted? In that case it can never be self-healed?


Yes, without knowing the ia_type, afr_selfheal_recreate_entry() cannot 
decide what type of FOP (mkdir/link/mknod) to perform to create the 
appropriate file on the sink. You would need to find out why the source 
brick is not returning a valid ia_type, i.e. why replies[source].poststat 
is not valid.

Thanks,
Ravi
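
[Editorial illustration] Below is a minimal standalone sketch -- a simplification
with made-up helper names, not the actual GlusterFS source -- of the dispatch on
ia_type that afr_selfheal_recreate_entry() performs, together with the kind of
null-gfid/invalid-type guard added by https://review.gluster.org/#/c/17981/:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the GlusterFS types -- illustration only. */
typedef enum { IA_INVAL = 0, IA_IFREG, IA_IFDIR, IA_IFLNK } ia_type_t;

typedef struct {
        ia_type_t ia_type;
        uint8_t   ia_gfid[16];
} iatt_t;

static int gfid_is_null (const uint8_t *gfid)
{
        static const uint8_t zero[16];
        return memcmp (gfid, zero, sizeof (zero)) == 0;
}

/* Which FOP would be used to recreate the entry on the sink brick. */
static const char *recreate_fop (const iatt_t *iatt)
{
        /* Guard corresponding to "afr: Prevent null gfids in self-heal
         * entry re-creation": refuse to heal when the source reply
         * carries no usable type or gfid. */
        if (iatt->ia_type == IA_INVAL || gfid_is_null (iatt->ia_gfid))
                return NULL;

        switch (iatt->ia_type) {
        case IA_IFDIR:
                return "mkdir";
        case IA_IFLNK:
                return "symlink (after readlink on the source)";
        default:
                return "mknod/link";
        }
}

int main (void)
{
        iatt_t bad = { .ia_type = IA_INVAL };                  /* what the gdb session below shows */
        iatt_t dir = { .ia_type = IA_IFDIR, .ia_gfid = { 1 } };

        const char *fop = recreate_fop (&bad);
        printf ("corrupted entry -> %s\n", fop ? fop : "refuse to heal (EINVAL)");
        printf ("directory entry -> %s\n", recreate_fop (&dir));
        return 0;
}

The gdb session below hits exactly this case: iatt->ia_type is IA_INVAL and the
gfid is null, so the guard rejects the entry instead of guessing a FOP.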


Thanks!
//heal info output
[root@sn-0:/home/robot]
# gluster v heal export info
Brick sn-0.local:/mnt/bricks/export/brick
Status: Connected
Number of entries: 0
Brick sn-1.local:/mnt/bricks/export/brick
/testdir - Is in split-brain
Status: Connected
Number of entries: 1
//sn-1 glustershd log///
[2018-01-15 03:53:40.011422] I [MSGID: 108026] 
[afr-self-heal-entry.c:887:afr_selfheal_entry_do] 
0-export-replicate-0: performing entry selfheal on 
b217d6af-4902-4f18-9a69-e0ccf5207572
[2018-01-15 03:53:40.013994] W [MSGID: 114031] 
[client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-export-client-1: 
remote operation failed. Path: (null) 
(----) [No data available]
[2018-01-15 03:53:40.014025] E [MSGID: 108037] 
[afr-self-heal-entry.c:92:afr_selfheal_recreate_entry] 
0-export-replicate-0: Invalid ia_type (0) or 
gfid(----). source brick=1, 
pargfid=----, name=IORFILE_82_2
//gdb attached to sn-1 glustershd/

root@sn-1:/var/log/glusterfs]
# gdb attach 2191
GNU gdb (GDB) 8.0.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<_http://gnu.org/licenses/gpl.html_>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<_http://www.gnu.org/software/gdb/bugs/_>.
Find the GDB manual and other documentation resources online at:
<_http://www.gnu.org/software/gdb/documentation/_>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
attach: No such file or directory.
Attaching to process 2191
[New LWP 2192]
[New LWP 2193]
[New LWP 2194]
[New LWP 2195]
[New LWP 2196]
[New LWP 2197]
[New LWP 2239]
[New LWP 2241]
[New LWP 2243]
[New LWP 2245]
[New LWP 2247]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x7f90aca037bd in __pthread_join (threadid=140259279345408, 
thread_return=0x0) at pthread_join.c:90

90 pthread_join.c: No such file or directory.
(gdb) break afr_selfheal_recreate_entry
Breakpoint 1 at 0x7f90a3b56dec: file afr-self-heal-entry.c, line 73.
(gdb) c
Continuing.
[Switching to Thread 0x7f90a1b8e700 (LWP 2241)]
Thread 9 "glustershdheal" hit Breakpoint 1, 
afr_selfheal_recreate_entry (frame=0x7f90980018d0, dst=0, source=1, 
sources=0x7f90a1b8ceb0 "", dir=0x7f9098011940, name=0x7f909c015d48 
"IORFILE_82_2",

inode=0x7f9098001bd0, replies=0x7f90a1b8c890) at afr-self-heal-entry.c:73
73 afr-self-heal-entry.c: No such file or directory.
(gdb) n
74  in afr-self-heal-entry.c
(gdb) n
75  in afr-self-heal-entry.c
(gdb) n
76  in afr-self-heal-entry.c
(gdb) n
77  in afr-self-heal-entry.c
(gdb) n
78  in afr-self-heal-entry.c
(gdb) n
79  in afr-self-heal-entry.c
(gdb) n
80  in afr-self-heal-entry.c
(gdb) n
81  in afr-self-heal-entry.c
(gdb) n
82  in afr-self-heal-entry.c
(gdb) n
83  in afr-self-heal-entry.c
(gdb) n
85  in afr-self-heal-entry.c
(gdb) n
86  in afr-self-heal-entry.c
(gdb) n
87  in afr-self-heal-entry.c
(gdb) print iatt->ia_type
$1 = IA_INVAL
(gdb) print gf_uuid_is_null(iatt->ia_gfid)
$2 = 1
(gdb) bt
#0 afr_selfheal_recreate_entry (frame=0x7f90980018d0, dst=0, source=1, 
sources=0x7f90a1b8ceb0 "", dir=0x7f9098011940, name=0x7f909c015d48 
"IORFILE_82_2", inode=0x7f9098001bd0, replies=0x7f90a1b8c890)

    at afr-self-heal-entry.c:87
#1 0x7f90a3b57d20 in __afr_selfheal_merge_dirent 
(frame=0x7f90980018d0, this=0x7f90a4024610, fd=0x7f9098413090, 
name=0x7f909c015d48 "IORFILE_82_2", inode=0x7f9098001bd0,
sources=0x7f90a1b8ceb0 "", healed_sinks=0x7f90a1b8ce70 
"\001\001A\230\220\177", locked_on=0x7f90a1b8ce50 
"\001\001\270\241\220\177", 

[Gluster-devel] Coverity covscan for 2018-01-15-5442fb50 (master branch)

2018-01-15 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-15-5442fb50
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-15 Thread Lian, George (NSB - CN/Hangzhou)
Hi,

Have you been able to reproduce this issue? If yes, could you please confirm 
whether it is a genuine issue or not?

And if it is an issue, do you have any solution for it?

Thanks & Best Regards,
George

From: Lian, George (NSB - CN/Hangzhou)
Sent: Thursday, January 11, 2018 2:01 PM
To: Pranith Kumar Karampuri 
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) ; 
Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou) 
; Sun, Ping (NSB - CN/Hangzhou) 

Subject: RE: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"

Hi,

Please see detail test step on 
https://bugzilla.redhat.com/show_bug.cgi?id=1531457

How reproducible:


Steps to Reproduce:
1. Create a volume named "test" of type replicated.
2. Set the volume option cluster.consistent-metadata to on:
   gluster v set test cluster.consistent-metadata on
3. Mount volume "test" on a client at /mnt/test.
4. Create a file aaa larger than 1 byte:
   echo "1234567890" > /mnt/test/aaa
5. Shut down one replica node, say sn-1, so that only sn-0 is up.
6. cp /mnt/test/aaa /mnt/test/bbb; link /mnt/test/bbb /mnt/test/ccc


BRs
George

From: 
gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Thursday, January 11, 2018 12:39 PM
To: Lian, George (NSB - CN/Hangzhou) 
>
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) 
>; 
Gluster-devel@gluster.org; Li, Deqian (NSB - 
CN/Hangzhou) >; 
Sun, Ping (NSB - CN/Hangzhou) 
>
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Thu, Jan 11, 2018 at 6:35 AM, Lian, George (NSB - CN/Hangzhou) 
> wrote:
Hi,
>>> In which protocol are you seeing this issue? Fuse/NFS/SMB?
It is fuse, within a mountpoint created by the "mount -t glusterfs  …" command.

Could you let me know the test you did so that I can try to re-create and see 
what exactly is going on?
Configuration of the volume and the steps to re-create the issue you are seeing 
would be helpful in debugging the issue further.


Thanks & Best Regards,
George

From: 
gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org]
 On Behalf Of Pranith Kumar Karampuri
Sent: Wednesday, January 10, 2018 8:08 PM
To: Lian, George (NSB - CN/Hangzhou) 
>
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) 
>; Zhong, Hua 
(NSB - CN/Hangzhou) 
>; Li, Deqian (NSB 
- CN/Hangzhou) >; 
Gluster-devel@gluster.org; Sun, Ping (NSB - 
CN/Hangzhou) >
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Wed, Jan 10, 2018 at 11:09 AM, Lian, George (NSB - CN/Hangzhou) 
> wrote:
Hi, Pranith Kumar,

I have created a bug on Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1531457
After investigating this link issue, I suspect it was introduced by your
change to afr-dir-write.c for "Don't let NFS cache stat after writes"; the fix
looks like this:
--
        if (afr_txn_nothing_failed (frame, this)) {
                /* if it did pre-op, it will do post-op changing ctime */
                if (priv->consistent_metadata &&
                    afr_needs_changelog_update (local))
                        afr_zero_fill_stat (local);
                local->transaction.unwind (frame, this);
        }
In the above fix, the stat is zero-filled (so ia_nlink becomes 0) when the 
option consistent-metadata is set to "on".
Hard-linking a file that was just created then fails with an error, and the 
error comes from the kernel function vfs_link:
if (inode->i_nlink == 0 && !(inode->i_state & I_LINKABLE))
        error = -ENOENT;

Could you please take a look and give some comments here?

When the stat is "zero filled", the understanding is that the higher-layer 
protocol does not send the stat value to the kernel, and the kernel instead 
sends a separate lookup to get the latest stat value. In which protocol are 
you seeing this issue? Fuse/NFS/SMB?
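
[Editorial illustration] A minimal standalone sketch -- my own simplification and
naming, not the actual GlusterFS code -- of the zero-fill convention described
above: AFR invalidates the stat by zeroing it, and the protocol layer is expected
to detect the marker and withhold the attributes so the kernel issues a fresh
lookup. If a zero-filled stat were passed through to the kernel unmodified,
i_nlink would be 0 and vfs_link would fail with ENOENT, which matches the
behavior George reports.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for struct iatt -- illustration only. */
struct iatt {
        unsigned int ia_nlink;
        long         ia_ctime;
        long         ia_mtime;
};

/* AFR side: mark the stat as unusable instead of returning a possibly
 * stale value (roughly what zero-filling the stat amounts to). */
static void zero_fill_stat (struct iatt *buf)
{
        memset (buf, 0, sizeof (*buf));   /* ia_nlink == 0 acts as the marker */
}

/* Protocol side: a zero-filled stat must not be handed to the kernel;
 * the kernel should be forced to look the inode up again instead. */
static bool stat_is_zero_filled (const struct iatt *buf)
{
        return buf->ia_nlink == 0 && buf->ia_ctime == 0;
}

int main (void)
{
        struct iatt post = { .ia_nlink = 1, .ia_ctime = 1515990000 };

        zero_fill_stat (&post);

        if (stat_is_zero_filled (&post))
                printf ("skip the attr reply; kernel will send a fresh lookup\n");
        else
                printf ("attrs are valid and may be cached by the kernel\n");
        return 0;
}

The open question in the bug is whether the link path honours this convention,
i.e. whether the zero-filled attributes really are withheld before the kernel
runs its i_nlink == 0 check.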


Thanks & Best Regards,
George



--
Pranith



--
Pranith

Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

2018-01-15 Thread Vijay Bellur
On Mon, Jan 15, 2018 at 12:06 AM, Ashish Pandey  wrote:

>
> It is disappointing to see the limitation being put by Nvidia on low cost
> GPU usage on data centers.
> https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
>
> We thought of providing an option in glusterfs by which we can control if
> we want to use GPU or not.
> So, the concern of gluster eating out GPU's which could be used by others
> can be addressed.
>


It would definitely be interesting to try this out. This limitation may
change in the future, and the higher-end GPUs are still usable in data
centers.

Do you have more details on the proposed implementation?

Thanks,
Vijay



>
> ---
> Ashish
>
>
>
> --
> *From: *"Jim Kinney" 
> *To: *gluster-us...@gluster.org, "Lindsay Mathieson" <
> lindsay.mathie...@gmail.com>, "Darrell Budic" ,
> "Gluster Users" 
> *Cc: *"Gluster Devel" 
> *Sent: *Friday, January 12, 2018 6:00:25 PM
> *Subject: *Re: [Gluster-devel] [Gluster-users] Integration of GPU
> withglusterfs
>
>
>
> On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
> >On 12/01/2018 3:14 AM, Darrell Budic wrote:
> >> It would also add physical resource requirements to future client
> >> deploys, requiring more than 1U for the server (most likely), and I’m
> >
> >> not likely to want to do this if I’m trying to optimize for client
> >> density, especially with the cost of GPUs today.
> >
> >Nvidia has banned their GPU's being used in Data Centers now to, I
> >imagine they are planning to add a licensing fee.
>
> Nvidia banned only the lower cost, home user versions of their GPU line
> from datacenters.
> >
> >--
> >Lindsay Mathieson
> >
> >___
> >Gluster-users mailing list
> >gluster-us...@gluster.org
> >http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Sent from my Android device with K-9 Mail. All tyopes are thumb related
> and reflect authenticity.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GD 2 xlator option changes

2018-01-15 Thread Atin Mukherjee
On Mon, 15 Jan 2018 at 12:15, Nithya Balachandran 
wrote:

> Hi,
>
> A few questions about this:
>
> 1. What (if anything) should be done for options like these which have "!"
> ?
>
> /* Switch xlator options (Distribute special case) */
>
> { .key= "cluster.switch",
>
>   .voltype= "cluster/distribute",
>
>   .option = "!switch",
>
>   .type   = NO_DOC,
>
>   .op_version = 3,
>
>   .flags  = VOLOPT_FLAG_CLIENT_OPT
>
> },
>
>
The options starting with a bang don't get loaded into all the graphs and
have special handling in the glusterd-volgen code. I think we'd need this type
of option handled specially in GD2's volgen as well, Aravinda? From the
option-table perspective, I think we just need to copy the same option (with
the bang) into the respective xlators.

Kaushal/Aravinda - please do confirm.


>
> 2. How should the changed key names handled?
>
> In glusterd:
>
> { .key= "cluster.switch-pattern",
>
>   .voltype= "cluster/switch",
>
>   .option = "pattern.switch.case",
>
>   .type   = NO_DOC,
>
>   .op_version = 3,
>
>   .flags  = VOLOPT_FLAG_CLIENT_OPT
>
> },
>
>
> In dht src code:
> /* switch option */
>
> { .key  = {"pattern.switch.case"},
>
>   .type = GF_OPTION_TYPE_ANY,
>
>   .op_version = {3},
>
>   .flags = OPT_FLAG_CLIENT_OPT,
>
> },
>

We need to have both the patterns in the key, comma separated.
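
[Editorial illustration] For example, a hypothetical (untested) revision of the
dht/switch option-table entry, listing the legacy key and the glusterd key side
by side in the .key array shown in the dht snippet above:

{ .key        = {"cluster.switch-pattern", "pattern.switch.case"},
  .type       = GF_OPTION_TYPE_ANY,
  .op_version = {3},
  .flags      = OPT_FLAG_CLIENT_OPT,
},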


>
>
> Regards,
> Nithya
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Making it happen! (Protocol changes and wireshark)

2018-01-15 Thread Amar Tumballi
Hi Niels,

Thanks for the feedback. The patches which bring these changes in glusterfs
are out for review here:

https://review.gluster.org/18768 (changing the auth-glusterfs version)
https://review.gluster.org/19098 (changing fops program actors)

Shyam, please star them for 4.0 watchlist for review help.

Also, as dependent patchsets, please review these to reduce some warnings
in the glusterfs logs, so that the remaining warnings are more focused:

https://review.gluster.org/19150
https://review.gluster.org/19166

Regards,
Amar


On Fri, Jan 12, 2018 at 5:16 PM, Niels de Vos  wrote:

> On Wed, Jan 10, 2018 at 03:36:55PM -0500, Shyam Ranganathan wrote:
> > Hi,
> >
> > As we are introducing a new protocol version, the existing gluster
> > wireshark plugin [1] needs to be updated.
> >
> > Further this needs to get released to wireshark users in some fashion,
> > which looks like a need to follow wireshark roadmap [2] (not sure if
> > this can be part of a maintenance release, which would possibly be based
> > on the quantum of changes etc.).
> >
> > This need not happen with 4.0 branching, but at least has to be
> > completed before 4.0 release.
> >
> > @neils once the protocol changes are complete, would this be possible to
> > complete by you in the next 6 odd weeks by the release (end of Feb)? Or,
> > if we need volunteers, please give a shout out here.
>
> Adding the new bits to the Wireshark dissector is pretty straight
> forward. Once the protocol changes have been done, it would be good to
> have a few .pcap files captured that can be used for developing and
> testing the changes. This can even be done in steps, as soon as one
> chunk of the protocol is finalized, a patch to upstream Wireshark can be
> sent already. We can improve it incrementally that way, also making it
> easier for multiple contributors to work on it.
>
> I can probably do some of the initial work, but would like assistance
> from others with testing and possibly improving certain parts. If
> someone can provide tcpdumps with updated protocol changes, that would
> be most welcome! Capture the dumps like this:
>
>   # tcpdump -i any -w /var/tmp/gluster-40-${proto_change}.pcap -s 0 tcp
> and not port 22
>   ... exercise the protocol bit that changed, include connection setup
>   ... press CTRL+C once done
>   # gzip /var/tmp/gluster-40-${proto_change}.pcap
>   ... Wireshark can read .pcap.gz without manual decompressing
>
> Attach the .pcap.gz to the GitHub issue for the protocol change and
> email gluster-devel@ once it is available so that a developer can start
> working on the Wireshark change.
>
> Thanks,
> Niels
>
>
> >
> > Shyam
> >
> > [1] Gluster wireshark plugin:
> > https://code.wireshark.org/review/gitweb?p=wireshark.git;
> a=tree;f=epan/dissectors;h=8c8303285a204bdff3b8b80e2811dc
> d9b7ab6fe0;hb=HEAD
> >
> > [2] Wireshark roadmap: https://wiki.wireshark.org/Development/Roadmap
> >
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-15 Thread Pranith Kumar Karampuri
On Mon, Jan 15, 2018 at 8:46 AM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:

> Hi,
>
>
>
> Have you reproduced this issue? If yes, could you please confirm whether
> it is an issue or not?
>

Sorry, I am held up with an issue at work, so I think I will get some
time the day after tomorrow to look at this. In the meantime I am adding more
people who know about afr, in case they get a chance to work on this
before me.


>
>
> And if it is an issue,  do you have any solution for this issue?
>
>
>
> Thanks & Best Regards,
>
> George
>
>
>
> *From:* Lian, George (NSB - CN/Hangzhou)
> *Sent:* Thursday, January 11, 2018 2:01 PM
> *To:* Pranith Kumar Karampuri 
> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou) <
> deqian...@nokia-sbell.com>; Sun, Ping (NSB - CN/Hangzhou) <
> ping@nokia-sbell.com>
> *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
> Hi,
>
>
>
> Please see detail test step on https://bugzilla.redhat.com/
> show_bug.cgi?id=1531457
>
>
>
> How reproducible:
>
>
>
>
>
> Steps to Reproduce:
>
> 1.create a volume name "test" with replicated
>
> 2.set volume option cluster.consistent-metadata with on:
>
>   gluster v set test cluster.consistent-metadata on
>
> 3. mount volume test on client on /mnt/test
>
> 4. create a file aaa size more than 1 byte
>
>echo "1234567890" >/mnt/test/aaa
>
> 5. shutdown a replicat node, let's say sn-1, only let sn-0 worked
>
> 6. cp /mnt/test/aaa /mnt/test/bbb; link /mnt/test/bbb /mnt/test/ccc
>
>
>
>
>
> BRs
>
> George
>
>
>
> *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> gluster.org ] *On Behalf Of *Pranith
> Kumar Karampuri
> *Sent:* Thursday, January 11, 2018 12:39 PM
> *To:* Lian, George (NSB - CN/Hangzhou) 
> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou) <
> deqian...@nokia-sbell.com>; Sun, Ping (NSB - CN/Hangzhou) <
> ping@nokia-sbell.com>
> *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
>
>
>
>
> On Thu, Jan 11, 2018 at 6:35 AM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
> Hi,
>
> >>> In which protocol are you seeing this issue? Fuse/NFS/SMB?
>
> It is fuse, within mountpoint by “mount -t glusterfs  …“ command.
>
>
>
> Could you let me know the test you did so that I can try to re-create and
> see what exactly is going on?
>
> Configuration of the volume and the steps to re-create the issue you are
> seeing would be helpful in debugging the issue further.
>
>
>
>
>
> Thanks & Best Regards,
>
> George
>
>
>
> *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> gluster.org] *On Behalf Of *Pranith Kumar Karampuri
> *Sent:* Wednesday, January 10, 2018 8:08 PM
> *To:* Lian, George (NSB - CN/Hangzhou) 
> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> Zhong, Hua (NSB - CN/Hangzhou) ; Li, Deqian
> (NSB - CN/Hangzhou) ; Gluster-devel@gluster.org;
> Sun, Ping (NSB - CN/Hangzhou) 
> *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
>
>
>
>
> On Wed, Jan 10, 2018 at 11:09 AM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
> Hi, Pranith Kumar,
>
>
>
> I has create a bug on Bugzilla https://bugzilla.redhat.com/
> show_bug.cgi?id=1531457
>
> After my investigation for this link issue, I suppose your changes on
> afr-dir-write.c with issue " Don't let NFS cache stat after writes" , your
> fix is like:
>
> --
>
>if (afr_txn_nothing_failed (frame, this)) {
>
> /*if it did pre-op, it will do post-op changing
> ctime*/
>
> if (priv->consistent_metadata &&
>
> afr_needs_changelog_update (local))
>
> afr_zero_fill_stat (local);
>
> local->transaction.unwind (frame, this);
>
> }
>
> In the above fix, it set the ia_nlink to ‘0’ if option
> consistent-metadata is set to “on”.
>
> And hard link a file with which just created will lead to an error, and
> the error is caused in kernel function “vfs_link”:
>
> if (inode->i_nlink == 0 && !(inode->i_state & I_LINKABLE))
>
>  error =  -ENOENT;
>
>
>
> could you please have a check and give some comments here?
>
>
>
> When stat is "zero filled", understanding is that the higher layer
> protocol doesn't send stat value to the kernel and a separate lookup is
> sent by the kernel to get the latest 

Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

2018-01-15 Thread Ashish Pandey

It is disappointing to see the limitation Nvidia is putting on low-cost GPU 
usage in data centers. 
https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ 

We thought of providing an option in glusterfs by which we can control 
whether to use the GPU or not. 
That way, the concern about gluster eating up GPUs that could be used by 
others can be addressed. 

--- 
Ashish 



- Original Message -

From: "Jim Kinney"  
To: gluster-us...@gluster.org, "Lindsay Mathieson" 
, "Darrell Budic" , 
"Gluster Users"  
Cc: "Gluster Devel"  
Sent: Friday, January 12, 2018 6:00:25 PM 
Subject: Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs 



On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson 
 wrote: 
>On 12/01/2018 3:14 AM, Darrell Budic wrote: 
>> It would also add physical resource requirements to future client 
>> deploys, requiring more than 1U for the server (most likely), and I’m 
> 
>> not likely to want to do this if I’m trying to optimize for client 
>> density, especially with the cost of GPUs today. 
> 
>Nvidia has banned their GPU's being used in Data Centers now to, I 
>imagine they are planning to add a licensing fee. 

Nvidia banned only the lower cost, home user versions of their GPU line from 
datacenters. 
> 
>-- 
>Lindsay Mathieson 
> 
>___ 
>Gluster-users mailing list 
>gluster-us...@gluster.org 
>http://lists.gluster.org/mailman/listinfo/gluster-users 

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity. 
___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel