Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-12 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi,
This is an abnormal test case; however, when it happens it has a big impact
on the apps using those files. And it cannot be recovered automatically
unless some xlators are disabled, which I think is unacceptable for the user apps.


cynthia

From: Kotresh Hiremath Ravishankar 
Sent: 12 March 2020 14:37
To: Zhou, Cynthia (NSB - CN/Hangzhou) 
Cc: Gluster Devel 
Subject: Re: could you help to check about a glusterfs issue seems to be 
related to ctime

All the perf xlators depend on time (mostly mtime, I guess). In my setup, only
quick-read was enabled, and hence disabling it worked for me.
All perf xlators need to be disabled to make it work correctly. But I still
fail to understand how normal this kind of workload is.
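For example, on the volume (named export, going by the commands quoted below), something along these lines should turn them off; the exact set of perf xlators enabled may differ on your setup:

# disable the client-side performance xlators (volume name 'export' assumed)
gluster volume set export performance.quick-read off
gluster volume set export performance.io-cache off
gluster volume set export performance.read-ahead off
gluster volume set export performance.stat-prefetch off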

Thanks,
Kotresh

On Thu, Mar 12, 2020 at 11:20 AM Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com> wrote:
When I disable both quick-read and performance.io-cache, everything is back to normal.
I have attached the glusterfs trace log taken with quick-read disabled but
performance.io-cache still on, while executing the command “cat /mnt/export/testfile”.
Can you help to find out why it still fails to show the correct content?
The file size shown is 141, but the file in the brick is actually longer than that.
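For reference, this is roughly how I compare the two sizes (brick path as on my setup):

# size seen through the fuse mount
stat -c %s /mnt/export/testfile
# size of the same file directly on the brick
stat -c %s /mnt/bricks/export/brick/testfile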


cynthia


From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: 12 March 2020 12:53
To: 'Kotresh Hiremath Ravishankar' <khire...@redhat.com>
Cc: 'Gluster Devel' <gluster-devel@gluster.org>
Subject: RE: could you help to check about a glusterfs issue seems to be 
related to ctime

From my local test, this issue goes away only when both features.ctime and
ctime.noatime are disabled.
Alternatively, if I do echo 3 > /proc/sys/vm/drop_caches each time after some
client changes the file, the cat command shows the correct data (same as the brick).
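In other words, the workaround sequence on the reading client is roughly:

# drop the kernel page/dentry/inode caches after another client modified the file
echo 3 > /proc/sys/vm/drop_caches
# now cat shows the same content as the brick
cat /mnt/export/testfile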

cynthia

From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: 12 March 2020 9:53
To: 'Kotresh Hiremath Ravishankar' <khire...@redhat.com>
Cc: Gluster Devel <gluster-devel@gluster.org>
Subject: RE: could you help to check about a glusterfs issue seems to be 
related to ctime

Hi,
Thanks for your response!
I’ve tried to disable quick-read:
[root@mn-0:/home/robot]
# gluster v get export all| grep quick
performance.quick-read  off
performance.nfs.quick-read  off

However, this issue still exists: two clients see different contents.

It seems the issue is completely gone only after I disable the utime/ctime feature:
features.ctime  off
ctime.noatime   off
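(I believe these were turned off with commands along the following lines, on the volume named export; please correct me if the option names differ on your version:)

# turn off the consistent-time (ctime) feature and its noatime option
gluster volume set export features.ctime off
gluster volume set export ctime.noatime off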


Do you know why this is?


Cynthia
Nokia storage team
From: Kotresh Hiremath Ravishankar <khire...@redhat.com>
Sent: 11 March 2020 22:05
To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>
Cc: Gluster Devel <gluster-devel@gluster.org>
Subject: Re: could you help to check about a glusterfs issue seems to be 
related to ctime

Hi,

I figured out what's happening. The issue is that the file has its c/a/m times
set in the future (the file was created after the date was set to +30 days).
This was done from client-1. On client-2, which has the correct date, appending
data does not update the mtime and ctime, because both are less than the time
already set on the file. This protection is required to keep the latest time
when two clients are writing to the same file: we update the c/m/a times only
if the new time is greater than the existing one. As a result, the perf xlators
on client-1, which rely on mtime, don't send the read to the server; since the
times haven't changed, they think nothing has changed.
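In other words, the sequence is roughly the following (a sketch; mount point /mnt/export as in your commands, and the exact way the clock was skewed is my assumption):

# client-1: clock set ~30 days into the future (my assumption of how it was done)
date -s "$(date -d '+30 days')"
echo "created from client-1" > /mnt/export/testfile    # file gets future c/m/a times

# client-2: correct clock; the append does NOT move the times backwards
echo "appended from client-2" >> /mnt/export/testfile

# client-1 (or any client with the file cached): quick-read sees an unchanged
# mtime and serves the stale cached content instead of reading from the bricks
cat /mnt/export/testfile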

Workarounds:
1. Disabling quick-read solved the issue for me.
I don't know how realistic this kind of workload is. Is this a normal scenario?
The other option is to remove the protection that updates the time only if it
is greater, but that would open up a race when two clients update the same
file and could leave an older time in place instead of the latest. That
requires a code change, and I don't think it should be done.

Thanks,
Kotresh

On Wed, Mar 11, 2020 at 3:02 PM Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:
Exactly, I am also curious about this. I will debug it and update you on what
exactly is happening.

Thanks,
Kotresh

On Wed, Mar 11, 2020 at 1:56 PM Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com> wrote:
I used to think the file was cached in some client-side buffer, because I have
checked the different SN bricks and the file content is correct on all of them.
But when I enable the client-side trace-level log and cat the file, I only see
lookup/open/flush fops from the fuse-bridge side. I am just wondering how the
file content is served to the client side; shouldn't there be a readv fop
visible in the trace log?
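For what it's worth, this is how I search the trace log for reads (the client log path is my guess; adjust it to wherever your fuse client logs):

# look for read fops in the fuse client trace log (log path is an assumption)
grep -i readv /var/log/glusterfs/mnt-export.log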

cynthia

From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: 11 March 2020 15:54
To: 'Kotresh Hiremath Ravishankar' <khire...@redhat.com>
Subject: RE: could you help to check about a glusterfs issue seems to be 
related to ctime

Does that require that the clients stay time-synchronized at all times?

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-12 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi,
One more question: I find that each client sees the same future timestamps.
Where do those timestamps come from, since they differ from any timestamp
stored on the bricks? And after I modify the file from the clients, they stay
the same.
[root@mn-0:/home/robot]
# stat /mnt/export/testfile
  File: /mnt/export/testfile
  Size: 193 Blocks: 1  IO Block: 131072 regular file
Device: 28h/40d Inode: 10383279039841136109  Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (615/_nokfsuifileshare)
Access: 2020-04-11 12:20:22.114365172 +0300
Modify: 2020-04-11 12:20:22.121552573 +0300
Change: 2020-04-11 12:20:22.121552573 +0300

[root@mn-0:/home/robot]
# date
Thu Mar 12 11:27:33 EET 2020
[root@mn-0:/home/robot]

[root@mn-0:/home/robot]
# stat /mnt/bricks/export/brick/testfile
  File: /mnt/bricks/export/brick/testfile
  Size: 193 Blocks: 16 IO Block: 4096   regular file
Device: fc02h/64514d  Inode: 512015  Links: 2
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (615/_nokfsuifileshare)
Access: 2020-04-11 12:20:22.100395536 +0300
Modify: 2020-03-12 11:25:04.095981276 +0200
Change: 2020-03-12 11:25:04.095981276 +0200
Birth: 2020-04-11 08:53:26.805163816 +0300


[root@mn-1:/root]
# stat /mnt/bricks/export/brick/testfile
  File: /mnt/bricks/export/brick/testfile
  Size: 193 Blocks: 16 IO Block: 4096   regular file
Device: fc02h/64514d  Inode: 512015  Links: 2
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (615/_nokfsuifileshare)
Access: 2020-04-11 12:20:22.100395536 +0300
Modify: 2020-03-12 11:25:04.094913452 +0200
Change: 2020-03-12 11:25:04.095913453 +0200
Birth: 2020-03-12 07:53:26.803783053 +0200
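My guess is that the times shown on the mount come from the time stored by the ctime feature rather than from the backend inode; if so, it should be visible on the brick as an xattr (trusted.glusterfs.mdata is my assumption of the name), e.g.:

# on a brick node; xattr name is my assumption about where ctime keeps its times
getfattr -n trusted.glusterfs.mdata -e hex /mnt/bricks/export/brick/testfile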



[Gluster-devel] Switching GlusterFS upstream bugs to Github issues

2020-03-12 Thread Deepshikha Khandelwal
Hi everyone,

We have migrated most of the current upstream bugs (attached below) from the
GlusterFS community Bugzilla product to GitHub issues.

1. All the issues created as part of the migration will have the Bugzilla URL,
description, comment history, and GitHub labels ('Migrated', 'Type: Bug',
Prio), along with the list of assignees.
2. Bugs for all components except project-infrastructure will go to the
gluster/glusterfs repo.
3. Bugs for the project-infrastructure component will migrate to the
gluster/project-infrastructure repo.
4. We are freezing the GlusterFS community product on Bugzilla. It will be
closed for new bug entries; from now on, please create issues on GitHub.
5. All the bugs have been closed on Bugzilla with the corresponding GitHub
issue URL.
6. The changes have been reflected in the developer contributing workflow
[1].
7. The 'Migrated' and 'Type: Bug' GitHub labels have been added to the issues.

Discussions on this are happening on the mailing list; a few of the
references are below:

- https://lists.gluster.org/pipermail/gluster-infra/2020-February/006030.html
- https://lists.gluster.org/pipermail/gluster-infra/2020-February/006009.html
- https://lists.gluster.org/pipermail/gluster-infra/2020-February/006040.html

[1] https://github.com/gluster/glusterfs/blob/master/.github/ISSUE_TEMPLATE

Let us know if you see any issues.

Thank you,
Deepshikha
"Bug ID","Product","Component","Assignee","Status","Resolution","Summary","Changed"
1708505,"GlusterFS","disperse","aspan...@redhat.com","NEW"," ---","[EC] /tests/basic/ec/ec-data-heal.t is failing as heal is not happening properly","2019-05-24 15:29:28"
1464639,"GlusterFS","upcall","b...@gluster.org","NEW"," ---","Possible stale read in afr due to un-notified pending xattr change","2019-11-04 22:30:11"
1665361,"GlusterFS","project-infrastructure","b...@gluster.org","NEW"," ---","Alerts for offline nodes","2019-09-02 09:56:51"
1686396,"GlusterFS","core","b...@gluster.org","NEW"," ---","ls and rm run on contents of same directory from a single mount point results in ENOENT errors","2019-03-19 10:53:11"
1694943,"GlusterFS","core","b...@gluster.org","NEW"," ---","parallel-readdir slows down directory listing","2020-01-08 14:37:15"
1736564,"GlusterFS","core","b...@gluster.org","NEW"," ---","GlusterFS files missing randomly.","2019-08-31 18:02:48"
1743195,"GlusterFS","core","b...@gluster.org","NEW"," ---","can't start gluster after upgrade from 5 to 6","2019-09-02 09:40:40"
1743215,"GlusterFS","glusterd","b...@gluster.org","NEW"," ---","glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied]","2019-11-18 08:05:38"
1744883,"GlusterFS","core","b...@gluster.org","NEW"," ---","GlusterFS problem dataloss","2020-01-08 14:38:57"
1747414,"GlusterFS","libglusterfsclient","b...@gluster.org","NEW"," ---","EIO error on check_and_dump_fuse_W call","2019-09-11 20:11:44"
1748205,"GlusterFS","selfheal","b...@gluster.org","NEW"," ---","null gfid entries can not be healed","2019-09-10 19:31:24"
1749272,"GlusterFS","disperse","b...@gluster.org","NEW"," ---","The version of the file in the disperse volume created with different nodes is incorrect","2019-09-10 19:33:05"
1749369,"GlusterFS","write-behind","b...@gluster.org","NEW"," ---","Segmentation fault occurs while truncate file","2019-09-19 11:34:39"
1751575,"GlusterFS","encryption-xlator","b...@gluster.org","NEW"," ---","File corruption in encrypted volume during read operation","2019-09-23 12:18:48"
1753413,"GlusterFS","selfheal","b...@gluster.org","NEW"," ---","Self-heal daemon crashes","2019-09-23 12:16:25"
1753994,"GlusterFS","core","b...@gluster.org","NEW"," ---","Mtime is not updated on setting it to older date online when sharding enabled","2019-10-28 11:09:23"
1757804,"GlusterFS","project-infrastructure","b...@gluster.org","NEW"," ---","Code coverage - call frequency is wrong","2019-10-28 09:38:53"
1758139,"GlusterFS","project-infrastructure","b...@gluster.org","NEW"," ---","Modify logging at Gluster regression tests","2019-10-07 18:44:51"
1759829,"GlusterFS","write-behind","b...@gluster.org","NEW"," ---","write-behind xlator generate coredump,when run ""glfs_vol_set_IO_ERR.t""","2019-10-28 09:39:56"
1761088,"GlusterFS","replicate","b...@gluster.org","NEW"," ---","Git fetch fails when fetah on gluster storage","2019-10-28 09:39:32"
1761350,"GlusterFS","replicate","b...@gluster.org","NEW"," ---","Directories are not healed, when dirs are created on the backend bricks and performed lookup from mount path.","2019-10-28 09:39:13"
1762311,"GlusterFS","build","b...@gluster.org","NEW"," ---","build-aux/pkg-version has bashishms","2020-01-28 19:30:27"
1763987,"GlusterFS","libgfapi","b...@gluster.org","NEW"," ---","libgfapi: ls -R cifs /mount_point/.snaps directory failed with ""ls: reading directory /root/dir/.snaps/test2: Not a directory""","2019-10-28 09:37:56"
1768380,"GlusterF

Re: [Gluster-devel] Switching GlusterFS upstream bugs to Github issues

2020-03-12 Thread sankarshan
Thank you for making this happen. This is the first phase of adopting
a more GitHub-based development workflow, including Actions.

On Thu, 12 Mar 2020 at 21:29, Deepshikha Khandelwal  wrote:
>
> Hi everyone,
>
> We have migrated most of the current upstream bugs(attached below) from the 
> GlusterFS community Bugzilla product to Github issues.
>
> 1. All the issues created as a part of a migration will have Bugzilla URL, 
> description, comments history, Github labels ('Migrated', 'Type: Bug', Prio) 
> with a list of assignees.
> 2. All the component's bug except project-infrastructure will go to 
> gluster/glusterfs repo.
> 3. project-infrastructure component's bugs will migrate under 
> gluster/project-infrastructure repo.
> 4. We are freezing the GlusterFS community product on Bugzilla. It will be 
> closed for new bug entries. You have to create an issue on Github repo from 
> now onwards.
> 5. All the bugs have been closed on Bugzilla with the corresponding Github 
> issue URL.
> 6. The changes have been reflected in the developer contributing workflow [1].
> 7. 'Migrated' and 'Type: Bug' GitHub labels has been added on the issues.
>
> Discussions on this are happening on the mailing list, and few of the 
> references are below:
>
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006030.html
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006009.html
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006040.html
>
> [1] https://github.com/gluster/glusterfs/blob/master/.github/ISSUE_TEMPLATE
>
> Let us know if you see any issues.
>
> Thank you,
> Deepshikha


-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel