Re: [Gluster-devel] CI failure - NameError: name 'unicode' is not defined (related to changelogparser.py)

2019-06-06 Thread Deepshikha Khandelwal
Hi Yaniv,

We are working on this. The builders are picking up Python 3.6, which is
leading to missing modules and undefined-name errors such as this one.

Kotresh has sent a patch https://review.gluster.org/#/c/glusterfs/+/22829/
to fix the issue.
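
For context, the failure comes from a Python 2-only idiom in changelogparser.py:
its __str__ returns unicode(self).encode('utf-8'), and the unicode builtin no
longer exists on Python 3. A minimal sketch of a version that runs on both
interpreters (the Record class and its fields below are placeholders for
illustration, not the actual patch linked above):

import sys

class Record(object):
    # Placeholder class for illustration; the parser's real Record has more fields.
    def __init__(self, fields):
        self.fields = fields

    def __unicode__(self):
        return u" ".join(self.fields)

    if sys.version_info[0] >= 3:
        # On Python 3, str is already unicode, so simply reuse __unicode__.
        def __str__(self):
            return self.__unicode__()
    else:
        # Python 2 path: keep the existing behaviour of returning UTF-8 bytes.
        def __str__(self):
            return unicode(self).encode('utf-8')

Alternatively, as Yaniv suggests below, the test utilities could be pinned to
Python 2 explicitly (e.g. via their shebang) until the port is complete.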



On Thu, Jun 6, 2019 at 11:49 AM Yaniv Kaul  wrote:

> From [1].
>
> I think it's a Python 2/3 thing, so perhaps also a CI issue (though if our
> code is not Python 3 ready, let's ensure we use Python 2 explicitly until
> we fix this).
>
> *00:47:05.207* ok  14 [ 13/386] <  34> 'gluster --mode=script --wignore volume start patchy'
> *00:47:05.207* ok  15 [ 13/ 70] <  36> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-id=patchy --volfile-server=builder208.int.aws.gluster.org /mnt/glusterfs/0'
> *00:47:05.207* Traceback (most recent call last):
> *00:47:05.207*   File "./tests/basic/changelog/../../utils/changelogparser.py", line 233, in <module>
> *00:47:05.207*     parse(sys.argv[1])
> *00:47:05.207*   File "./tests/basic/changelog/../../utils/changelogparser.py", line 221, in parse
> *00:47:05.207*     process_record(data, tokens, changelog_ts, callback)
> *00:47:05.207*   File "./tests/basic/changelog/../../utils/changelogparser.py", line 178, in process_record
> *00:47:05.207*     callback(record)
> *00:47:05.207*   File "./tests/basic/changelog/../../utils/changelogparser.py", line 182, in default_callback
> *00:47:05.207*     sys.stdout.write(u"{0}\n".format(record))
> *00:47:05.207*   File "./tests/basic/changelog/../../utils/changelogparser.py", line 128, in __str__
> *00:47:05.207*     return unicode(self).encode('utf-8')
> *00:47:05.207* NameError: name 'unicode' is not defined
> *00:47:05.207* not ok  16 [ 53/  39] <  42> '2 check_changelog_op /d/backends/patchy0/.glusterfs/changelogs RENAME' -> 'Got "0" instead of "2"'
>
>
> Y.
>
> [1] https://build.gluster.org/job/centos7-regression/6318/console
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Fwd: Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-06 Thread Amar Tumballi Suryanarayan
I got some time to look into the subdir-mount.t failure in the brick-mux scenario.

I noticed some issues where I need further help from the glusterd team.

subdir-mount.t expects a 'hook' script to run after add-brick to make sure
the required subdirectories are healed and present on the new bricks. This
is important, as a subdir mount expects those subdirectories to exist for
the mount to succeed.

But with a brick-mux setup, I see that in some cases (6/10) the hook script
(add-brick/post-hook/S13-create-subdir-mount.sh) starts executing only about
20 seconds after the add-brick command finishes. Because of this, the mount
we execute after add-brick fails.

My question is: what is making the post hook script run so late?

I can recreate the issue locally on my laptop too.
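
For reference, the sequence the test depends on is roughly the following
(volume name, server, brick path and subdirectory here are placeholders, not
the exact commands from subdir-mount.t):

# Create the subdirectory that a client will later mount directly.
mkdir /mnt/glusterfs/0/subdir1

# Expand the volume; glusterd should run the add-brick post hooks
# (including the S13 create-subdir-mount script) shortly after this returns.
gluster --mode=script volume add-brick patchy server1:/bricks/patchy2

# The add-brick post hooks live here; listing them shows what should fire.
ls /var/lib/glusterd/hooks/1/add-brick/post/

# Mounting the subdirectory fails if the hook has not yet created/healed
# subdir1 on the newly added brick.
mount -t glusterfs server1:/patchy/subdir1 /mnt/subdir1

In the failing runs the last step happens before the hook has run, which
matches the failed mount described above.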


On Sat, Jun 1, 2019 at 4:55 PM Atin Mukherjee  wrote:

> subdir-mount.t has started failing in brick mux regression nightly. This
> needs to be fixed.
>
> Raghavendra - did we manage to get any further clue on uss.t failure?
>
> -- Forwarded message -
> From: 
> Date: Fri, 31 May 2019 at 23:34
> Subject: [Gluster-Maintainers] Build failed in Jenkins:
> regression-test-with-multiplex #1359
> To: , , ,
> , 
>
>
> See <
> https://build.gluster.org/job/regression-test-with-multiplex/1359/display/redirect?page=changes
> >
>
> Changes:
>
> [atin] glusterd: add an op-version check
>
> [atin] glusterd/svc: glusterd_svcs_stop should call individual wrapper
> function
>
> [atin] glusterd/svc: Stop stale process using the glusterd_proc_stop
>
> [Amar Tumballi] lcov: more coverage to shard, old-protocol, sdfs
>
> [Kotresh H R] tests/geo-rep: Add EC volume test case
>
> [Amar Tumballi] glusterfsd/cleanup: Protect graph object under a lock
>
> [Mohammed Rafi KC] glusterd/shd: Optimize the glustershd manager to send
> reconfigure
>
> [Kotresh H R] tests/geo-rep: Add tests to cover glusterd geo-rep
>
> [atin] glusterd: Optimize code to copy dictionary in handshake code path
>
> --
> [...truncated 3.18 MB...]
> ./tests/basic/afr/stale-file-lookup.t  -  9 second
> ./tests/basic/afr/granular-esh/replace-brick.t  -  9 second
> ./tests/basic/afr/granular-esh/add-brick.t  -  9 second
> ./tests/basic/afr/gfid-mismatch.t  -  9 second
> ./tests/performance/open-behind.t  -  8 second
> ./tests/features/ssl-authz.t  -  8 second
> ./tests/features/readdir-ahead.t  -  8 second
> ./tests/bugs/upcall/bug-1458127.t  -  8 second
> ./tests/bugs/transport/bug-873367.t  -  8 second
> ./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  8 second
> ./tests/bugs/replicate/bug-1132102.t  -  8 second
> ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
> -  8 second
> ./tests/bugs/quota/bug-1104692.t  -  8 second
> ./tests/bugs/posix/bug-1360679.t  -  8 second
> ./tests/bugs/posix/bug-1122028.t  -  8 second
> ./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
> ./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
> ./tests/bugs/glusterd/sync-post-glusterd-restart.t  -  8 second
> ./tests/bugs/glusterd/bug-1696046.t  -  8 second
> ./tests/bugs/fuse/bug-983477.t  -  8 second
> ./tests/bugs/ec/bug-1227869.t  -  8 second
> ./tests/bugs/distribute/bug-1088231.t  -  8 second
> ./tests/bugs/distribute/bug-1086228.t  -  8 second
> ./tests/bugs/cli/bug-1087487.t  -  8 second
> ./tests/bugs/cli/bug-1022905.t  -  8 second
> ./tests/bugs/bug-1258069.t  -  8 second
> ./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t
> -  8 second
> ./tests/basic/xlator-pass-through-sanity.t  -  8 second
> ./tests/basic/quota-nfs.t  -  8 second
> ./tests/basic/glusterd/arbiter-volume.t  -  8 second
> ./tests/basic/ctime/ctime-noatime.t  -  8 second
> ./tests/line-coverage/cli-peer-and-volume-operations.t  -  7 second
> ./tests/gfid2path/get-gfid-to-path.t  -  7 second
> ./tests/bugs/upcall/bug-1369430.t  -  7 second
> ./tests/bugs/snapshot/bug-1260848.t  -  7 second
> ./tests/bugs/shard/shard-inode-refcount-test.t  -  7 second
> ./tests/bugs/shard/bug-1258334.t  -  7 second
> ./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
> ./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
> ./tests/bugs/posix/bug-1175711.t  -  7 second
> ./tests/bugs/nfs/bug-915280.t  -  7 second
> ./tests/bugs/md-cache/setxattr-prepoststat.t  -  7 second
> ./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
> ./tests/bugs/glusterfs/bug-848251.t  -  7 second
> ./tests/bugs/distribute/bug-1122443.t  -  7 second
> ./tests/bugs/changelog/bug-1208470.t  -  7 second
> ./tests/bugs/bug-1702299.t  -  7 second
> ./tests/bugs/bug-1371806_2.t  -  7 second
> ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7
> second
> ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t  -  7 second
> ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -
> 7 second
> ./tests/bitrot/br-stub.t  -  7 second
> ./tests/basic/gluste

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
Hi Abhishek,

Please use statedumps taken at intervals to determine where the memory is
increasing. See [1] for details.
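
For example, a rough sketch of how to collect them from the server side
(using the gv0 volume from your setup; the interval and iteration count are
just illustrative):

# Take a statedump of all brick processes of gv0 every 15 minutes
# (matching your test cycle); by default the dumps are written under
# /var/run/gluster. Compare the mallinfo and per-xlator mem-pool
# sections across successive dumps to see which allocations keep growing.
for i in $(seq 1 8); do
    gluster volume statedump gv0
    sleep 900
done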

Regards,
Nithya

[1] https://docs.gluster.org/en/latest/Troubleshooting/statedump/


On Fri, 7 Jun 2019 at 08:13, ABHISHEK PALIWAL 
wrote:

> Hi Nithya,
>
> We have a setup where we copy a file to the gluster mount point and then
> delete it from there in order to update to the latest file. We noticed that
> this causes some memory increase in the glusterfsd process.
>
> To find the memory leak we are using valgrind, but that hasn't helped so far.
>
> That's why we contacted the glusterfs community.
>
> Regards,
> Abhishek
>
> On Thu, Jun 6, 2019, 16:08 Nithya Balachandran 
> wrote:
>
>> Hi Abhishek,
>>
>> I am still not clear as to the purpose of the tests. Can you clarify why
>> you are using valgrind and why you think there is a memory leak?
>>
>> Regards,
>> Nithya
>>
>> On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi Nithya,
>>>
>>> Here are the setup details and the test we are doing:
>>>
>>>
>>> One client, two gluster servers.
>>> The client writes and deletes one file every 15 minutes using the script
>>> test_v4.15.sh.
>>>
>>> IP
>>> Server side:
>>> 128.224.98.157 /gluster/gv0/
>>> 128.224.98.159 /gluster/gv0/
>>>
>>> Client side:
>>> 128.224.98.160 /gluster_mount/
>>>
>>> Server side:
>>> gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
>>> 128.224.98.159:/gluster/gv0/ force
>>> gluster volume start gv0
>>>
>>> root@128:/tmp/brick/gv0# gluster volume info
>>>
>>> Volume Name: gv0
>>> Type: Replicate
>>> Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 128.224.98.157:/gluster/gv0
>>> Brick2: 128.224.98.159:/gluster/gv0
>>> Options Reconfigured:
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: off
>>>
>>> exec script: ./ps_mem.py -p 605 -w 61 > log
>>> root@128:/# ./ps_mem.py -p 605
>>> Private + Shared = RAM used Program
>>> 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
>>> -
>>> 24856.0 KiB
>>> =
>>>
>>>
>>> Client side:
>>> mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0
>>> /gluster_mount
>>>
>>>
>>> We are using the script below to write and delete the file.
>>>
>>> *test_v4.15.sh *
>>>
>>> We also use the script below to watch the memory increase while the
>>> above script is running in the background.
>>>
>>> *ps_mem.py*
>>>
>>> I am attaching the script files as well as the results we got after
>>> testing the scenario.
>>>
>>> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran 
>>> wrote:
>>>
 Hi,

 Writing to a volume should not affect glusterd. The stack you have
 shown in the valgrind looks like the memory used to initialise the
 structures glusterd uses and will free only when it is stopped.

 Can you provide more details to what it is you are trying to test?

 Regards,
 Nithya


 On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL 
 wrote:

> Hi Team,
>
> Please respond on the issue which I raised.
>
> Regards,
> Abhishek
>
> On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Anyone please reply
>>
>> On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi Team,
>>>
>>> I upload some valgrind logs from my gluster 5.4 setup. This is
>>> writing to the volume every 15 minutes. I stopped glusterd and then copy
>>> away the logs.  The test was running for some simulated days. They are
>>> zipped in valgrind-54.zip.
>>>
>>> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
>>> glusterfs and even some definitely lost bytes.
>>>
>>> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss
>>> record 391 of 391
>>> ==2737== at 0x4C29C25: calloc (in
>>> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
>>> ==2737== by 0xA22485E: ??? (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0xA217C94: ??? (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0xA21D9F8: ??? (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0xA21DED9: ??? (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0xA21E685: ??? (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0xA1B9D8C: init (in
>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>> ==2737== by 0x4E511CE: xlator_init (in
>>> /usr/lib64/libglusterfs.so.0.0.1)
>>> ==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
>>> ==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
>>> /usr/lib64/libglusterfs.so.0.0.1)
>>> ==2737== by 0x409C35: 

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread ABHISHEK PALIWAL
Hi Nithya,

We have a setup where we copy a file to the gluster mount point and then
delete it from there in order to update to the latest file. We noticed that
this causes some memory increase in the glusterfsd process.

To find the memory leak we are using valgrind, but that hasn't helped so far.

That's why we contacted the glusterfs community.

Regards,
Abhishek

On Thu, Jun 6, 2019, 16:08 Nithya Balachandran  wrote:

> Hi Abhishek,
>
> I am still not clear as to the purpose of the tests. Can you clarify why
> you are using valgrind and why you think there is a memory leak?
>
> Regards,
> Nithya
>
> On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL 
> wrote:
>
>> Hi Nithya,
>>
>> Here are the setup details and the test we are doing:
>>
>>
>> One client, two gluster servers.
>> The client writes and deletes one file every 15 minutes using the script
>> test_v4.15.sh.
>>
>> IP
>> Server side:
>> 128.224.98.157 /gluster/gv0/
>> 128.224.98.159 /gluster/gv0/
>>
>> Client side:
>> 128.224.98.160 /gluster_mount/
>>
>> Server side:
>> gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
>> 128.224.98.159:/gluster/gv0/ force
>> gluster volume start gv0
>>
>> root@128:/tmp/brick/gv0# gluster volume info
>>
>> Volume Name: gv0
>> Type: Replicate
>> Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 128.224.98.157:/gluster/gv0
>> Brick2: 128.224.98.159:/gluster/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> exec script: ./ps_mem.py -p 605 -w 61 > log
>> root@128:/# ./ps_mem.py -p 605
>> Private + Shared = RAM used Program
>> 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
>> -
>> 24856.0 KiB
>> =
>>
>>
>> Client side:
>> mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0
>> /gluster_mount
>>
>>
>> We are using the script below to write and delete the file.
>>
>> *test_v4.15.sh *
>>
>> We also use the script below to watch the memory increase while the
>> above script is running in the background.
>>
>> *ps_mem.py*
>>
>> I am attaching the script files as well as the results we got after
>> testing the scenario.
>>
>> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran 
>> wrote:
>>
>>> Hi,
>>>
>>> Writing to a volume should not affect glusterd. The stack you have shown
>>> in the valgrind looks like the memory used to initialise the structures
>>> glusterd uses and will free only when it is stopped.
>>>
>>> Can you provide more details to what it is you are trying to test?
>>>
>>> Regards,
>>> Nithya
>>>
>>>
>>> On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL 
>>> wrote:
>>>
 Hi Team,

 Please respond on the issue which I raised.

 Regards,
 Abhishek

 On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> Anyone please reply
>
> On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
> wrote:
>
>> Hi Team,
>>
>> I upload some valgrind logs from my gluster 5.4 setup. This is
>> writing to the volume every 15 minutes. I stopped glusterd and then copy
>> away the logs.  The test was running for some simulated days. They are
>> zipped in valgrind-54.zip.
>>
>> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
>> glusterfs and even some definitely lost bytes.
>>
>> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record
>> 391 of 391
>> ==2737== at 0x4C29C25: calloc (in
>> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
>> ==2737== by 0xA22485E: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA217C94: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21D9F8: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21DED9: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21E685: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA1B9D8C: init (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0x4E511CE: xlator_init (in
>> /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
>> /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x409C35: glusterfs_process_volfp (in
>> /usr/sbin/glusterfsd)
>> ==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
>> ==2737==
>> ==2737== LEAK SUMMARY:
>> ==2737== definitely lost: 1,053 bytes in 10 blocks
>> ==2737== indirectly lost: 317 bytes in 3 blocks
>> ==2737== possibly lost: 2,374,971 bytes in 524 blocks
>> ==2737== still reachable: 53,277 bytes in 201 blocks
>> ==2737=

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
Hi Abhishek,

I am still not clear as to the purpose of the tests. Can you clarify why
you are using valgrind and why you think there is a memory leak?

Regards,
Nithya

On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL 
wrote:

> Hi Nithya,
>
> Here are the setup details and the test we are doing:
>
>
> One client, two gluster servers.
> The client writes and deletes one file every 15 minutes using the script
> test_v4.15.sh.
>
> IP
> Server side:
> 128.224.98.157 /gluster/gv0/
> 128.224.98.159 /gluster/gv0/
>
> Client side:
> 128.224.98.160 /gluster_mount/
>
> Server side:
> gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
> 128.224.98.159:/gluster/gv0/ force
> gluster volume start gv0
>
> root@128:/tmp/brick/gv0# gluster volume info
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 128.224.98.157:/gluster/gv0
> Brick2: 128.224.98.159:/gluster/gv0
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> exec script: ./ps_mem.py -p 605 -w 61 > log
> root@128:/# ./ps_mem.py -p 605
> Private + Shared = RAM used Program
> 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
> -
> 24856.0 KiB
> =
>
>
> Client side:
> mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0
> /gluster_mount
>
>
> We are using the script below to write and delete the file.
>
> *test_v4.15.sh *
>
> We also use the script below to watch the memory increase while the
> above script is running in the background.
>
> *ps_mem.py*
>
> I am attaching the script files as well as the results we got after
> testing the scenario.
>
> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran 
> wrote:
>
>> Hi,
>>
>> Writing to a volume should not affect glusterd. The stack you have shown
>> in the valgrind looks like the memory used to initialise the structures
>> glusterd uses and will free only when it is stopped.
>>
>> Can you provide more details to what it is you are trying to test?
>>
>> Regards,
>> Nithya
>>
>>
>> On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi Team,
>>>
>>> Please respond on the issue which I raised.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Anyone please reply

 On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
 wrote:

> Hi Team,
>
> I upload some valgrind logs from my gluster 5.4 setup. This is writing
> to the volume every 15 minutes. I stopped glusterd and then copy away the
> logs.  The test was running for some simulated days. They are zipped in
> valgrind-54.zip.
>
> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
> glusterfs and even some definitely lost bytes.
>
> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record
> 391 of 391
> ==2737== at 0x4C29C25: calloc (in
> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==2737== by 0xA22485E: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA217C94: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21D9F8: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21DED9: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21E685: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA1B9D8C: init (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0x4E511CE: xlator_init (in
> /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
> /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
> ==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
> ==2737==
> ==2737== LEAK SUMMARY:
> ==2737== definitely lost: 1,053 bytes in 10 blocks
> ==2737== indirectly lost: 317 bytes in 3 blocks
> ==2737== possibly lost: 2,374,971 bytes in 524 blocks
> ==2737== still reachable: 53,277 bytes in 201 blocks
> ==2737== suppressed: 0 bytes in 0 blocks
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>

>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bl