OK, here are the results.
I've taken 5 statedumps with 30 minutes between each one. Also, before
taking each statedump, I recorded memory usage.
Memory consumption:
===
1. root 1010 0.0 9.6 7538188 374864 ? Ssl Jun07 0:16
/usr/sbin/glusterfs -s localhost --volfile-id
===
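(For reference, a minimal sketch of such a collection loop, assuming the shd
PID is 1010 as in the ps line above and that glusterfs writes a statedump to
/var/run/gluster on SIGUSR1; adjust PID and paths to taste:)
===
for i in 1 2 3 4 5; do
    ps -o pid,vsz,rss,cmd -p 1010 >> mem-usage.log   # record VIRT/RSS first
    kill -USR1 1010                                  # ask glusterfs for a statedump
    sleep 1800                                       # 30 minutes between samples
done
===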
On Wed, Jun 8, 2016 at 12:33 PM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:
> Yup, I can do that, but please note that RSS does not change. Will
> statedump show VIRT values?
>
> Also, I'm looking at the numbers now, and see that on each reconnect VIRT
> grows by ~24M (once per ~10–15 mins). Probably, that could give you some
> idea of what is going wrong.
Yup, I can do that, but please note that RSS does not change. Will
statedump show VIRT values?
Also, I'm looking at the numbers now, and see that on each reconnect
VIRT grows by ~24M (once per ~10–15 mins). Probably, that could give you
some idea what is going wrong.
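(A convenient way to watch VIRT next to RSS between statedumps, assuming the
PID from the ps line above; VmSize/VmRSS in /proc are the same numbers ps
reports as VSZ/RSS:)
===
watch -n 60 'grep -E "VmSize|VmRSS" /proc/1010/status'
===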
08.06.2016 09:50, Pranith wrote:
Oleksandr,
Could you take a statedump of the shd process once every 5-10 minutes and
send maybe 5 samples of them once it starts to increase? This will help us
find which data types are being allocated a lot and come up with possible
theories for the increase.
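(For reference, a sketch of how to grab and skim such a dump; the glustershd
PID lookup and the dump location are assumptions, so verify the directory,
often /var/run/gluster, on your system first:)
===
kill -USR1 $(pgrep -f glustershd | head -1)   # glusterfs writes a statedump on SIGUSR1
ls -t /var/run/gluster/*.dump.* | head -1     # newest dump file
# mempool sections carry pool-name/hot-count pairs; consistently high
# hot-counts point at the data types being allocated a lot:
grep -E '^(pool-name|hot-count)' /var/run/gluster/*.dump.*
===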
On Wed, Jun 8, 2016 at 12:03
Also, I've checked the shd log files and found out that for some reason shd
constantly reconnects to bricks: [1]
Please note that the suggested fix [2] by Pranith does not help: the VIRT
value still grows:
===
root 1010 0.0 9.6 7415248 374688 ? Ssl Jun07 0:14
/usr/sbin/glusterfs -s
===
Also, I see lots of entries in pmap output:
===
7ef9ff8f3000 4K - [ anon ]
7ef9ff8f4000 8192K rw--- [ anon ]
7efa000f4000 4K - [ anon ]
7efa000f5000 8192K rw--- [ anon ]
===
If I sum them, I get the following:
===
# pmap 15109 | grep '[ anon ]' |
===
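(The tail of that pipeline was cut off above; a typical way to finish it,
assuming pmap's second column holds sizes in K, would be something like the
following, using grep -F so the brackets are matched literally:)
===
pmap 15109 | grep -F '[ anon ]' | awk '{ sum += $2 } END { print sum "K" }'
===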
I believe multi-threaded shd has not been merged into the 3.7 branch at
least up to and including 3.7.11, because I've found this [1].
[1] https://www.gluster.org/pipermail/maintainers/2016-April/000628.html
06.06.2016 12:21, Kaushal M wrote:
Has multi-threaded SHD been merged into 3.7.* by any chance? If not,
what I'm saying below doesn't apply.
We saw problems when encrypted transports were used, because the RPC
layer was not reaping threads (doing pthread_join) when a connection
ended. This led to similar observations of huge VIRT usage.
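(If that is the same problem here, it should be visible without a statedump.
The 4K no-access + 8192K rw anon pairs in the pmap output above look like
pthread guard pages plus default 8 MiB thread stacks, so comparing live
threads against stack-sized mappings is a quick test; the PID is assumed
from the ps line below:)
===
ls /proc/15109/task | wc -l    # threads currently alive in the process
pmap 15109 | grep -c '8192K'   # 8 MiB anon mappings (default pthread stack size)
# if the mapping count keeps climbing while the thread count stays flat,
# exited threads are not being joined and their stacks are leaking into VIRT
===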
Hello.
We use v3.7.11 in a replica 2 setup between 2 nodes, plus 1 dummy node for
keeping volume metadata.
Now we observe huge VSZ (VIRT) usage by glustershd on the dummy node:
===
root 15109 0.0 13.7 76552820 535272 ? Ssl May26 2:11
/usr/sbin/glusterfs -s localhost --volfile-id
===