Hello.
25.10.2016 10:08, Maxence Sartiaux wrote:
I need to migrate an old 2-node cluster to a Proxmox cluster with a
replicated Gluster storage between those two (and a third arbiter
node).
I'd like to create a volume with a single node, migrate the data onto
this volume from the old server and th
Hello.
25.10.2016 09:11, Pavel Cernohorsky wrote:
Unfortunately it is not
possible to use valgrind properly, because libgfapi seems to leak just
by initializing and deinitializing (tested with different code).
Use Valgrind with the Massif tool. That would definitely help.
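For reference, a minimal Massif run against a small gfapi test program might look like this (the binary name is a placeholder; ms_print renders the recorded snapshots):
===
# profile heap usage of a gfapi test program with Massif (hypothetical binary name)
valgrind --tool=massif --massif-out-file=massif.out ./gfapi_leak_test
# render the recorded heap snapshots as a text graph
ms_print massif.out | less
===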
IIRC, the latest op-version is 30712.
On October 22, 2016 1:38:44 PM GMT+02:00, mabi wrote:
>Hello,
>
>I just upgraded from GlusterFS 3.7.12 to 3.7.16 and checked the
>op-version for all my volumes and found out that I am still using
>op-version 30706 as you can see below:
>
>Option Value
>--
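For reference, on 3.7.x the operating version can be checked and raised roughly like this; a sketch only, to be run after every node has been upgraded (30712 is the value named in the reply above):
===
# operating version of the local glusterd
grep operating-version /var/lib/glusterd/glusterd.info
# raise it cluster-wide once all nodes run the new binaries
gluster volume set all cluster.op-version 30712
===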
Correct.
On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri
wrote:
>On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>oleksa...@natalenko.name> wrote:
>
>> Hello,
>>
>> thanks, but that is not what I want. I have no issues debugging gfapi
Created BZ for it [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373630
On Tuesday, September 6, 2016 23:32:51 EEST Pranith Kumar Karampuri wrote:
> I included you on a thread on users, let us see if he can help you out.
>
> On Mon, Aug 29, 2016 at 4:02 PM, Oleksandr
Hello,
thanks, but that is not what I want. I have no issues debugging gfapi apps,
but I have an issue with the GlusterFS FUSE client not being handled properly by
the Massif tool.
Valgrind+Massif does not handle all forked children properly, and I believe
that happens because of some memory corruption
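One workaround people use is to keep the FUSE client in the foreground so Massif never loses it to a daemonizing fork; a sketch reusing the mount parameters that appear later in this thread (server name, volume id and mountpoint are placeholders):
===
valgrind --tool=massif --trace-children=yes \
    /usr/sbin/glusterfs -N --volfile-server=server.example.com \
    --volfile-id=volume /mnt/volume
===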
Hello.
On Saturday, September 3, 2016 03:06:50 EEST Pranith Kumar Karampuri wrote:
> On a completely different note, I see that you used massif for doing this
> analysis. Oleksandr is looking for some help in using massif to provide
> more information in a different usecase. Could you help him?
Yup
I'd say, like this:
/.shard/d2/18/D218CD1C-4BD9-40D7-9810-86B3F7932509.1
18.07.2016 10:31, Gandalf Corvotempesta wrote:
AFAIK gluster stores each shard in a single directory.
With huge files this could lead to millions of small shard files in the
same directory, which would certainly lead to a performa
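To illustrate the layout proposed above (today the shard translator keeps every shard flat under /.shard; the nesting below is only the suggestion, derived from the first bytes of the shard's GFID):
===
# hypothetical fan-out of a shard path by GFID prefix
gfid=D218CD1C-4BD9-40D7-9810-86B3F7932509
prefix=$(echo "${gfid:0:4}" | tr 'A-Z' 'a-z')   # -> d218
echo "/.shard/${prefix:0:2}/${prefix:2:2}/${gfid}.1"
# -> /.shard/d2/18/D218CD1C-4BD9-40D7-9810-86B3F7932509.1
===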
is meaningless.
Pranith?
08.06.2016 10:06, Pranith Kumar Karampuri wrote:
On Wed, Jun 8, 2016 at 12:33 PM, Oleksandr Natalenko
wrote:
Yup, I can do that, but please note that RSS does not change. Will
statedump show VIRT values?
Also, I'm looking at the numbers now, and see that on each rec
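For reference, a statedump of a running Gluster process is usually taken like this (the files land under /var/run/gluster unless server.statedump-path says otherwise):
===
# dump state of all processes belonging to a volume
gluster volume statedump <volname>
# or poke a single daemon directly, e.g. the self-heal daemon
kill -USR1 $(pgrep -f glustershd)
===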
rease.
On Wed, Jun 8, 2016 at 12:03 PM, Oleksandr Natalenko
wrote:
Also, I've checked shd log files, and found out that for some reason
shd constantly reconnects to bricks: [1]
Please note that the suggested fix [2] by Pranith does not help; the VIRT
value still grows:
===
root 1010 0.0 9
leaking
threads in a similar way.
On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko
wrote:
Hello.
We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for
keeping
volumes metadata.
Now we observe huge VSZ (VIRT) usage by glustershd on dummy node:
===
root 15109 0.0
anon ]' | grep 8192K | wc -l
9261
$ echo "9261*(8192+4)" | bc
75903156
===
Which is something like the 70G+ I have got in VIRT.
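The truncated pipeline above looks like a count of 8 MiB anonymous mappings in pmap output; each such mapping plus a 4 KiB guard page is consistent with a default-sized pthread stack, which is why leaked threads inflate VIRT while RSS stays flat. A sketch of the full counting (the glustershd PID lookup is an assumption):
===
pmap $(pgrep -f glustershd) | grep '\[ anon \]' | grep 8192K | wc -l
# 9261 mappings * (8192 + 4) KiB each:
echo "9261 * (8192 + 4) / 1024 / 1024" | bc
# -> 72 (GiB of address space), matching the 70G+ seen in VIRT
===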
06.06.2016 11:24, Oleksandr Natalenko wrote:
Hello.
We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for
keeping volumes metadata.
Now we ob
m not sure how multi-threaded shd works, but it could be leaking
threads in a similar way.
On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko
wrote:
Hello.
We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for
keeping
volumes metadata.
Now we observe huge VSZ (VIRT) usage by glus
Hello.
We use v3.7.11, a replica 2 setup between 2 nodes + 1 dummy node for
keeping volume metadata.
Now we observe huge VSZ (VIRT) usage by glustershd on dummy node:
===
root 15109 0.0 13.7 76552820 535272 ? Ssl May26 2:11
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glu
30.05.2016 05:08, Sankarshan Mukhopadhyay wrote:
It would perhaps be worthwhile to extend this release timeline/cadence
discussion into (a) End-of-Life definition and invocation (b) whether
a 'long term support' (assuming that is what LTS is) is of essentially
any value to users of GlusterFS.
My 2 cents on timings etc.
Rationale:
1. deliver new features to users as fast as possible to get feedback;
2. leave the option of using an LTS branch for those who do not want to update too
often.
Definition:
* "stable release" — .0 tag that receives critical bugfixes and security
updates for 1
On Saturday, April 30, 2016 19:46:30 EEST Gandalf Corvotempesta wrote:
> All servers has 2x10G? just for HA or load balanced?
Everything is 2×10G, for both HA (virtual chassis) and load balancing.
> 3xreplica2 means you have 3 gluster servers with a replica of 2 or you have
> 3 servers replicate
ogy and how many sites are you
> hosting?
>
> Any front cache like varnish?
>
> We are planning two clusters: one for mass hosting (in the order of ten
> thousands of websites) and one for virtual machines (the webservers that
> would access to the hosting cluster)
> Il 30 a
Yup, what are you interested in?
On April 30, 2016 3:39:00 PM GMT+03:00, Gandalf Corvotempesta
wrote:
>Anyone using gluster as storage backends for web servers (hosting
>wordpress, joomla, .)
>in production environment willing to share some info ?
>__
On Thursday, April 7, 2016 16:12:07 EEST Jeff Darcy wrote:
> "Considered wrong" might be overstating the case. It might be useful to
> keep in mind that the fuse-bridge code predates GFAPI by a considerable
> amount. In fact, significant parts of GFAPI were borrowed from the
> existing fuse-brid
Unfortunately, one cannot convert replica 2 into replica 3 arbiter 1 right now,
but I really hope to get this feature into the 3.7 branch before the 3.8 release. See
the latest community meeting log for the discussion [1] (starting at 15:27:03).
[1] https://meetbot.fedoraproject.org/gluster-meeting/2016-03-23/
wee
And for 256b inode:
(597904 - 33000) / (1066036 - 23) == 530 bytes per inode.
So I still consider 1k to be a good estimate for an average workload.
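A quick way to re-check the arithmetic (the assumption here is that the numerator is KiB of arbiter-brick usage from df and the denominator is the number of inodes created, so the ratio is KiB per file):
===
echo "scale=3; (597904 - 33000) / (1066036 - 23)" | bc
# -> .529, i.e. roughly half a KiB per file, so 1 KiB per file leaves headroom
===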
Regards,
Oleksandr.
On Thursday, March 17, 2016 09:58:14 EET Ravishankar N wrote:
> Looks okay to me Oleksandr. You might want to make a github gi
ishankar N wrote:
> On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote:
> > In order to estimate GlusterFS arbiter brick size, I've deployed test
> > setup
> > with replica 3 arbiter 1 volume within one node. Each brick is located on
> > separate HDD (XFS with
Ravi, I will definitely arrange the results into some short handy
document and post it here.
Also, @JoeJulian on IRC suggested that I perform this test on XFS bricks
with inode sizes of 256b and 1k:
===
22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes
might look like for tha
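For anyone reproducing this, the XFS inode size is fixed at mkfs time, so each variant needs its own brick filesystem; a sketch with placeholder device paths (Gluster docs usually suggest -i size=512):
===
mkfs.xfs -f -i size=256 /dev/vdb1    # brick with 256-byte inodes
mkfs.xfs -f -i size=1024 /dev/vdc1   # brick with 1 KiB inodes
===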
Ravi,
here is the summary: [1]
Regards,
Oleksandr.
[1] https://gist.github.com/e8265ca07f7b19f30bb3
On Thursday, March 17, 2016 09:58:14 EET Ravishankar N wrote:
> On 03/16/2016 10:57 PM, Oleksandr Natalenko wrote:
> > OK, I've repeated the test with the following hierarchy:
Hi.
On Tuesday, March 8, 2016 19:13:05 EET Ravishankar N wrote:
> I think the first one is right because you still haven't used up all the
> inodes.(2036637 used vs. the max. permissible 3139091). But again this
> is an approximation because not all files would be 899 bytes. For
> example if
In order to estimate GlusterFS arbiter brick size, I've deployed a test setup
with a replica 3 arbiter 1 volume within one node. Each brick is located on a
separate HDD (XFS with inode size == 512). Using GlusterFS v3.7.6 + memleak
patches. Volume options are kept at their defaults.
Here is the script that cre
David,
could you please cross-post your observations to the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1309462
?
It seems you have faced a similar issue.
On Monday, February 22, 2016 16:46:01 EET David Robinson wrote:
> The 3.7.8 FUSE client is significantly slower than
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
revisions):
===
Soumya Koduri (3):
gfapi: Use inode_forget in case of handle objects
inode: Retire the inodes from the lru list in inode_table_destroy
rpc: Fix for rpc_transport_t leak
===
Here is Valgrind output
oumya?
[1] https://github.com/pfactum/xglfs
[2] https://github.com/pfactum/xglfs/blob/master/xglfs_destroy.c#L30
[3] https://gist.github.com/aec72b6164a695cf2d44
11.02.2016 10:12, Oleksandr Natalenko wrote:
And here goes "rsync" test results (v3.7.8 + two patches by Soumya).
2 volumes
2016 15:37, Oleksandr Natalenko wrote:
Hi, folks.
Here go new test results regarding client memory leak.
I use v3.7.8 with the following patches:
===
Soumya Koduri (2):
inode: Retire the inodes from the lru list in inode_table_destroy
gfapi: Use inode_forget in case of hand
Also, you may take a look at my dirty GlusterFS FUSE API client that
uses fuse_main():
https://github.com/pfactum/xglfs
11.02.2016 02:10, Samuel Hall wrote:
Hello everyone,
I am trying to find where FUSE is initialized and the fuse_main
function is called by Gluster.
Normally the call by F
Hi, folks.
Here go new test results regarding the client memory leak.
I use v3.7.8 with the following patches:
===
Soumya Koduri (2):
inode: Retire the inodes from the lru list in inode_table_destroy
gfapi: Use inode_forget in case of handle objects
===
Those are the only 2 not merged
jay Bellur wrote:
On 01/29/2016 01:09 PM, Oleksandr Natalenko wrote:
Here is intermediate summary of current memory leaks in FUSE client
investigation.
I use GlusterFS v3.7.6 release with the following patches:
===
Kaleb S KEITHLEY (1):
fuse: use-after-free fix in fuse-bridge, revisite
github.com/87baa0a778ba54f0f7f7
[3] https://gist.github.com/7013b493d19c8c5fffae
[4] https://gist.github.com/cc38155b57e68d7e86d5
[5] https://gist.github.com/6a24000c77760a97976a
[6] https://gist.github.com/74bd7a9f734c2fd21c33
On Monday, February 1, 2016 14:24:22 EET Soumya Koduri wrote:
On 02
8ba54f0f7f7
[3] https://gist.github.com/7013b493d19c8c5fffae
[4] https://gist.github.com/cc38155b57e68d7e86d5
[5] https://gist.github.com/6a24000c77760a97976a
[6] https://gist.github.com/74bd7a9f734c2fd21c33
On Monday, February 1, 2016 14:24:22 EET Soumya Koduri wrote:
> On 02/01/
Koduri wrote:
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
Unfortunately, this patch doesn't help.
RAM usage on "find" finish is ~9G.
Here is statedump before drop_caches: https://gist.github.com/
fc1647de0982ab447e20
[mount/fuse.fuse - usage-type gf_common_mt_inod
r.org/13324
>
> Hopefully with this patch the
> memory leaks should disappear.
>
> Xavi
>
> On 29.01.2016 19:09, Oleksandr
>
> Natalenko wrote:
> > Here is intermediate summary of current memory
>
> leaks in FUSE client
>
> > investigation.
> >
> >
Here is an intermediate summary of the current memory leaks in the FUSE client
investigation.
I use GlusterFS v3.7.6 release with the following patches:
===
Kaleb S KEITHLEY (1):
fuse: use-after-free fix in fuse-bridge, revisited
Pranith Kumar K (1):
mount/fuse: Fix use-after-free crash
Soumy
the lru list in inode_table_destroy
upcall: free the xdr* allocations
===
I've repeated "rsync" test under Valgrind, and here is Valgrind output:
https://gist.github.com/f8e0151a6878cacc9b1a
I see DHT-related leaks.
On Monday, January 25, 2016 02:46:32 EET Oleksandr
552486
max_size=725575836
max_num_allocs=7552489
total_allocs=90843958
[cluster/distribute.asterisk_records-dht - usage-type gf_common_mt_char
memusage]
size=586404954
num_allocs=7572836
max_size=586405157
max_num_allocs=7572839
total_allocs=80463096
===
Ideas?
On Monday, January 25, 2016 02:46:3
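When a statedump is full of memusage sections like the one above, one rough way to rank allocation types by their current size (the dump path is a placeholder):
===
awk -F= '/usage-type/ {t = $0} /^size=/ {print $2, t}' glusterdump.PID.dump \
    | sort -rn | head
===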
it looks promising :)
>
>
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko :
> > OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the
> > following
> > patches:
> >
> >
BTW, am I the only one who sees that
max_size=4294965480
is almost 2^32? Could that be an integer overflow?
On Sunday, January 24, 2016 13:23:55 EET Oleksandr Natalenko wrote:
> The leak definitely remains. I did "find /mnt/volume -type d" over GlusterFS
> volume, with mentioned pat
EET Mathieu Chateau wrote:
> Thanks for all your tests and times, it looks promising :)
>
>
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko :
> > OK, now I'm re-performing tests with rsync + GlusterFS v3.
:
> On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri <
>
> pkara...@redhat.com> wrote:
> > On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
> >> XFS. Server side works OK, I'm able to mount volume again. Brick is 30%
> >> full.
> >
>
rent patches will be incorporated
into 3.7.7.
On Friday, January 22, 2016 12:53:36 EET Kaleb S. KEITHLEY wrote:
> On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> > On Friday, January 22, 2016 12:32:01 EET Kaleb S. KEITHLEY wrote:
> >> I presume by this you mean you
On Friday, January 22, 2016 12:32:01 EET Kaleb S. KEITHLEY wrote:
> I presume by this you mean you're not seeing the "kernel notifier loop
> terminated" error in your logs.
Correct, but only with simple traversing. Have to test under rsync.
> Hmmm. My system is not leaking. Last 24 hours the R
OK, it compiles and runs well now, but still leaks. I will try to load the volume
with rsync.
On Thursday, January 21, 2016 20:40:45 EET Kaleb KEITHLEY wrote:
> On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote:
> > I see extra GF_FREE (node); added with two patches:
> >
> > ===
I see an extra GF_FREE (node); added with two patches:
===
$ git diff HEAD~2 | gist
https://gist.github.com/9524fa2054cc48278ea8
===
Is that intentional? I guess I'm facing a double-free issue.
On Thursday, January 21, 2016 17:29:53 EET Kaleb KEITHLEY wrote:
> On 01/20/2016 04:08 AM, Oleksandr Natale
ibc.so.6
===
On Thursday, January 21, 2016 17:29:53 EET Kaleb KEITHLEY wrote:
> On 01/20/2016 04:08 AM, Oleksandr Natalenko wrote:
> > Yes, there are couple of messages like this in my logs too (I guess one
> > message per each remount):
> >
> > ===
> > [2016-01-
n if the worker thread exits
> prematurely.
>
> If that solves the problem, we could try to determine the cause of the
> premature exit and solve it.
>
> Xavi
>
> On 20/01/16 10:08, Oleksandr Natalenko wrote:
> > Yes, there are couple of messages like this in my l
NT. I'm not sure if there could be any other error
> that can cause this.
>
> Xavi
>
> On 20/01/16 00:13, Oleksandr Natalenko wrote:
> > Here is another RAM usage stats and statedump of GlusterFS mount
> > approaching to just another OOM:
> >
> > ===
>
And another statedump of the FUSE mount client consuming more than 7 GiB of RAM:
https://gist.github.com/136d7c49193c798b3ade
DHT-related leak?
On Wednesday, January 13, 2016 16:26:59 EET Soumya Koduri wrote:
> On 01/13/2016 04:08 PM, Soumya Koduri wrote:
> > On 01/12/2016 12:46 PM,
Here is another RAM usage snapshot and statedump of a GlusterFS mount approaching
yet another OOM:
===
root 32495 1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/
glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
===
https://gist.github.com/86198201c79e
n D status,and
> the brick process and relate thread also be in the D status.
> And the brick dev disk util is 100% .
>
> On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko
>
> wrote:
> > Wrong assumption, rsync hung again.
> >
> > On Saturday, January 16, 2016
Wrong assumption, rsync hung again.
On Saturday, January 16, 2016 22:53:04 EET Oleksandr Natalenko wrote:
> One possible reason:
>
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
>
> I've disabled both optimizations, and at least as of now rsync still does
>
16:09:51 EET Oleksandr Natalenko wrote:
> Another observation: if rsyncing is resumed after hang, rsync itself
> hangs a lot faster because it does stat of already copied files. So, the
> reason may be not writing itself, but massive stat on GlusterFS volume
> as well.
>
> 15.0
Another observation: if rsyncing is resumed after a hang, rsync itself
hangs a lot faster because it stats the already-copied files. So, the
reason may be not the writing itself, but the massive stat load on the
GlusterFS volume as well.
15.01.2016 09:40, Oleksandr Natalenko wrote:
While doing rsync over
Here is a similar issue described on serverfault.com:
https://serverfault.com/questions/716410/rsync-crashes-machine-while-performing-sync-on-glusterfs-mounted-share
I've checked GlusterFS logs with no luck — as if nothing happened.
P.S. GlusterFS v3.7.6.
15.01.2016 09:40, Oleksandr Nata
While doing rsync over millions of files from an ordinary partition to a
GlusterFS volume, just after approximately the first 2 million files an rsync
hang happens, and the following info appears in dmesg:
===
[17075038.924481] INFO: task rsync:10310 blocked for more than 120
seconds.
[17075038.931948] "echo 0 > /pro
ill some issues.
13.01.2016 12:56, Soumya Koduri wrote:
On 01/13/2016 04:08 PM, Soumya Koduri wrote:
On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
Just in case, here is Valgrind output on FUSE client with 3.7.6 +
API-related patches we discussed before:
https://gist.github.com/cd660
Just in case, here is the Valgrind output on the FUSE client with 3.7.6 +
the API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
12.01.2016 08:24, Soumya Koduri wrote:
For fuse client, I tried vfs drop_caches as suggested by Vijay in an
earlier mail. Though all the ino
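The drop_caches step mentioned here is the standard kernel knob, run as root on the client:
===
sync
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
===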
x and post the patch soon.
Thanks for your patience!
-Soumya
On 01/07/2016 07:34 PM, Oleksandr Natalenko wrote:
OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the
most
recent
revisions) and NFS-Ganesha v2.3.0 with 8685abfc (most recent revision
too).
On traversing GlusterFS vol
OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the most recent
revisions) and NFS-Ganesha v2.3.0 with 8685abfc (most recent revision too).
While traversing a GlusterFS volume with many files in one folder via an NFS mount, I
get an assertion:
===
ganesha.nfsd: inode.c:716: __inode_forget:
owing Ganesha error:
===
ganesha.nfsd: inode.c:716: __inode_forget: Assertion `inode->nlookup >=
nlookup' failed.
===
06.01.2016 08:40, Soumya Koduri wrote:
On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote:
OK, I've repeated the same traversing test with patched GlusterFS A
OK, I've repeated the same traversing test with the patched GlusterFS API, and
here is the new Valgrind log:
https://gist.github.com/17ecb16a11c9aed957f5
Still leaks.
On Tuesday, January 5, 2016 22:52:25 EET Soumya Koduri wrote:
> On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
> >
Correct, I used a FUSE mount. Shouldn't gfapi be used by the FUSE mount helper (/
usr/bin/glusterfs)?
On Tuesday, January 5, 2016 22:52:25 EET Soumya Koduri wrote:
> On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
> > Unfortunately, both patches didn't make any difference fo
; gets changed. This is not related to any of the above
patches (or rather Gluster) and I am currently debugging it.
Thanks,
Soumya
On 12/25/2015 11:34 PM, Oleksandr Natalenko wrote:
1. test with Cache_Size = 256 and Entries_HWMark = 4096
Before find . -type f:
root 3120 0.6 11.0 879120 2
Here is another Valgrind log of a similar scenario, but with drop_caches before
umount:
https://gist.github.com/06997ecc8c7bce83aec1
Also, I've tried to drop caches on a production VM with a GlusterFS volume mounted
and memleaking for several weeks, with absolutely no effect:
===
root 945 0.1 48
age -
> >
> >> From: "Pranith Kumar Karampuri"
> >> To: "Oleksandr Natalenko" , "Soumya Koduri"
> >> Cc: gluster-users@gluster.org,
> >> gluster-de...@gluster.org
> >> Sent: Monday, December 28, 2015 9:32:07 AM
>
. 20:28:13 EET Soumya Koduri wrote:
> On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
> > Another addition: it seems to be GlusterFS API library memory leak
> > because NFS-Ganesha also consumes huge amount of memory while doing
> > ordinary "find . -type f" via NF
s full valgrind output:
https://gist.github.com/eebd9f94ababd8130d49
One may see the likelihood of massive leaks at the end of the Valgrind output,
related to both GlusterFS and NFS-Ganesha code.
On Friday, December 25, 2015 23:29:07 EET Soumya Koduri wrote:
> On 12/25/2015 08:56 PM, Oleksan
, server-side GlusterFS cache or server kernel page cache is the
cause).
There are ~1.8M files on this test volume.
On Friday, December 25, 2015 20:28:13 EET Soumya Koduri wrote:
> On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
> > Another addition: it seems to be GlusterFS AP
What units is Cache_Size measured in? Bytes?
25.12.2015 16:58, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be GlusterFS API library memory leak
because NFS-Ganesha also consumes huge amount of memory while doing
ordinary "find .
/usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f
/etc/ganesha/ganesha.conf -N NIV_EVENT
===
1.4G is too much for a simple stat() :(.
Ideas?
24.12.2015 16:32, Oleksandr Natalenko wrote:
This issue is still present in 3.7.6. Any suggestions?
24.09.2015 10:14, Oleksandr Natalenko wrote:
In our
This issue is still present in 3.7.6. Any suggestions?
24.09.2015 10:14, Oleksandr Natalenko wrote:
In our GlusterFS deployment we've encountered something like a memory
leak in the GlusterFS FUSE client.
We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot,
maildir format). Here is
list of the inode cache. I have sent a
patch for that.
http://review.gluster.org/#/c/12242/ [3]
Regards,
Raghavendra Bhat
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko
wrote:
I've checked statedump of volume in question and haven't found lots
of iobuf as mentioned in that
I've checked the statedump of the volume in question and haven't found lots of
iobufs as mentioned in that bug report.
However, I've noticed that there are lots of LRU records like this:
===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-coun
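The number of lru entries a brick's inode table holds is bounded by the inode lru limit; if that cache is the suspect, it can be lowered per volume. A hedged sketch (the value is only an example):
===
gluster volume set <volname> network.inode-lru-limit 65536
===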
We use a bare GlusterFS installation with no oVirt involved.
24.09.2015 10:29, Gabi C wrote:
Google "vdsm memory leak"; it's been discussed on the list last year and
earlier this year...
In our GlusterFS deployment we've encountered something like a memory leak
in the GlusterFS FUSE client.
We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot,
maildir format). Here are inode stats for both bricks and the mountpoint:
===
Brick 1 (Server 1):
Filesystem
Hello.
I'm trying to investigate how GlusterFS manages caches on both the server and
the client side, but unfortunately cannot find any exhaustive, appropriate,
and up-to-date information.
The situation is that we have, say, 2 GlusterFS nodes (server_a and
server_b) with a replicated volume some_volu
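As a hedged starting point, most of the caching asked about here is controlled by per-volume options plus FUSE-side kernel caching set at mount time; volume and server names below are placeholders and the values are only examples:
===
gluster volume set <volname> performance.cache-size 256MB             # io-cache read cache
gluster volume set <volname> performance.write-behind-window-size 1MB # write-behind buffer
gluster volume set <volname> performance.md-cache-timeout 1           # metadata cache lifetime, seconds
# kernel attribute/entry caching on the FUSE client is a mount option:
mount -t glusterfs -o attribute-timeout=1,entry-timeout=1 <server>:/<volname> /mnt/<volname>
===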