Does locking cause such problems? It works quite well on NFS.
John
--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmad...@ivytech.edu
and io-cache. I'll see about eliminating them to further test behavior.
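For concreteness, eliminating them would leave a client graph along these
lines (a rough sketch; the names and host are placeholders, not my actual
volfile):

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1      # placeholder hostname
  option remote-subvolume brick
end-volume

# no write-behind or io-cache stacked on top; mount "remote" directly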
John
volume writeback
  type performance/write-behind
  option cache-size 128MB
  option flush-behind off
  subvolumes iocache        # stacked over the io-cache volume defined earlier
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 100   # well above the default of 16
  subvolumes writeback
end-volume
John
dent though, so it's hard to really point my
finger at GlusterFS itself. And it's worth noting that using NFS means
I'm not using a parallel filesystem, so there is no redundancy, and
redundancy is worth a lot.
John
- try your setup without it?
Also, write-behind is a client-side thing, so remove that from the server
(the server is a normal userspace process, so it already gets the benefit
of the kernel filesystem cache).
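A rough sketch of what I mean, with placeholder names and host rather than
an actual volfile from this setup: write-behind belongs in the client
graph, stacked over the protocol/client volume, and the server graph
carries no performance translators at all.

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1      # placeholder hostname
  option remote-subvolume brick
end-volume

volume writeback
  type performance/write-behind
  option cache-size 4MB           # illustrative size
  subvolumes remote
end-volume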
John
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   303   0  1714   0   253   0   295   0  1780   0   415   0
pxhorde0,300M,27770,31,63545,6,4150,0,31808,29,66540,4,582.4,0,16,303,0,1714,0,253,0,295,0,1780,0,415,0
John
picture. If you actually need an IMAP cluster, use Cyrus Murder.
John
On Dec 26, 2009, at 21:34, "David Touzeau" wrote:
>
> Dear
> I'm using glusterfs 3.0, I'm tr
working).
John
network card, etc. io-threads counts, for example, only seem to drive
load average higher: the threads all sit there chewing up CPU anyway, so you
lower the count and get lower overall system load but higher latency. But
why would the glusterfs process need CPU time anyway?
John
> Please check the memory usage on the server and check if the swap space is
> used much. You can try reducing the io-threads count to 8 (in place of 16).
I had tried 16, 8, and 4 on io-threads. No swap usage at all.
John
> but lots of writes too.
> Also, we would like to know the GlusterFS version number used for this setup.
> Can you please try the setup without io-cache and let us know.
That was actually my most recent configuration, but the behavior was
unchanged: high CPU usage, high latency.
John
I'm sure have already been done.
I obviously have more testing to do myself.
John
glusterfs as their PHP sessions store plus shared data (temp files, misc
app data), with the sessions store being the hard-hit service. When I crank
up the incoming user rate (causing new sessions to be created and then
read), things get wonky as explained before.
John
> but lots of writes too.
> Also, we would like to know the GlusterFS version number used for this setup.
Apologies. This is on 2.0.8, pre-built RPMs off the site.
John
  subvolumes writeback
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4      # default is 16
  subvolumes io-cache
end-volume
TIA,
John