Hello,
I found a new issue with glusterfs 3.2.1 - I'm getting a glusterfs process for each mountpoint, and
they are consuming all of the CPU time.
strace doesn't show a thing - so no system calls are being made.
Mounting the same volumes on another server works fine.
Has anyone seen such a thing?
A dump from shooting one of the processes with SIGUSR1 can be found here:
http://www.xidras.com/logfiles/core
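(Since strace shows no system calls, the spin is in userspace; a minimal way to see where it is stuck - assuming gdb is installed and debug symbols for glusterfs are available - is a batch backtrace of all threads:)

  # Attach to the spinning glusterfs process and dump every thread's stack.
  # <pid> is a placeholder for the process id taken from ps/top.
  gdb -p <pid> -batch -ex 'thread apply all bt'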
On 22.06.2011 13:59, Amar Tumballi wrote:
If you have used 'SIGUSR1' then you should have a file /tmp/glusterdump.pid;
can you post that too? (You can gzip it to save some bandwidth.)
-Amar
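(For reference, a minimal sketch of that procedure; the dump file name follows the /tmp/glusterdump.<pid> pattern described above, and <pid> is a placeholder for whichever glusterfs process you are inspecting:)

  # Ask the glusterfs process to write a state dump.
  kill -USR1 <pid>
  # The dump lands in /tmp, named after the process id; gzip it before posting.
  gzip /tmp/glusterdump.<pid>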
Yep - I have the files here - but there's not much content:
Georg
/tmp # cat
Hi,
I'm using glusterfs 3.2.0. I have configured a 2-node distributed storage
system.
The server configuration file on both storage nodes is :
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume
volume
Hi,
I've been evaluating GlusterFS (3.2.0) for a small replicated cluster set up
on Amazon EC2, and I think I've found what might be a bug or some sort of
unexpected behaviour during the self-heal process.
Here's the volume info[1]:
Volume Name: test-volume
Type: Replicate
Status: Started
Any reason for not using glusterd and the CLI?
Please use the CLI to create volumes.
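(As a sketch of what that looks like - assuming the two storage nodes are reachable as server1 and server2, which are hypothetical hostnames, and both export /data/export as in the volfile above:)

  # Join the second node to the trusted pool (run once, from server1).
  gluster peer probe server2
  # Create and start a 2-brick distributed volume over the same directories.
  gluster volume create dist-vol server1:/data/export server2:/data/export
  gluster volume start dist-vol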
On Jun 22, 2011 6:39 PM, sonali.gupta <sonali.gu...@99acres.com> wrote:
Hi,
I'm using glusterfs 3.2.0. I have configured a 2-node distributed storage
system.
The server configuration file on both storage nodes is
It looks like the disconnection happened in the middle of a write
transaction (after the lock phase, before the unlock phase), and the
server's detection of the client disconnection (via TCP_KEEPALIVE) seems
not to have happened before the client reconnected. The client, having witnessed the
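(For context, the server-side detection window can be shortened at the OS level; these are standard Linux TCP keepalive sysctls, not GlusterFS options, and the values below are only illustrative:)

  # The Linux defaults detect a dead peer only after roughly two hours.
  # Illustrative values: first probe after 60s, then up to 5 probes 10s apart.
  sysctl -w net.ipv4.tcp_keepalive_time=60
  sysctl -w net.ipv4.tcp_keepalive_intvl=10
  sysctl -w net.ipv4.tcp_keepalive_probes=5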
I was really hoping to get some suggestions on this - but I know everyone is
equally busy.
It's looking kind of grim for my GlusterFS project - the users all moved off
after the last outage - and I'm open to ideas on how to bring them back.
James Burnash
Unix Engineer
Knight Capital Group
Just FYI - I've upgraded my installation from 3.1.3 to 3.1.5, hoping that it will
either give me better diagnostic messages or just free me from some of the
known bugs.
I'll give updates on progress, but so far doing yum update worked just fine,
and though one storage server out of four seemed
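(For anyone repeating this, the upgrade itself is just the stock package update plus a daemon restart - a sketch, assuming the Gluster yum repository is already configured; exact package names may vary by repo:)

  # Update every installed GlusterFS package from the configured repo.
  yum update 'glusterfs*'
  # Restart the management daemon so the new binaries take over.
  service glusterd restart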
Does anyone know how we can get the native GlusterFS clients to clear their
cached data, short of restarting them (or dismounting and remounting their
gluster storage)?
Clients are running 3.1.3, servers are now running 3.1.5.
Thanks,
James Burnash
Unix Engineer
Knight Capital Group
On Wed, Jun 22, 2011 at 11:53 AM, Burnash, James <jburn...@knight.com> wrote:
Does anyone know how we can get the native GlusterFS clients to clear their
cached data, short of restarting them (or dismounting and remounting their
gluster storage)?
Clients are running 3.1.3, servers are now running
Excellent - thank you sir!
James Burnash
Unix Engineer
Knight Capital Group
From: Harshavardhana [mailto:har...@gluster.com]
Sent: Wednesday, June 22, 2011 2:56 PM
To: Burnash, James
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS 3.1.5 now available
On Wed, Jun 22, 2011
I'll add it :)
James Burnash
Unix Engineer
Knight Capital Group
From: John Mark Walker [mailto:jwal...@gluster.com]
Sent: Wednesday, June 22, 2011 3:04 PM
To: Harshavardhana; Burnash, James
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] GlusterFS 3.1.5 now available
This looks like
That was to clear the page cache on the client side; some of the issues
related to cache coherency with stat-prefetch, quick-read and io-cache
usually get resolved temporarily that way.
In general this is not a good practice, since the application has to
invalidate its caches proactively and keep track of them
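(The mechanism in question is the kernel's drop_caches knob; the exact command was not quoted in this thread, but it is presumably along these lines:)

  # Flush dirty pages first, then drop the page cache, dentries and inodes.
  sync
  echo 3 > /proc/sys/vm/drop_caches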
After the upgrade, one of my four storage servers was successfully running
glusterd, but no glusterfsd processes were present for bricks on that server.
I believe this had to do with the fact that this was the first server I created
under 3.1.1, and that I had somehow misconfigured peers for
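(A quick sketch of the checks that separate a glusterd problem from missing brick daemons - standard commands, nothing version-specific assumed:)

  # Confirm the other peers are connected from this server's point of view.
  gluster peer status
  # Each brick should have a running glusterfsd server process.
  ps -ef | grep '[g]lusterfsd'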
The client problem was that, after doing the above to the storage servers,
the client(s) obviously had cache issues. Doing an "ls" on a native
GlusterFS mount showed only a few of the many subdirectories, and a "du
-sh" of the mount point gave this message on the client:
du:
hi john,
many thanks for the heads-up on this new version.
just for everyone's edification (i'm always curious if/when people
upgrade) i've just upgraded our 4 servers from 3.1.3 to 3.1.5. so, if you
don't hear from me - then you can assume everything went well.
regards,
paul
On 21 June 2011
On 06/22/2011 02:44 PM, Burnash, James wrote:
g01/pfs-ro1-client-0=0x jc1letgfs17
g01/pfs-ro1-client-0=0x0608 jc1letgfs18
g01/pfs-ro1-client-20=0x jc1letgfs14
g01/pfs-ro1-client-20=0x0200 jc1letgfs15
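(Those entries look like AFR pending-operation counters; a sketch of how such values are typically read off a brick - getfattr comes from the attr package, and /data/export/somefile is a hypothetical brick path:)

  # Dump the replicate (AFR) changelog xattrs for a file, hex-encoded.
  getfattr -d -m trusted.afr -e hex /data/export/somefile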