On Sun, Jan 16, 2011 at 5:02 PM, Max Ivanov ivanov.ma...@gmail.com wrote:
time tar cf - M | pv > /dev/null
15.8 MB/sec (native), 3.48 MB/sec (FUSE), 254 KB/sec (NFS)
This test shows why the GlusterFS native protocol is better than NFS when you
need to scale out storage. Even with a context
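The one-liner above is easy to reproduce locally; a minimal sketch, using a scratch directory in place of the native/FUSE/NFS mounts (paths and file sizes here are assumptions, not from the thread):

```shell
#!/bin/sh
# Stand-in for a mounted volume; on a real test, cd into the native,
# FUSE, or NFS mount point instead of /tmp/tarbench.
mkdir -p /tmp/tarbench/testdir
dd if=/dev/zero of=/tmp/tarbench/testdir/sample bs=1M count=4 2>/dev/null
cd /tmp/tarbench
# Stream the tree as a tar archive and count the bytes moved; with pv
# installed, `tar cf - testdir | pv > /dev/null` prints a live MB/s figure.
bytes=$(tar cf - testdir | wc -c | tr -d ' ')
echo "streamed $bytes bytes"
```

Running the same pipeline from each mount point gives directly comparable throughput numbers, since the tar stream and sink are identical in every case.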
Hey chaps,
Anyone got any pointers as to what this might be? This is still causing
a lot of problems for us whenever we attempt to do df.
-- joe.
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Warren-Meeks
Sent: 15
On 01/17/2011 10:47 AM, Joe Warren-Meeks wrote:
Hey chaps,
Anyone got any pointers as to what this might be? This is still causing
a lot of problems for us whenever we attempt to do df.
-- joe.
-----Original Message-----
However, for some reason, they've got into a bit of a state such
(sorry about topposting.)
Just changing the timeout would only mask the problem. The real issue is
that running 'df' on either node causes a hang.
All other operations seem fine, files can be created and deleted as
normal with the results showing up on both.
I'd like to work out why it's
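One way to narrow a hanging df down without wedging a shell is to put a bound on the statfs call; a sketch using the current directory as a stand-in for the mount point (the real path is not shown in the thread):

```shell
#!/bin/sh
# "." stands in for the GlusterFS mount point; substitute the real path.
# timeout(1) kills df if the underlying statfs call never returns.
if timeout 10 df -P . > /dev/null; then
    echo "statfs ok"
else
    echo "statfs hung or failed - check the glusterfsd and brick logs"
fi
```

If the probe times out on one node but not the other, that points at a wedged brick or client process on the failing node rather than a volume-wide problem.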
Hi,
Does anybody use a GlusterFS (3.1.1) system for hosting users' home
folders or other collections of small files? I'm having major
performance issues which I'm fairly sure come down to the
hardware, but I would appreciate more feedback / ideas / comments.
While I had been
Hello,
I checked the logfiles one more time. I know why it was impossible to find the
problem.
When the server rebooted and connects to glusterfs server it logs:
[2010-08-23 10:15:36] N [client-protocol.c:6246:client_setvolume_cbk] vgfs-01-001: Connected to
10.0.1.X:7002, attached to remote
hi amar!
I compiled version 3.1.2 today and tested again.
The problems and behavior are identical to those in v3.1.1.
regards
markus
On 14.01.2011 10:39, Markus Fröhlich wrote:
hi amar!
I made the test like you want me to do and the glusterfsd dies again.
virt-zabbix-02:~
Disconnection logs were wrongly put at a lower log level during the RPC
migration. They are fixed in the latest code.
On Mon, Jan 17, 2011 at 7:36 PM, Georg Höllrigl
georg.hoellr...@xidras.comwrote:
Hello,
I checked the logfiles one more time. I know why it was impossible to find
the problem.
Dear Gluster Community,
We are pleased to announce that GlusterFS v3.1.2 has been released and
is now available for download at:
http://ftp.gluster.com/pub/gluster/glusterfs/3.1/3.1.2/glusterfs-3.1.2.tar.gz
The release notes can be found here:
After a couple years I'm back looking at glusterfs again for a new project.
I downloaded, compiled and installed glusterfs 3.1.2 from the tarball. I
did this on two EC2 servers.
Everything compiled and installed fine. I started the daemon on both
servers and then on the primary server I tried to do
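On a fresh two-node install, the usual next step is to probe the second server and build a volume; a sketch of the 3.1.x CLI sequence (addresses and brick paths below are placeholders, not the poster's; the actual command attempted is cut off above):

```shell
# Run on the primary after glusterd is up on both servers.
gluster peer probe 10.0.0.2        # add the second server to the pool
gluster peer status                # both peers should show "Connected"
# Create and start a 2-way replicated volume across the two bricks.
gluster volume create testvol replica 2 \
    10.0.0.1:/data/brick 10.0.0.2:/data/brick
gluster volume start testvol
```

If `peer probe` hangs or the peer stays in a rejected state, that usually matches the symptoms in the logs that follow.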
Primary server 10.XXX.142.178 log:
+--+
[2011-01-17 22:42:04.172075] I
[glusterd-handler.c:673:glusterd_handle_cli_list_friends] glusterd:
Received cli list req
[2011-01-17 22:42:19.156481] I
Restarted daemons on both servers and checked logs:
+--+
[2011-01-18 00:50:58.452742] I
[glusterd-handler.c:673:glusterd_handle_cli_list_friends] glusterd:
Received cli list req
[2011-01-18
On 01/17/2011 10:57 PM, Anand Avati wrote:
Looks like you have a stale process running. Can you force kill all
gluster daemons, rm -rf /etc/glusterd and start fresh? Please ensure
name resolution works fine between the hosts.
Avati
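Avati's suggestion can be sketched as follows; note that this wipes glusterd's state directory (/etc/glusterd in 3.1.x), so it is only safe on a cluster you are rebuilding from scratch (hostnames are placeholders):

```shell
#!/bin/sh
# Stop every gluster daemon, wipe glusterd's state, and start fresh.
pkill glusterd; pkill glusterfsd; pkill glusterfs
rm -rf /etc/glusterd
# Verify name resolution works on both hosts before re-probing.
getent hosts server1 server2
glusterd
```

Stale glusterd state combined with broken name resolution is a common cause of the "peer rejected" symptoms described earlier in the thread.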
Primary:
# ps -ef | grep gluster
root 807
On 01/17/2011 11:22 PM, Gerry Reno wrote:
On 01/17/2011 10:57 PM, Anand Avati wrote:
Looks like you have a stale process running. Can you force kill all
gluster daemons, rm -rf /etc/glusterd and start fresh? Please ensure
name resolution works fine between the hosts.
Avati