oflag=dsync is going to be really slow on any disk, or am I missing
something here?
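For what it's worth, the difference is easy to demonstrate with dd (the path and sizes here are just an illustration):

# dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=100 oflag=dsync
# dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=100

The first form waits for every block to reach stable storage before issuing the next write, so it will be dramatically slower on any disk, gluster or not.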
On Tue, 2015-02-17 at 10:03 +0800, Punit Dambiwal wrote:
Hi Vijay,
Please find the volume info here :-
[root@cpu01 ~]# gluster volume info
Volume Name: ds01
Type: Distributed-Replicate
Volume
Your getinode isn't working...
+ '[' 0 -ne 0 ']'
++ stat -c %i /mnt/gluster
+ inode=
+ '[' 1 -ne 0 ']'
How old is your mount.glusterfs script?
On Tue, 2015-01-27 at 08:52 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 08:46, Franco Broi pisze:
[franco@charlie4 ~]$ stat -c %i /data
So what is the inode of your mounted gluster filesystem? And does
running 'mount' show it as being fuse.glusterfs?
On Tue, 2015-01-27 at 09:05 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 09:00, Franco Broi pisze:
Your getinode isn't working...
+ '[' 0 -ne 0 ']'
++ stat -c %i
error
if [ -z $inode ]; then
inode=0;
fi
if [ $inode -ne 1 ]; then
err=1;
fi
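For reference, a more defensive version of that check might look like this (a sketch only, with the stat output quoted so an empty result can't break the test; $mount_point stands in for whatever variable the real script uses):

inode=$(stat -c %i "$mount_point" 2>/dev/null)
if [ -z "$inode" ]; then
    inode=0
fi
if [ "$inode" -ne 1 ]; then
    err=1
fi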
On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 09:08, Franco Broi pisze:
So what is the inode of your mounted gluster filesystem? And does
Could this be a case of Oracle Linux being evil?
On Tue, 2015-01-27 at 16:20 +0800, Franco Broi wrote:
Well I'm stumped, just seems like the mount.glusterfs script isn't
working. I'm still running 3.5.1 and the getinode bit of my script looks
like this:
...
Linux)
getinode
What do you get if you do this?
bash-4.1# stat -c %i /mnt/gluster
1
-bash-4.1# echo $?
0
On Tue, 2015-01-27 at 09:47 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 09:20, Franco Broi pisze:
Well I'm stumped, just seems like the mount.glusterfs script isn't
working. I'm still
On Tue, 2015-01-27 at 14:09 +0100, Bartłomiej Syryjczyk wrote:
OK, I removed the 2>/dev/null part, and see:
stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable
So I decided to add a sleep just before line number 298 (the one with
stat). And it works! Is that normal?
Glad that you
Must be something wrong with your mount.glusterfs script, you could try
running it with sh -x to see what command it tries to run.
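For example (substitute your own server, volume and mountpoint):

# sh -x /sbin/mount.glusterfs server:/volume /mnt/gluster -o rw 2>&1 | tee /tmp/mount.trace

The -x trace prints every command the script executes, so you can see exactly which step returns the bad status.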
On Tue, 2015-01-27 at 07:28 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 02:54, Pranith Kumar Karampuri pisze:
On 01/26/2015 02:27 PM, Bartłomiej
Seems to mount and then umount it because the inode isn't 1, weird!
On Tue, 2015-01-27 at 08:29 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 07:45, Franco Broi pisze:
Must be something wrong with your mount.glusterfs script, you could try
running it with sh -x to see what command
On Tue, 2015-01-27 at 08:43 +0100, Bartłomiej Syryjczyk wrote:
W dniu 2015-01-27 o 08:36, Franco Broi pisze:
Seems to mount and then umount it because the inode isn't 1, weird!
Why must it be 1? Are you sure it's the inode, not the exit code?
Can you check your system?
---
[root@apache2 ~]# stat -c %i /brick
Hi
Just something to note. I created a stripe volume for testing, then
deleted it and created a distributed volume; for the second create I had
to use force as it thought the first brick was already part of a volume.
When I came to write to the distributed volume, all the files went to
the first
to physical
hardware units, ie I could disconnect a brick and move it to another
server.
Thanks
Jason
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
Sent: Thursday, December 04, 2014 7:56 PM
To: gluster
and there is
some configuration I can tweak to make things faster.
Andy
On Dec 7, 2014, at 8:43 PM, Franco Broi franco.b...@iongeo.com wrote:
On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote:
May I ask why you chose to go with 4 separate bricks per server rather
than one large brick
get the same sort of performance from normal NFS then I would say
your IPoIB stack isn't performing very well but I assume you've tested
that with something like iperf?
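A quick back-to-back test, assuming iperf is installed on both ends:

server# iperf -s
client# iperf -c server

If that can't get near line rate, no filesystem layered on top of it will.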
On Dec 7, 2014, at 9:15 PM, Franco Broi franco.b...@iongeo.com wrote:
Our theoretical peak throughput is about
1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each
server has 10Gbit Ethernet.
Each brick is a ZOL RAIDZ2 pool with a single filesystem.
Not adding an extra column is least likely to break any scripts that
currently parse the status output by splitting on spaces.
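For instance, a (hypothetical but typical) parser like this relies on the existing column positions:

gluster volume status | awk '/^Brick/ {print $2, $4}'

Inserting a new column in the middle would silently shift $4 onto the wrong field.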
On Wed, 2014-11-26 at 20:27 +0530, Atin Mukherjee wrote:
I would vote for 2nd one.
~Atin
On 11/26/2014 06:49 PM, Mohammed Rafi K C wrote:
Hi All,
We
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
On Tue, 2014-11-18 at 18:00 +1000, Lindsay Mathieson wrote:
On 18 November 2014 17:46, Franco Broi franco.b...@iongeo.com wrote:
Try strace -Ff -e file -p 'glusterfsd pid'
Thanks, Attached
glusterfsd is the filesystem daemon. You could try strace'ing it to
see what it's doing.
On Tue, 2014-11-18 at 17:09 +1000, Lindsay Mathieson wrote:
And its happening on both nodes now, they have become near unusable.
On 18 November 2014 17:03, Lindsay Mathieson
Try strace -Ff -e file -p 'glusterfsd pid'
On Tue, 2014-11-18 at 17:42 +1000, Lindsay Mathieson wrote:
Sorry, meant to send to the list. strace attached.
On 18 November 2014 17:35, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
On 11/18/2014 12:32 PM, Lindsay Mathieson wrote:
I've never added a brick with existing files but I did start a new
Gluster volume on disks that already contained data and I was able to
access the files without problem. Of course the files will be out of
place but the first time you access them, Gluster will add links to
speed up future
for any info!
-----Original Message-----
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Thursday, 16 October 2014 10:06 AM
To: SINCOCK John
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Is it ok to add a new brick with files already
on it?
I've never added a brick
: Franco Broi; gluster-users
Subject: Re: [Gluster-users] Is it ok to add a new brick with
files already on it?
So Gluster, at its core, uses rsync to copy the data to the
other bricks. Why not let Gluster do the heavy
(after nearly 2 years) I'm still not sure I'm giving you
accurate information.
Thanks again, I really do appreciate any advice that can really nail this
down and clarify the situation.
No problem.
-----Original Message-----
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent
of broken things
Cutting Edge
http://cuttingedge.com.au
On 7 October 2014 14:56, Franco Broi franco.b...@iongeo.com wrote:
Our bricks are 50TB, running ZOL, 16 disks raidz2. Works OK with Gluster
now that they fixed xattrs.
8k writes with fsync 170MB/Sec, reads 335MB/Sec.
On Tue
Not an issue for us, we're at 92% on an 800TB distributed volume, 16
bricks spread across 4 servers. Lookups can be a bit slow but raw IO
hasn't changed.
On Tue, 2014-10-07 at 09:16 +1000, Dan Mons wrote:
On 7 October 2014 08:56, Jeff Darcy jda...@redhat.com wrote:
I can't think of a good
Dan Mons
Unbreaker of broken things
Cutting Edge
http://cuttingedge.com.au
On 7 October 2014 14:16, Franco Broi franco.b...@iongeo.com wrote:
Not an issue for us, we're at 92% on an 800TB distributed volume, 16
bricks spread across 4 servers. Lookups can be a bit slow but raw IO
, good enough for us to switch back from using gNFS for
interactive applications.
All my mountpoints look good too. no more crashes.
Thanks for all the good work, hopefully you won't be hearing much from me
for a while!
Cheers,
On Wed, 2014-08-06 at 08:54 +0800, Franco Broi wrote:
I think all
On Mon, 2014-08-04 at 12:31 +0200, Niels de Vos wrote:
On Mon, Aug 04, 2014 at 05:05:10PM +0800, Franco Broi wrote:
A bit more background to this.
I was running 3.4.3 on all the clients (120+ nodes) but I also have a
3.5 volume which I wanted to mount on the same nodes. The 3.4.3
updates!
Cheers,
On Tue, 2014-08-05 at 14:24 +0800, Franco Broi wrote:
On Mon, 2014-08-04 at 12:31 +0200, Niels de Vos wrote:
On Mon, Aug 04, 2014 at 05:05:10PM +0800, Franco Broi wrote:
A bit more background to this.
I was running 3.4.3 on all the clients (120+ nodes) but I also
I've had a sudden spate of mount points failing with Transport endpoint
not connected and core dumps. The dumps are so large and my root
partitions so small that I haven't managed to get a decent traceback.
BFD: Warning: //core.2351 is truncated: expected core file size =
165773312, found:
on.
Cheers,
On Mon, 2014-08-04 at 12:53 +0530, Pranith Kumar Karampuri wrote:
CC dht folks
Pranith
On 08/04/2014 11:52 AM, Franco Broi wrote:
I've had a sudden spate of mount points failing with Transport endpoint
not connected and core dumps. The dumps are so large and my root
partitions so
On Wed, 2014-07-16 at 08:32 +0530, Santosh Pradhan wrote:
On 07/15/2014 06:24 AM, Franco Broi wrote:
I think the option I need is noac.
noac is no-attribute-caching in NFS. By default, NFS client would cache
the meta-data/attributes of FILEs for 3 seconds and directory for 30
seconds
I think the option I need is noac.
On Fri, 2014-07-11 at 16:10 +0530, Santosh Pradhan wrote:
On 07/11/2014 02:37 PM, Franco Broi wrote:
On Fri, 2014-07-11 at 14:22 +0530, Santosh Pradhan wrote:
On 07/11/2014 06:10 AM, Franco Broi wrote:
Hi
Is there any way to make Gluster emulate
On Fri, 2014-07-11 at 14:22 +0530, Santosh Pradhan wrote:
On 07/11/2014 06:10 AM, Franco Broi wrote:
Hi
Is there any way to make Gluster emulate the behaviour of a NFS
filesystem exported with the sync option? By that I mean is it possible
to write a file from one client and guarantee
-management: Commit of operation 'Volume Start' failed on localhost
On Wed, 2014-06-25 at 13:21 +0800, Franco Broi wrote:
Ok, I'm going to try this tomorrow. Anyone have anything else to add??
What's the worst that can happen?
On Mon, 2014-06-23 at 20:11 +0530, Kaushal M wrote:
On Wed
Hi
Is there any way to make Gluster emulate the behaviour of a NFS
filesystem exported with the sync option? By that I mean is it possible
to write a file from one client and guarantee that the data will be
instantly available on close to all other clients?
Cheers,
On Thu, 2014-07-10 at 21:02 -0400, James wrote:
On Thu, Jul 10, 2014 at 8:40 PM, Franco Broi franco.b...@iongeo.com wrote:
Is there any way to make Gluster emulate the behaviour of a NFS
filesystem exported with the sync option? By that I mean is it possible
to write a file from one client
Ok, I'm going to try this tomorrow. Anyone have anything else to add??
What's the worst that can happen?
On Mon, 2014-06-23 at 20:11 +0530, Kaushal M wrote:
On Wed, Jun 18, 2014 at 6:58 PM, Justin Clift jus...@gluster.org wrote:
On 18/06/2014, at 9:36 AM, Kaushal M wrote:
You are right.
]
(-->/usr/sbin/glusterd(main+0x5d2) [0x406802]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb7) [0x4051b7]
(-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) [0x4050c3]))) 0-:
received signum (0), shutting down
Thanks,
Lala
----- Original Message -----
From: Franco Broi franco.b
.
----- Original Message -----
From: Franco Broi franco.b...@iongeo.com
To: Lalatendu Mohanty lmoha...@redhat.com
Cc: Susant Palai spa...@redhat.com, Niels de Vos nde...@redhat.com,
Pranith Kumar Karampuri pkara...@redhat.com, gluster-users@gluster.org,
Raghavendra Gowdappa rgowd
a close eye on this issue and upgrade as soon as the 3.5
looks stable
Thanks very much for the info.
Cheers and Regards,
John
-----Original Message-----
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Wednesday, 18 June 2014 3:15 PM
To: SINCOCK John
Cc: gluster-users
Hi John
I got this yesterday, it was copied to this list.
Cheers,
On Tue, 2014-06-17 at 04:55 -0400, Susant Palai wrote:
Hi Franco:
The following patches address the ENOTEMPTY issue.
1. http://review.gluster.org/#/c/7733/
2.
information.
Susant.
----- Original Message -----
From: Franco Broi franco.b...@iongeo.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Susant Palai spa...@redhat.com, gluster-users@gluster.org,
Raghavendra Gowdappa rgowd...@redhat.com, kdhan...@redhat.com,
vsomy...@redhat.com, nbala
-----Original Message-----
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Monday, June 02, 2014 6:35 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes
Just do an ls on the bricks, the paths are the same as the mounted filesystem
Name: dst
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: primary:/export/sdd1/brick
Brick2: secondary:/export/sdd1/brick
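So for the volume above, listing Brick1's contents directly on its server (paths taken from the info output) shows the same file names as the mounted volume:

[root@primary ~]# ls /export/sdd1/brick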
-----Original Message-----
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Tuesday, June 03, 2014 12:56 PM
Sent: Tuesday, June 03, 2014 1:19 PM
To: Gnan Kumar, Yalla
Cc: Franco Broi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes
You have only 1 file on the gluster volume, the 1GB disk image/volume that
you created. This disk image is attached to the VM as a file system
dir13021
drwxrwxr-x 2 1348 200 2 May 16 12:05 dir13022
.
Maybe Gluster is losing track of the files??
Pranith
On 06/02/2014 02:48 PM, Franco Broi wrote:
Hi Pranith
Here's a listing of the brick logs, looks very odd especially the size
of the log for data10.
[root@nas3 bricks
, Pranith Kumar Karampuri wrote:
----- Original Message -----
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Franco Broi franco.b...@iongeo.com
Cc: gluster-users@gluster.org
Sent: Monday, June 2, 2014 7:01:34 AM
Subject: Re: [Gluster-users] glusterfsd process spinning
Just do an ls on the bricks, the paths are the same as the mounted
filesystem.
On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
wrote:
Hi All,
I have created a distributed volume of 1 GB , using two bricks from
two different servers.
I have written 7 files whose
Hi
I've been suffering from continual problems with my gluster filesystem
slowing down due to what I thought was congestion on a single brick
being caused by a problem with the underlying filesystem running slow
but I've just noticed that the glusterfsd process for that particular
brick is
Doesn't work for me either on CentOS 5, have to modprobe fuse.
On 1 Jun 2014 10:46, Gene Liverman glive...@westga.edu wrote:
Just set up my first Gluster share (replicated on 3 nodes) and it works fine on
RHEL 6 but when trying to mount it on RHEL 5.8 I get the following in my logs:
[2014-06-01
dd read test on the disk shows 700MB/Sec, which is about normal for
these bricks.
On Sun, 2014-06-01 at 13:23 +0800, Franco Broi wrote:
The volume is almost completely idle now and the CPU for the brick
process has returned to normal. I've included the profile and I think it
shows
OK. I've just unmounted the data2 volume from a machine called tape1,
and now try to remount - it's hanging.
/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o
rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
on the server
# gluster vol status
Status of volume: data2
On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote:
OK. I've just unmounted the data2 volume from a machine called tape1,
and now try to remount - it's hanging.
/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o
rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
connected clients cannot support the feature being set. These clients
need to be upgraded or disconnected before running this command again
On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote:
OK. I've just unmounted the data2 volume from a machine called tape1,
and now try to remount - it's hanging
On Tue, 2014-05-27 at 14:38 +0530, Vijay Bellur wrote:
On 05/26/2014 11:50 AM, Franco Broi wrote:
Hi
Is there any way now, or plans to implement a pause or freeze function
for Gluster so that a brick can be taken offline momentarily from a DHT
volume without affecting all the clients
Hi
My clients are running 3.4.1, when I try to mount from lots of machine
simultaneously, some of the mounts hang. Stopping and starting the
volume clears the hung mounts.
Errors in the client logs
[2014-05-28 01:47:15.930866] E
[client-handshake.c:1741:client_query_portmap_cbk]
Hi
Is there any way now, or plans to implement a pause or freeze function
for Gluster so that a brick can be taken offline momentarily from a DHT
volume without affecting all the clients?
Cheers,
Have you checked /var/log/glusterfs/nfs.log?
I've found that setting nfs.disable to on and then off will sometimes
restart failed gluster nfs servers.
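i.e. something like this, with your own volume name substituted:

# gluster volume set data nfs.disable on
# gluster volume set data nfs.disable off

That kicks the gluster NFS server process without touching the bricks.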
On Thu, 2014-05-22 at 12:13 -0400, Ted Miller wrote:
I have a running replica 3 setup, running on CentOS 6, fully updated.
I was trying to
.
On Wed, 2014-05-21 at 08:35 -0700, Doug Schouten wrote:
Hi,
glusterfs is using ~ 5% of memory (24GB total) and glusterfsd is using 1%
The I/O cache size (performance.cache-size) is 1GB.
cheers, Doug
On 20/05/14 08:37 PM, Franco Broi wrote:
Are you running out of memory? How much
If you are trying to use lstat+mkdir as a locking mechanism so that you
can run multiple instances of the same program, it will probably fail
more often on a Fuse filesystem than a local one. It should probably
use flock() or open a file with O_CREAT|O_EXCL.
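A sketch of the flock approach in shell (the lock file path is arbitrary):

(
    flock -n 9 || { echo already running; exit 1; }
    # ... the real work goes here ...
) 9>/var/lock/myjob.lock

flock takes the lock on the open file descriptor, so it is released automatically if the process dies, which is exactly what mkdir-based locks get wrong.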
On Tue, 2014-05-20 at 11:58
Are you running out of memory? How much memory are the gluster daemons
using?
On Tue, 2014-05-20 at 11:16 -0700, Doug Schouten wrote:
Hello,
I have a rather simple Gluster configuration that consists of 85TB
distributed across six nodes. There is one particular node that seems to
: off
On Thu, 2014-05-01 at 09:55 +0800, Franco Broi wrote:
Installed 3.4.3 exactly 2 weeks ago on all our brick servers and I'm
happy to report that we've not had a crash since.
Thanks for all the good work.
On Tue, 2014-04-15 at 14:22 +0800, Franco Broi wrote:
The whole system came
0x004075e4 in main (argc=11, argv=0x7fffabef9e38) at
glusterfsd.c:1983
On Mon, 2014-05-19 at 14:39 +0800, Franco Broi wrote:
Just had an NFS crash on my test system running 3.5.
Load of messages like this:
[2014-05-19 06:24:59.347147] E [rpc-drc.c:499:rpcsvc_add_op_to_cache]
0-rpc
Not sure if this helps, was a while back.
-------- Forwarded Message --------
From: Franco Broi franco.b...@iongeo.com
To: Justin Clift jcl...@redhat.com
Cc: Vijay Bellur vbel...@redhat.com, gluster-users@gluster.org
gluster-users@gluster.org
Subject: Re: [Gluster-users] Changing the server
On Wed, 2014-05-14 at 12:31 +0200, Olav Peeters wrote:
Hi,
from what I read here:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
... if you are on 3.4.0 AND have NO quota configured, it should be
safe to just replace a version
specific
Hi
I had to restart some of my servers to do hardware maintenance and
although the volume came back ok and all the clients can see all the
data, when I do a gluster vol status I see the following for all the
bricks on one of the servers.
Brick nas2-10g:/data6/gvol N/A Y 3721
I've
On Wed, 2014-04-30 at 21:29 +0200, Michal Pazdera wrote:
What we would like to achieve is the same behavior as NFS or Lustre,
where running client jobs hang until the target is back online and then
continue with the job.
This is what we would like too.
I think it's reasonable for a job
Installed 3.4.3 exactly 2 weeks ago on all our brick servers and I'm
happy to report that we've not had a crash since.
Thanks for all the good work.
On Tue, 2014-04-15 at 14:22 +0800, Franco Broi wrote:
The whole system came to a grinding halt today and no amount of
restarting daemons would
Hi
I have a 2 node test system, on one of the nodes the volume disks are
full and gluster will not start after an upgrade to 3.5. Previous
version was 3.4.3. I am assuming that gluster won't start because the
disks are full although I can't see anything in the log that might
suggest that that is
Anyone seen this problem?
server
Apr 16 14:34:28 nas1 kernel: [7506182.154332] TCP: TCP: Possible SYN flooding
on port 49156. Sending cookies. Check SNMP counters.
Apr 16 14:34:31 nas1 kernel: [7506185.142589] TCP: TCP: Possible SYN flooding
on port 49157. Sending cookies. Check SNMP
I've increased my tcp_max_syn_backlog to 4096 in the hope it will
prevent it from happening again but I'm not sure what caused it in the
first place.
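For reference, the change itself is just (4096 being a fairly arbitrary choice):

# sysctl -w net.ipv4.tcp_max_syn_backlog=4096

plus the matching line in /etc/sysctl.conf so it survives a reboot.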
On Wed, 2014-04-16 at 17:25 +0800, Franco Broi wrote:
Anyone seen this problem?
server
Apr 16 14:34:28 nas1 kernel: [7506182.154332] TCP
rx_small_cnt: 1504058385
rx_big_cnt: 2957794484
wake_queue: 462814
stop_queue: 462814
tx_linearized: 1011916
On Wed, 2014-04-16 at 11:38 -0700, Harshavardhana wrote:
Perhaps a driver bug? - have you verified ethtool -S output?
On Wed, Apr 16, 2014 at 2:42 AM, Franco Broi
PM, Franco Broi franco.b...@iongeo.com wrote:
What should I be looking for? See below.
I thought that maybe it coincided with a bunch of machines waking from
sleep, but I don't think that is the case.
[root@nas1 ~]# ethtool -S eth2
NIC statistics:
rx_packets: 116095907410
.
On Tue, 2014-04-15 at 08:35 +0800, Franco Broi wrote:
On Mon, 2014-04-14 at 17:29 -0700, Harshavardhana wrote:
Just distributed.
Pure distributed setup you have to take a downtime, since the data
isn't replicated.
If I shut down the server processes, won't the clients just wait
I seriously doubt this is the right filesystem for you, we have problems
listing directories with a few hundred files, never mind millions.
On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka wrote:
Dear All,
I have a problem with slow writing when there are 10 million files.
(Top
On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
franco.b...@iongeo.com wrote:
I seriously doubt this is the right filesystem for
you, we have problems
listing directories with a few hundred files, never
mind millions
Just discovered that doing a yum update of glusterfs on a running server
is a bad idea. This was just a test system but I wouldn't have expected
updating the software to cause the running daemons to fail.
are ready I'll give it a test before deploying for real.
Cheers,
On Mon, 2014-04-14 at 21:35 +0100, Justin Clift wrote:
On 14/04/2014, at 5:45 PM, Justin Clift wrote:
On 14/04/2014, at 3:40 AM, Franco Broi wrote:
Hi Vijay
How do I get the fix? I'm running 3.4.1, should I upgrade
On Mon, 2014-04-14 at 17:26 -0700, Harshavardhana wrote:
On Mon, Apr 14, 2014 at 4:23 PM, Franco Broi franco.b...@iongeo.com wrote:
Thanks Justin.
Will it be possible to apply this new version as a rolling update to my
servers keeping the volume online?
If you have distributed
On Mon, 2014-04-14 at 17:29 -0700, Harshavardhana wrote:
Just distributed.
Pure distributed setup you have to take a downtime, since the data
isn't replicated.
If I shut down the server processes, won't the clients just wait for it to
come back up? I.e. like NFS hard mounts? I don't mind
This message seems spurious, there have been no network changes and the
system has been in operation for many months.
I restarted the volume and it worked for a while and crashed in the same
way.
[2014-04-14 01:17:59.291103] E [nlm4.c:968:nlm4_establish_callback] 0-nfs-NLM:
Unable to get NLM
(Connection refused)
On Mon, 2014-04-14 at 09:43 +0800, Franco Broi wrote:
This message seems spurious, there have been no network changes and the
system has been in operation for many months.
I restarted the volume and it worked for a while and crashed in the same
way.
[2014-04-14 01:17
Am I the only person using Gluster suffering from very slow directory
access? It's so seriously bad that it almost makes Gluster unusable.
Using NFS instead of the Fuse client masks the problem as long as the
directories are cached but it's still hellishly slow when you first
access them.
Has
glusterfs just segfaulted, I have the core file
Core was generated by `/usr/local/sbin/glusterfs --log-level=INFO
--log-file=/var/log/glusterfs/gluste'.
Program terminated with signal 11, Segmentation fault.
#0 0x7f3d5d49523c in __ioc_page_wakeup (page=0x7f3d5031f840, op_errno=0)
at
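To pull a backtrace out of a core like this, something along these lines works (the core file name will differ):

# gdb /usr/local/sbin/glusterfs core.<pid>
(gdb) bt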
you are running commands simultaneously or at least running a command
before an old one finishes.
~kaushal
On Tue, Mar 18, 2014 at 11:24 AM, Franco Broi franco.b...@iongeo.com wrote:
What causes this error? And how do I get rid of it?
[root@nas4 ~]# gluster vol status
Another
Hi
Some time ago during a testing phase I made a volume called test-volume
and exported it via gluster's inbuilt NFS. I then destroyed the volume
and made a new one called data but showmount -e still shows test-volume
and not my new volume. I can mount the data volume from other servers,
just not
that is wrong. Could you please give more details on your cluster? And
the glusterd logs of the misbehaving peer (if possible for all the
peers). It would help in tracking it down.
On Tue, Mar 18, 2014 at 12:24 PM, Franco Broi franco.b...@iongeo.com wrote:
Restarted the glusterd daemons on all 4
does this option do and why isn't it enabled by default?
___
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Franco Broi [franco.b...@iongeo.com]
Sent: Friday, February 21, 2014 7:25 PM
To: Vijay Bellur
Cc: gluster-users
-boun...@gluster.org] on
behalf of Franco Broi [franco.b...@iongeo.com]
Sent: Friday, February 21, 2014 7:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Very slow ls
On 21 Feb 2014 22:03, Vijay Bellur vbel...@redhat.com wrote:
On 02/18/2014 12:42 AM, Franco Broi
You need to check the log files for obvious problems. On the servers
these should be in /var/log/glusterfs/ and you can turn on logging for
the fuse client like this:
# mount -olog-level=debug,log-file=/var/log/glusterfs/glusterfs.log -t
glusterfs
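followed by the usual server:/volume and mountpoint, e.g. (names made up):

# mount -olog-level=debug,log-file=/var/log/glusterfs/glusterfs.log -t glusterfs server:/data /data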
On Tue, 2014-02-18 at 17:30 -0300,
On 18 Feb 2014 00:13, Vijay Bellur vbel...@redhat.com wrote:
On 02/17/2014 07:00 AM, Franco Broi wrote:
I mounted the filesystem with trace logging turned on and can see that
after the last successful READDIRP there is a lot of other connections
being made the clients repeatedly which
From: Vijay Bellur [vbel...@redhat.com]
Sent: Monday, February 17, 2014 10:13 AM
To: Franco Broi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Very slow ls
On 02/17/2014 07:00 AM, Franco Broi wrote:
I mounted the filesystem with trace logging
Hi all
I've been trying to understand why Gluster dht is so slow when listing some
directories and not others. strace'ing ls on a directory of directories shows
it pause on the last getdents. I wrote a simple perl script to open the
directory and read the entries, it shows the same effect,
Anyone have any experience using readdir-readahead for gluster dht? I need to
do something to improve ls performance, it's so slow it makes the filesystem
almost unusable.
Hi Sunny
I doubt I know much more about Gluster than I suspect you do. All I can
suggest is that you turn on some of the debugging options to try and trace the
problem.
Maybe some of the more experienced Gluster users on this list can help.
Cheers,
On 3 Feb 2014 17:29, Dragon
Looks like you have a problem getting to one of your servers:
[2014-02-03 21:03:06.231215] E [socket.c:2157:socket_connect_finish]
0-bigdata-client-0: connection to x.x.x.x:49153 failed (No route to host)
On Mon, 2014-02-03 at 16:15 -0600, Branden Timm wrote:
I should mention that the
This explains what should happen and you were correct, it should try another
brick if the hash target brick has less than min free space available.
http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
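The threshold itself is tunable per volume; I believe the relevant option is cluster.min-free-disk (10% by default), e.g.:

# gluster volume set <volname> cluster.min-free-disk 10%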
On 28 Jan 2014 22:28, Dragon sungh...@gmx.de wrote:
Hi,
The node is not
Hi
Been having intermittent problems with files appearing to be empty with the
sticky bit set (-T); seems to happen after a machine has just
booted and the filesystem mounted. Remounting the filesystem cures the
problem.
All machines mount from the same point:
mount -t glusterfs
The target brick for dht is determined using a hash; it doesn't do any sort of
capacity balancing. You need to make some space on all the bricks.
On 28 Jan 2014 21:02, Dragon sungh...@gmx.de wrote:
Hi,
after find out that the Fuseclient runs in Version 3.4.2 i updated all 3 Nodes
to 3.4.2,