oflag=dsync is going to be really slow on any disk, or am I missing
something here?
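For comparison, oflag=dsync forces an O_DSYNC write of every block,
whereas conv=fdatasync does a single flush at the end; something like
this shows the difference (file path illustrative):

dd if=/dev/zero of=/mnt/ds01/testfile bs=1M count=100 oflag=dsync
dd if=/dev/zero of=/mnt/ds01/testfile bs=1M count=100 conv=fdatasync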
On Tue, 2015-02-17 at 10:03 +0800, Punit Dambiwal wrote:
> Hi Vijay,
>
>
> Please find the volume info here :-
>
>
> [root@cpu01 ~]# gluster volume info
>
>
> Volume Name: ds01
> Type: Distributed-Replicate
On Tue, 2015-01-27 at 14:09 +0100, Bartłomiej Syryjczyk wrote:
> OK, I removed the "2>/dev/null" part, and see:
> stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable
>
> So I decided to add a sleep just before line 298 (the one with the
> stat). And it works! Is that normal?
>
Glad to hear it's working.
What do you get if you do this?
bash-4.1# stat -c %i /mnt/gluster
1
-bash-4.1# echo $?
0
On Tue, 2015-01-27 at 09:47 +0100, Bartłomiej Syryjczyk wrote:
> > On 2015-01-27 at 09:20, Franco Broi wrote:
> > Well I'm stumped, just seems like the mount.glusterfs script isn't
>
Could this be a case of Oracle Linux being evil?
On Tue, 2015-01-27 at 16:20 +0800, Franco Broi wrote:
> Well I'm stumped, just seems like the mount.glusterfs script isn't
> working. I'm still running 3.5.1 and the getinode bit of my script looks
> like this:
This is required if the stat returns an error:

if [ -z "$inode" ]; then
    inode="0";
fi
if [ $inode -ne 1 ]; then
    err=1;
fi
On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:08, Franco Broi wrote:
> > So
So what is the inode of your mounted gluster filesystem? And does
running 'mount' show it as being fuse.glusterfs?
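For example:

mount | grep /mnt/gluster

should print a line roughly like this (options will vary):

server:/volname on /mnt/gluster type fuse.glusterfs (rw,allow_other,max_read=131072)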
On Tue, 2015-01-27 at 09:05 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:00, Franco Broi wrote:
> > Your getinode isn't working.
Your getinode isn't working...
+ '[' 0 -ne 0 ']'
++ stat -c %i /mnt/gluster
+ inode=
+ '[' 1 -ne 0 ']'
How old is your mount.glusterfs script?
On Tue, 2015-01-27 at 08:52 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 08:46, Franco
On Tue, 2015-01-27 at 08:43 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 08:36, Franco Broi wrote:
> > Seems to mount and then umount it because the inode isn't 1, weird!
> >
> >
> Why must it be 1? Are you sure it's the inode, not the exit code?
> Can you check your system?
Seems to mount and then umount it because the inode isn't 1, weird!
On Tue, 2015-01-27 at 08:29 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 07:45, Franco Broi wrote:
> > Must be something wrong with your mount.glusterfs script, you could try
> > running it wi
Must be something wrong with your mount.glusterfs script, you could try
running it with sh -x to see what command it tries to run.
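For example (server and volume names illustrative, options as per your
fstab):

sh -x /sbin/mount.glusterfs server:/volname /mnt/gluster -o rw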
On Tue, 2015-01-27 at 07:28 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 02:54, Pranith Kumar Karampuri wrote:
> >
> > On 01/26/2015 02:27 PM, Bartłomiej
Hi
Just something to note. I created a stripe volume for testing, then
deleted it and created a distributed volume; for the second create I had
to use force as it thought the first brick was already part of a volume.
When I came to write to the distributed volume, all the files went to
the first brick.
100-150MB/s.
Does seem slow.
If you get the same sort of performance from normal NFS then I would say
your IPoIB stack isn't performing very well but I assume you've tested
that with something like iperf?
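For example (iperf2 syntax, hostnames illustrative):

server# iperf -s
client# iperf -c server-ib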
>
>
> On Dec 7, 2014, at 9:15 PM, Franco Broi wrote:
>
> >
to know if I am wrong here and there is
> some configuration I can tweak to make things faster.
>
> Andy
>
> On Dec 7, 2014, at 8:43 PM, Franco Broi wrote:
>
> > On Fri, 2014-12-05 at 14:22 +0000, Kiebzak, Jason M. wrote:
> >> May I ask why you chose to go with
ned to physical
hardware units, i.e. I could disconnect a brick and move it to another
server.
>
> Thanks
> Jason
>
> -Original Message-
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
> Sent: Thursday,
1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each
server has 10Gbit Ethernet.
Each brick is a ZOL RAIDZ2 pool with a single filesystem.
Not adding an extra column is least likely to break any scripts that
currently parse the status output by splitting on spaces.
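For example, a script doing something like this (hypothetical) would
silently misparse if a new column shifted the fields:

gluster volume status | awk '{ print $1, $NF }'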
On Wed, 2014-11-26 at 20:27 +0530, Atin Mukherjee wrote:
> I would vote for 2nd one.
>
> ~Atin
>
> On 11/26/2014 06:49 PM, Mohammed Rafi K C wrote:
> >
> >
> > Hi Al
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
On Tue, 2014-11-18 at 18:00 +1000, Lindsay Mathieson wrote:
> On 18 November 2014 17:46, Franco Broi wrote:
> >
> > Try strace -Ff -e file -p 'gluster
Try strace -Ff -e file -p 'glusterfsd pid'
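Or, assuming a single brick daemon on the node:

strace -Ff -e trace=file -p $(pidof glusterfsd)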
On Tue, 2014-11-18 at 17:42 +1000, Lindsay Mathieson wrote:
> Sorry, meant to send to the list. strace attached.
>
> On 18 November 2014 17:35, Pranith Kumar Karampuri
> wrote:
> >
> > On 11/18/2014 12:32 PM, Lindsay Mathieson wrote:
> >>
> >> 2 Node
glusterfsd is the filesystem daemon. You could try strace'ing it to
see what it's doing.
On Tue, 2014-11-18 at 17:09 +1000, Lindsay Mathieson wrote:
> And its happening on both nodes now, they have become near unusable.
>
> On 18 November 2014 17:03, Lindsay Mathieson
> wrote:
> > ps. There
If you want software RAID then ZOL is a good option; we've been running
Gluster on ZOL for a couple of years.
On 28 Oct 2014 19:37, John Hearns wrote:
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Lindsay Mathieson
S
gluster without deleting it all
> > and then spending 5 more days transferring all the files again via gluster.
> >
>
> No need to apologise, getting your head around this stuff is difficult,
> even now (after nearly 2 years) I'm still not sure I'm giving you
ing 5 more days transferring all the files again via gluster.
>
No need to apologise, getting your head around this stuff is difficult,
even now (after nearly 2 years) I'm still not sure I'm giving you
accurate information.
> Thanks again, I really do appreciate any advice tha
ix [mailto:ryan@gmail.com]
> Sent: Thursday, 16 October 2014 11:58 AM
> To: SINCOCK John
> Cc: Franco Broi; gluster-users
>
>
> Subject: Re: [Gluster-users] Is it ok to add a new brick with
> files already on it?
>
ks will
> just sit there, invisible to glusterfs.
>
> Thanks again for any info!
>
>
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Thursday, 16 October 2014 10:06 AM
> To: SINCOCK John
> Cc: gluster-users@gluster.org
> Su
I've never added a brick with existing files but I did start a new
Gluster volume on disks that already contained data and I was able to
access the files without problem. Of course the files will be out of
place but the first time you access them, Gluster will add links to
speed up future lookups.
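On the brick these links show up as zero-length sticky-bit pointer
files, roughly like this (paths illustrative):

# ls -l /data6/gvol/dir/file
---------T 2 user group 0 Oct 16 10:06 /data6/gvol/dir/file
# getfattr -n trusted.glusterfs.dht.linkto /data6/gvol/dir/file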
from, surely it depends on the disk
capacity???
> -Dan
>
>
> Dan Mons
> Unbreaker of broken things
> Cutting Edge
> http://cuttingedge.com.au
>
>
> On 7 October 2014 14:56, Franco Broi wrote:
> > Our bricks are 50TB, running ZOL, 16 dis
changed,
> performance wise.
>
> -Dan
>
>
> Dan Mons
> Unbreaker of broken things
> Cutting Edge
> http://cuttingedge.com.au
>
>
> On 7 October 2014 14:16, Franco Broi wrote:
> >
> > Not an issue for us, we're at 92% on an 800TB distr
Not an issue for us, we're at 92% on an 800TB distributed volume, 16
bricks spread across 4 servers. Lookups can be a bit slow but raw IO
hasn't changed.
On Tue, 2014-10-07 at 09:16 +1000, Dan Mons wrote:
> On 7 October 2014 08:56, Jeff Darcy wrote:
> > I can't think of a good reason for such a
, good enough for us to switch back from using gNFS for
interactive applications.
All my mountpoints look good too, no more crashes.
Thanks for all the good work, hopefully you won't be hearing much from me
for a while!
Cheers,
On Wed, 2014-08-06 at 08:54 +0800, Franco Broi wrote:
> I think all
updates!
Cheers,
On Tue, 2014-08-05 at 14:24 +0800, Franco Broi wrote:
> On Mon, 2014-08-04 at 12:31 +0200, Niels de Vos wrote:
> > On Mon, Aug 04, 2014 at 05:05:10PM +0800, Franco Broi wrote:
> > >
> > > A bit more background to this.
> > >
> > >
On Mon, 2014-08-04 at 12:31 +0200, Niels de Vos wrote:
> On Mon, Aug 04, 2014 at 05:05:10PM +0800, Franco Broi wrote:
> >
> > A bit more background to this.
> >
> > I was running 3.4.3 on all the clients (120+ nodes) but I also have a
> > 3.5 volume which I
p to run gdb on.
Cheers,
On Mon, 2014-08-04 at 12:53 +0530, Pranith Kumar Karampuri wrote:
> CC dht folks
>
> Pranith
> On 08/04/2014 11:52 AM, Franco Broi wrote:
> > I've had a sudden spate of mount points failing with Transport endpoint
> > not connected and co
I've had a sudden spate of mount points failing with Transport endpoint
not connected and core dumps. The dumps are so large and my root
partitions so small that I haven't managed to get a decent traceback.
BFD: Warning: //core.2351 is truncated: expected core file size >=
165773312, found: 15410
On Wed, 2014-07-16 at 08:32 +0530, Santosh Pradhan wrote:
> On 07/15/2014 06:24 AM, Franco Broi wrote:
> > I think the option I need is noac.
>
> noac is no-attribute-caching in NFS. By default, NFS client would cache
> the meta-data/attributes of FILEs for 3 seconds an
I think the option I need is noac.
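noac is an NFS client mount option, e.g. (server and path illustrative):

mount -t nfs -o noac server:/data /mnt/data

The fuse client has roughly analogous knobs, e.g.
-o attribute-timeout=0,entry-timeout=0, though I'm not sure the
semantics match exactly.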
On Fri, 2014-07-11 at 16:10 +0530, Santosh Pradhan wrote:
> On 07/11/2014 02:37 PM, Franco Broi wrote:
> > On Fri, 2014-07-11 at 14:22 +0530, Santosh Pradhan wrote:
> >> On 07/11/2014 06:10 AM, Franco Broi wrote:
> >>> Hi
> &
On Fri, 2014-07-11 at 14:22 +0530, Santosh Pradhan wrote:
> On 07/11/2014 06:10 AM, Franco Broi wrote:
> > Hi
> >
> > Is there any way to make Gluster emulate the behaviour of a NFS
> > filesystem exported with the sync option? By that I mean is it possible
> > to
e magic
that can be done when the file is opened??
>
> Regards,
> Raghavendra Bhat
>
> > Pranith
> > On 07/11/2014 06:10 AM, Franco Broi wrote:
> >> Hi
> >>
> >> Is there any way to make Gluster emulate the behaviour of a NFS
> >> file
On Thu, 2014-07-10 at 21:02 -0400, James wrote:
> On Thu, Jul 10, 2014 at 8:40 PM, Franco Broi wrote:
> > Is there any way to make Gluster emulate the behaviour of a NFS
> > filesystem exported with the sync option? By that I mean is it possible
> > to write a file from one
Hi
Is there any way to make Gluster emulate the behaviour of a NFS
filesystem exported with the sync option? By that I mean is it possible
to write a file from one client and guarantee that the data will be
instantly available on close to all other clients?
Cheers,
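For reference, the NFS "sync" export option in question looks like this
in /etc/exports (path illustrative); it makes the server commit writes
to stable storage before replying to the client:

/data *(rw,sync)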
e]
0-management: Commit of operation 'Volume Start' failed on localhost
On Wed, 2014-06-25 at 13:21 +0800, Franco Broi wrote:
> Ok, I'm going to try this tomorrow. Anyone have anything else to add??
> What's the worst that can happen?
>
> On Mon, 2014-06-23 at
Ok, I'm going to try this tomorrow. Anyone have anything else to add??
What's the worst that can happen?
On Mon, 2014-06-23 at 20:11 +0530, Kaushal M wrote:
> On Wed, Jun 18, 2014 at 6:58 PM, Justin Clift wrote:
> > On 18/06/2014, at 9:36 AM, Kaushal M wrote:
> >> You are right. Since you had i
not end up on the longest-lived node!
>
> I'll have to keep a close eye on this issue and upgrade as soon as the 3.5
> looks stable
>
> Thanks very much for the info.
>
> Cheers and Regards,
> John
>
>
>
>
> -Original Message-
> From: Fran
rted working
again.
>
> - Original Message -----
> From: "Franco Broi"
> To: "Lalatendu Mohanty"
> Cc: "Susant Palai" , "Niels de Vos" ,
> "Pranith Kumar Karampuri" , gluster-users@gluster.org,
> "Raghavendra Gowd
sr/sbin/glusterd(glusterfs_process_volfp+0x103) [0x4050c3]))) 0-:
received signum (0), shutting down
>
> Thanks,
> Lala
> >
> >
> > - Original Message -
> > From: "Franco Broi"
> > To: "Susant Palai"
> > Cc: "Prani
Hi John
I got this yesterday, it was copied to this list.
Cheers,
On Tue, 2014-06-17 at 04:55 -0400, Susant Palai wrote:
Hi Franco:
>The following patches address the ENOTEMPTY issue.
>
> 1. http://review.gluster.org/#/c/7733/
> 2. http://review.gluster.
the client logs for more
> information.
>
> Susant.
>
> - Original Message -
> From: "Franco Broi"
> To: "Pranith Kumar Karampuri"
> Cc: "Susant Palai" , gluster-users@gluster.org,
> "Raghavendra Gowdappa" , kdhan...@redh
-
There are no active volume tasks
>
> Pranith
> >
> > Thanks,
> > Susant~
> >
> > - Original Message -
> > From: "Pranith Kumar Karampuri"
> > To: "Franco Broi"
> > Cc: gluster-users@gluster.or
0
drwxrwxr-x 2 1348 200 2 May 16 12:05 dir13021
drwxrwxr-x 2 1348 200 2 May 16 12:05 dir13022
.
Maybe Gluster is losing track of the files??
>
> Pranith
>
> On 06/02/2014 02:48 PM, Franco Broi wrote:
> > Hi Pranith
> >
> > Here's a listing of the
ms...@gmail.com]
> Sent: Tuesday, June 03, 2014 1:19 PM
> To: Gnan Kumar, Yalla
> Cc: Franco Broi; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
>
> You have only 1 file on the gluster volume, the 1GB disk image/volume that
> you created. This disk
nfo
>
> Volume Name: dst
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: primary:/export/sdd1/brick
> Brick2: secondary:/export/sdd1/brick
>
>
>
> -Original Message-
> From: Franco Broi [mailto:fran
4096 May 27 08:43 ../
> -rw-rw-rw- 1 108 115 1073741824 Jun 2 09:35
> volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> root@secondary:/export/sdd1/brick#
> -
>
>
> Thanks
> Kumar
>
>
>
>
>
>
>
>
> -Original Message-
Just do an ls on the bricks, the paths are the same as the mounted
filesystem.
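For example, using the brick paths from your volume info:

ls -l /export/sdd1/brick

on each of the two servers; in a distributed volume each file lives
whole on exactly one brick.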
On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
wrote:
> Hi All,
>
>
>
> I have created a distributed volume of 1 GB , using two bricks from
> two different servers.
>
> I have written 7 files w
-0400, Pranith Kumar Karampuri wrote:
>
> - Original Message -
> > From: "Pranith Kumar Karampuri"
> > To: "Franco Broi"
> > Cc: gluster-users@gluster.org
> > Sent: Monday, June 2, 2014 7:01:34 AM
> > Subject: Re: [Gluster-users]
dd read test on the disk shows 700MB/Sec, which is about normal for
these bricks.
On Sun, 2014-06-01 at 13:23 +0800, Franco Broi wrote:
> The volume is almost completely idle now and the CPU for the brick
> process has returned to normal. I've included the profile and I think it
Doesn't work for me either on CentOS 5; I have to modprobe fuse.
On 1 Jun 2014 10:46, Gene Liverman wrote:
Just setup my first Gluster share (replicated on 3 nodes) and it works fine on
RHEL 6 but when trying to mount it on RHEL 5.8 I get the following in my logs:
[2014-06-01 02:01:29.580163] I [
Hi
I've been suffering from continual problems with my gluster filesystem
slowing down, due to what I thought was congestion on a single brick
caused by the underlying filesystem running slow, but I've just noticed
that the glusterfsd process for that particular brick is running
more
connected clients cannot support the feature being set. These clients
need to be upgraded or disconnected before running this command again
On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote:
>
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and now tr
On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote:
>
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and now try to remount - it's hanging.
>
> /bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o
> rw,log-level=info,log-file=/var/log/glus
OK. I've just unmounted the data2 volume from a machine called tape1,
and now try to remount - it's hanging.
/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o
rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
on the server
# gluster vol status
Status of volume: data2
Glu
Hi
My clients are running 3.4.1; when I try to mount from lots of machines
simultaneously, some of the mounts hang. Stopping and starting the
volume clears the hung mounts.
Errors in the client logs
[2014-05-28 01:47:15.930866] E
[client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-
On Tue, 2014-05-27 at 14:38 +0530, Vijay Bellur wrote:
> On 05/26/2014 11:50 AM, Franco Broi wrote:
> > Hi
> >
> > Is there any way now, or plans to implement a pause or freeze function
> > for Gluster so that a brick can be taken offline momentarily from a DHT
>
Hi
Is there any way now, or plans to implement a pause or freeze function
for Gluster so that a brick can be taken offline momentarily from a DHT
volume without affecting all the clients?
Cheers,
Have you checked /var/log/glusterfs/nfs.log?
I've found that setting nfs.disable to on and then off will sometimes
restart failed gluster nfs servers.
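That is, something like (volume name illustrative):

gluster volume set <volname> nfs.disable on
gluster volume set <volname> nfs.disable off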
On Thu, 2014-05-22 at 12:13 -0400, Ted Miller wrote:
> I have a running replica 3 setup, running on Centos 6, fully updated.
>
> I was trying t
16GB.
On Wed, 2014-05-21 at 08:35 -0700, Doug Schouten wrote:
> Hi,
>
> glusterfs is using ~ 5% of memory (24GB total) and glusterfsd is using < 1%
>
> The I/O cache size (performance.cache-size) is 1GB.
>
> cheers, Doug
>
> On 20/05/14 08:37 PM, Franco Broi
Are you running out of memory? How much memory are the gluster daemons
using?
On Tue, 2014-05-20 at 11:16 -0700, Doug Schouten wrote:
> Hello,
>
> I have a rather simple Gluster configuration that consists of 85TB
> distributed across six nodes. There is one particular node that seems to
If you are trying to use lstat+mkdir as a locking mechanism so that you
can run multiple instances of the same program, it will probably fail
more often on a Fuse filesystem than on a local one. It should probably
be using flock or opening a file with O_CREAT|O_EXCL.
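For example, with the flock utility in a shell wrapper (lock file path
illustrative):

(
    flock -n 9 || exit 1
    # ... run the program here ...
) 9> /mnt/gluster/.mylock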
On Tue, 2014-05-20 at 11:58 +0200,
Not sure if this helps, was a while back.
Forwarded Message
From: Franco Broi
> To: Justin Clift
> Cc: Vijay Bellur , gluster-users@gluster.org
> Subject: Re: [Gluster-users] Changing the server hostname
> Date: Wed, 8 Jan 2014 09:19:13 +0800
>
> Thanks
#16 0x004075e4 in main (argc=11, argv=0x7fffabef9e38) at
glusterfsd.c:1983
On Mon, 2014-05-19 at 14:39 +0800, Franco Broi wrote:
> Just had an NFS crash on my test system running 3.5.
>
> Load of messages like this:
>
> [2014-05-19 06:24:59.347147] E
: off
On Thu, 2014-05-01 at 09:55 +0800, Franco Broi wrote:
> Installed 3.4.3 exactly 2 weeks ago on all our brick servers and I'm
> happy to report that we've not had a crash since.
>
> Thanks for all the good work.
>
> On Tue, 2014-04-15 at 14:22 +0800, Franco Broi
On Wed, 2014-05-14 at 12:31 +0200, Olav Peeters wrote:
> Hi,
> from what I read here:
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
>
> ... if you are on 3.4.0 AND have NO quota configured, it should be
> safe to just replace a version
> specific /etc/yum.repos.d/glust
Hi
I had to restart some of my servers to do hardware maintenance and
although the volume came back ok and all the clients can see all the
data, when I do a gluster vol status I see the following for all the
bricks on one of the servers.
Brick nas2-10g:/data6/gvol N/A Y 3721
I've
On Wed, 2014-04-30 at 21:29 +0200, Michal Pazdera wrote:
> What we would like to achieve is the same behavior as NFS or Lustre
> does
> where running client jobs hang until
> the target is back online and then continues in the job.
This is what we would like too.
I think it's reasonable for a jo
Installed 3.4.3 exactly 2 weeks ago on all our brick servers and I'm
happy to report that we've not had a crash since.
Thanks for all the good work.
On Tue, 2014-04-15 at 14:22 +0800, Franco Broi wrote:
> The whole system came to a grinding halt today and no amount of
> re
Hi
I have a 2 node test system, on one of the nodes the volume disks are
full and gluster will not start after an upgrade to 3.5. Previous
version was 3.4.3. I am assuming that gluster won't start because the
disks are full although I can't see anything in the log that might
suggest that that is t
I check for badcrcs when running the
> Myri10GE software? for further details.
> =
>
> May be contacting your Myricom vendors would be a right start?
>
> On Wed, Apr 16, 2014 at 4:15 PM, Franco Broi wrote:
> > What should I be looking for? See bel
:42 AM, Franco Broi wrote:
> >
> > I've increased my tcp_max_syn_backlog to 4096 in the hope it will
> > prevent it from happening again but I'm not sure what caused it in the
> > first place.
> >
> > On Wed, 2014-04-16 at 17:25 +0800, Franco Broi wrote:
I've increased my tcp_max_syn_backlog to 4096 in the hope it will
prevent it from happening again but I'm not sure what caused it in the
first place.
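For reference, that's:

sysctl -w net.ipv4.tcp_max_syn_backlog=4096

plus net.ipv4.tcp_max_syn_backlog = 4096 in /etc/sysctl.conf to survive
a reboot.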
On Wed, 2014-04-16 at 17:25 +0800, Franco Broi wrote:
> Anyone seen this problem?
>
> server
>
> Apr 16 14:34:28 nas
Anyone seen this problem?
server
Apr 16 14:34:28 nas1 kernel: [7506182.154332] TCP: TCP: Possible SYN flooding
on port 49156. Sending cookies. Check SNMP counters.
Apr 16 14:34:31 nas1 kernel: [7506185.142589] TCP: TCP: Possible SYN flooding
on port 49157. Sending cookies. Check SNMP counter
Just discovered that doing a yum update of glusterfs on a running server
is a bad idea. This was just a test system but I wouldn't have expected
updating the software to cause the running daemons to fail.
Line 15, average-latency value is about 30 ms.
> I cannot judge this value is a normal(ordinary?) performance or not.
>
> Is it slow?
>
> Thanks,
> --Michika Terada
>
>
>
>
> 2014-04-15 16:05 GMT+09:00 Franco Broi :
>>
>>
>> My bug report is here
>&
r
> technology.
>
>
> On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
> wrote:
>
> I seriously doubt this is the right filesystem for
> you, we have problems
>
I seriously doubt this is the right filesystem for you, we have problems
listing directories with a few hundred files, never mind millions.
On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka wrote:
> Dear All,
>
>
>
> I have a problem with slow writing when there are 10 million files.
> (To
The prospect terrifies me.
On Tue, 2014-04-15 at 08:35 +0800, Franco Broi wrote:
> On Mon, 2014-04-14 at 17:29 -0700, Harshavardhana wrote:
> > >
> > > Just distributed.
> > >
> >
> > Pure distributed setup you have to take a downtime, since the data
> >
On Mon, 2014-04-14 at 17:29 -0700, Harshavardhana wrote:
> >
> > Just distributed.
> >
>
> Pure distributed setup you have to take a downtime, since the data
> isn't replicated.
If I shut down the server processes, won't the clients just wait for it to
come back up? I.e. like NFS hard mounts? I don'
On Mon, 2014-04-14 at 17:26 -0700, Harshavardhana wrote:
> On Mon, Apr 14, 2014 at 4:23 PM, Franco Broi wrote:
> >
> > Thanks Justin.
> >
> > Will it be possible to apply this new version as a rolling update to my
> > servers keeping the volume online?
s
the rpms are ready I'll give it a test before deploying for real.
Cheers,
On Mon, 2014-04-14 at 21:35 +0100, Justin Clift wrote:
> On 14/04/2014, at 5:45 PM, Justin Clift wrote:
> > On 14/04/2014, at 3:40 AM, Franco Broi wrote:
> >> Hi Vijay
> >>
> >>
Hi Vijay
How do I get the fix? I'm running 3.4.1, should I upgrade to a newer
version?
I need a quick fix, this is causing us a lot of grief.
Cheers,
On Sun, 2014-04-13 at 19:32 -0700, Vijay Bellur wrote:
> On 04/13/2014 06:43 PM, Franco Broi wrote:
> >
> > This message s
(Connection refused)
On Mon, 2014-04-14 at 09:43 +0800, Franco Broi wrote:
> This message seems spurious, there have been no network changes and the
> system has been in operation for many months.
>
> I restarted the volume and it worked for a while and crashed in the same
> way.
&
This message seems spurious, there have been no network changes and the
system has been in operation for many months.
I restarted the volume and it worked for a while and crashed in the same
way.
[2014-04-14 01:17:59.291103] E [nlm4.c:968:nlm4_establish_callback] 0-nfs-NLM:
Unable to get NLM po
Am I the only person using Gluster suffering from very slow directory
access? It's so seriously bad that it almost makes Gluster unusable.
Using NFS instead of the Fuse client masks the problem as long as the
directories are cached but it's still hellishly slow when you first
access them.
Has th
glusterfs just segfaulted, I have the core file
Core was generated by `/usr/local/sbin/glusterfs --log-level=INFO
--log-file=/var/log/glusterfs/gluste'.
Program terminated with signal 11, Segmentation fault.
#0 0x7f3d5d49523c in __ioc_page_wakeup (page=0x7f3d5031f840, op_errno=0)
at page.
ently, there is something else
> that is wrong. Could you please give more details on your cluster? And
> the glusterd logs of the misbehaving peer (if possible for all the
> peers). It would help in tracking it down.
>
>
>
> On Tue, Mar 18, 2014 at 12:24 PM, Franco Broi wro
Hi
Some time ago during a testing phase I made a volume called test-volume
and exported it via gluster's inbuilt NFS. I then destroyed the volume
and made a new one called data but showmount -e still shows test-volume
and not my new volume. I can mount the data volume from other servers,
just not
ur case, since you can run commands on other nodes, most likely
> you are running commands simultaneously or at least running a command
> before an old one finishes.
>
> ~kaushal
>
> On Tue, Mar 18, 2014 at 11:24 AM, Franco Broi wrote:
> >
> > What causes this error? A
What causes this error? And how do I get rid of it?
[root@nas4 ~]# gluster vol status
Another transaction could be in progress. Please try again after sometime.
Looks normal on any other server.
On Tue, 2014-02-25 at 14:40 +0000, Justin Clift wrote:
> On 23/02/2014, at 4:11 AM, Franco Broi wrote:
> > All the client filesystems core-dumped. Lost a lot of production time.
>
> Ugh, that sounds remarkably bad. :(
>
> Out of curiosity, do you still have any of th
all.
What does this option do and why isn't it enabled by default?
___
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Franco Broi [franco.b...@iongeo.com]
Sent: Friday, February 21, 2014 7:25 PM
To: Vijay Bellur
Cc
ster-users-boun...@gluster.org] on
behalf of Franco Broi [franco.b...@iongeo.com]
Sent: Friday, February 21, 2014 7:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Very slow ls
On 21 Feb 2014 22:03, Vijay Bellur wrote:
>
> On 02/18/2014 12:42 AM, Franco Broi w
On 21 Feb 2014 22:03, Vijay Bellur wrote:
>
> On 02/18/2014 12:42 AM, Franco Broi wrote:
> >
> > On 18 Feb 2014 00:13, Vijay Bellur wrote:
> > >
> > > On 02/17/2014 07:00 AM, Franco Broi wrote:
> > > >
> > > > I mounted
You need to check the log files for obvious problems. On the servers
these should be in /var/log/glusterfs/ and you can turn on logging for
the fuse client like this:
# mount -o log-level=debug,log-file=/var/log/glusterfs/glusterfs.log -t glusterfs
On Tue, 2014-02-18 at 17:30 -0300, Targino