under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@mseas-data2 ~]#
Thanks,
Steve
Sure
Start of upgrade: 15:36
Start of issue: 21:51
On Mon, Aug 22, 2016 at 9:15 PM, Atin Mukherjee wrote:
>
>
> On Tue, Aug 23, 2016 at 4:17 AM, Steve Dainard wrote:
>
>> About 5 hours after upgrading gluster 3.7.6 -> 3.7.13 on CentOS 7, one of
>> my gluster serv
As a potential solution on the compute node side, can you have users copy
relevant data from the gluster volume to a local disk (i.e. $TMPDIR), operate
on that disk, write output files to that disk, and then write the results
back to persistent storage once the job is complete?
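For illustration, a minimal sketch of that job pattern (the mount point, paths, and the job command itself are placeholders, not anything from this thread):

#!/bin/bash
# Stage data on local scratch, compute there, then copy results back to gluster.
set -euo pipefail
SCRATCH="${TMPDIR:-/tmp}/job-$$"
mkdir -p "$SCRATCH"
# 1) pull inputs from the gluster mount to local disk
rsync -a /mnt/gluster/project/input/ "$SCRATCH/input/"
# 2) run the job against local disk only (my_compute_job is a stand-in)
my_compute_job --in "$SCRATCH/input" --out "$SCRATCH/output"
# 3) push results back to persistent storage, then clean up
rsync -a "$SCRATCH/output/" /mnt/gluster/project/output/
rm -rf "$SCRATCH"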
There are lots of fact
I'm capturing the strace output now; hopefully something useful is shown.
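For reference, the kind of capture meant here (the TSM client binary is assumed to be dsmc; the strace flags are standard):

# follow forks, timestamp each call, and write everything to a file
strace -f -tt -o /tmp/tsm-restore.strace dsmc restore "/mnt/gluster/restore/path/*"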
Thanks
On Thu, Jun 16, 2016 at 7:31 PM, Vijay Bellur wrote:
> On Thu, Jun 16, 2016 at 3:05 PM, Steve Dainard wrote:
> > I'm restoring some data to gluster from TSM backups and the client errors
> > out
TSM client is generating (TSM
fault).
Thanks,
Steve
clients then statedump shouldn't
> be showing the same. Was unmount successful? Do you see any related error
> log entry in mount & glusterd log?
>
> -Atin
> Sent from one plus one
> On 04-Mar-2016 10:23 pm, "Steve Dainard" wrote:
>
>> Except tha
NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
On Fri, Mar 4, 2016 at 8:53 AM, Steve Dai
in-op-version=1
glusterd.client4.identifier=10.0.231.11:1022
glusterd.client4.volname=storage
glusterd.client4.max-op-version=30603
glusterd.client4.min-op-version=1
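For anyone following along, a statedump like the one above can be produced along these lines (the dump location is the usual default and may differ by build):

# ask glusterd to dump its state; the file lands under /var/run/gluster/ by default
kill -SIGUSR1 "$(pidof glusterd)"
grep -E 'client[0-9]+\.(identifier|volname|max-op-version|min-op-version)' \
    /var/run/gluster/glusterdump.*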
On Thu, Mar 3, 2016 at 5:28 PM, Atin Mukherjee
wrote:
> -Atin
> Sent from one plus one
> On 04-Mar-2016 3:35 am,
:
apt-cache policy glusterfs-client
glusterfs-client:
Installed: 3.7.6-2
Candidate: 3.7.6-2
Version table:
*** 3.7.6-2 0
500
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.6/Debian/wheezy/apt/
wheezy/main amd64 Packages
Thanks
On Wed, Mar 2, 2016 at 10:29 PM, Gaurav Garg
n I retrieve this info?
Thanks
On Tue, Mar 1, 2016 at 10:19 PM, Gaurav Garg wrote:
> Hi Steve,
>
> Which version you have upgraded client, could you tell us client
> op-version after upgrade ?
>
>
> have you upgraded all of your clients ?
>
>
> Thanks,
> Gaurav
>
Gluster 3.7.6
'storage' is a distributed volume
# gluster volume set storage rebal-throttle lazy
volume set: failed: One or more connected clients cannot support the
feature being set. These clients need to be upgraded or disconnected before
running this command again
I found a client connected u
to crash? Seems like you added
> a brick .
> 4) If possible can you recollect the order in which you added the peers
> and the version. Also the upgrade sequence.
>
> Maybe you can raise a bug in Bugzilla with the information.
>
> Regards
> Rafi KC
> On 1 Mar 2016 12:58 am, Ste
this? Can I just stop glusterd on gluster03 and
change the cksum value?
On Thu, Feb 25, 2016 at 12:49 PM, Mohammed Rafi K C
wrote:
>
>
> On 02/26/2016 01:53 AM, Mohammed Rafi K C wrote:
>
>
>
> On 02/26/2016 01:32 AM, Steve Dainard wrote:
>
> I haven't done anything
vm-storage
Number of entries: 0
On Thu, Feb 25, 2016 at 12:02 PM, Steve Dainard wrote:
> I haven't done anything more than peer thus far, so I'm a bit confused as
> to how the volume info fits in. Can you expand on this a bit?
>
> Failed commits? Is this split brain on the repl
16 at 11:02 PM, Manikandan Selvaganesh
wrote:
> Hi Steve,
>
> We suspect the mismatch in accounting is probably because the
> xattrs were not cleaned up properly. Please ensure you do the following
> steps and make sure the xattrs are cleaned up properly before q
For what it's worth, I've never been able to lose a brick in a 2-brick
replica volume and still be able to write data.
I've also found the documentation confusing as to what 'Option:
cluster.server-quorum-type' actually means.
Default Value: (null)
Description: This feature is on the server-side i
gluster on a private subnet so it's a bit
odd... but I don't know if it's related.
On Tue, Feb 9, 2016 at 5:38 PM, Ravishankar N wrote:
> Hi Steve,
> The patch already went in for 3.6.3
> (https://bugzilla.redhat.com/show_bug.cgi?id=1187547). What version are you
> using? If it
otas, waiting for
xattrs to be generated, then enabling limits?
3. Shouldn't there be a command to re-trigger quota accounting on a
directory that confirms the attrs are set correctly and checks that
the contribution attr actually matches disk usage?
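For what it's worth, the accounting xattrs can at least be inspected by hand on a brick; a sketch (the brick path is hypothetical, xattr names as used by the quota translator):

# run on a brick backend, not on the client mount
getfattr -d -m . -e hex /data/brick1/storage/projects/climate
# keys of interest:
#   trusted.glusterfs.quota.size           - accounted size for the directory
#   trusted.glusterfs.quota.<gfid>.contri  - contribution reported to the parent
#   trusted.glusterfs.quota.dirty          - set while accounting is in flux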
On Tue, Feb 2, 2016 at 3:00 AM, Manikandan Selvaga
it makes sense to back-port this
fix if possible.
Thanks,
Steve
1. https://www.gluster.org/pipermail/gluster-users/2014-November/019512.html
2. https://bugzilla.redhat.com/show_bug.cgi?id=1166020
oing to set limits again.
As a note, this is a pretty brutal process on a system with 140T of
storage, and I can't imagine how much worse this would be if my nodes
had more than 12 disks each, or if I were at PB scale.
On Mon, Jan 25, 2016 at 12:31 PM, Steve Dainard wrote:
> Here's a l
tice I have a brick issue on a different volume 'vm-storage'.
Regarding the 3.7 upgrade: I'm a bit hesitant to move to the
current release; I prefer to stay on a stable release with maintenance
updates if possible.
On Mon, Jan 25, 2016 at 12:09 PM, Manikandan Selvaganesh
wrote:
&
ts-CanSISE quota not being accurate
that I missed that the 'Used' space on /data4/climate is listed higher
than the total gluster volume capacity.
On Mon, Jan 25, 2016 at 10:52 AM, Steve Dainard wrote:
> Hi Manikandan
>
> I'm using 'du' not df in this case.
>
Hi Manikandan
I'm using 'du' not df in this case.
On Thu, Jan 21, 2016 at 9:20 PM, Manikandan Selvaganesh
wrote:
> Hi Steve,
>
> If you would like disk usage using df utility by taking quota limits into
> consideration, then you are expected to run the following comm
to proceed? I would appreciate any
input on this one.
Steve
Thu, Jan 21, 2016 at 10:07 AM, Steve Dainard wrote:
> I have a distributed volume with quotas enabled:
>
> Volume Name: storage
> Type: Distribute
> Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bric
often does the quota mechanism check its accuracy? And
how could it get so far off?
Can I get gluster to rescan that location and update the quota usage?
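For comparison, the accounted usage can be read back from the quota side as well (the directory path is a placeholder; the volume name is from the output above):

# what the quota translator currently believes about that directory
gluster volume quota storage list /projects/climate
# compare against what is actually on disk, seen through the FUSE mount
du -sh /mnt/storage/projects/climate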
Thanks,
Steve
12-24.el7_1.x86_64
samba-winbind-modules-4.1.12-24.el7_1.x86_64
glusterfs-libs-3.6.7-1.el7.x86_64
samba-winbind-clients-4.1.12-24.el7_1.x86_64
Thanks
On Fri, Oct 2, 2015 at 4:42 PM, Steve Dainard wrote:
> Hi Diego,
>
> Awesome, works - much appreciated.
>
> As far as I can sear
I wouldn't think you'd need any 'arbiter' nodes (in quotes because in 3.7+
there is an actual arbiter node at the volume level). You have 4 nodes, and
if you lose 1, you're at 3/4 or 75%.
Personally I've not had much luck with a 2-node setup (with or without the fake
arbiter node) as storage for oVirt VM'
intenance/split-brains.
BTW I agree with your issues in regards to releases. I've found the best
method is to stick to a branch marked as stable. I tested 3.7.3 and it was
a bit of a disaster, but 3.6.6 hasn't given me any grief yet.
Steve
On Fri, Oct 30, 2015 at 6:40 AM, Mauro M. wrot
thin pool
would be created on a single PV per node. Dual 10Gig bonded network, 64GB
RAM, single 6-core Intel CPU.
Single GlusterFS volume for one large namespace.
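For context, a sketch of the per-node layout described above (device names and sizes are placeholders):

# one PV per node, one thin pool on it, one thin LV per brick
pvcreate /dev/sdb
vgcreate vg_bricks /dev/sdb
lvcreate -L 50T --thinpool tp_bricks vg_bricks
lvcreate -V 50T --thin -n lv_brick1 vg_bricks/tp_bricks
mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
mkdir -p /bricks/brick1 && mount /dev/vg_bricks/lv_brick1 /bricks/brick1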
Thanks for any input.
Steve
On Thu, Oct 1, 2015 at 2:24 AM, Pranith Kumar Karampuri
wrote:
> hi,
> In releases till now from day-1 with replication, there is a corner
> case bug which can wipe out the all the bricks in that replica set when the
> disk/brick(s) are replaced.
>
> Here are the steps that could lead to th
>vfs objects = glusterfs
>glusterfs:loglevel = 7
>glusterfs:logfile = /var/log/samba/glusterfs-projects.log
>glusterfs:volume = export
>
> HTH,
>
> Diego
>
> On Thu, Oct 1, 2015 at 4:15 PM, Steve Dainard wrote:
>> samba-vfs-glusterfs-4.1.12-23.el7_1
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
gluster 3.6.6
I've shared a gluster volume using samba vfs with the options:
vfs objects = glusterfs
glusterfs:volume = test
path = /
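For reference, a fuller share definition along these lines (the share name, log path, and the extra lines are assumptions; parameter names are from the vfs_glusterfs man page):

[test]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = test
    glusterfs:logfile = /var/log/samba/glusterfs-test.%M.log
    glusterfs:loglevel = 7
    kernel share modes = no
    read only = no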
I can do the following:
(Windows client):
-Create new directory
-Create new file -- an error pops up "Unable to create t
If you have enough Linux background to think of implementing gluster
storage, why not virtualize on Linux as well?
If you're using the standard Hyper-V free version you don't get
clustering support anyway, so standalone KVM gives you the same basic
capabilities and you can use virt-manager to mana
nning all the VMs hang. But I can write to a
FUSE mount point so the volume isn't RO.
Thanks,
Steve
ica 2 nodes which has a drastic performance impact vs
a normal replica 2 configuration.
Steve
On Wed, Apr 30, 2014 at 5:56 AM, Venky Shankar wrote:
> On 04/29/2014 11:12 PM, Steve Dainard wrote:
>
> Fixed by editing the geo-rep volume's gsyncd.conf file, changing
> /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
> nodes.
>
>
> That is not
service it's overwritten?
Steve
On Tue, Apr 29, 2014 at 12:11 PM, Steve Dainard wrote:
> Just setup geo-replication between two replica 2 pairs, gluster version
> 3.5.0.2.
>
> Following this guide:
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Admin
6] I [monitor(monitor):150:monitor] Monitor:
worker(/mnt/storage/lv-storage-domain/rep1) died before establishing
connection
Thanks,
Steve
Do you have a link to the docs that mention a specific sequence particular
to geo-replication-enabled volumes? I don't see anything on the gluster doc
page here:
http://www.gluster.org/community/documentation/index.php/Main_Page.
Thanks,
Steve
On Tue, Apr 29, 2014 at 2:29 AM, Vijayku
Hi Danny,
Did you get anywhere with this geo-rep issue? I have a similar problem
running on CentOS 6.5 when trying anything other than 'start' with geo-rep.
Thanks,
*Steve *
On Tue, Feb 25, 2014 at 9:45 AM, Danny Sauer wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash:
.918050] I [socket.c:3480:socket_init] 0-glusterfs: SSL
support is NOT enabled
[2014-04-24 14:08:03.918068] I [socket.c:3495:socket_init] 0-glusterfs:
using system polling thread
[2014-04-24 14:08:04.146710] I [input.c:36:cli_batch] 0-: Exiting with: -1
*Steve Dainard *
IT Infrastructure Man
miovision.corp rep1 gluster://10.0.11.4:/rep1
faulty
ovirt001.miovision.corp miofiles gluster://10.0.11.4:/miofiles
faulty
How can I manually remove a geo-rep agreement?
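A sketch of the CLI route for tearing one down (the exact slave syntax changed between releases, so treat this as a pattern rather than gospel; session state also lives on disk under /var/lib/glusterd/geo-replication/):

# stop the faulty session, then remove it
gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 stop force
gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 delete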
Thanks,
*Steve *
nted using line in rc.local:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster-mount.log
localhost:/audio /opt/audio-files
Any thoughts? I'm manually changing the priority as a fix at present.
Steve
work:
[root@s6a]/fs/titan/auth/sbin# mount -t nfs -o
rw,vers=3,tcp,mountport=38465,port=38465 costello:38465:/gfs01 /mnt/tmp
mount.nfs: mounting costello:38465:/gfs01 failed, reason given by server: No
such file or directory
The volume name is correct.
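A guess: the port embedded in the mount spec may be the problem — the server may be getting asked for a path literally named "38465:/gfs01". The more usual form keeps the port in the options only, with the rest left as in the original command:

mount -t nfs -o rw,vers=3,tcp,mountport=38465,port=38465 costello:/gfs01 /mnt/tmp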
TIA,
Steve
lance
space allocation based on the size of data contained in the bricks, or in
the file systems, or something else?
Steve
be necessary to
capture all of gluster's configuration?
Thanks,
Steve
ombie processes are no longer
being "created".
Cheers,
Steve
From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sent: 25 March 2014 12:30
To: Steve Thomas
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5
Steve:
Tested that myself - not
Some further information:
When I run the command
"gluster volume status audio detail"
I get the zombie process created. So it's not the HERE document as I
previously thought... it's the command itself.
Does this happen with anyone else?
Thanks,
Steve
From
e
errors=("${errors[@]}" "$brick offline")
fi
;;
esac
done < <( sudo gluster volume status ${VOLUME} detail)
Anyone spot why this would be an issue?
Thanks,
Steve
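For context, a self-contained sketch of what the truncated brick-status loop above appears to be doing (the volume name and the exact field parsing are assumptions about "gluster volume status ... detail" output):

#!/bin/bash
# Collect any bricks that "gluster volume status <vol> detail" reports as offline.
VOLUME="audio"
errors=()
brick=""
while read -r line; do
    case "$line" in
        Brick*:*)    brick="${line##* }" ;;                           # last field: host:/path of the brick
        Online*:*N)  errors=("${errors[@]}" "$brick offline") ;;      # "Online : N" means the brick is down
    esac
done < <(sudo gluster volume status "$VOLUME" detail)
if [ ${#errors[@]} -gt 0 ]; then
    printf '%s\n' "${errors[@]}"
fi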
From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sen
Hi all...
Further investigation shows in excess of 500 glusterd zombie processes on the
box, and the count is continuing to climb...
Any suggestions? I'm happy to provide logs etc. to get to the bottom of this.
_
From: Steve Thomas
Sent: 21 March 2014 13:21
To
e status
gluster volume status audio detail
Does anyone have any suggestions as to why glusterd is resulting in these
zombie processes?
Thanks for help in advance,
Steve
,
Steve
of the bricks to be active, as
long as 51% of the gluster peers are connected".
I also don't understand why 'subvolumes' are referred to here. Is this old
terminology for 'volumes'?
Thoughts?
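For reference, the knobs being discussed, as a sketch (the volume name is a placeholder; option names are as listed by "gluster volume set help"; values are examples only):

# server-side quorum: glusterd stops bricks when too few peers are connected
gluster volume set <VOLNAME> cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
# client-side quorum on a replica: writes fail when fewer than quorum-count bricks are up
gluster volume set <VOLNAME> cluster.quorum-type fixed
gluster volume set <VOLNAME> cluster.quorum-count 2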
*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision
S services, or does
it send ack when that data has been replicated in memory of all the replica
member nodes?
I suppose the question could also be: does the data have to be on the disk of
one of the nodes before it is replicated to the other nodes?
Thanks,
*Steve Dainard *
IT Infrastructure Manager
0.10.3:/mnt/storage/lv-storage-domain/rep1
Number of entries: 0
Seeing as the hosts are both in quorum I was surprised to see this, but
these IDs have been repeating in my logs.
Thanks,
*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traf
havoc if the volume was used as a VM store so
it would need to be properly cautioned.
Otherwise, does anyone know of an open-source solution that could do this?
*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
519-513-2407 ex.250
877-646-
were dropped from the OS. I missed this.
I changed the hosts files on the 3 machines and can now mount and see the entire
volume.
Thank You again for your help!
Steve
From: David Coulson [da...@davidcoulson.net]
Sent: Wednesday, November 21, 2012 3:46 PM
To: Steve
Hi David, as requested,
Thanks,
Steve
[root@nas-0-0 ~]# lsof -n | grep 24007
glusterd   7943   root   6u   IPv4   468358   0t0   TCP 127.0.0.1:24007->127.0.0.1:1021 (ESTABLISHED)
glusterd   7943   root   8u   IPv4   402704   0t0   TCP
*:24
luster.org [gluster-users-boun...@gluster.org] on
behalf of Eco Willson [ewill...@redhat.com]
Sent: Wednesday, November 21, 2012 2:52 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume
Steve,
The simplest way to troubleshoot (assuming that the nodes
Eco,
They all appear to be using 24007 and 24009; none of them are running on 24010
or 24011.
Steve
[root@nas-0-0 ~]# lsof | grep 24010
[root@nas-0-0 ~]# lsof | grep 24011
[root@nas-0-0 ~]# lsof | grep 24009
glusterfs   3536   root   18u   IPv4   143541   0t0   TCP
, 24009,24010 and 24011 closed
Steve
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 6:32 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW
Hi Eco,
I believe you are asking that I run
find /mount/glusterfs >/dev/null
only? That should take care of the issue?
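For the record, the crawl usually suggested in the 3.x admin guides stats every entry rather than just listing it; a sketch (the mount path is as in your command, the log path is an example):

find /mount/glusterfs -noleaf -print0 | xargs --null stat > /dev/null 2> /var/log/glusterfs-selfheal.log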
Thanks for your time,
Steve
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Eco Willson [ew
20, 2012 4:03 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume
Steve,
On 11/20/2012 12:03 PM, Steve Postma wrote:
> They do show the expected size. I have a backup of /etc/glusterd and
> /etc/glusterfs from before upgrade.
Can we see the vol file from
data:/data
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume
Steve
information
lost with upgrade?
The file structure appears intact on each brick.
Steve
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To
nt -a" does not appear to do anything.
I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data"
manually to mount it.
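If the goal is for "mount -a" to pick that filesystem up, the usual route is an /etc/fstab entry along these lines (options here are generic defaults):

/dev/mapper/the_raid-lv_data  /data  xfs  defaults  0 0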
Any help with troubleshooting why we are only seeing data from 1 brick of 3
would be appreciated,
Thanks,
Steve Postma
_______
Thanks, you're right. I can telnet to both ports, 24009 and 24007.
From: John Mark Walker [johnm...@redhat.com]
Sent: Monday, November 19, 2012 3:44 PM
To: Steve Postma
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] cant mount gluster volume
I connect on 24009 glusterfs and fail on 27040 glusterd
Steve
From: John Mark Walker [johnm...@redhat.com]
Sent: Monday, November 19, 2012 3:31 PM
To: Steve Postma
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] cant mount gluster volume
What happens if
ISTEN 24022/sshd
tcp        0      0 localhost6.localdomain:6012    *:*        LISTEN      9462/sshd
IPTABLES service has been stopped on all machines.
Any help you could give me would be greatly appreciated.
Thanks,
Steve Postma
____
From:
It appears I am reaching the file system, but:
0-gdata-client-0: failed to get the port number for remote subvolume
0-gdata-dht: Failed to get hashed subvol for /
I have googled/troubleshot but am unable to find a solution. Your help would be
Greatly app
than with 3.2.6 (using CentOS 5.8).
Steve
idea why.
Steve
se case is totally different. Even if
the throughput were massive with a lot of bricks, it doesn't help me if
every individual client still doesn't perform.
Steve
--
----
Steve Thompson, Cornell School of Chemi
irectories and samba shares. I've already given
up on MooseFS after several months' work.
Thanks for all your comments,
Steve
On Wed, 26 Sep 2012, John Mark Walker wrote:
Have you tried something other than dd? Such as copying files to and from a
directory?
Yes, I've used Bonnie++ and just plain cp. All of them exhibit the same
symptoms.
Steve
erformance is always
so low, even with a single brick, and (2) why writing to a 2-brick
distributed non-replicated volume is only half the performance of a
1-brick volume.
Someone give me a clue, please.
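For anyone wanting to reproduce the comparison, the sort of streaming-write test in question (path and size are placeholders):

# sequential write through the client mount, flushed to disk before dd reports a rate
dd if=/dev/zero of=/mnt/gluster/ddtest.bin bs=1M count=4096 conv=fdatasync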
Steve
On 02/21/2011 12:47 PM, Joe Landman wrote:
On 02/21/2011 12:45 PM, Steve Wilson wrote:
On 02/21/2011 09:54 AM, paul simpson wrote:
hi fabricio,
many thanks for your input. indeed i am using xfs - but that seems
to be
mentioned in the gluster docs without any mention of problems. we
ually get out of sync with each
other on these kinds of files. For a while, we dropped replication and
only ran the volume as distributed. This has worked reliably for the
past week or so without any errors that we were seeing before: no such
file, invalid argument, etc.
Steve
again thanks
On 02/08/2011 02:32 PM, Steve Wilson wrote:
On 02/07/2011 11:49 PM, Raghavendra G wrote:
Hi Steve,
Are the back-end file systems working correctly? I am seeing lots of
errors in server log files while accessing back-end filesystem.
gluster-01-brick.log.1:[2011-01-26 03:43:07.353445] E
On 02/07/2011 11:49 PM, Raghavendra G wrote:
Hi Steve,
Are the back-end file systems working correctly? I am seeing lots of errors in
server log files while accessing back-end filesystem.
gluster-01-brick.log.1:[2011-01-26 03:43:07.353445] E [posix.c:2193:posix_open]
post-posix: open on
On 01/28/2011 12:49 PM, Steve Wilson wrote:
I'm running a pair of replicated/distributed GlusterFS 3.1.2 servers,
each with 8 bricks. Here's the command I used to create the data volume:
gluster volume create post replica 2 transport tcp
pablo:/gluster/01/brick stanley:/gluste
P
port
netcat -z $server $test_port
if [ $? -eq 0 ]; then
mount -tglusterfs $server:/volume /mnt/volume
break
fi
done
fi
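For completeness, a self-contained version of that wait-for-a-server-then-mount loop (the server list, port, volume name, and mount point are assumptions):

#!/bin/bash
# Try each gluster server in turn; mount from the first one whose port answers.
servers="gluster1 gluster2"
test_port=24007   # glusterd management port

if ! mountpoint -q /mnt/volume; then
    for server in $servers; do
        netcat -z "$server" "$test_port"
        if [ $? -eq 0 ]; then
            mount -t glusterfs "$server:/volume" /mnt/volume
            break
        fi
    done
fi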
Steve
ernel: [162563.036625] svc: failed to
register lockdv1 RPC service (errno 97).
Steve
On 01/31/2011 10:51 AM, Steve Wilson wrote:
Hi,
Sure, I'll attach them to a message directly to you so that I don't
hit the list with some large attachments.
This morning I noticed another, probably relat
4:52 WinXP-Pro.xml
-rw------- 1 stevew sysmgr 7754 2011-01-21 14:52 WinXP-Pro.xml
---------T 1 stevew sysmgr 7754 1969-12-31 19:00 WinXP-Pro.xml-prev
---------T 1 stevew sysmgr 7754 1969-12-31 19:00 WinXP-Pro.xml-prev
Thanks,
Steve
On 01/31/2011 12:27 AM, Raghavendra G wrote:
Hi Steve
le or directory)
Any thoughts or ideas?
Thanks!
Steve
--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946
going the Gluster FUSE route with distributed volume files.
Steve
On 10/12/2010 06:22 AM, Marcus Bointon wrote:
On 28 Sep 2010, at 09:11, Marcus Bointon wrote:
When I say they're out of sync I mean that there are files on one but not the
other (both ways around, so both additions and deletions have not happened at
some point) - I'm using cluster/replic
Original Message
> Date: Fri, 24 Sep 2010 15:07:51 -0500
> From: "Larry Bates"
> To: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Gluster-users Digest, Vol 29, Issue 25
> I've been using SuperMicro from ABMX (www.abmx.com) and have been very
> happy.
> ABMX offers
users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Wilson
Sent: Monday, July 26, 2010 3:35 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Inconsistent volume
I have a volume that is distributed and replicated. While deleting a directory structure
on t
sistent state? I've tried using the "ls -alR"
command to force a self-heal but for some reason this always causes the
volume to become unresponsive from any client after 10 minutes or so.
Some clients/servers are running version 3.0.4 while the others are
running 3.0.5.
Thanks!
clients work with 3.0.5 servers and vice-versa? If so,
then it sounds like I could slide in the upgrade without major
disruption of service.
Thanks,
Steve
Original Message
> Date: Wed, 24 Mar 2010 23:01:55 +0100
> From: Oliver Hoffmann
> To: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Setup for production - which one would you
> choose?
> Yep, thanx.
>
> @Stephan: It is not a matter of knowing how to use tar a
or
packaging systems (RPM, DEB and friends). By the way, I use Gentoo Linux, and installing
from source is part of the idea behind Gentoo Linux.
> --
> Regards,
> Stephan
>
Steve
> PS: yes, I know it's the user-list.
>
>
> On Wed, 24 Mar 2010 17:14:32 +
> Ian R
setup I have:
local disk/partition -> file system (in my case XFS) -> GlusterFS
And this works without issues.
> Thanks,
> Keith
>
Steve
looks like the rpms they used were gluster-devel and
gluster-src. And my other great concern is this is in production, so I'd
like to test it on one of our servers first.
Caveats? Gotchas? Downtime? All the other routine nagging questions from
jittery Admins.
real    0m9.776s
user    0m0.070s
sys     0m0.310s
uranos test #
---
> Talk of XFS being stable is encouraging me to give it a shot.
>
> XFS isn't shipped with RHEL 5.3, but then neither is FUSE! (both
> should be in 5.4 though, finally).
>
> Thanks, Jeff.
30
series as it has a bug with XFS. There are patches for 2.6.29 and 2.6.30, but
none of them is included in the mainline kernel. Maybe the released 2.6.31
kernel will fix the issue? RC8, however, still has the same issue as
2.6.29/2.6.30.
> Kind Regards,
>
> Jasper
>
Steve
Original Message
> Date: Wed, 19 Aug 2009 00:37:18 +0200
> From: Stephan von Krawczynski
> To: "Steve"
> CC: gluster-users@gluster.org
> Subject: Re: [Gluster-users] 2.0.6
>
> > [lots of flame by Steve]
> > Steve
>
> Dear Steve