Sure
Start of upgrade: 15:36
Start of issue: 21:51
On Mon, Aug 22, 2016 at 9:15 PM, Atin Mukherjee wrote:
>
>
> On Tue, Aug 23, 2016 at 4:17 AM, Steve Dainard wrote:
>
>> About 5 hours after upgrading gluster 3.7.6 -> 3.7.13 on Centos 7, one of
>> my gluster serv
As a potential solution on the compute node side, can you have users copy
relevant data from the gluster volume to a local disk (i.e. $TMPDIR), operate
on that disk, write output files to that disk, and then write the results
back to persistent storage once the job is complete?
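A minimal sketch of that staging pattern, assuming a hypothetical job command
and paths (adjust for your scheduler's $TMPDIR handling):

#!/bin/bash
# Hypothetical job wrapper: stage input from the gluster mount to local
# scratch, run against local disk, then copy results back once at the end.
set -e
SRC=/mnt/gluster/projects/myjob            # hypothetical gluster-backed path
SCRATCH=${TMPDIR:-/tmp}/myjob.$$
mkdir -p "$SCRATCH"
cp -a "$SRC/input" "$SCRATCH/"
cd "$SCRATCH"
./run_analysis input/ output/              # hypothetical compute step
cp -a output "$SRC/"
rm -rf "$SCRATCH"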
There are lots of fact
I'm capturing the strace output now; hopefully something useful is shown.
Thanks
On Thu, Jun 16, 2016 at 7:31 PM, Vijay Bellur wrote:
> On Thu, Jun 16, 2016 at 3:05 PM, Steve Dainard wrote:
> > I'm restoring some data to gluster from TSM backups and the client errors
> > out
I'm restoring some data to gluster from TSM backups and the client errors
out trying to retrieve xattrs at some point during the restore, killing
progress:
...
Restoring 8,118,878
/storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_04.asc
[Done]
ANS1587W Unable to read ex
clients then statedump shouldn't
> be showing the same. Was unmount successful? Do you see any related error
> log entry in mount & glusterd log?
>
> -Atin
> Sent from one plus one
> On 04-Mar-2016 10:23 pm, "Steve Dainard" wrote:
>
>> Except tha
NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
On Fri, Mar 4, 2016 at 8:53 AM, Steve Dai
in-op-version=1
glusterd.client4.identifier=10.0.231.11:1022
glusterd.client4.volname=storage
glusterd.client4.max-op-version=30603
glusterd.client4.min-op-version=1
On Thu, Mar 3, 2016 at 5:28 PM, Atin Mukherjee
wrote:
> -Atin
> Sent from one plus one
> On 04-Mar-2016 3:35 am,
wrote:
> Hi Steve,
>
> As Atin pointed out, take a statedump by running 'kill -SIGUSR1 $(pidof
> glusterd)'. It will create a .dump file under the /var/run/gluster/
> directory, and the client op-version information will be present in that file.
>
> Thanks,
> ~Gaurav
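In practice that looks something like the following; the dump filename pattern
is an assumption and varies by version, so a glob is used here:

# kill -SIGUSR1 $(pidof glusterd)
# grep -E 'identifier|op-version' /var/run/gluster/*dump*

which is where entries like glusterd.clientN.max-op-version (quoted above) come
from.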
>
> ----- Orig
>
> - Original Message -
> From: "Steve Dainard"
> To: "gluster-users@gluster.org List"
> Sent: Wednesday, March 2, 2016 1:10:27 AM
> Subject: [Gluster-users] gluster 3.7.6 volume set: failed: One or more
> connected clients cannot support the featu
Gluster 3.7.6
'storage' is a distributed volume
# gluster volume set storage rebal-throttle lazy
volume set: failed: One or more connected clients cannot support the
feature being set. These clients need to be upgraded or disconnected before
running this command again
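To see which clients are still connected to the volume (and then map them to
op-versions via a glusterd statedump), something like this should list them;
the exact output fields vary by version:

# gluster volume status storage clients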
I found a client connected u
to crash? Seems like you added
> a brick.
> 4) If possible can you recollect the order in which you added the peers
> and the version. Also the upgrade sequence.
>
> Maybe you can raise a bug in Bugzilla with the information.
>
> Regards
> Rafi KC
> On 1 Mar 2016 12:58 am, Ste
this? Can I just stop glusterd on gluster03 and
change the cksum value?
On Thu, Feb 25, 2016 at 12:49 PM, Mohammed Rafi K C
wrote:
>
>
> On 02/26/2016 01:53 AM, Mohammed Rafi K C wrote:
>
>
>
> On 02/26/2016 01:32 AM, Steve Dainard wrote:
>
> I haven't done anything
vm-storage
Number of entries: 0
On Thu, Feb 25, 2016 at 12:02 PM, Steve Dainard wrote:
> I haven't done anything more than peer thus far, so I'm a bit confused as
> to how the volume info fits in, can you expand on this a bit?
>
> Failed commits? Is this split brain on the repl
> As I have mentioned already, we have also done versioning of xattrs in 3.7,
> which solves the issue you are facing. It would be really helpful in a
> production environment if you could upgrade to 3.7.
>
> --
> Thanks & Regards,
> Manikandan Sel
For what it's worth, I've never been able to lose a brick in a 2 brick
replica volume and still be able to write data.
I've also found the documentation confusing as to what 'Option:
cluster.server-quorum-type' actually means.
Default Value: (null)
Description: This feature is on the server-side i
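For reference, the two quorum layers are configured separately; a sketch using
a placeholder volume name (VOLNAME):

# gluster volume set VOLNAME cluster.server-quorum-type server
# gluster volume set all cluster.server-quorum-ratio 51%
# gluster volume set VOLNAME cluster.quorum-type auto

The first two are the server-side feature the description above refers to
(glusterd stops its local bricks when the fraction of connected peers drops
below the ratio), while the last is the client-side quorum that actually
blocks writes when too few bricks in a replica set are up.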
is 3.6.3 or newer, can you share the logs if this happens
> again? (or possibly try if you can reproduce the issue on your setup).
> Thanks,
> Ravi
>
>
> On 02/10/2016 02:25 AM, FNU Raghavendra Manjunath wrote:
>
>
> Adding Pranith, maintainer of the replicate feature.
>
>
ffix and the cleanup process can go on independently which solves the
> issue that you
> have.
>
> [1] https://manikandanselvaganesh.wordpress.com/
>
> [2] http://review.gluster.org/12386
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
> - Original Me
There is a thread from 2014 mentioning that the heal process on a
replica volume was de-sparsing sparse files (1).
I've been experiencing the same issue on Gluster 3.6.x. I see there is
a bug closed for a fix on Gluster 3.7 (2) and I'm wondering if this
fix can be back-ported to Gluster 3.6.x?
My
going to set limits again.
As a note, this is a pretty brutal process on a system with 140T of
storage, and I can't imagine how much worse this would be if my nodes
had more than 12 disks each, or if I was at PB scale.
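For anyone re-applying limits, it's just repeated invocations of limit-usage
followed by a list to verify; the path and size here are made up:

# gluster volume quota storage limit-usage /climate 50TB
# gluster volume quota storage list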
On Mon, Jan 25, 2016 at 12:31 PM, Steve Dainard wrote:
> Here's a l
> Hi Steve,
>
> Also, do you have any plans to upgrade to the latest version? With 3.7,
> we have refactored some approaches used in quota and marker, and that has
> fixed quite a few issues.
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
> - Original Mes
ts-CanSISE quota not being accurate
that I missed that the 'Used' space on /data4/climate is listed higher
than the total gluster volume capacity.
On Mon, Jan 25, 2016 at 10:52 AM, Steve Dainard wrote:
> Hi Manikandan
>
> I'm using 'du' not df in this case.
>
>
> 'gluster v quota VOLNAME hard-timeout 0s'
> 'gluster v quota VOLNAME soft-timeout 0s'
>
> I appreciate your curiosity in exploring, and if you would like to know more
> about quota, please refer to [1].
>
> [1]
> http://gluster.readthedocs
Thu, Jan 21, 2016 at 10:07 AM, Steve Dainard wrote:
> I have a distributed volume with quotas enabled:
>
> Volume Name: storage
> Type: Distribute
> Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bric
I have a distributed volume with quotas enabled:
Volume Name: storage
Type: Distribute
Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
Bric
12-24.el7_1.x86_64
samba-winbind-modules-4.1.12-24.el7_1.x86_64
glusterfs-libs-3.6.7-1.el7.x86_64
samba-winbind-clients-4.1.12-24.el7_1.x86_64
Thanks
On Fri, Oct 2, 2015 at 4:42 PM, Steve Dainard wrote:
> Hi Diego,
>
> Awesome, works - much appreciated.
>
> As far as I can sear
I wouldn't think you'd need any 'arbiter' nodes (in quotes because in 3.7+
there is an actual arbiter node at the volume level). You have 4 nodes, and
if you lose 1, you're at 3/4 or 75%.
Personally I've not had much luck with 2-node setups (with or without the fake
arbiter node) as storage for Ovirt VM'
Personally I'd be much more interested in development/testing resources
going into large scale glusterfs clusters, rather than small office setups
or home use. Keep in mind this is a PB scale filesystem clustering
technology.
For home use I don't really see what advantage replica 2 would provide.
Hello,
Is anyone using Gluster snapshots on large multi-node clusters?
I'm building a 220T distributed replica 2 cluster with 12 nodes and I'd
like some feedback in regards to issues experienced, or potential issues.
Each node has 12 disks in an adaptec 71605 RAID 6 array. The LVM thin pool
woul
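For context, gluster volume snapshots need each brick on a thinly provisioned
LV, so the per-node layout is roughly the following (device names and sizes
are made up here):

# pvcreate /dev/sdb
# vgcreate vg_brick1 /dev/sdb
# lvcreate -L 60T --thinpool brick1_pool vg_brick1
# lvcreate -V 60T --thin -n brick1_lv vg_brick1/brick1_pool
# mkfs.xfs -i size=512 /dev/vg_brick1/brick1_lv
# mount /dev/vg_brick1/brick1_lv /mnt/raid6-storage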
On Thu, Oct 1, 2015 at 2:24 AM, Pranith Kumar Karampuri
wrote:
> hi,
> In releases till now from day-1 with replication, there is a corner
> case bug which can wipe out all the bricks in that replica set when the
> disk/brick(s) are replaced.
>
> Here are the steps that could lead to th
>vfs objects = glusterfs
>glusterfs:loglevel = 7
>glusterfs:logfile = /var/log/samba/glusterfs-projects.log
>glusterfs:volume = export
>
> HTH,
>
> Diego
>
> On Thu, Oct 1, 2015 at 4:15 PM, Steve Dainard wrote:
>> samba-vfs-glusterfs-4.1.12-23.el7_1
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
gluster 3.6.6
I've shared a gluster volume using samba vfs with the options:
vfs objects = glusterfs
glusterfs:volume = test
path = /
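Pulled together, the share definition looks roughly like this; the share name
and log path are placeholders, and the gluster-specific options are the same
ones quoted earlier in the thread:

[test]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = test
    glusterfs:loglevel = 7
    glusterfs:logfile = /var/log/samba/glusterfs-test.log
    read only = no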
I can do the following:
(Windows client):
-Create new directory
-Create new file -- an error pops up "Unable to create t
If you have enough Linux background to think of implementing gluster
storage, why not virtualize on Linux as well?
If you're using the standard Hyper-V free version you don't get
clustering support anyway, so standalone KVM gives you the same basic
capabilities and you can use virt-manager to mana
Gluster 3.6.6 / CentOS 7.1 / dual Intel E5-2630v3 / 128GB RAM /
Mellanox 10G Ethernet
I just added a 3rd replica to a 2 replica volume and I'm noticing the
network throughput is very slow replicating to the new node,
~30-60MB/s. I'm on 10gig with SSD bricks and typically get 300+MB/s
for normal fi
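If it helps, the transfer to the new replica is driven by the self-heal
daemon, so checking its backlog and re-kicking a full crawl looks something
like this (volume name is a placeholder):

# gluster volume heal VOLNAME info
# gluster volume heal VOLNAME full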
Hello,
Has anyone else tried using the arbiter option in gluster 3.7 and
noticed performance issues?
I've opened a bug report here:
https://bugzilla.redhat.com/show_bug.cgi?id=1255110 because the client
(in my case ovirt hypervisor) is writing IO to the arbiter node as
well as the replica 2 nodes
On Wed, Apr 30, 2014 at 5:56 AM, Venky Shankar wrote:
> On 04/29/2014 11:12 PM, Steve Dainard wrote:
>
> Fixed by editing the geo-rep volume's gsyncd.conf file, changing
> /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
> nodes.
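If the hand edit keeps getting reverted (per the question below), the same
change can usually be pushed through the geo-replication config interface
instead; a hedged sketch, since the exact option name can differ by version:

# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config remote-gsyncd /usr/libexec/glusterfs/gsyncd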
>
>
> That is not
service its overwritten?
Steve
On Tue, Apr 29, 2014 at 12:11 PM, Steve Dainard wrote:
> Just set up geo-replication between two replica 2 pairs, gluster version
> 3.5.0.2.
>
> Following this guide:
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Admin
Just set up geo-replication between two replica 2 pairs, gluster version
3.5.0.2.
Following this guide:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
Status is faulty/passive:
# glust
mar Koppad wrote:
>
> On 04/28/2014 08:31 PM, Steve Dainard wrote:
>
> 3.4.2 doesn't have the force option.
>
> Oh, I got confused with releases.
>
>
> I went through an upgrade to 3.5 which ended in my replica pairs not
> being able to sync and all commands co
Hi Danny,
Did you get anywhere with this geo-rep issue? I have a similar problem
running on CentOS 6.5 when trying anything other than 'start' with geo-rep.
Thanks,
Steve
On Tue, Feb 25, 2014 at 9:45 AM, Danny Sauer wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I have the
.918050] I [socket.c:3480:socket_init] 0-glusterfs: SSL
support is NOT enabled
[2014-04-24 14:08:03.918068] I [socket.c:3495:socket_init] 0-glusterfs:
using system polling thread
[2014-04-24 14:08:04.146710] I [input.c:36:cli_batch] 0-: Exiting with: -1
Version: glusterfs-server-3.4.2-1.el6.x86_64
I have an issue where I'm not getting the correct status for
geo-replication; this is shown below. Also I've had issues where I've not
been able to stop geo-replication without using a firewall rule on the
slave. I would get back a cryptic error and not
According to this BZ https://bugzilla.redhat.com/show_bug.cgi?id=764826 it's
possible to set rsync bandwidth options for geo-replication in version 3.2.1.
Is this supported in 3.4.2? I just added the option referenced in the above
link and the replication agreement status changed to faulty.
Thanks,
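For what it's worth, the mechanism in that BZ is the geo-replication
rsync-options config, along these lines (the --bwlimit value, in KB/s, is just
an example); as noted above, adding it here turned the session faulty:

# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config rsync-options "--bwlimit=5120"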
of the bricks to be active, as
long as 51% of the gluster peers are connected".
I also don't understand why 'subvolumes' are referred to here. Is this old
terminology for 'volumes'?
Thoughts?
S services, or does
it send ack when that data has been replicated in memory of all the replica
member nodes?
I suppose the question could also be, does the data have to be on disk of
one of the nodes, before it is replicated to the other nodes?
Thanks,
0.10.3:/mnt/storage/lv-storage-domain/rep1
Number of entries: 0
Seeing as the hosts are both in quorum I was surprised to see this, but
these IDs have been repeating in my logs.
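To check whether those repeating IDs are actual split-brain entries rather
than just pending heals, something like this should show them explicitly
(volume name is a placeholder):

# gluster volume heal VOLNAME info split-brain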
Thanks,
havoc if the volume was used as a VM store, so
it would need to come with a clear warning.
Otherwise, anyone know of an opensource solution that could do this?