Sure
Start of upgrade: 15:36
Start of issue: 21:51
On Mon, Aug 22, 2016 at 9:15 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
>
> On Tue, Aug 23, 2016 at 4:17 AM, Steve Dainard <sdain...@spd1.com> wrote:
>
>> About 5 hours after upgrading gluster 3.7.6 ->
As a potential solution on the compute node side, can you have users copy
relevant data from the gluster volume to a local disk (i.e. $TMPDIR), operate
on that disk, write output files to that disk, and then write the results
back to persistent storage once the job is complete?
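That stage-in / compute / stage-out pattern could be sketched as a small job wrapper like the one below. This is a minimal illustration, not a recommended production script: the temp directories stand in for the gluster mount and the local scratch disk, and the `tr` step is a placeholder for the real job.

```shell
#!/bin/sh
# Stage-in / compute / stage-out sketch. PERSIST stands in for the
# gluster mount and SCRATCH for fast local disk (e.g. $TMPDIR).
set -e
PERSIST=$(mktemp -d)
SCRATCH=$(mktemp -d)
mkdir -p "$PERSIST/input"
echo "sample" > "$PERSIST/input/data.txt"

# 1. stage-in: copy job inputs from persistent storage to local scratch
cp -a "$PERSIST/input" "$SCRATCH/"

# 2. run the job entirely against local disk (placeholder step)
tr 'a-z' 'A-Z' < "$SCRATCH/input/data.txt" > "$SCRATCH/result.txt"

# 3. stage-out: write results back to persistent storage, then clean up
cp "$SCRATCH/result.txt" "$PERSIST/"
rm -rf "$SCRATCH"
echo "result staged to $PERSIST/result.txt"
```

The point is that only steps 1 and 3 touch the gluster volume at all; every intermediate read/write happens on local disk.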
There are lots of
utput now, hopefully something useful is shown.
Thanks
On Thu, Jun 16, 2016 at 7:31 PM, Vijay Bellur <vbel...@redhat.com> wrote:
> On Thu, Jun 16, 2016 at 3:05 PM, Steve Dainard <sdain...@spd1.com> wrote:
> > I'm restoring some data to gluster from TSM backups and the client err
I'm restoring some data to gluster from TSM backups and the client errors
out trying to retrieve xattrs at some point during the restore, killing
progress:
...
Restoring 8,118,878
/storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_04.asc
[Done]
ANS1587W Unable to read
lly weird, if you unmount the clients then statedump shouldn't
> be showing the same. Was unmount successful? Do you see any related error
> log entry in mount & glusterd log?
>
> -Atin
> Sent from one plus one
> On 04-Mar-2016 10:23 pm, "Steve Dainard" <sdain...@s
RRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
On Fri, Mar 4, 2016 at 8:53 AM, Steve Dainard
On 04-Mar-2016 3:35 am, "Steve Dainard" <sdain...@spd1.com> wrote:
> >
> > FYI Gluster storage node hostnames are gluster0[1-6].
> >
> > Full dump attached. I see a few clients not on 30706. Most notably the
> two debian 7 servers (using packages fro
~Gaurav
>
> - Original Message -
> From: "Steve Dainard" <sdain...@spd1.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: "gluster-users@gluster.org List" <gluster-users@gluster.org>
> Sent: Thursday, March 3, 2016 12:07:25
; Thanks,
> Gaurav
>
>
> - Original Message -
> From: "Steve Dainard" <sdain...@spd1.com>
> To: "gluster-users@gluster.org List" <gluster-users@gluster.org>
> Sent: Wednesday, March 2, 2016 1:10:27 AM
> Subject: [Gluster-users] gluster
Gluster 3.7.6
'storage' is a distributed volume
# gluster volume set storage rebal-throttle lazy
volume set: failed: One or more connected clients cannot support the
feature being set. These clients need to be upgraded or disconnected before
running this command again
I found a client connected
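One way to locate the offending clients (a sketch, assuming the volume is named 'storage' as above) is the clients sub-command of volume status, which lists the hostname:port of every client connected to each brick:

```
gluster volume status storage clients
```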
; a brick .
> 4) If possible can you recollect the order in which you added the peers
> and the version. Also the upgrade sequence.
>
> Maybe you can raise a bug in Bugzilla with the information.
>
> Regards
> Rafi KC
> On 1 Mar 2016 12:58 am, Steve Dainard <sdain...@spd1
age
Number of entries: 0
On Thu, Feb 25, 2016 at 12:02 PM, Steve Dainard <sdain...@spd1.com> wrote:
> I haven't done anything more than peer thus far, so I'm a bit confused as
> to how the volume info fits in, can you expand on this a bit?
>
> Failed commits? Is this split brain on
go on independently which solves the
> issue that you
> have.
>
> [1] https://manikandanselvaganesh.wordpress.com/
>
> [2] http://review.gluster.org/12386
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
> - Original Message -
> From: "Vijaikumar
an you share the logs if this happens
> again? (or possibly try if you can reproduce the issue on your setup).
> Thanks,
> Ravi
>
>
> On 02/10/2016 02:25 AM, FNU Raghavendra Manjunath wrote:
>
>
> Adding Pranith, maintainer of the replicate feature.
>
>
> Regards,
> Rag
There is a thread from 2014 mentioning that the heal process on a
replica volume was de-sparsing sparse files.(1)
I've been experiencing the same issue on Gluster 3.6.x. I see there is
a bug closed for a fix on Gluster 3.7 (2) and I'm wondering if this
fix can be back-ported to Gluster 3.6.x?
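A quick way to check whether a file has stayed sparse (e.g. after a heal) is to compare its apparent size against the blocks actually allocated; GNU coreutils `stat` and `truncate` are assumed here. A fully de-sparsed 100 MiB file would show roughly 204800 allocated 512-byte blocks.

```shell
# Create a 100 MiB sparse file, then compare apparent size vs
# allocated blocks; a de-sparsed copy would show ~204800 blocks.
f=$(mktemp)
truncate -s 100M "$f"
apparent=$(stat -c %s "$f")   # apparent size in bytes
blocks=$(stat -c %b "$f")     # 512-byte blocks actually allocated
echo "apparent=$apparent blocks=$blocks"
rm -f "$f"
```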
My
nd I can't imagine how much worse this would be if my nodes
had more than 12 disks each, or if I was at PB scale.
On Mon, Jan 25, 2016 at 12:31 PM, Steve Dainard <sdain...@spd1.com> wrote:
> Here's a link to a tarball of one of the gluster hosts' logs:
> https://dl.dropboxusercontent.co
not being accurate
that I missed that the 'Used' space on /data4/climate is listed higher
than the total gluster volume capacity.
On Mon, Jan 25, 2016 at 10:52 AM, Steve Dainard <sdain...@spd1.com> wrote:
> Hi Manikandan
>
> I'm using 'du' not df in this case.
>
> On Thu, Ja
3.7.0-1/Administrator%20Guide/Directory%20Quota/
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
> - Original Message -
> From: "Steve Dainard" <sdain...@spd1.com>
> To: "gluster-users@gluster.org List" <gluster-users@glust
I have a distributed volume with quotas enabled:
Volume Name: storage
Type: Distribute
Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
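With quotas enabled on a volume like this, per-directory limits and current usage can be inspected and adjusted from the quota CLI. A sketch against the 'storage' volume above ('/projects' is a hypothetical directory under the volume root):

```
# Show configured quota limits and current usage per directory
gluster volume quota storage list

# Set (or adjust) a limit on a directory, e.g. 10 TB on /projects
gluster volume quota storage limit-usage /projects 10TB
```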
at 10:07 AM, Steve Dainard <sdain...@spd1.com> wrote:
> I have a distributed volume with quotas enabled:
>
> Volume Name: storage
> Type: Distribute
> Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
64
samba-winbind-clients-4.1.12-24.el7_1.x86_64
Thanks
On Fri, Oct 2, 2015 at 4:42 PM, Steve Dainard <sdain...@spd1.com> wrote:
> Hi Diego,
>
> Awesome, works - much appreciated.
>
> As far as I can search this isn't listed anywhere on the gluster.org
> docs, but there is a li
Personally I'd be much more interested in development/testing resources
going into large scale glusterfs clusters, rather than small office setups
or home use. Keep in mind this is a PB scale filesystem clustering
technology.
For home use I don't really see what advantage replica 2 would provide.
I wouldn't think you'd need any 'arbiter' nodes (in quotes because in 3.7+
there is an actual arbiter node at the volume level). You have 4 nodes, and
if you lose 1, you're at 3/4 or 75%.
Personally I've not had much luck with 2 node (with or without the fake
arbiter node) as storage for Ovirt
Hello,
Is anyone using Gluster snapshots on large multi-node clusters?
I'm building a 220T distributed replica 2 cluster with 12 nodes and I'd
like some feedback in regards to issues experienced, or potential issues.
Each node has 12 disks in an adaptec 71605 RAID 6 array. The LVM thin pool
;kernel share modes = No
>vfs objects = glusterfs
>glusterfs:loglevel = 7
>glusterfs:logfile = /var/log/samba/glusterfs-projects.log
>glusterfs:volume = export
>
> HTH,
>
> Diego
>
> On Thu, Oct 1, 2015 at 4:15 PM, Steve Dainard <sdain...@spd1.com> wrot
On Thu, Oct 1, 2015 at 2:24 AM, Pranith Kumar Karampuri
wrote:
> hi,
> In releases till now from day-1 with replication, there is a corner
> case bug which can wipe out all the bricks in that replica set when the
> disk/brick(s) are replaced.
>
> Here are the steps
If you have enough Linux background to think of implementing gluster
storage, why not virtualize on Linux as well?
If you're using the standard Hyper-V free version you don't get
clustering support anyway, so standalone KVM gives you the same basic
capabilities and you can use virt-manager to
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
gluster 3.6.6
I've shared a gluster volume using samba vfs with the options:
vfs objects = glusterfs
glusterfs:volume = test
path = /
I can do the following:
(Windows client):
-Create new directory
-Create new file -- an error pops up "Unable to create
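For reference, a minimal smb.conf share using the glusterfs VFS might look like the following. The share name and the last two options are illustrative additions; the glusterfs options mirror those quoted elsewhere in this thread.

```
[test]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = test
    glusterfs:logfile = /var/log/samba/glusterfs-test.log
    glusterfs:loglevel = 7
    kernel share modes = no
    read only = no
```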
Gluster 3.6.6 / CentOS 7.1 / dual Intel E5-2630v3 / 128GB RAM /
Mellanox 10G Ethernet
I just added a 3rd replica to a 2 replica volume and I'm noticing the
network throughput is very slow replicating to the new node,
~30-60MB/s. I'm on 10gig with SSD bricks and typically get 300+MB/s
for normal
Hello,
Has anyone else tried using the arbiter option in gluster 3.7 and
noticed performance issues?
I've opened a bug report here:
https://bugzilla.redhat.com/show_bug.cgi?id=1255110 because the client
(in my case ovirt hypervisor) is writing IO to the arbiter node as
well as the replica 2
On Wed, Apr 30, 2014 at 5:56 AM, Venky Shankar vshan...@redhat.com wrote:
On 04/29/2014 11:12 PM, Steve Dainard wrote:
Fixed by editing the geo-rep volumes gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
nodes.
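The fix described above comes down to one line in the session's gsyncd.conf; the exact file path varies by version, so this is a sketch:

```
# /var/lib/glusterd/geo-replication/<session>/gsyncd.conf
# point remote-gsyncd at the real binary instead of /nonexistent/gsyncd
remote-gsyncd = /usr/libexec/glusterfs/gsyncd
```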
That is not required. Did
vkop...@redhat.com wrote:
On 04/28/2014 08:31 PM, Steve Dainard wrote:
3.4.2 doesn't have the force option.
Oh, I got confused with releases.
I went through an upgrade to 3.5 which ended in my replica pairs not
being able to sync and all commands coming back with no output.
Individually
service its overwritten?
Steve
On Tue, Apr 29, 2014 at 12:11 PM, Steve Dainard sdain...@miovision.com wrote:
Just setup geo-replication between two replica 2 pairs, gluster version
3.5.0.2.
Following this guide:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html
Version: glusterfs-server-3.4.2-1.el6.x86_64
I have an issue where I'm not getting the correct status for
geo-replication, this is shown below. Also I've had issues where I've not
been able to stop geo-replication without using a firewall rule on the
slave. I would get back a cryptic error and
:socket_init] 0-glusterfs: SSL
support is NOT enabled
[2014-04-24 14:08:03.918068] I [socket.c:3495:socket_init] 0-glusterfs:
using system polling thread
[2014-04-24 14:08:04.146710] I [input.c:36:cli_batch] 0-: Exiting with: -1
*Steve Dainard *
IT Infrastructure Manager
Miovision http
Hi Danny,
Did you get anywhere with this geo-rep issue? I have a similar problem
running on CentOS 6.5 when trying anything other than 'start' with geo-rep.
Thanks,
*Steve *
On Tue, Feb 25, 2014 at 9:45 AM, Danny Sauer da...@dannysauer.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash:
According to this BZ https://bugzilla.redhat.com/show_bug.cgi?id=764826 it's
possible to set rsync bandwidth options for geo-replication on version 3.2.1.
Is this supported in 3.4.2? I just added the option referenced in the above
link and the replication agreement status changed to faulty.
Thanks,
, or does
it send ack when that data has been replicated in memory of all the replica
member nodes?
I suppose the question could also be, does the data have to be on disk of
one of the nodes, before it is replicated to the other nodes?
Thanks,
*Steve Dainard *
IT Infrastructure Manager
Miovision
Seeing as the hosts are both in quorum I was surprised to see this, but
these id's have been repeating in my logs.
Thanks,
*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com
was used as a VM store so
it would need to be properly cautioned.
Otherwise, anyone know of an opensource solution that could do this?
*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)
*Blog http://miovision.com