Hi Serkan,
On 14/10/15 15:13, Serkan Çoban wrote:
Hi Xavier,
I'm not sure I understand you. Are you saying you will create two separate
gluster volumes, or will you add both bricks to the same distributed-dispersed
volume?
Is adding more than one brick from the same host to a disperse glust
On 10/14/2015 05:09 PM, Sander Zijlstra wrote:
> LS,
>
> I recently reconfigured one of my gluster nodes and forgot to update the MTU
> size on the switch, while I had configured the host with jumbo frames.
>
> The result was that the complete cluster had communication issues.
>
> All systems a
I have twice now tried to configure geo-replication of our
Stripe-Replicate volume to a remote Stripe volume but it always seems to
have issues.
root@james:~# gluster volume info
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 5f446a10-651b-4ce0-a46b-69871f498dbc
Status: Started
On 10/14/2015 05:50 PM, Roman wrote:
> Hi,
>
> It's hard to comment on plans and things like these, but I expect everyone
> will be happy to have the possibility to upgrade from 3 to 4 without a new
> installation, and an offline upgrade would also be OK (shut down volumes and
> upgrade). And I'm somehow pretty s
I don't think this is possible. I'd like to know why you want to use
a Gluster volume as the root file system; what's your use case?
Technically this is impossible (wrt GlusterD) as we then have no way to
segregate the configuration data.
~Atin
On 10/15/2015 12:09 AM, satish kondapalli wrote
On 10/14/2015 11:08 PM, Игорь Бирюлин wrote:
Thanks for the detailed description.
Do you have plans to add resolution of GFID split-brain via 'gluster volume
heal VOLNAME split-brain ...' ?
Not at the moment.
What is the main difference between GFID split-brain and data split-brain?
On nodes this file a
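For the existing data/metadata split-brain cases, the policy-based CLI in 3.7 can
already pick a winning copy. A minimal sketch, assuming a volume named VOLNAME, an
affected file /path/to/file (relative to the volume root) and a source brick
host1:/brick1, all placeholders:
# gluster volume heal VOLNAME split-brain bigger-file /path/to/file
# gluster volume heal VOLNAME split-brain source-brick host1:/brick1 /path/to/file
GFID split-brain is not handled by these commands, which is what the question
above refers to.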
On 10/15/2015 12:46 AM, Lindsay Mathieson wrote:
On 14 October 2015 at 15:17, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
I didn't understand the reason for recreating the setup. Is
upgrading rpms/debs not enough?
Pranith
The distro I'm using (Proxmox/Debian)
On 10/14/2015 10:43 PM, Mohamed Pakkeer wrote:
Hi Pranith,
Will this patch improve the heal performance on a distributed disperse
volume? Currently we are getting 10MB/s heal performance on a 10G-backed
network. The SHD daemon takes 5 days to complete the heal operation
for a single 4TB (3.5 TB data
Admittedly an odd case, but...
o I have a simple geo-replication setup: master -> slave.
o I've mounted the master's volume on the master host.
o I've also set up an rsyncd server on the master:
[master-volume]
path = /mnt/master-volume
read only = false
o I now rsync from
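Presumably the data is then pushed through that rsync module onto the
geo-replicated master mount. A minimal sketch of such an invocation, assuming a
hypothetical source path and master hostname, and the [master-volume] module
defined above:
# rsync -av /data/to/push/ rsync://master-host/master-volume/
(master-host and /data/to/push/ are placeholders; the module path maps to the
glusterfs mount at /mnt/master-volume.)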
On 14 October 2015 at 15:17, Pranith Kumar Karampuri
wrote:
> I didn't understand the reason for recreating the setup. Is upgrading
> rpms/debs not enough?
>
> Pranith
>
The distro I'm using (Proxmox/Debian) broke backward compatibility with
its latest major upgrade; essentially you have to re
Does anyone have any thoughts on this?
Sateesh
On Tue, Oct 13, 2015 at 5:44 PM, satish kondapalli
wrote:
> Hi,
>
> I want to mount a gluster volume as the root file system for my node.
>
> The node will boot from the network (only kernel and initrd images), but my
> root file system has to be one of the glust
Thanks for the detailed description.
Do you have plans to add resolution of GFID split-brain via 'gluster volume heal
VOLNAME split-brain ...' ?
What is the main difference between GFID split-brain and data split-brain? On
the nodes this file is absolutely different in data content and size, or is it
not 'data' in glust
On 10/14/2015 10:05 PM, Игорь Бирюлин wrote:
Thanks for your reply.
If I do a listing in the mount point (/repo):
# ls /repo/xxx/keyrings/debian-keyring.gpg
ls: cannot access /repo/xxx/keyrings/debian-keyring.gpg: Input/output
error
#
In log /var/log/glusterfs/repo.log I see:
[2015-10-14 16:27:36
Hi Pranith,
Will this patch improve the heal performance on a distributed disperse
volume? Currently we are getting 10MB/s heal performance on a 10G-backed
network. The SHD daemon takes 5 days to complete the heal operation for a
single 4TB (3.5 TB data) disk failure.
Regards,
Backer
On Wed, Oct 14, 2015
Thanks for your reply.
If I do a listing in the mount point (/repo):
# ls /repo/xxx/keyrings/debian-keyring.gpg
ls: cannot access /repo/xxx/keyrings/debian-keyring.gpg: Input/output error
#
In log /var/log/glusterfs/repo.log I see:
[2015-10-14 16:27:36.006815] W [MSGID: 108008]
[afr-self-heal-name.c:35
On 10/14/2015 07:02 PM, Игорь Бирюлин wrote:
Hello,
today in my 2-node replica set I've found a split-brain. The 'ls' command
started reporting 'Input/output error'.
What does the mount log (/var/log/glusterfs/.log) say
when you get this error?
Can you run getfattr as root for the file from *both* b
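For reference, the usual way to compare the replicas is to dump the AFR changelog
xattrs directly on each brick; a minimal sketch, with the brick and file paths as
placeholders:
# getfattr -d -m . -e hex /path/to/brick/xxx/keyrings/debian-keyring.gpg
(Run this on both bricks and compare the trusted.afr.* values and the
trusted.gfid of the two copies.)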
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Ben Turner" , "Humble Devassy Chirammal"
> , "Atin Mukherjee"
>
> Cc: "gluster-users"
> Sent: Wednesday, October 14, 2015 1:39:14 AM
> Subject: Re: [Gluster-users] Speed up heal performance
>
>
>
> On 10/13/2015 07:11 PM,
We had a connectivity issue on a "tar+ssh" geo-rep link yesterday that
caused a lot of issues. When the link came back up it immediately went
into a "faulty" state, and the logs were showing "Operation not permitted"
and "File Exists" errors in a loop.
We were finally able to get things back on t
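A faulty session can usually be bounced with the standard geo-replication CLI; a
minimal sketch, assuming master volume MASTERVOL and slave SLAVEHOST::SLAVEVOL
(placeholders), and not necessarily the exact steps taken here:
# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL stop
# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL start
# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status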
Hello,
today in my 2-node replica set I've found a split-brain. The 'ls' command
started reporting 'Input/output error'.
But the command 'gluster v heal VOLNAME info split-brain' does not show the
problem files:
# gluster v heal repofiles info split-brain
Brick dist-int-master03.xxx:/storage/gluster_brick_repofiles
Nu
Hi Xavier,
>I'm not sure I understand you. Are you saying you will create two separate
>gluster volumes, or will you add both bricks to the same distributed-dispersed
>volume?
Is adding more than one brick from the same host to a disperse gluster
volume recommended? I meant two different gluster
Hi,
It's hard to comment on plans and things like these, but I expect everyone
will be happy to have the possibility to upgrade from 3 to 4 without a new
installation, and an offline upgrade would also be OK (shut down volumes and upgrade).
And I'm somehow pretty sure that this upgrade process should be pretty
fla
I'd recommend Proxmox as a virtualization platform. In the new version (4), HA
works with a few clicks and there is no need for external fencing devices (all is
done by the watchdog from now on). It also runs fine with GlusterFS as VM storage
(I'm running about 20 KVM VMs on gluster and thinking of moving 10 more
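For reference, attaching a GlusterFS volume as VM image storage in Proxmox can be
done from the CLI as well as the GUI; a rough sketch with hypothetical storage ID,
server and volume names (exact option names may vary between Proxmox versions):
# pvesm add glusterfs gluster-vmstore --server gluster-node1 --volume vmvol --content images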
LS,
I recently reconfigured one of my gluster nodes and forgot to update the MTU
size on the switch, while I had configured the host with jumbo frames.
The result was that the complete cluster had communication issues.
All systems are part of a distributed striped volume with a replica size of 2
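A quick way to verify that jumbo frames actually work end to end (and to catch a
switch port still at the default 1500) is to check the interface MTU and send a
large, non-fragmentable ping between the nodes; the interface and peer names here
are placeholders:
# ip link show eth0 | grep mtu
# ping -M do -s 8972 -c 3 other-gluster-node
(8972 = 9000 minus 28 bytes of IP/ICMP headers; if this fails while small pings
succeed, something along the path is not passing jumbo frames.)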
Hi All,
In 30 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pa
Hi Serkan,
On 13/10/15 15:53, Serkan Çoban wrote:
Hi Xavier and thanks for your answers.
Servers will have 26*8TB disks. I don't want to lose more than 2 disks
for raid,
so my options are HW RAID6 24+2 or 2 * HW RAID5 12+1,
A RAID5 of more than 8-10 disks is normally considered unsafe because
Hi all,
I'm pleased to announce the release of GlusterFS-3.7.5. This release
includes 70 changes after 3.7.4. The list of fixed bugs is included
below.
Tarball and RPMs can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.5/
Ubuntu debs are available from
https://lau