the SAN will be powered off on 18 April and removed from the
datacenter. The choice is yours whether this happens during the day or at
night, given the risk of UTP cables and power cables being knocked by
accident.
Kind regards, Jiri
> On 31 Mar 2016, at 15:49, Jiri Hoogeveen <j.hoogev...@bluebillywig.com>
On 14 Apr 2015, at 15:01, Jiri Hoogeveen <j.hoogev...@bluebillywig.com> wrote:
Hi Sander,
If I take a look at
http://www.gluster.org/community/documentation/index.php/OperatingVersions
Hi Pavel,
Killing the brick process is the way to go.
This way, all other bricks on that server will keep working.
After you replace/fix the disk, a restart of the glusterd process should be
enough to get the brick back online. (The self-healing scan can take some IO.)
Do you have
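
A minimal sketch of that flow, assuming a volume called myvol and a brick at
server1:/export/brick1 (both placeholders, not from this thread); on Ubuntu the
service may be called glusterfs-server instead of glusterd:

    gluster volume status myvol            # find the PID of the failed brick's process
    kill <brick-pid>                       # stops only that brick; the others keep serving
    # ... replace the disk and remount the brick filesystem ...
    service glusterd restart               # glusterd respawns the brick process
    gluster volume heal myvol info         # watch self-heal progress (can cause extra IO)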
Hi Sander,
Since version 3.6 the remove-brick command migrates the data away from the
brick being removed, right?
It should :)
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf
page 100 is, I think, a good
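
For reference, a rough sketch of that remove-brick flow on a hypothetical
volume myvol (volume and brick names are placeholders):

    gluster volume remove-brick myvol server1:/export/brick1 start
    gluster volume remove-brick myvol server1:/export/brick1 status   # wait until data migration completes
    gluster volume remove-brick myvol server1:/export/brick1 commit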
Hi Sander,
It sounds to me like it triggered self-healing, which will do a scan of the
bricks. Depending on the number of files on the brick, that can use a lot of
CPU.
Do the logs say anything useful?
Grtz,
Jiri Hoogeveen
On 09 Apr 2015, at 14:18, Sander Zijlstra sander.zijls
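
A quick way to check whether a heal scan is the culprit (the volume name is a
placeholder; the log path is the usual default, assumed here):

    gluster volume heal myvol info                 # files still pending heal
    tail -f /var/log/glusterfs/glustershd.log      # self-heal daemon log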
Hi Gerald,
Yes, we are using GlusterFS 3.3.2 with Ubuntu 12.04, KVM and 802.3ad bonding on
2 x 1 Gbps NICs. This way every TCP session can go over a different NIC.
For VMware vSphere we use GlusterFS's NFS server, and for KVM the native
GlusterFS client.
This setup is working nicely.
Grtz, Jiri
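
To illustrate the two access paths mentioned above (host and volume names are
made up for the example):

    # native GlusterFS (FUSE) client, as used for the KVM hosts
    mount -t glusterfs gluster1:/vmstore /mnt/vmstore

    # Gluster's built-in NFSv3 server over TCP, as used for the vSphere datastores
    mount -t nfs -o vers=3,proto=tcp gluster1:/vmstore /mnt/vmstore-nfs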
packages.
Best,
Josh
- Original Message -
From: Jiri Hoogeveen <j.hoogev...@bluebillywig.com>
To: Gerald Brandt <g...@majentis.com>
Cc: gluster-users@gluster.org List <gluster-users@gluster.org>
Sent: Thursday, December 5, 2013 11:31:34 AM
Subject: Re: [Gluster-users] Ubuntu GlusterFS
Hi,
We use vSphere 4 / vCenter in combination with GlusterFS and NFSv3 over TCP,
and for us it works.
Grtz, Jiri
On Sep 23, 2013, at 8:37 PM, RedShift <redsh...@telenet.be> wrote:
How does that impact compatibility with vCenter? ESXi uses TCP NFS mounts...
- Original Message -
Hi Jake,
If you want to have a 200GB replicated volume, you will need something like this:
2 peers, each with 1 disk or partition of 200GB for GlusterFS, or
2 peers, each with 2 disks or partitions of 100GB for GlusterFS, or
4 peers, each with 1 disk or partition of 100GB for GlusterFS.
The thing is,
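
A hedged sketch of how the first layout (2 peers, one 200GB brick each) could
be created; peer and brick names are invented for the example:

    gluster peer probe server2
    gluster volume create repvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start repvol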
Hi,
When I take a look at the init script on Ubuntu 12.04, the _netdev option only
works for the following network filesystems:
nfs|nfs4|smbfs|ncp|ncpfs|cifs|coda|ocfs2|gfs|pvfs|pvfs2|fuse.httpfs|fuse.curlftpfs
I guess when Ubuntu adds glusterfs to this list and creates a umountglusterfs.sh
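
For context, the kind of /etc/fstab entry this is about (server and volume
names are placeholders); whether _netdev is honoured depends on the init
scripts discussed above:

    gluster1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0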
Hi Khoi,
I found a large changelog in the source package
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.3.2.tar.gz
For production, it depends on what you are going to do.
If you work a lot with virtual machines, then 3.4.0 is probably what you want.
We are staying with 3.3.2
Hi Sabuj,
Can you give some more information about the dir? How many files are in it?
We have some very large dirs, where ls can take up to 10 min.
Does ls -i work for you?
Grtz, Jiri
On Apr 15, 2013, at 4:05 PM, Sabuj Pattanayek wrote:
It doesn't look like there's a problem in the actual
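
As an aside, for very large directories these variants avoid some of the
per-entry work plain ls does (the path is a placeholder); this is a general
suggestion, not something from the thread:

    ls -i /mnt/gluster/bigdir     # inode numbers, as asked above
    ls -f /mnt/gluster/bigdir     # no sorting, entries in readdir order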
Hi,
There it happened, after a netsplit.
03:01 -!- ServerMode/#gluster [+i] by calvino.freenode.net
Grtz, Jiri
On Dec 15, 2012, at 1:43 PM, Andrew Holway wrote:
Hi,
Any ops here? irc channel seems broken.
Ta,
Andrew
Hello,
We have a strange issue: we see two identical files in the same directory.
We see the same issue with both the NFS and the native glusterfs client.
ls -li
10059405875149890901 -rw-rw-r-- 303 524 500 1928880 Oct 30 16:59 filename.mp3
10059405875149890901 -rw-rw-r-- 303 524 500 1928880 Oct 30 16:59
this issue?
Grtz, Jiri Hoogeveen
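
One way to dig further (my own suggestion, not something from this thread) is
to compare the GFID extended attribute of the entries on the bricks themselves;
the brick path below is a placeholder:

    # run on each brick that holds a copy of the file
    getfattr -d -m . -e hex /export/brick1/path/to/filename.mp3   # look at trusted.gfid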