Hello,
Thank you for the detailed and informative answer, Kaushal. I appreciate
your input.
As far as I understand, this doesn't sound like something that is on the
roadmap to be resolved any time soon? Would running 3.6 make any difference?
Speaking of versions - which version is recommended for
I nuked the entire partition with mkfs, just to be *sure*, and I still
get the error message:
volume create: gv0: failed: /brick1/gv0 is already part of a volume
Clearly, some bit of data is being kept somewhere other than in
/brick1?
On 2015-02-11 01:03, Kaushal M wrote:
This
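One quick check that may help here - a sketch, reusing the /brick1/gv0 path
from the error above: dump the brick directory's extended attributes and look
for the volume marker.
# getfattr -d -m . -e hex /brick1/gv0
If trusted.glusterfs.volume-id shows up in the output, the directory is still
marked as a brick, whatever mkfs did to the rest of the partition.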
Hello everyone,
I have the following situation. I put some read and write load on my test
GlusterFS setup as follows:
# dd if=/dev/zero of=file2 bs=1M count=3000
# cat file2 > /dev/null
While doing the above, I tried to gather some statistics and found that
'status fd' doesn't really show
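For reference, that status query takes the volume name as an argument; a
minimal sketch, assuming the test volume is called gv0 (a hypothetical name):
# gluster volume status gv0 fd
This lists the open file descriptors per brick for the volume.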
Hello,
switching from 3.4.2 to 3.6.2 reduces our average test performance dramatically.
Our test setup: directly connected 1 Gbit/s hosts, set up with:
rm -rf /home/gluster/.glusterfs/
rm /home/gluster/*
setfattr -x trusted.glusterfs.volume-id /home/gluster/
setfattr -x
Dear Pranith,
I would be interested to know what the cluster.ensure-durability off option
does; could you explain it or point me to the documentation?
Regards,
ML
On Thursday, February 12, 2015 8:24 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 02/12/2015 04:37 AM, Nico
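For anyone following along, the option is toggled like any other volume
option; a minimal sketch, assuming a volume named gv0:
# gluster volume set gv0 cluster.ensure-durability off
# gluster volume info gv0
The second command lists the change under 'Options Reconfigured'. As I
understand it, with the option on, AFR fsyncs data as part of its
transactions so writes survive a brick crash; turning it off trades that
safety for speed.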
On 02/12/2015 04:37 AM, Nico Schottelius wrote:
Hello,
switching from 3.4.2 to 3.6.2 reduces our average test performance dramatically.
Our test setup: directly connected 1 Gbit/s hosts, set up with:
rm -rf /home/gluster/.glusterfs/
rm /home/gluster/*
setfattr -x
Hi Guys,
one of the bricks in my replicated GlusterFS cluster was restarted and is
up again. Now the Gluster self-heal is running permanently and can't
heal one file in a volume.
root@cinder1:~# getfattr -e hex -d -m .
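Two commands that usually help narrow this down - a sketch, with the volume
name and the file's path on the brick as placeholders:
# gluster volume heal gv0 info
# getfattr -e hex -d -m . /brick/path/to/stuck-file
Comparing the trusted.afr.* attributes of the stuck file across all bricks
shows whether the replicas blame each other, i.e. a split-brain.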
Hi all,
In a little more than one hour from now we will have the regular weekly
Gluster Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 7:00 EST, 12:00 UTC, 13:00 CET, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
-
Hi,
First, a polite request. In the future, when you have any questions, please
ask them on the mailing list. This way, other users also have a chance of
helping you, and your problem will be archived and available for others
to search.
Now, looking at the list of files you've given, it seems
This happens because of two things:
1. GlusterFS writes an extended attribute containing the volume-id to every
brick directory when a volume is created. This is done to prevent data from
being written to the root partition in case the partition containing the
brick isn't mounted for some reason.
2.
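This is also why mkfs alone is sometimes not the whole story: as far as I
know, glusterd checks the given directory and its ancestors for the
attribute, so a marked parent (e.g. an unmounted brick's mount point) still
trips the check. The commonly cited reset, sketched with the /brick1/gv0
path from earlier in the thread:
# setfattr -x trusted.glusterfs.volume-id /brick1/gv0
# setfattr -x trusted.gfid /brick1/gv0
# rm -rf /brick1/gv0/.glusterfs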
Hi,
I don't know if this is the correct way to report this; if I should send it
somewhere else, please let me know.
I have a problem after upgrading my cluster from version 3.5 to version 3.6.
When I started the volumes, I saw a lot of errors:
[2015-02-11 11:23:18.231142] W
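Hard to say much from a truncated log line, but one step that is easy to
miss after a 3.5-to-3.6 upgrade is raising the cluster's operating version
once every node runs 3.6 - an assumption that it applies here, not a
confirmed fix:
# gluster volume set all cluster.op-version 30600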
I would serve your web files off of a Gluster mount point. The performance is
excellent.
http://www.linuxfunda.com/2013/08/02/how-to-install-and-configure-glusterfs-on-centos-rhel-56/
Sent from my iPad
On Feb 10, 2015, at 11:13 PM, Ryan Jones ryan.jone...@gmail.com wrote:
I'm sure this
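For completeness, mounting such a volume as the web root is a one-liner; a
sketch assuming a server named server1, a volume named gv0, and /var/www as
the document root:
# mount -t glusterfs server1:/gv0 /var/www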
On Wed, Feb 11, 2015 at 11:43:49AM +0100, Niels de Vos wrote:
Hi all,
In a little more than one hour from now we will have the regular weekly
Gluster Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 7:00 EST, 12:00 UTC,