On 04/28/2014 07:00 PM, Joe Julian wrote:
Where is this process documented?
On April 28, 2014 6:03:16 AM PDT, Venky Shankar vshan...@redhat.com wrote:
On 04/27/2014 11:55 PM, James Le Cuirot wrote:
Hello all,
I'm new to Gluster but have successfully tried geo-rep with 3.5.0. I've read
Hello everybody,
I am new to GlusterFS and I am planning to use it as the filesystem for my file
archiving facility. For storage and cost savings, I don't want to use
replication; I will rely only on hardware
redundancy. However, in case of any problem with
Hello,
I have a 3-node Gluster cluster with one distributed volume. All 3 nodes are identical in their hardware; each has 6x 3TB Seagate ST3000DM001 drives. The disks on each node are in a software RAID 6 (block size 4096, stripe 512) with an ext4 filesystem on Debian Wheezy. As the RAID was new, I tested it with
hello,
I received the email when I subscribed at
http://supercolony.gluster.org/mailman/listinfo/gluster-users
If I want to join the community question-and-answer site, how can I do that?
Could you help me?
Thanks!
Albert
Hi all,
I want to create a distributed replicated volume.
And I want to be able to expand this volume.
To test this, I have 6 nodes each have a 10G disk to share.
First I create the initial volume.
# gluster volume create testvol replica 2 transport tcp
node-01:/export/sdb1/brick
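The create command is cut off in the digest. A hedged sketch of what the full six-node version would look like, assuming the remaining hostnames and brick paths follow the pattern of the first brick shown:

```shell
# "replica 2" groups consecutive bricks into mirrored pairs, so six
# bricks yield three distribute subvolumes of two replicas each
gluster volume create testvol replica 2 transport tcp \
    node-01:/export/sdb1/brick node-02:/export/sdb1/brick \
    node-03:/export/sdb1/brick node-04:/export/sdb1/brick \
    node-05:/export/sdb1/brick node-06:/export/sdb1/brick
gluster volume start testvol

# Later expansion must also come in replica-pair multiples, e.g.:
# gluster volume add-brick testvol node-07:/export/sdb1/brick node-08:/export/sdb1/brick
```

Brick order matters here: adjacent bricks in the argument list become replicas of each other, so pairs should sit on different nodes.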
Hi,
I have 2 servers with a distributed volume “home” and I’d like to set
volume options like auth.allow, and am getting an error when I go through
the procedure in the admin guide.
gluster volume set home auth.allow IPADDR
“volume set: failed: One or more connected clients cannot support the
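For reference, a minimal sketch of the intended sequence (the address is a placeholder; comma-separate multiple patterns):

```shell
# Restrict mounts of volume "home" to the listed client address(es)
gluster volume set home auth.allow 192.168.1.10

# Verify the option was applied
gluster volume info home | grep auth.allow
```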
Actually there are tons of possible tune ups, but let's start from the
beginning.
Gluster documents, Red Hat documents, and several other sources suggest
using XFS, and NOT ext4.
Second, how did you measure performance? What tool?
From my personal experience, first of all, measure performance
Do you have a link to the docs that mention a specific sequence particular
to geo-replication-enabled volumes? I don't see anything on the Gluster doc
page here:
http://www.gluster.org/community/documentation/index.php/Main_Page.
Thanks,
Steve
On Tue, Apr 29, 2014 at 2:29 AM, Vijaykumar Koppad
Hi Tom.
This is a bug.
This feature was introduced to prevent setting some options when old
clients are connected to a gluster volume. This was needed because the
old clients wouldn't be able to understand the newer options, which
could lead to unintended behaviour.
But auth.allow has been
Hello Carlos,
Thanks for your reply. I generally think that Gluster should run out of the box at good speed if the underlying system is fast. E.g. if I measure 300MB/s locally, Gluster should run at a minimum of 100MB/s. But you're right, it's a complex topic.
As I wrote above, I measured the performance
Hi. I have set up a distributed volume called home, and then I expanded it
by adding another brick from a second server. When I run a job to create
10 50G files, I am seeing the traffic only going to a single server, even
though I am using DNS RR. Any ideas what I have set wrong? Thank you!
Hi Tom,
After adding the brick, have you 'rebalanced' the volume?
# gluster volume rebalance volname start
On Tue, Apr 29, 2014 at 7:28 AM, Tom Young tom.yo...@corvidtec.com wrote:
Hi. I have set up a distributed volume called home, and then I expanded it
by adding another brick from a second
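To expand on that tip, a short sketch using the volume name from this thread:

```shell
# After add-brick, trigger a rebalance so data spreads across all
# bricks, then poll its progress until it reports "completed"
gluster volume rebalance home start
gluster volume rebalance home status

# Optionally, fix only the directory layout without migrating data:
# gluster volume rebalance home fix-layout start
```

Without a rebalance (or at least a fix-layout), new files keep hashing to the old bricks, which matches the single-server traffic pattern described above.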
On Tue, Apr 29, 2014 at 5:29 AM, Matthew Rinella mrine...@apptio.com wrote:
I just built a pair of AWS Red Hat 6.5 instances to create a gluster
replicated pair file system. I can install everything, peer probe, and
create the volume, but as soon as I try to start the volume, glusterd dumps
Hi everyone,
I have some questions about replication
When I write a file into a GlusterFS volume (it is a 2-replica volume),
does it return OK when one replica's write operation completes, or only when both
replica write operations complete?
Can I require that it return OK only when all replica writes complete?
sorry~
For a replica volume, a write returns only after it has completed on
all the replicas.
On Tue, Apr 29, 2014 at 8:06 PM, 可樂我 colacolam...@gmail.com wrote:
Hi everyone,
I have some questions about replication
When I write a file into a GlusterFS volume (it is a 2-replica volume),
does it return OK
That was it, thank you. It's properly balanced, and distributing the writes
perfectly.
Tom Young
Corvid Technologies
145 Overhill Drive, Mooresville, NC 28117
(704)799-6944 x156
www.corvidtec.com
-Original Message-
From: Harshavardhana [mailto:har...@harshavardhana.net]
Sent: Tuesday,
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
nodes.
Any reason why this is in the default template? Also any reason why when I
stop glusterd, change the template on both master nodes and start the
gluster
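As a sketch of a less manual workaround (the `remote_gsyncd` option name is taken from gsyncd.conf; the session names here are placeholders for the actual master volume and slave), the same change can usually be made through the geo-replication CLI instead of hand-editing the file:

```shell
# Point the session at the real gsyncd binary instead of the
# /nonexistent/gsyncd placeholder from the default template
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL \
    config remote_gsyncd /usr/libexec/glusterfs/gsyncd

# Confirm the setting
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL \
    config remote_gsyncd
```

Going through the CLI keeps glusterd's view of the session config consistent, whereas edits to the template can be overwritten.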
I've also seen this behavior. I fixed it in the same way.
The strange part is that using Vagrant for testing, I didn't see the same
behavior. It only happens on bare metal boxes in my case. I'm not sure why that
is though... I'm using the same version CentOS, etc.
-CJ
From: Steve Dainard
If this is the case, how would I work around the crypto libs?
-Original Message-
From: Kaushal M [mailto:kshlms...@gmail.com]
Sent: Tuesday, April 29, 2014 7:38 AM
To: Matthew Rinella
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] volume start causes glusterd to core dump in
On 29/04/2014, at 10:42 AM, 可樂我 wrote:
hello,
I received the email when I subscribed at
http://supercolony.gluster.org/mailman/listinfo/gluster-users
If I want to join the community question-and-answer site, how can I do that?
Could you help me?
Oops, that's an old message we need to update. The
On 29/04/2014, at 8:13 PM, Matthew Rinella wrote:
If this is the case, how would I work around the crypto libs?
What's the output from this command, on the Gluster servers
where it's crashing? :)
$ rpm -qa|grep -i openssl|sort
+ Justin
--
Open Source and Standards @ Red Hat
I have this result:
openssl-1.0.1e-16.el6_5.7.x86_64
So I'm going to admit I also have FIPS mode enabled on my hosts, which is
necessary, unfortunately. This causes a huge problem with Puppet because of
the crypto algorithms allowed/disallowed. I'm wondering if this is going to be
an issue
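For anyone following along, FIPS mode on a RHEL 6 host can be checked from the kernel flag. A quick sketch; the proc file only exists on kernels built with FIPS support:

```shell
# Prints 1 when FIPS mode is enforced, 0 when it is not;
# the file is absent on kernels without FIPS support
cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo "no FIPS flag"
```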
On 29/04/2014, at 10:47 PM, Justin Clift wrote:
On 29/04/2014, at 10:42 AM, 可樂我 wrote:
hello,
I received the email when I subscribed at
http://supercolony.gluster.org/mailman/listinfo/gluster-users
If I want to join the community question-and-answer site, how can I do that?
Could you help me?
On 29/04/2014, at 10:54 PM, Matthew Rinella wrote:
I have this result:
openssl-1.0.1e-16.el6_5.7.x86_64
Hmmm, that's the same version as a host here that's working.
So I'm going to admit I also have FIPS mode enabled on my hosts, which is
necessary, unfortunately. This causes a huge
On 29/04/2014, at 6:42 PM, Steve Dainard wrote:
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes.
That doesn't sound good. :(
Do you have the time/inclination to file a bug about this
in Bugzilla?
Thanks, Kaushal M.
When I write a file into a GlusterFS replicated volume,
if one replica write fails (the brick server is offline or the brick mount point has
been deleted),
will it tell me the write failed, i.e. return a write operation failure?
Could GlusterFS do this?
Or is the only way to find it in the log file?
Thank
Hi All,
I am happy to announce that Niels de Vos will be the release maintainer
for release-3.5. Kaleb Keithley will continue to function as the release
maintainer for release-3.4. Please join me in congratulating Niels on
his new role and extend your co-operation to him for further 3.5.x
I know about FIPS only by name and I'm not familiar with it.
A simple Google search reveals that MD5 is not FIPS-compliant and cannot be
used in FIPS enabled mode. Can someone confirm this?
~kaushal
On Wed, Apr 30, 2014 at 5:25 AM, Justin Clift jus...@gluster.org wrote:
On 29/04/2014, at
Congratulations Niels!
On Wed, Apr 30, 2014 at 12:02 AM, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
I am happy to announce that Niels de Vos will be the release maintainer for
release-3.5. Kaleb Keithley will continue to function as the release
maintainer for release-3.4. Please join me
Just tested this. It is the use of MD5 in FIPS mode which is causing the
crash. I'll open a bug so that this can be tracked.
~kaushal
On Wed, Apr 30, 2014 at 10:01 AM, Kaushal M kshlms...@gmail.com wrote:
I know about FIPS only by name and I'm not familiar with it.
A simple google search
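Kaushal's finding can be reproduced outside of Gluster with a quick check (a sketch; assumes the `openssl` binary is installed):

```shell
# On a FIPS-enabled host OpenSSL rejects MD5 as a non-approved
# digest; on a non-FIPS host the same command prints the checksum.
if openssl md5 /etc/hosts >/dev/null 2>&1; then
    echo "MD5 available - FIPS mode likely off"
else
    echo "MD5 rejected - FIPS mode likely on"
fi
```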
Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users