On Tuesday 03 February 2015 02:32 AM, Vijay Bellur wrote:
On 02/02/2015 01:23 PM, Kingsley wrote:
Thanks for that Vijay. I installed gluster via yum, but yum check-update
reveals no updates to download. How should I apply those patches, or ...
do you have a rough idea when 3.6.3 will be available?
Hello List,
So I've been frustrated by intermittent performance problems throughout
January. The problem occurs on a two-node setup running 3.4.5, with 16 GB
of RAM and a bunch of local disk. Sometimes for an hour, sometimes for
weeks at a time (I have extensive graphs in OpenNMS), our Gluster box
Hi Barry!
Your observation is right. Sometime after 3.0 (not sure which exact
version, probably 3.1) Gluster introduced POSIX ACL support (on the server
side). Until then, if FUSE let a request through into Gluster, the server
assumed the request to be authenticated - however, FUSE does not support POSIX
a
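For anyone following along: once the server side supports POSIX ACLs, the
client must also mount with ACL support for enforcement to work end to end.
A minimal sketch, assuming a placeholder volume "myvol" and server "server1":

mount -t glusterfs -o acl server1:/myvol /mnt/myvol
# Sanity-check that ACLs are honored on the mount
setfacl -m u:alice:rw /mnt/myvol/testfile
getfacl /mnt/myvol/testfile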
Hi Omar,
We will run this test-case in our lab machines.
Meanwhile, can you provide the brick logs, execute the attached script
for each brick path, and provide the output from the script?
Thanks,
Vijay
On Tuesday 03 February 2015 08:06 AM, Omkar Kulkarni wrote:
Hi Vijay,
I was able to recreate t
A discussion thread on this list from May 2014,
http://www.gluster.org/pipermail/gluster-users.old/2014-May/017283.html,
discussed how Gluster is limited to 32 groups, due to FUSE, or maybe to 96
groups, due to the AUTH header size in the RPC library being used, unless
the administrator enable
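The option usually cited for lifting this limit in later releases is
server.manage-gids, which makes the brick processes resolve a user's groups
server-side instead of trusting the size-limited list in the RPC AUTH
header. A hedged sketch, with "myvol" as a placeholder volume name:

# Resolve group membership on the server, bypassing the RPC AUTH limit
gluster volume set myvol server.manage-gids on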
Check your client logs. There should be no differences between mounting from
either server. The server specified is only used for retrieving the volume
configuration.
The client log should show why it's failing.
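For anyone hitting the same thing, a minimal sketch of that workflow
(volume name and mount point are placeholders; the client log file is named
after the mount path, with slashes replaced by dashes):

mount -t glusterfs server1:/myvol /mnt/myvol
# The FUSE client logs mount failures here
less /var/log/glusterfs/mnt-myvol.log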
Also, Ubuntu ships an ancient broken version. Use our official PPA. You can
find
Hi list.
Did I ask this question the wrong way, or does nobody know how to
diagnose this issue?
On 2015-01-29 09:28, Ernie Dunbar wrote:
> I've created a GlusterFS server pair, with GlusterFS v 3.2.5 (because Debian
> uses that version, and all our servers are Debian), using the official
Hi Pranith,
I finally understood what you meant about the secure ports, because the issue
occurred in one of our setups once more. It seems one of the clients on
serv1 could not open a connection to the glusterfsd running on serv0. I'd
actually started an email thread about it (believing it might be som
It seems I found out what goes wrong here, and this was a useful learning
experience for me:
On one of the replica servers, the client mount did not have an open port
to communicate with the other glusterfsd process. To illustrate:
root@serv1:/root> ps -ef | grep replicated_vol
root 30627 1 0 Jan29 ?
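One hedged way to confirm this kind of connectivity problem, using the
volume name from the transcript above (the port number is illustrative;
use whatever gluster volume status reports):

# List the TCP port each brick process (glusterfsd) listens on
gluster volume status replicated_vol
# From the affected client, verify the brick port is reachable
nc -zv serv0 49152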
Perhaps it's not obvious to the broader community, but a number of people
have put a great deal of work into various projects under the "4.0" banner.
Some of the results can be seen in the various feature pages here:
http://www.gluster.org/community/documentation/index.php/Planning40
Now that the vario
On 02/02/2015 01:23 PM, Kingsley wrote:
Thanks for that Vijay. I installed gluster via yum, but yum check-update
reveals no updates to download. How should I apply those patches, or ...
do you have a rough idea when 3.6.3 will be available? I'd like to roll
our cluster out into production soon but would prefer to get this fixed first.
Hello,
We have a 4-server cluster running gluster 3.6.2, and I would like to
enable ACL controls. Is there a way to raise the 32-group limit for users
accessing the filesystem through Samba? Here is the entry in the fstab, and
also the volume info. Any help figuring this out is much appreciated.
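Not an authoritative answer, but two knobs commonly mentioned together for
this scenario, sketched with placeholder names: mount the backing FUSE
mount with ACL support, and let the bricks resolve groups server-side to
get past the 32-group ceiling.

# /etc/fstab entry for the FUSE mount that Samba re-exports
server1:/myvol  /export/myvol  glusterfs  acl,_netdev  0 0
# Resolve group membership on the bricks instead of the client
gluster volume set myvol server.manage-gids on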
> >gluster volume set open-behind off turns off this xlator in
> >the client stack. There is no way to turn off debug/io-stats. Any reason
> >why you would like to turn off io-stats translator?
>
> For improving efficiency.
It might not be a very fruitful kind of optimization. Repeating an
exper
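For completeness, the tunable under discussion spelled out, with "myvol" as
a placeholder (the full option key is performance.open-behind):

# Disable the open-behind translator in the client stack
gluster volume set myvol performance.open-behind off
# Verify the option is recorded
gluster volume info myvol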
Update:
It seems that the active node is finally fixed, but it also seems that rsync
processes are running from nodeA (I don't understand the master notion, then),
and nodeA is the most heavily used node, so its load average becomes
dangerously high.
How to force a geo-replication to be started from a specific node (maste
Hello,
I am testing GlusterFS for the first time and have installed the latest
GlusterFS 3.5 stable version on Debian 7 on brand new SuperMicro hardware with
ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 with 6 SATA disks of 2
TB each.
After setting up a first and single test brick on
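For context, a sketch of the kind of pool described above, with placeholder
device names (xattr=sa is a commonly recommended setting for Gluster bricks
on ZFS on Linux):

# 6 x 2 TB disks with two-disk parity, then a dataset for the brick
zpool create tank raidz2 sdb sdc sdd sde sdf sdg
zfs create -o xattr=sa tank/brick1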
But now I have a strange issue:
After creating the geo-rep session and starting it (from nodeB):
[root@nodeB]# gluster vol geo-replication myvol slaveA::myvol status detail
MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE    STATUS    CHECKPOINT STATUS    CRAWL STATUS    F
For the record, after adding
operating-version=2
on every node (A, B, C) AND the slave node, the commands are working
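For anyone else hitting this: the file that carries this setting is
presumably glusterd's info file; a hedged sketch of the relevant entry
(the UUID line is whatever is already there for that node):

# /var/lib/glusterd/glusterd.info, on every master node and the slave
UUID=<node-uuid>
operating-version=2
# Restart glusterd so the new value is picked up
service glusterd restart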
--
Cyril Peponnet
On Feb 2, 2015, at 9:46 AM, PEPONNET, Cyril N (Cyril)
<cyril.pepon...@alcatel-lucent.com>
wrote:
More information here:
I updated the state of the peer in
Hello,
I have a replica-2 volume in which I store a large number of files that
are updated frequently (critical log files, etc). My files are generally
stable, but one thing that does worry me from time to time is that files
show up on one of the bricks in the output of gluster v heal
info. T
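A hedged sketch of the commands for telling transient entries apart from
real trouble, with "myvol" as a placeholder:

# Entries here are pending or in-flight heals, not necessarily damage
gluster volume heal myvol info
# Genuine inconsistencies show up here instead
gluster volume heal myvol info split-brain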
I'm trying to make gluster mount at boot time, but it has been impossible for me
on CentOS 7. The strange thing is that if I run the
following commands by hand, everything works great, but if I try to do the
same as a service or from rc.local, it does not work.
The Gluster version I have install
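One hedged guess: at boot the mount runs before the network (or glusterd)
is ready, which would explain why the same commands succeed by hand. A
sketch of an fstab entry that defers the mount until networking is up
(volume and paths are placeholders):

# _netdev orders the mount after the network is online
localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0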
>> Hi,
>>
>> With glusterfs version 3.6.1, there are so many translators in my client
>> *.vol. How can I remove some of them, such as "debug/io-stats" and
>> "performance/open-behind"?
>>
>>
>gluster volume set open-behind off turns off this xlator in
>the client stack. There is no way to turn off debug/io
More information here:
I updated the state of the peer in the UUID file located in /v/l/g/peers from
state 10 to state 3 (as it is on the other node), and now the node is in the cluster.
gluster system:: execute gsec_create now creates a proper file from the master node
with every node's key in it.
Now from
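For reference, a sketch of the peer file format being edited here (UUID and
hostname are placeholders; state=3 corresponds to "Peer in Cluster"):

# /var/lib/glusterd/peers/<peer-uuid>
uuid=<peer-uuid>
state=3
hostname1=nodeC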
Every node is connected:
[root@nodeA geo-replication]# gluster peer status
Number of Peers: 2
Hostname: nodeB
Uuid: 6a9da7fc-70ec-4302-8152-0e61929a7c8b
State: Peer in Cluster (Connected)
Hostname: nodeC
Uuid: c12353b5-f41a-4911-9329-fee6a8d529de
State: Peer in Cluster (Connected)
[root@nodeB ~
I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer do a
'gluster volume heal homegfs info'. It hangs and never returns any
information.
I was trying to ensure that gfs01a had finished healing before upgrading
the other machines (gfs01b, gfs02a, gfs02b) in my configuration (see
Thanks for that Vijay. I installed gluster via yum, but yum check-update
reveals no updates to download. How should I apply those patches, or ...
do you have a rough idea when 3.6.3 will be available? I'd like to roll
our cluster out into production soon but would prefer to get this fixed
first.
C
I'm planning to set up a distributed replicated Gluster installation on
two NAS boxes, where one NAS replicates the other. The NAS are not identical,
not even in storage size. One has a 12 TB RAID5; the other is empty and
has 3x3 TB disks. The entire data to store is currently less than 1 TB,
but I al
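A sketch of the simplest form of that layout, assuming one brick per NAS
and placeholder names (with pure replica 2, usable capacity is bounded by
the smaller brick):

gluster volume create myvol replica 2 nas1:/bricks/b1 nas2:/bricks/b1
gluster volume start myvol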
Looks like node C is in a disconnected state. Please let us know the output
of `gluster peer status` from all the master nodes and slave nodes.
--
regards
Aravinda
On 01/22/2015 12:27 AM, PEPONNET, Cyril N (Cyril) wrote:
So,
On the master node of my 3-node setup:
1) gluster system:: execute gsec_cr
Hi,
In my small-file test, 3 types of volumes are created in the same GlusterFS
cluster (4 nodes):
- Distributed
- Distributed Replicated
- Distributed Striped
10 million small files (8 bytes each) are created and searched over each test cycle's life
Nodes info:
- OS : CentOS 6.6 64bit
- RAM : 64GB
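For reference, hedged sketches of how those three layouts are typically
created across 4 nodes (host and brick paths are placeholders):

# Distributed: files spread across all four bricks
gluster volume create dist-vol node{1..4}:/bricks/b1
# Distributed-Replicated (2x2): each file stored on a pair of nodes
gluster volume create distrep-vol replica 2 node{1..4}:/bricks/b1
# Distributed-Striped (2x2): each file chunked across a pair of nodes
gluster volume create stripe-vol stripe 2 node{1..4}:/bricks/b1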
Hello,
I eventually took the time to check this out again, and read the thread
mentioned below.
On 27/01/2015 14:51, Jan-Hendrik Zab wrote:
Hey,
is the first command working without setting LC_NUMERIC?
Yes, I even tried to re-set my locale to fr_FR.utf8, and to run the
command without set
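If the locale turns out to be the culprit, one hedged workaround is to
override only the numeric locale for the command (the volume and bricks
here are placeholders):

# Force C-style number parsing without changing the rest of the locale
LC_NUMERIC=C gluster volume create myvol replica 2 node1:/b node2:/b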