We have addressed a few parts of the rebalance performance work, which should be
backported to 3.7 soon.
Regards,
Susant
- Original Message -
From: Raghavendra Bhat rab...@redhat.com
To: Alex Crow ac...@integrafin.co.uk
Cc: gluster-users@gluster.org
Sent: Thursday, 30 April, 2015 2:30:41
On 04/30/2015 02:32 PM, gjprabu wrote:
Hi bturner,
I am getting below error while adding server.event
gluster v set integvol server.event-threads 3
volume set: failed: option : server.event-threads does not exist
Did you mean server.gid-timeout or ...manage-gids?
This
Upgrade to 3.6.3 and set client.event-threads and server.event-threads to at
least 4:
Previously, the epoll thread did the socket event-handling, and the same thread was used for
serving the client or processing the response received from the server. Due to this,
other requests were in a queue until
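Enabling that multi-threaded epoll support is just a pair of volume options; a minimal sketch, assuming a volume named integvol (the name used elsewhere in this thread) and a GlusterFS version new enough to expose the options (the thread disagrees on whether that is 3.6.3 or 3.7):

```shell
# Hedged sketch: raise the number of epoll event threads on both sides.
# Requires a GlusterFS version that ships the event-threads options;
# on older builds the set command fails with "option does not exist".
gluster volume set integvol client.event-threads 4
gluster volume set integvol server.event-threads 4

# Confirm the options show up under "Options Reconfigured":
gluster volume info integvol | grep event-threads
```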
Hi bturner,
I am getting below error while adding server.event
gluster v set integvol server.event-threads 3
volume set: failed: option : server.event-threads does not exist
Did you mean server.gid-timeout or ...manage-gids?
Glusterfs version has been upgraded to 3.6.3
On Wed, Apr 29, 2015 at 7:04 PM, Tuomas Kuosmanen tig...@redhat.com wrote:
* How would be the best way to install? We need to write the
instructions, and those should be simple and still sensible
so that you get a real, functional setup in the end that you
can use. Maybe some
On Thursday 30 April 2015 01:55 PM, Alex Crow wrote:
Upgrade to 3.6.3 and set client.event-threads and
server.event-threads to at least 4:
Previously, the epoll thread did the socket event-handling and the same
thread was used for serving the client or processing the response
received from the
On 04/30/2015 06:49 AM, Behrooz Shafiee wrote:
Hi,
I was comparing GlusterFS native and NFS clients and I noticed, NFS
client is significantly slower for large writes. I wrote about 200 1GB
files using a 1MB block size, and NFS throughput was almost half of the
native client's. Can anyone explain
Hello,
I'll soon be trying to implement the following scenario:
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
Instead of the deprecated replace-brick command, can I temporarily
increase the replica number to 3, wait for data to propagate, and after
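A sketch of that replica-count approach, under the assumption of a replica-2 volume named myvol (hypothetical name) and a GlusterFS version whose add-brick/remove-brick commands accept a replica argument:

```shell
# Step 1: grow to replica 3 by adding a brick on the new server.
gluster volume add-brick myvol replica 3 newserver:/export/brick1

# Step 2: let self-heal propagate the data; re-run until nothing is pending.
gluster volume heal myvol info

# Step 3: once healed, shrink back to replica 2 by removing the old brick.
gluster volume remove-brick myvol replica 2 oldserver:/export/brick1 force
```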
Okay, I did some digging. On the client there were many errors such as:
[2015-04-29 15:47:08.700174] W [client-rpc-fops.c:2774:client3_3_lookup_cbk]
0-img-client-0: remote operation failed: Transport endpoint is not
connected. Path: /www/img/gallery/9722926_4130.jpg
- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: gjprabu gjpr...@zohocorp.com
Cc: Ben Turner btur...@redhat.com, gluster-users@gluster.org
Sent: Thursday, April 30, 2015 7:37:19 AM
Subject: Re: [Gluster-users] client is terrible with large amount of small
files
:
cli.log-20150430 http://termbin.com/ui7r
etc-glusterfs-glusterd.vol.log-20150430 http://termbin.com/tmof
glustershd.log-20150430 http://termbin.com/jz22
img-rebalance.log-20150430 http://termbin.com/y5zi
nfs.log http://termbin.com/3qsm
nfs.log-20150430 http://termbin.com/u8e7
var-gl-images.log
- Original Message -
From: Alex ale...@icecat.biz
To: gluster-users@gluster.org
Sent: Thursday, April 30, 2015 6:52:58 AM
Subject: Re: [Gluster-users] Write operations failing on clients
Okay, I did some digging. On the client there were many errors such as:
[2015-04-29
On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana Madhusudhan wrote:
Hi,
Looks like the URL wasn't complete, sorry about that.
Please find the correct link here,
https://plus.google.com/events/c9omal6366f2cfkcd0iuee5ta1o
Just a reminder that this takes place in ~1 hour from now. If you
Oh and this is output of some status commands:
http://termbin.com/bvzz
Mount/umount worked just fine.
Alex
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
- Original Message -
From: Ron Trompert ron.tromp...@surfsara.nl
To: Ben Turner btur...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, April 30, 2015 1:25:42 AM
Subject: Re: [Gluster-users] Poor performance with small files
Hi Ben,
Thanks for the info.
My apologies
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Behrooz Shafiee shafie...@gmail.com, Gluster-users@gluster.org List
gluster-users@gluster.org
Sent: Thursday, April 30, 2015 6:44:11 AM
Subject: Re: [Gluster-users] Poor performance of NFS client for large writes
On 04/30/2015 03:09 PM, gjprabu wrote:
Hi Amukher,
How can we resolve this issue? Do we need to wait for the 3.7 release,
or is there a workaround?
You will have to wait, as this feature is only in 3.7.
Regards,
Prabu
On Thu, 30 Apr 2015 14:49:46 +0530 Atin
- Original Message -
From: Alex ale...@icecat.biz
To: gluster-users@gluster.org
Sent: Thursday, April 30, 2015 6:52:58 AM
Subject: Re: [Gluster-users] Write operations failing on clients
Okay, I did some digging. On the client there were many errors such as:
[2015-04-29
Thanks, that clarifies the write slowdown! But my reads with NFS are as fast as
the GlusterFS native client's. Does that mean the server the NFS mount points at
is actually hosting those files, so there is no extra hop and the same performance?
Thanks,
On 30 Apr 2015 8:27 am, Ben Turner btur...@redhat.com wrote:
-
Hello.
We've been using glusterfs for five months without any problems until
yesterday: suddenly all clients that tried to write something began to hang
in D status (waiting for disk). At the same time, the gluster nodes
began to consume very high CPU, which had never happened before. The htop command
- Original Message -
From: Behrooz Shafiee shafie...@gmail.com
To: Ben Turner btur...@redhat.com
Cc: Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Thursday, April 30, 2015 9:34:31 AM
Subject: Re: [Gluster-users] Poor performance of NFS client for large writes
We are sorry for the inconvenience caused during the hangout session.
There is a network outage at our place. We shall do the recording again
and share the link sometime next week.
Thanks,
Soumya
On 04/30/2015 06:08 PM, Niels de Vos wrote:
On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana
Hi,
Are your PUBLIC_ADDRESSES different from the NODES addresses (they must
not be used on any real interface)? Is the /mnt/datavol mounted on all
nodes?
I also found I had to copy /var/lib/samba/private/* to the
/mnt/datavol/lock area to get samba working.
Cheers
Alex
On 30/04/15
Hi Alex and friends,
I have installed CTDB and all related packages. I did all the required
steps to configure ctdb and samba, but somehow the lockfile
functionality does not work.
*Following is the ctdb configuration:*
CTDB_RECOVERY_LOCK=/mnt/datavaol/lock/lockfile
Are your files split-brained?
gluster v heal img info split-brain
I see a lot of problems with your self-heal daemon connecting:
[2015-04-29 16:15:37.137215] E [socket.c:2161:socket_connect_finish]
0-img-client-4: connection to 192.168.114.185:49154 failed (Connection refused)
[2015-04-29
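Connection refused on a brick port usually means the brick process is down; a hedged checklist for that situation (the volume name img is taken from the log lines above):

```shell
# Which bricks are online, and on which ports?
gluster volume status img

# If a brick process is down, "start force" respawns missing brick
# daemons without touching the ones that are already running:
gluster volume start img force

# Then re-check pending heals and split-brain entries:
gluster volume heal img info
gluster volume heal img info split-brain
```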
Also I see:
/var/log/glusterfs/img-rebalance.log-20150430
[2015-04-29 14:49:40.793369] E [dht-rebalance.c:1515:gf_defrag_fix_layout]
0-img-dht: Fix layout failed for /www/thumbs
[2015-04-29 14:49:40.793625] E [dht-rebalance.c:1515:gf_defrag_fix_layout]
0-img-dht: Fix layout failed for /www
The lock file has to be on the mounted shared Gluster volume. Make sure that GlusterFS
is running.
The CTDB files must be on the same Gluster Volume mounted on each server for it
to work correctly.
Here are my steps.
=== CTDB install
[root@gls001 ~]# yum install ctdb
[root@gls001 ~]# cd /etc/ctdb/
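For reference, the ctdb settings this thread keeps coming back to look roughly like the following; a sketch only, with paths taken from the thread (your volume mount and addresses will differ):

```shell
# /etc/sysconfig/ctdb -- hedged sketch, not a verified config.
# The recovery lock MUST live on the Gluster volume that every CTDB
# node has mounted, so all nodes contend for the same file:
CTDB_RECOVERY_LOCK=/mnt/datavol/lock/lockfile
# Private cluster addresses of the nodes, one per line in this file:
CTDB_NODES=/etc/ctdb/nodes
# Floating public addresses; these must NOT be configured on any real
# interface -- CTDB assigns and moves them itself:
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
```

As reported earlier in the thread, copying /var/lib/samba/private/* into the shared lock area on the Gluster volume was also needed to get Samba working.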
On Thu, 2015-04-30 at 13:28 +1000, Dan Mons wrote:
Specific to Linux, the NFS client uses standard filesystem caching,
which has a few pros and cons of its own.
Native GlusterFS caching uses application-space RAM, and the cache size is a
hard-set number that you must define. In our studio, our standard rollout