Andre Felipe Machado wrote:
Hello,
Try disabling the flush-behind option. It behaves *badly* with slow networks, drives, and machines, and it can cause the lock-up behaviour you cited. Your server got caught in an I/O deadlock.
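For reference, turning flush-behind off in a client volfile might look like this. This is only a sketch assuming glusterfs 2.x volfile syntax; the volume names (wb, client1) are placeholders for your own configuration:

```
# Client-side write-behind layer with flush-behind disabled (sketch).
volume wb
  type performance/write-behind
  option flush-behind off    # do not acknowledge flush() before data is written
  subvolumes client1         # placeholder: your protocol/client volume
end-volume
```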
Regards.
Andre Felipe Machado
http://www.techforce.com.br
Thanks, we tried
Hi Hiren,
You can use the io-threads translator to make effective use of multiple cores.
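A server-side volfile fragment loading io-threads might look like the following. This is a sketch assuming glusterfs 2.x syntax; the volume names and the brick path are placeholders, and the thread count is an assumption to tune for your hardware:

```
# Backend storage volume (sketch).
volume posix1
  type storage/posix
  option directory /export/brick1   # placeholder brick path
end-volume

# io-threads stacked on top, so requests are served by a thread pool.
volume iot
  type performance/io-threads
  option thread-count 8    # assumption: roughly match your core count
  subvolumes posix1
end-volume
```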
regards,
----- Original Message -----
From: Hiren Joshi j...@moonfruit.com
To: gluster-users@gluster.org
Sent: Wednesday, November 18, 2009 6:30:24 PM GMT +04:00 Abu Dhabi / Muscat
Subject: [Gluster-users]
Hi,
Which settings in AFR in gluster directly affect performance when
there are lots of ls and stat calls made on files mounted using
gluster?
Thank you.
-Paras
___
Gluster-users mailing list
Gluster-users@gluster.org
What's the best way to add bricks, and get distribute to use them? I added 2
more bricks, and the total size of the filesystem increased, but I can't get
any traffic on the new disks. I remounted the filesystem and ran an ls -Rl,
but I still don't see any traffic to the disks. I do see
Hi Marek,
Is it possible for you to provide a backtrace of the crash? Can you also file a
bug report at bugs.gluster.com? And if possible, can you upgrade to the 2.8
release and confirm whether you still face the same problem?
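One common way to capture such a backtrace is with gdb against a core file. This is only a sketch; the binary path and core file path below are assumptions that depend on your install and core-dump settings:

```
# Allow core dumps before reproducing the crash:
ulimit -c unlimited

# After the crash, point gdb at the glusterfs binary and the core file
# and dump a full backtrace to a file you can attach to the bug report:
gdb --batch -ex "bt full" /usr/sbin/glusterfs /path/to/core > backtrace.txt
```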
regards,
----- Original Message -----
From: Marek m...@kis.p.lodz.pl
To:
Hi Thomas,
If you are creating files in directories that already existed at the time of
adding a brick, the new brick is not considered for distribution, since the
layout for distributing files in those directories has already been
constructed (and stored on the directories using extended attributes).
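To illustrate why new bricks are ignored for existing directories, here is a toy model of hash-range placement. This is only a sketch of the idea; it is not Gluster's actual hash function, option names, or on-disk layout format:

```python
# Toy model of DHT-style placement: each directory stores a layout that
# splits the 32-bit hash space across the bricks that existed when the
# layout was built. Files hash into one of those ranges.
import zlib


def make_layout(bricks):
    """Split the 32-bit hash space evenly across the given bricks."""
    span = 2**32 // len(bricks)
    layout = []
    for i, name in enumerate(bricks):
        # Last brick absorbs any remainder so the whole space is covered.
        hi = 2**32 - 1 if i == len(bricks) - 1 else (i + 1) * span - 1
        layout.append((name, i * span, hi))
    return layout


def place(layout, filename):
    """Return the brick whose hash range contains this filename's hash."""
    h = zlib.crc32(filename.encode()) & 0xFFFFFFFF
    for name, lo, hi in layout:
        if lo <= h <= hi:
            return name


# Layout built when the directory had only two bricks:
old_layout = make_layout(["brick1", "brick2"])

# Adding brick3 later does not rewrite the stored layout, so new files
# in this directory still land only on brick1 or brick2:
print(place(old_layout, "file-a.txt"))  # always brick1 or brick2, never brick3
```

Rebuilding the layout (what later releases expose as a "fix-layout"/rebalance step) is what lets existing directories start using the new brick.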
This seems to have helped, thanks.
On Nov 19, 2009, at 12:12 PM, Amar Tumballi wrote:
What's the best way to add bricks, and get distribute to use them? I
added 2 more bricks, and the total size increased for the filesystem,
but I can't get any traffic on the new disks. I remounted the
Hi Vikas,
Thanks for the response.
I am having performance issues after migrating to gluster. The following
should explain the issue.
There are scripts running which access data from the gluster mount
point. Earlier, NFS was used. With NFS, the scripts seemed to have no
problem getting completed