Hi list,
We have an ancient hack whereby fuse does not
just pass on the statvfs data it gets
from the storage, but tweaks it by setting
f_bsize / f_frsize to values of its own
preference. [1]
The supposed advantage is that f_bsize
serves as a hint to applications about the
preferred I/O size. (And re
On 2018-03-06, Amar Tumballi wrote:
>> If anyone would like our test scripts, I can either tar them up and
>> email them or put them in github - either is fine with me. (they rely
>> on current builds of docker and docker-compose)
>>
>>
> Sure, sharing the test cases makes it very easy for us to s
Hi Niels,
On Fri, Aug 11, 2017 at 2:33 PM, Niels de Vos wrote:
> On Fri, Aug 11, 2017 at 05:50:47PM +0530, Ravishankar N wrote:
[...]
>> To me it looks like fadvise (mm/fadvise.c) affects only the linux page cache
>> behavior and is decoupled from the filesystem itself. What this means for
>> fus
On 2013-03-21, Csaba Henk wrote:
>
> This behavior is confirmed -- it's exactly reproducible.
>
> I'll try to get back to you tomorrow with an update. If that won't happen
> (because I'm not getting any cleverer...)
> then I can chime back only after the 4th of April, I
Hi Samuli,
On 2013-03-20, Samuli Heinonen wrote:
>
> Dear all,
>
> I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a
> test system and it doesn't have
> much data or anything important in it. Currently it has only 2 VM's running
> and disk usage is around 15 GB. I
Hi Carl,
On 2011-07-07, Carl Chenet wrote:
> On 07/07/2011 15:25, Kaushik BV wrote:
>> Hi Chaica,
>>
>> This primarily means that the RPC communication between the master
>> gsyncd module and slave gsyncd module is broken; this could happen for
>> various reasons. Check if it satisfies all the pre
plication that is stored
> in: /etc/glusterd/geo-replication/secret.pem?
>
> This second one is apparently the correct way. It took support correcting
> me to fix that for me.
>
> -greg
>
> gluster-users-boun...@gluster.org wrote on 06/30/2011 09:43:03 AM:
>
>>
>>
/Gluster_3.2:_Troubleshooting_Geo-replication
Csaba
On Thu, Jun 30, 2011 at 4:43 PM, Adrian Carpenter wrote:
> Yes I can ssh between all the boxes without password as root.
>
>
> On 30 Jun 2011, at 15:27, Csaba Henk wrote:
>
>> It seems that the connection gets dropped
> [2011-06-30 12:36:05.839916] I [monitor(monitor):43:monitor] Monitor:
> starting gsyncd worker
> [2011-06-30 12:36:05.905232] I [gsyncd:286:main_i] : syncing:
> gluster://localhost:user-volume -> file:///geo-tank/user-vo
Hi Adrian,
On Tue, Jun 28, 2011 at 12:04 PM, Adrian Carpenter wrote:
> Thanks Csaba,
>
> So far as I am aware nothing tampered with the xattrs, and all the bricks
> etc are time synchronised. Anyway I did as you suggest, now for one volume
> (I have three being geo-rep'd) I consistently ge
Hi,
This means that the geo-replication indexing ("xtime" extended attributes)
has gone inconsistent. If these xattrs weren't tampered with by an outside
actor (i.e. anything that is not the gsyncd process spawned upon the
"geo-replication start", and its children), then this happens if the clock
o
On 05/17/11 13:04, anthony garnier wrote:
Hi,
I've put the client log in debug mode:
# gluster volume geo-replication /soft/venus config log-level DEBUG
geo-replication config updated successfully
# gluster volume geo-replication /soft/venus config log-file
/usr/local/var/log/glusterfs/geo-repli
Lakshmi,
On 05/16/11 17:32, Lakshmipathi.G wrote:
Hi -
Do you have passwordless ssh login to the slave machine? After setting up
passwordless login, please try this -
#gluster volume geo-replication athena root@$(hostname):/soft/venus start
or
#gluster volume geo-replication athena $(hostname):/soft
On 05/16/11 17:06, anthony garnier wrote:
Hi,
I'm currently trying to use geo-rep on the local data-node into a
directory, but it fails with status "faulty"
[...]
I've done this cmd :
# gluster volume geo-replication athena /soft/venus config
# gluster volume geo-replication athena /soft/venus
On 2011-05-12, Cedric Lagneau wrote:
> My initial problem on the testing platform is not solved: the glusterd
> geo-replication command stops working after about one day.
>
> On Master:
> #cat ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest2.log
> [2011-05-12 10:50:53.451495] I [m
On 2011-04-29, Leon Meßner wrote:
> On Mon, Apr 25, 2011 at 10:54:24PM +0530, Venky Shankar wrote:
>> On Monday 25 April 2011 02:12 AM, Leon Meßner wrote:
>> > Hi,
>> >
>> > I wanted to know if anyone succeeded in building glusterfs-3.1.4 on
>> > FreeBSD. The problems will probably be manifold. On
On Tue, May 3, 2011 at 5:03 PM, Csaba Henk wrote:
> [repost for the ML after subscription, pls. reply to this one]
>
> Hi,
>
> On Tue, May 3, 2011 at 4:25 PM, Kaushik BV wrote:
>> to locate the slave log-file do the following:
>> execute this
[repost for the ML after subscription, pls. reply to this one]
Hi,
On Tue, May 3, 2011 at 4:25 PM, Kaushik BV wrote:
> to locate the slave log-file do the following:
> execute this command on the slave domain:
> #gluster volume geo-replication
> ssh://r...@slave.mydomain.com:file:
Hi Richard,
On 2010-05-12, Richard Crane wrote:
> My attempts on two systems to compile v 3.0.4 fail with the following
> errors -- has anyone been successful?
OS X related changes are in the git tree now, you can pull from:
git://git.gluster.com/glusterfs.git
or you may wait until 3.0.5 whi
Hi Christopher,
On 2010-05-11, Christopher Nelson wrote:
> There appears to be a race condition or a cycle with autofs and gluster
> 3.0.4.
>
> When gluster tries to stat the mount point in fuse-bridge.c, it hangs.
> When I comment out the code in lines: 3389-3415 it hangs on the call to
> mo