1. The GlusterFS server ignores the O_DIRECT flag by default; how can the
server be made to work in direct-io mode?
2. With "mount -t glusterfs XXX:/testvol -o direct-io-mode=enable mountpoint",
the GlusterFS client works in direct-io mode, but the file will still be
cached on the hashed server; how to
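For reference, the client-side mount option from the question plus the server-side volume options would look roughly like this. The volume name testvol is taken from the question; the option names are as documented in recent GlusterFS releases and should be verified against your version with `gluster volume set help`:

```shell
# Client side: bypass the client caches entirely
mount -t glusterfs XXX:/testvol -o direct-io-mode=enable /mnt/testvol

# Server/brick side: pass O_DIRECT through instead of ignoring it
gluster volume set testvol performance.strict-o-direct on
gluster volume set testvol network.remote-dio disable
```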
Yes, same message on gluster03's brick log:
[2016-06-16 10:07:55.619621] E [MSGID: 115059]
[server-rpc-fops.c:811:server_getxattr_cbk] 0-storage-server: 23783173:
GETXATTR
/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc
(0e98a94b-7b86-4a72-88a9-a99a787e059d) ((null)) ==>
No, we had not. As the developer is no longer around and I have not looked
at the code, I was not sure if I should. We have been using the code
changes against 3.6.1 and they work fine.
On Fri, Jun 17, 2016 at 5:19 PM, Joe Julian wrote:
> Have you offered those patches
On 17/06/16 18:01, ABHISHEK PALIWAL wrote:
Hi,
I am using Gluster 3.7.6 and performing plug-in/plug-out of the board, but
I get the following brick logs after plugging the board in again:
[2016-06-17 07:14:36.122421] W [trash.c:1858:trash_mkdir]
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.122487] E
Have you offered those patches upstream?
On June 16, 2016 1:02:24 AM PDT, "B.K.Raghuram" wrote:
>Thanks a lot Atin,
>
>The problem is that we are using a forked version of 3.6.1 which has been
>modified to work with ZFS (for snapshots) but we do not have the
>resources to
Hello Manuel,
as Ravishankar has said, you should use sharding instead of stripe.
Regarding the disperse, the minimum number of servers you would need is
3. Disperse requires at least 3 bricks to create a configuration with
redundancy 1 (this is equivalent to a replica 2 in terms of
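The smallest disperse configuration described above (3 bricks, redundancy 1) would be created with something like the following; the volume name, hostnames, and brick paths are placeholders:

```shell
# 3 bricks total, any 1 of which may fail without losing data
gluster volume create testvol disperse 3 redundancy 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start testvol
```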
I'd tried that sometime back but ran into some merge conflicts and was not
sure who to turn to :) May I come to you for help with that?!
On Fri, Jun 17, 2016 at 3:29 PM, Atin Mukherjee wrote:
>
>
> On 06/17/2016 03:21 PM, B.K.Raghuram wrote:
> > Thanks a ton Atin. That
On 06/17/2016 07:00 AM, Ravishankar N wrote:
which one is faster?
Replica volumes are faster than disperse because there is no erasure-code
math to be done during I/O.
Looks like I over-estimated the performance impact of the math. You might
be interested in reading Xavi's email about it on
On 06/17/2016 03:21 PM, B.K.Raghuram wrote:
Thanks a ton Atin. That fixed the cherry-pick. Will build it and let you
know how it goes. Does it make sense to try and merge the whole upstream
glusterfs repo for the 3.6 branch in order to get all the other bug fixes?
That may bring in many more merge conflicts though.
On Fri, Jun 17, 2016 at
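Merging the whole upstream 3.6 branch into the fork, as proposed above, would look roughly like this. The remote URL and the branch name release-3.6 are assumptions about the upstream repository layout, and the ZFS/snapshot code is the likely conflict site:

```shell
# In the forked 3.6.1 tree
git remote add upstream https://github.com/gluster/glusterfs.git
git fetch upstream
# Merge the upstream stable branch for the 3.6 series
git merge upstream/release-3.6
# Resolve any conflicts (most likely in the snapshot/ZFS code), then:
git commit
```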
Hello,
Thank you for the information.
It will be a killer feature.
Vincent
From: Mohammed Rafi K C
Sent: Thursday, June 16, 2016 11:16:08 AM
To: Vincent Miszczak; gluster-users@gluster.org
Subject: Re: [Gluster-users] Add hot tier brick
On 06/17/2016 12:44 PM, B.K.Raghuram wrote:
> Thanks Atin. I'm not familiar with pulling patches from the review
> system but will try :)
It's not that difficult: open the Gerrit review link, go to the download
drop-down at the top right corner, click on it, and you will see a
cherry-pick option,
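The command that Gerrit's cherry-pick download option generates has roughly this shape; the change number (12345) and patchset number (2) below are placeholders, not the actual review being discussed:

```shell
git fetch https://review.gluster.org/glusterfs refs/changes/45/12345/2 \
    && git cherry-pick FETCH_HEAD
```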
Thanks Atin. I'm not familiar with pulling patches from the review system
but will try :)
On Fri, Jun 17, 2016 at 12:35 PM, Atin Mukherjee
wrote:
On 06/16/2016 06:17 PM, Atin Mukherjee wrote:
>
>
> On 06/16/2016 01:32 PM, B.K.Raghuram wrote:
>> Thanks a lot Atin,
>>
>> The problem is that we are using a forked version of 3.6.1 which has
>> been modified to work with ZFS (for snapshots) but we do not have the
>> resources to port that