I had a question on the expected behaviour of simple distributed volumes
when a brick fails, for the following scenarios (as in, will the scenario
succeed or fail):
- New file creation if that file name hashes to the failed brick
- New file creation if that file name hashes to one of the remaining
issue, please get back to us
> with the logs.
>
> --
> Regards,
> Manikandan Selvaganesh.
>
>
> On Wed, Jul 27, 2016 at 10:51 AM, Manikandan Selvaganesh <
> mselv...@redhat.com> wrote:
>
>> Hi Ram,
>>
>> Apologies. I was stuck on something else. I will up
quota would not work properly. This quota-version
> was introduced recently and adds a suffix to the quota-related extended
> attributes.
>
> On Jul 25, 2016 6:36 PM, "B.K.Raghuram" wrote:
>
>> Manikandan,
>>
>> We just overwrote the setup with a fresh install an
you
> are willing to turn on the bind-insecure option, then I do see this problem
> going away.
>
> P.S.: This option has been turned on by default in 3.7.
>
> ~Atin
>
> On Sun, Jul 24, 2016 at 8:51 PM, Atin Mukherjee
> wrote:
>
>> Will have a look at the logs
Jul 25, 2016 at 5:35 PM, Atin Mukherjee
> wrote:
>
>>
>>
>> On Mon, Jul 25, 2016 at 4:37 PM, B.K.Raghuram wrote:
>>
>>> Atin,
>>>
>>> Couple of quick questions about the upgrade and in general about the
>>> meaning of some of the p
sulted in a checksum mismatch, leading to
> peer rejection. But we can confirm it from the log files and the respective info
> file content.
>
>
> On Saturday 23 July 2016, B.K.Raghuram wrote:
>
>> Unfortunately, the setup is at a customer's place which is not remotely
>
> On Sun, Jul 24, 2016 at 8:51 PM, Atin Mukherjee
> wrote:
>
>> Will have a look at the logs tomorrow.
>>
>>
>> On Sunday 24 July 2016, B.K.Raghuram wrote:
>
>
> On Friday 22 July 2016, B.K.Raghuram wrote:
>
>> When we upgrade some nodes from 3.6.1 to 3.7.13, some of the nodes give a
>> peer status of "peer rejected" while some don't. Is there a reason for this
>> discrepancy and will the steps mentioned in
When we upgrade some nodes from 3.6.1 to 3.7.13, some of the nodes give a
peer status of "peer rejected" while some don't. Is there a reason for this
discrepancy and will the steps mentioned in
http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
I have not gone through this implementation, nor the new iSCSI
implementation being worked on for 3.9, but I thought I'd share the design
behind a distributed iSCSI implementation that we'd worked on some time
back, based on the istgt code with a libgfapi hook.
The implementation used the idea of usi
This is probably a naive question but could I know from which version
onwards was support for extended ACLs supported? Specifically, I wanted to
know from which version would all setfacl/getfacl commands work as expected.
Also, a broader question emanating from this is: would it be useful for a
fea
On June 16, 2016 1:02:24 AM PDT, "B.K.Raghuram" wrote:
>
>> Thanks a lot Atin,
>>
>> The problem is that we are using a forked version of 3.6.1 which has been
>> modified to work with ZFS (for snapshots) but we do not have the resources
>> to port that ov
I'd tried that sometime back but ran into some merge conflicts and was not
sure who to turn to :) May I come to you for help with that?!
On Fri, Jun 17, 2016 at 3:29 PM, Atin Mukherjee wrote:
>
>
> On 06/17/2016 03:21 PM, B.K.Raghuram wrote:
> > Thanks a ton Atin. That fix
:07 PM, Atin Mukherjee wrote:
> I've resolved the merge conflicts and files are attached. Copy these
> files and follow the instructions from the cherry pick command which
> failed.
>
> ~Atin
>
> On 06/17/2016 02:55 PM, B.K.Raghuram wrote:
> >
> > Thanks At
Thanks Atin.. I'm not familiar with pulling patches from the review system but
will try :)
On Fri, Jun 17, 2016 at 12:35 PM, Atin Mukherjee
wrote:
>
>
> On 06/16/2016 06:17 PM, Atin Mukherjee wrote:
> >
> >
> > On 06/16/2016 01:32 PM, B.K.Raghuram wrote:
> >>
,
-Ram
On Thu, Jun 16, 2016 at 11:02 AM, Atin Mukherjee
wrote:
>
>
> On 06/16/2016 10:49 AM, B.K.Raghuram wrote:
> >
> >
> > On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee wrote:
> >
> >
> >
On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee wrote:
>
>
> On 06/15/2016 04:24 PM, B.K.Raghuram wrote:
> > Hi,
> >
> > We're using gluster 3.6.1 and we periodically find that gluster commands
> > fail, saying that it could not get the lock on one of th
Hi,
We're using gluster 3.6.1 and we periodically find that gluster commands
fail, saying that it could not get the lock on one of the brick machines. The
logs on that machine then say something like:
[2016-06-15 08:17:03.076119] E [glusterd-op-sm.c:3058:glusterd_op_ac_lock]
0-management: Unable t
I just wanted to check if gluster replace brick commit force is
"officially" deprecated in 3.6? Is there any other way to do a planned
replace of just one of the bricks in a replica pair? Add/remove brick
requires that new bricks be added in replica-count multiples, which may not
always be available
I had come across this suggestion some time back. Is this a valid way to go
about replacing a brick, or is it not safe?
http://blog.dave.vc/2013/08/replacing-lost-brick-in-gluster.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://superc
How does one figure out which node is holding the lock? A restart of
glusterd on all the nodes in the pool did not seem to resolve the problem.
On Fri, May 16, 2014 at 5:39 PM, Vijay Bellur wrote:
> On 05/16/2014 12:24 PM, B.K.Raghuram wrote:
>
>> A hard accidental power down of o
A hard accidental power down of our boxes now results in the message
"Another transaction could be in progress. Please try again after
sometime." for most volume operations. I'm presuming this is the result of
some sort of cluster lock. How does one get around the problem? Where are
such lock files
I'm trying to use the libgfapi python bindings to write a django app
that can traverse a gluster volume. The script seems to work fine when
run from a command line but seg faults when run from within django.
Would anyone know why?
Also, there seems to be no neat way to differentiate between a
dire
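On the directory-versus-file point, a minimal sketch using only the stdlib: the usual approach is to test the mode bits returned by lstat(). The same S_ISDIR/S_ISREG checks apply to any stat-like result, including (assuming it exposes st_mode) the one returned by the libgfapi Python binding's lstat(); the demo below runs against a throwaway local directory only.

```python
import os
import stat
import tempfile

def classify(path):
    # Inspect mode bits from lstat(); S_ISDIR/S_ISREG distinguish
    # directories from regular files without following symlinks.
    mode = os.lstat(path).st_mode
    if stat.S_ISDIR(mode):
        return "dir"
    if stat.S_ISREG(mode):
        return "file"
    return "other"

# Demo on a local temp directory; with a gfapi stat result the same
# checks would run against its st_mode field.
with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "a.txt")
    open(f, "w").close()
    print(classify(d), classify(f))  # dir file
```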
Just pulled down the latest rpms from
http://download.gluster.org/pub/gluster/glusterfs/samba/ and post
install, it does not seem to include winbindd, which is needed for AD
authentication. Could these be packaged in as well?
Hi,
We have built samba 4.1 from source with the gluster vfs module
enabled. I am able to access (read) and browse a volume from a windows
machine. However, when I try to create or edit a file that resides on
the volume from a windows box, it hangs forever. On the backend, I see
that many temporar
Is there any documentation on the semantics of these return values for
common gluster commands? If not, is there a common guideline on how to
interpret these values?
Thanks in advance..
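I'm not aware of a single document covering the return values, but with --xml output the result is machine-readable: commands wrap their status in opRet/opErrno/opErrstr elements. A minimal sketch of checking those fields; the sample XML below is illustrative only (the opErrno value is made up), not captured from a real run:

```python
import xml.etree.ElementTree as ET

# Illustrative shape of a failed command's --xml output; the error
# string is borrowed from a real gluster message, the errno is invented.
SAMPLE_FAILURE = """<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30800</opErrno>
  <opErrstr>Another transaction could be in progress. Please try again after sometime.</opErrstr>
</cliOutput>"""

def check(xml_text):
    """Return (opRet, opErrstr-or-None) from a gluster --xml document."""
    root = ET.fromstring(xml_text)
    ret = int(root.findtext("opRet"))
    if ret != 0:
        return ret, root.findtext("opErrstr")
    return 0, None

print(check(SAMPLE_FAILURE)[0])  # -1
```

The plain CLI exit status follows the usual shell convention (0 on success, non-zero on failure), so scripts can fall back on that when --xml is unavailable.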
We have a gigabit ethernet lan on which there is no other traffic and
I am getting the following numbers when I do a remove-brick. The
sequence of steps is that I create a 2 way replicated volume, populate
it with 300 files totalling 100MB. I then add a pair of bricks to the
volume and then a remov
Here are the steps that I did to reproduce the problem. Essentially,
if you try to remove a brick that is not the same as the localhost
then it seems to migrate the files on the localhost brick instead, and
hence there is a lot of data loss. If instead I try to remove the
localhost brick, it works
I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
did the following sequence of steps and ended up with losing data so
what did I do wrong?!
- Create a distributed volume with bricks on n9 and n10
- Started the volume
- NFS mounted the volume and created 100 files on it. Found th
Hi,
Given the recent threads about remove-brick vs replace brick, I was
wondering if someone could throw some light on the best ways to handle
each of the following situations where each brick/volume could contain
large amounts of data: In all these cases, the new brick will have a
different host
I am trying to get the list of peers in the pool using the --xml
output of the gluster peer status command. I would also like to include
the current host where the command is being run, if it
is part of the pool. Is there any way of doing this?
Since gluster peer status does not give me
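A sketch of pulling hostnames out of the peer-status XML with the stdlib. The sample below only shows the shape I'd expect (element names may differ across gluster versions), and since peer status does not list the local node, it has to be added separately, e.g. via socket.getfqdn():

```python
import xml.etree.ElementTree as ET

# Abridged, assumed shape of `gluster peer status --xml` output.
SAMPLE = """<cliOutput>
  <opRet>0</opRet>
  <peerStatus>
    <peer><hostname>n10</hostname><connected>1</connected></peer>
    <peer><hostname>n11</hostname><connected>1</connected></peer>
  </peerStatus>
</cliOutput>"""

def peers_from_xml(xml_text):
    """Return the hostnames listed under each <peer> element."""
    root = ET.fromstring(xml_text)
    return [p.findtext("hostname") for p in root.iter("peer")]

print(peers_from_xml(SAMPLE))  # ['n10', 'n11']
```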
Hi,
Just wanted to know if the sizeTotal and sizeFree figures returned by
gluster volume status represent data for the individual bricks or for
the whole volume. The reason for the doubt is that it is returned at the
node level in the XML output.
Also, there is a 1 in the node level. What does this
r
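For what it's worth, sizeTotal/sizeFree sit under each node element, i.e. per brick; summing them gives a rough volume-wide figure for a pure distribute volume (for replicated volumes the sum double-counts). A sketch with invented, illustrative numbers — real output reports bytes:

```python
import xml.etree.ElementTree as ET

# Abridged, assumed shape of `gluster volume status <vol> detail --xml`;
# the sizes here are made-up small numbers for readability.
SAMPLE = """<cliOutput>
  <volStatus><volumes><volume>
    <node><hostname>n9</hostname><path>/bricks/b1</path>
          <sizeTotal>1000</sizeTotal><sizeFree>400</sizeFree></node>
    <node><hostname>n10</hostname><path>/bricks/b2</path>
          <sizeTotal>1000</sizeTotal><sizeFree>700</sizeFree></node>
  </volume></volumes></volStatus>
</cliOutput>"""

def totals(xml_text):
    """Sum per-brick sizeTotal/sizeFree across all <node> elements."""
    root = ET.fromstring(xml_text)
    total = sum(int(n.findtext("sizeTotal")) for n in root.iter("node"))
    free = sum(int(n.findtext("sizeFree")) for n in root.iter("node"))
    return total, free

print(totals(SAMPLE))  # (2000, 1100)
```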
For someone who is implementing gluster for the first time, is there a list
of best practices for common scenarios? For example, what are the series of
steps to be done if one notices that a node is starting to fail, or what
does one do to add new storage capacity to a volume? While the admin guide
l
Hi,
I would like to get the replica or stripe count of a volume from a
script. Is there a command to get this info without having to parse
the output of the volume info command? Or can I parse through a config
file somewhere to get this?
Thanks,
-Ram
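One option is the --xml output again: volume info carries replica/stripe count fields that can be read without scraping the plain-text output. A sketch against a sample document (field names as I recall them — worth verifying against your version's actual output):

```python
import xml.etree.ElementTree as ET

# Assumed shape of `gluster volume info --xml` output (abridged).
SAMPLE = """<cliOutput>
  <volInfo><volumes>
    <volume>
      <name>testvol</name>
      <replicaCount>2</replicaCount>
      <stripeCount>1</stripeCount>
    </volume>
  </volumes></volInfo>
</cliOutput>"""

def counts(xml_text, volname):
    """Return (replica, stripe) counts for the named volume."""
    root = ET.fromstring(xml_text)
    for vol in root.iter("volume"):
        if vol.findtext("name") == volname:
            return (int(vol.findtext("replicaCount")),
                    int(vol.findtext("stripeCount")))
    raise KeyError(volname)

print(counts(SAMPLE, "testvol"))  # (2, 1)
```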