On Wednesday 16 July 2014 10:18 AM, David Raffelt wrote:
Hi Raghavendra,
No.
Thanks,
Dave



According to the cmd_log_history file (a hidden file in the log directory that records the CLI commands executed on that peer), a rebalance is running, or was run, on the volume:

[2013-12-17 03:08:59.081232]  : volume rebalance data start : SUCCESS
[2013-12-17 03:09:14.631826]  : volume rebalance data status : SUCCESS
[2013-12-17 03:09:22.761097]  : volume rebalance data status : SUCCESS
[2013-12-17 03:09:27.748014]  : volume rebalance data status : SUCCESS
[2013-12-17 03:09:28.839242]  : volume rebalance data status : SUCCESS
[2013-12-17 03:10:39.982747]  : volume rebalance data status : SUCCESS
[2013-12-17 03:14:30.919676]  : volume rebalance data status : SUCCESS
[2013-12-17 03:14:33.772300]  : volume rebalance data status : SUCCESS
[2013-12-17 03:29:14.467954]  : volume rebalance data status : SUCCESS
[2013-12-17 03:29:43.303852]  : volume rebalance data status : SUCCESS
[2013-12-17 03:30:04.309054]  : volume rebalance data status : SUCCESS
[2013-12-17 04:35:45.631119]  : volume rebalance data status : SUCCESS
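
For reference, the same information can be checked directly on a peer. This is only a sketch; the .cmd_log_history path below is an assumption based on the default glusterd log directory, and the file name and location can differ between releases:

# show the CLI commands that were run on this peer
$ sudo cat /var/log/glusterfs/.cmd_log_history
# check whether a rebalance is still in progress on the "data" volume
$ sudo gluster volume rebalance data status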


Here is what I think has happened. As part of the rebalance, the layout of some directories may have changed, and distribute (DHT) tries to repair the layout by performing a self-heal when a lookup is done on such a directory. Distribute performs this self-heal as root. But when the client's requests reach the brick process, requests from root are by default squashed to nfsnobody (uid 65534), and that uid does not have permission to modify (in this case, self-heal) a directory that the brick sees as owned by root. So the self-heal does not complete properly, and as a result some operations on that directory (in this case, renaming a file inside it) fail.
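
If root squashing is indeed the cause, one possible workaround (only a sketch, not a verified fix; using find to trigger lookups is my assumption) would be to temporarily disable root squashing so the layout self-heal can complete, and then turn it back on:

# allow root through to the bricks so the directory self-heal can succeed
$ sudo gluster volume set data server.root-squash disable
# trigger a lookup (and hence a layout self-heal) on each directory
# from a native client mount, e.g. by stat'ing every directory under /home
$ find /home -type d -exec stat {} + > /dev/null
# turn root squashing back on afterwards
$ sudo gluster volume set data server.root-squash enable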

Dave,
Please let me know if I have missed anything. This is my observation based on the log files.

CCing Raghavendra G who might be able to clarify whether this is what happened.

Regards,
Raghavendra Bhat

On 16 July 2014 14:47, Raghavendra Bhat <rab...@redhat.com> wrote:

    On Tuesday 15 July 2014 01:57 PM, David Raffelt wrote:
    Hi Raghavendra,
    Thanks for looking into this. Attached are the log files from the
    3 peers. The glusterfs server is running on "Beauty". All 3
    peers mount the volume with the native gluster client at /home.
    Each peer has a direct connection to every other peer, addressable
    via the /etc/hosts file.

    Note that I do not see any new output in the log when this error
    occurs.  Also note that I tried to replicate this issue on Ubuntu
    14.04 with a single brick and could not replicate it.

    Below is some more output that might help.
    Thanks!
    Dave



    *dave@beauty:~$ glusterfs --version*
    glusterfs 3.5git built on Jun 30 2014 15:58:19
    Repository revision: git://git.gluster.com/glusterfs.git
    Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
    GlusterFS comes with ABSOLUTELY NO WARRANTY.
    It is licensed to you under your choice of the GNU Lesser
    General Public License, version 3 or any later version (LGPLv3
    or later), or the GNU General Public License, version 2 (GPLv2),
    in all cases as published by the Free Software Foundation.


    *dave@beauty:~$ uname -r*
    3.15.4-1-ARCH


    *dave@beauty:~$ sudo gluster volume info *
    Volume Name: data
    Type: Distribute
    Volume ID: 1d5948c7-9b7a-40ca-8aa7-85c74bcef3bc
    Status: Started
    Number of Bricks: 3
    Transport-type: tcp
    Bricks:
    Brick1: beauty:/export/beauty
    Brick2: beast:/export/beast
    Brick3: benji:/export/benji
    Options Reconfigured:
    performance.cache-size: 32MB
    performance.write-behind-window-size: 1MB
    auth.allow: 172.30.25.173,172.30.25.158,172.30.25.234,172.30.26.76,172.30.26.77,192.168.0.1,192.168.1.1,192.168.1.2,192.168.2.2,192.168.3.2,192.168.4.1,192.168.4.2,192.168.5.1,192.168.5.2
    nfs.disable: off
    diagnostics.brick-log-level: ERROR
    diagnostics.client-log-level: ERROR
    server.root-squash: enable




    Hi Dave,

    Was rebalance running when you did the above operations?


    Regards,
    Raghavendra Bhat

    On 15 July 2014 15:29, Raghavendra Bhat <rab...@redhat.com> wrote:

        On Monday 14 July 2014 09:10 PM, Pranith Kumar Karampuri wrote:
        CCed Raghavendra Bhat who may know about the issue

        Pranith
        On 07/14/2014 08:01 PM, Joe Julian wrote:
        https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

        Please file a bug report.

        On July 14, 2014 12:38:11 AM PDT, David Raffelt
        <d.raff...@brain.org.au> wrote:

            Hi All,
            After a recent update to gluster 3.5 we are having some
            issues renaming files when root squashing is enabled
            and the folder's group permissions do not include write.

            For example, if I create a folder with the following
            permissions:
            $ mkdir test
            $ chmod g-w test
            $ ls -l
            drwxr-xr-x  2 dave dave  22 Jul 14 17:16 test

            When I create a file /within/ this folder, and try to
            rename it I get a file permissions error.

            $ cd test
            $ touch asdf
            $ mv asdf asdf2
            mv: cannot move ‘asdf’ to ‘asdf2’: Permission denied

            A strace on the mv command reveals the rename system
            call fails with:
            rename("asdf", "asdf2") = -1 EACCES (Permission denied)

            However I can copy the file and delete the old one fine.

            If I either disable gluster root squashing, or change
            the test folder's group permissions to allow write, then I
            can rename the file without any problems.
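
            In other words, either of the following makes the rename
            succeed again (the volume name "data" is from our setup;
            treat these as illustrative commands only):

            # option 1: turn off root squashing on the volume
            $ sudo gluster volume set data server.root-squash disable

            # option 2: grant group write permission on the parent directory
            $ chmod g+w test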

            System details are:
            Arch Linux
            System umask is set to 002
            Distributed volume, 3 peers, 1 brick per peer.

            Any help is much appreciated!
            Dave



        Hi Dave,

        Can you please provide the brick and client log files? Which
        client were you using, FUSE or NFS?

        Regards,
        Raghavendra Bhat




        --
        Sent from my Android device with K-9 Mail. Please excuse my brevity.












--
*David Raffelt (PhD)*
Postdoctoral Fellow

The Florey Institute of Neuroscience and Mental Health
Melbourne Brain Centre - Austin Campus
245 Burgundy Street
Heidelberg Vic 3084
Ph: +61 3 9035 7024
www.florey.edu.au

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
