If you change permissions on a zero-length, mode 1000 file that has the trusted.dht.linkto attribute set, you'll cause all sorts of issues. If you /need/ to do something with those files, just delete them.
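
To make that concrete, here's a rough sketch of how you might locate
them on a brick before doing anything (the brick path /export/brick1 is
only a placeholder; verify the xattr output before deleting anything):

    # list zero-length, mode 1000 files on the brick and dump their xattrs
    find /export/brick1 -type f -perm 1000 -size 0 \
        -exec getfattr -m 'trusted.*' -d {} \;

    # only once you've confirmed they are pure linkto files:
    # find /export/brick1 -type f -perm 1000 -size 0 -delete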

On 08/22/2013 03:53 PM, ??? wrote:
Sure, there's a bug about an fd leak during rebalance. It has been fixed and I have backported it into our production cluster. As for the permission issue, I have to run the find command too.

On Aug 23, 2013, Justin Dossey wrote:

    I had the same problem after a rebalance (going from 2 bricks to 6
    bricks).  It took about a week to get everything straightened out
    (and I reported details on what I did to fix it in this mailing
    list).

    I dread the next rebalance (going from 6 bricks to 8 bricks)! For
    this rollout, I still have five rebalances remaining before I can
    declare the GlusterFS migration complete.

    To recap,
    1. After a rebalance in which one of my nodes (not the "master"
    node, from which I initiated the rebalance) had to be rebooted due
    to too many open files on the system (caused by the rebalance),
    many files appeared to clients to have 000 or 1000 (---------- or
    ---------T) permissions.  Many of these files could not even be
    chmodded by root over NFS, returning error 576 when I tried.
    2. I found that in many cases, files which had this problem had
    entries on more than two bricks (and my replica count is 2).  The
    entries had different permissions and some were zero-length files.
     It appears that different clients got different entries at
    different times, so one might see a file as inaccessible while
    another could read it without issues.
    3. I wrote a script to remove the zero-length files (and their
    .glusterfs shadow links), and set permissions properly on all the
    files.  Luckily, all the files on my volume have uniform
    permissions (files are all 0644, directories are all 0755).
    4. I ran a find command every ten minutes to find and correct bad
    permissions (a sketch of that kind of find follows this list).  The
    script in (3) didn't appear to have gotten them all for some
    reason.
    5. No more files have appeared with this problem since August 6th.
     I'm still running the find every day.
    6. After the permissions problems appeared to be resolved, I ran a
    check to verify that all the files present on the volume before
    the rebalance were present after the rebalance.  Thankfully, the
    data appears to have all survived.
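
    To give (4) a concrete shape: the following is only a sketch, not
    the actual script I ran, and /mnt/gluster is a placeholder for your
    client mount point.  It assumes the uniform 0644/0755 permissions
    described above and skips zero-length files, since those may be
    DHT link files:

        # fix directory modes first, then regular file modes, from a client mount
        find /mnt/gluster -type d ! -perm 0755 -exec chmod 0755 {} \;
        find /mnt/gluster -type f ! -size 0 ! -perm 0644 -exec chmod 0644 {} \;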

    The only feedback I got on this mailing list was that nothing was
    wrong.


    On Wed, Aug 21, 2013 at 11:21 PM, Vijay Bellur <vbel...@redhat.com> wrote:

        On 08/22/2013 09:12 AM, ??? wrote:

            Hi Joe, thank you, but the sticky permissions are exposed
            to the client side due to a potential bug related to
            glusterfs rebalance.



        Can you please provide output of ls -l that shows these files
        after rebalance?
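
        (For reference, a DHT link file usually shows up in ls -l as
        something like the line below; the name, owner, and timestamp
        are only placeholders:

            ---------T 1 root root 0 Aug 22 10:15 somefile.dat

        i.e. zero length, no permissions, and only the sticky bit set.)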

        -Vijay



            2013/8/20 Joe Julian <j...@julianfamily.org>


                Sticky pointers are normal.  See the extended
                attributes on them to see where they point:

                getfattr -m trusted.* -d $filename
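
                On a link file that will show something along these
                lines (the volume name here is a placeholder, and the
                exact value encoding may differ):

                    # file: somefile
                    trusted.glusterfs.dht.linkto="myvol-replicate-1"

                The linkto value names the subvolume that holds the
                real file.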

                To diagnose your client issue, look in your client log.


                "???" <yongta...@gmail.com <javascript:_e({}, 'cvml',
            'yongta...@gmail.com');> <mailto:yongta...@gmail.com
            <javascript:_e({}, 'cvml', 'yongta...@gmail.com');>>> wrote:

                    Dear gluster experts,

                    We're running glusterfs 3.3 and we have run into
                    file permission problems after a gluster volume
                    rebalance.  Files got sticky permissions
                    (---------T) after the rebalance, which breaks our
                    clients' normal fops unexpectedly.
                    Has anyone seen this issue?
                    Thank you for your help.


                --
                Sent from my Android device with K-9 Mail. Please
                excuse my brevity.




            --
            ???






    --
    Justin Dossey
    CTO, PodOmatic



--
???


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
