On 07/19/2017 08:02 PM, Sahina Bose wrote:
[Adding gluster-users]

On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jag...@gmail.com> wrote:

    Hi all,

    We have a hyperconverged oVirt cluster with hosted engine on 3
    fully replicated nodes. This cluster has 2 Gluster volumes:

    - data: volume for the Data (Master) Domain (for VMs)
    - engine: volume for the hosted_storage Domain (for the hosted engine)

    We have this problem: the "engine" Gluster volume always has
    unsynced elements and we can't fix the problem. On the command line
    we have tried to use the "heal" command, but the elements always
    remain unsynced.

    Below is the "status" reported by the heal command:

    [root@node01 ~]# gluster volume heal engine info
    Brick node01:/gluster/engine/brick
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
    /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
    /__DIRECT_IO_TEST__
    Status: Connected
    Number of entries: 12

    Brick node02:/gluster/engine/brick
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
    <gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f>
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
    <gfid:1e309376-c62e-424f-9857-f9a0c3a729bf>
    <gfid:e3565b50-1495-4e5b-ae88-3bceca47b7d9>
    <gfid:4e33ac33-dddb-4e29-b4a3-51770b81166a>
    /__DIRECT_IO_TEST__
    <gfid:67606789-1f34-4c15-86b8-c0d05b07f187>
    <gfid:9ef88647-cfe6-4a35-a38c-a5173c9e8fc0>
    /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
    <gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
    <gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
    Status: Connected
    Number of entries: 12

    Brick node04:/gluster/engine/brick
    Status: Connected
    Number of entries: 0


    Running "gluster volume heal engine" doesn't solve the problem...
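    (i.e., roughly the following; the "full" variant is only listed here
    as the standard alternative, for reference:

        gluster volume heal engine
        gluster volume heal engine full
        gluster volume heal engine info
    )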


1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain about any of these files?
2. Are these 12 files also present in the 3rd data brick?
3. Can you provide the output of `gluster volume info` for this volume?
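
For example, something along these lines on each node (just a sketch; it assumes the default log directory /var/log/glusterfs and uses the brick path and file names from your heal output above):

    # 1. watch the self-heal daemon log while re-running the heal
    tail -f /var/log/glusterfs/glustershd.log

    # 2. check whether one of the listed files is present on the 3rd brick
    ls -l /gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids

    # 3. volume configuration
    gluster volume info engine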

    Some extra info:

    We have recently changed the Gluster setup from 2 (fully replicated)
    + 1 arbiter to a 3-node fully replicated cluster,


Just curious, how did you do this? A `remove-brick` of the arbiter brick followed by an `add-brick` to increase to replica 3?
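
(Something along these lines, with hypothetical hostnames for the arbiter node and the new brick:

    gluster volume remove-brick engine replica 2 <arbiter-node>:/gluster/engine/brick force
    gluster volume add-brick engine replica 3 <new-node>:/gluster/engine/brick
)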

Thanks,
Ravi

    but I don't know if this is the problem...

    The "data" volume is good and healty and have no unsynced entry.

    oVirt refuses to put node02 and node01 into "maintenance mode"
    and complains about "unsynced elements".

    How can I fix this?
    Thank you




_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
