Thanks Strahil,

Head and tail of the info command:

   gluster volume heal MRIData info
   Brick hydra1:/gluster1/data
   Status: Connected
   Number of entries: 0

   Brick hydra1:/gluster2/data
   Status: Connected
   Number of entries: 0

   Brick hydra1:/arbiter/1
   Status: Connected
   Number of entries: 0

   Brick hydra1:/gluster3/data
   Status: Connected
   Number of entries: 0

   Brick hydra2:/gluster1/data
   Status: Connected
   Number of entries: 0

   Brick hydra1:/arbiter/2
   Status: Connected
   Number of entries: 0

   Brick hydra2:/gluster2/data
   Status: Connected
   Number of entries: 0

   Brick hydra2:/gluster3/data
   Status: Connected
   Number of entries: 0

   Brick hydra2:/arbiter/1
   Status: Connected
   Number of entries: 0

   Brick hydra3:/gluster1/data
   Status: Connected
   Number of entries: 0

   Brick hydra3:/gluster2/data
   Status: Connected
   Number of entries: 0

   Brick hydra3:/arbiter/1
   Status: Connected
   Number of entries: 0

   Brick hydra3:/gluster3/data
   [...]
   Status: Connected
   Number of entries: 18240

   Brick hydra4:/gluster2/data
   Status: Connected
   Number of entries: 0

   Brick hydra4:/gluster3/data
   Status: Connected
   Number of entries: 0

   Brick hydra4:/arbiter/1
   Status: Connected
   Number of entries: 0

   {hydra4}~: gluster volume heal MRIData info|head
   Brick hydra1:/gluster1/data
   Status: Connected
   Number of entries: 0

   Brick hydra1:/gluster2/data
   Status: Connected
   Number of entries: 0

   Brick hydra1:/arbiter/1
   Status: Connected

Looking at this more carefully, it would seem that all the errors are on hydra3:/gluster3/data. Am I reading this correctly?
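For what it's worth, a quick awk filter makes the odd brick stand out by printing only bricks with a non-zero entry count. The heredoc below is just sample data for illustration; on the cluster you would pipe the real command output instead:

```shell
# Print "brick: count" for every brick with pending heal entries.
# Replace the heredoc with the real command on the cluster:
#   gluster volume heal MRIData info | awk '...'
awk '
/^Brick /                          { brick = $2 }        # remember current brick
/^Number of entries:/ && $NF != 0  { print brick ": " $NF }  # report non-zero counts
' <<'EOF'
Brick hydra3:/gluster2/data
Status: Connected
Number of entries: 0

Brick hydra3:/gluster3/data
Status: Connected
Number of entries: 18240
EOF
```

With the sample above this prints only `hydra3:/gluster3/data: 18240`, which matches what I see in the full output.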

On 7/30/21 5:39 PM, Strahil Nikolov wrote:

What is the 'gluster volume heal info summary' output ?

Best Regards,
Strahil Nikolov

    On Fri, Jul 30, 2021 at 23:44, Valerio Luccio
    <valerio.luc...@nyu.edu> wrote:

    Hello all,

    I have a gluster (v. 5.13) on 4 CentOS 7.8 nodes. I recently had
    hardware problems on the RAIDs. I was able to get it back, but I
    noticed some odd things, so I did a "gluster volume heal info" and
    found a ton of errors. When I tried to do "gluster volume heal" I
    got the message:

        Launching heal operation to perform index self heal on volume MRIData has been unsuccessful:
        Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.

    When I look at /var/log/glusterfs/glustershd.log it hasn't changed
    since this morning, so I'm not sure how to interpret the above
    message. What am I supposed to look for in the log file ?

    Here's a dump of the volume setup:

        Volume Name: MRIData
        Type: Distributed-Replicate
        Volume ID: e051ac20-ead1-4648-9ac6-a29b531515ca
        Status: Started
        Snapshot Count: 0
        Number of Bricks: 6 x (2 + 1) = 18
        Transport-type: tcp
        Bricks:
        Brick1: hydra1:/gluster1/data
        Brick2: hydra1:/gluster2/data
        Brick3: hydra1:/arbiter/1 (arbiter)
        Brick4: hydra1:/gluster3/data
        Brick5: hydra2:/gluster1/data
        Brick6: hydra1:/arbiter/2 (arbiter)
        Brick7: hydra2:/gluster2/data
        Brick8: hydra2:/gluster3/data
        Brick9: hydra2:/arbiter/1 (arbiter)
        Brick10: hydra3:/gluster1/data
        Brick11: hydra3:/gluster2/data
        Brick12: hydra3:/arbiter/1 (arbiter)
        Brick13: hydra3:/gluster3/data
        Brick14: hydra4:/gluster1/data
        Brick15: hydra3:/arbiter/2 (arbiter)
        Brick16: hydra4:/gluster2/data
        Brick17: hydra4:/gluster3/data
        Brick18: hydra4:/arbiter/1 (arbiter)
        Options Reconfigured:
        storage.owner-gid: 36
        storage.owner-uid: 36
        cluster.choose-local: off
        user.cifs: off
        features.shard: on
        cluster.shd-wait-qlength: 10000
        cluster.shd-max-threads: 8
        cluster.locking-scheme: granular
        cluster.data-self-heal-algorithm: full
        cluster.server-quorum-type: server
        cluster.eager-lock: enable
        network.remote-dio: enable
        performance.low-prio-threads: 32
        performance.io-cache: off
        performance.read-ahead: off
        performance.quick-read: off
        auth.allow: *
        network.ping-timeout: 10
        server.allow-insecure: on
        cluster.quorum-type: auto
        cluster.self-heal-daemon: on
        cluster.entry-self-heal: on
        cluster.metadata-self-heal: on
        cluster.data-self-heal: on
        features.cache-invalidation: off
        transport.address-family: inet
        nfs.disable: on
        nfs.exports-auth-enable: on

    Thanks for all replies,

    --
    As a result of Coronavirus-related precautions, NYU and the Center
    for Brain Imaging operations will be managed remotely until
    further notice.
    All telephone calls and e-mail correspondence are being monitored
    remotely during our normal business hours of 9am-5pm, Monday
    through Friday.
    For MRI scanner-related emergency, please contact: Keith
    Sanzenbach at keith.sanzenb...@nyu.edu
    <mailto:keith.sanzenb...@nyu.edu> and/or Pablo Velasco at
    pablo.vela...@nyu.edu <mailto:pablo.vela...@nyu.edu>
    For computer/hardware/software emergency, please contact: Valerio
    Luccio at valerio.luc...@nyu.edu <mailto:valerio.luc...@nyu.edu>
    For TMS/EEG-related emergency, please contact: Chrysa Papadaniil
    at chr...@nyu.edu <mailto:chr...@nyu.edu>
    For CBI-related administrative emergency, please contact: Jennifer
    Mangan at jennifer.man...@nyu.edu <mailto:jennifer.man...@nyu.edu>

    Valerio Luccio              (212) 998-8736
    Center for Brain Imaging            4 Washington Place, Room 158
    New York University                 New York, NY 10003

        "In an open world, who needs windows or gates ?"

    ________



    Community Meeting Calendar:

    Schedule -
    Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
    Bridge: https://meet.google.com/cpu-eiue-hvk
    
    Gluster-users mailing list
    Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
    https://lists.gluster.org/mailman/listinfo/gluster-users
    

