Hi Aytac,

Two gluster server nodes. Two volumes, each a replica 2 volume with a single brick on each node.
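
For reference, a volume like that would typically have been created with something along these lines (hostnames and brick paths here are illustrative, not our actual ones):

    gluster volume create vol1 replica 2 node1:/bricks/vol1 node2:/bricks/vol1
    gluster volume start vol1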

One volume is currently still being prepared, so I won't say much about it here.

The other volume is used as the main storage domain for our RHEV environment, which connects natively using the gluster client. The configuration at the RHEV end points to one node, with the second node configured as a failover. Clearly, any changes to that volume require extreme caution and planned downtime.
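
For anyone curious, the failover is just a mount option at the RHEV end; the equivalent manual mount would look something like this (hostnames and volume name are placeholders):

    # backupvolfile-server gives the client a second host to fetch the
    # volume layout from if the first is down; writes still go to both
    # replicas via the native client.
    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/rhevvol /mnt/rhevvol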

At this stage the VMs do not directly access the gluster storage, although one VM will soon have a gluster volume attached to it (the one that's still being prepared), as it requires a lot of storage space.

We use RHEV bare-metal hypervisors, so there is no option to upgrade other than to install new versions as they become available. They are installed from an ISO, just like a regular distro, but they are extremely cut-down versions of RHEL6, with all the good stuff left out. They could potentially be upgraded by building the client from source and copying it over, but I'm not going down that road, for multiple reasons.

regards,
John


On 25/02/15 10:36, aytac zeren wrote:
Hi John,

Would you please share your scenario?

* How many nodes are running as gluster server?
* Which application is accessing the gluster volume, and by which means (NFS, CIFS, Gluster Client)?
* Are you accessing the volume through a client, or are the clients accessing the volume themselves (like KVM nodes)?

Best Regards
Aytac

On Wed, Feb 25, 2015 at 1:18 AM, John Gardeniers <jgardeni...@objectmastery.com <mailto:jgardeni...@objectmastery.com>> wrote:

    Problem solved, more or less.

    After reading Aytac's comment about 3.6.2 not yet being
    considered stable, I removed it from the new node, removed
    /var/lib/glusterd/, rebooted (just to be sure) and installed
    3.5.3. After detaching and re-probing the peer, the replace-brick
    command worked and the volume is currently happily undergoing a
    self-heal. At a later and more convenient time I'll upgrade the
    3.4.2 node to the same version. As previously stated, I cannot
    upgrade the clients, so they will just have to stay where they are.
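
    For the record, the sequence was roughly the following (volume
    name and brick paths abbreviated here for illustration; package
    names may differ by distro):

        # on the new node:
        yum remove glusterfs-server          # drop 3.6.2
        rm -rf /var/lib/glusterd/            # clear old cluster state
        reboot
        yum install glusterfs-server-3.5.3   # reinstall at 3.5.3
        # then, from the surviving node:
        gluster peer detach rigel
        gluster peer probe rigel
        gluster volume replace-brick VOLNAME oldnode:/brick rigel:/brick commit force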

    regards,
    John


    On 25/02/15 08:27, aytac zeren wrote:
    Hi John,

    3.6.2 is a major release and introduces some new cluster-wide
    features. Additionally, it is not yet considered stable. The best
    way to do this would be to establish a separate 3.6.2 cluster,
    access the 3.4.0 cluster via NFS or the native client, and
    gradually copy the content over to the 3.6.2 cluster. As usage of
    the volume on the 3.4.0 cluster decreases, you can remove the
    3.4.0 members from that cluster, upgrade them, and add them, with
    their bricks, to the 3.6.2 trusted pool. Please be careful while
    doing this operation, as the number of nodes in your cluster must
    remain consistent with your volume design (striped, replicated,
    distributed, or a combination of them).
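
    As a rough sketch of the copy step only (hostnames, volume names
    and mount points below are placeholders):

        mount -t glusterfs old-node:/oldvol /mnt/oldvol
        mount -t glusterfs new-node:/newvol /mnt/newvol
        rsync -a /mnt/oldvol/ /mnt/newvol/   # repeat until caught up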

    Notice: I don't take any responsibility for the actions you
    undertake based on my recommendations, as they are general and do
    not take your architectural design into consideration.

    BR
    Aytac

    On Tue, Feb 24, 2015 at 11:19 PM, John Gardeniers
    <jgardeni...@objectmastery.com
    <mailto:jgardeni...@objectmastery.com>> wrote:

        Hi All,

        We have a gluster volume consisting of a single brick per
        node, using replica 2. Both nodes are currently running
        gluster 3.4.2 and I wish to replace one of the nodes with a
        new server (rigel), which has gluster 3.6.2.

        Following this link:

        https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Replacing_an_Old_Brick_with_a_New_Brick_on_a_Replicate_or_Distribute-replicate_Volume.html

        I tried to do a replace-brick but got "volume replace-brick:
        failed: Host rigel is not in 'Peer in Cluster' state". Is this
        due to a version incompatibility or is it due to some other
        issue? A bit of googling reveals the error message in bug
        reports, but I've not yet found anything that applies to this
        specific case.
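
        For reference, the state each node sees for its peers can be
        checked on either side with:

            gluster peer status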

        Incidentally, the clients (RHEV bare metal hypervisors, so we
        have no upgrade option) are running 3.4.0. Will this be a
        problem if the nodes are on 3.6.2?
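
        (If it helps, each node's operating version is recorded in
        /var/lib/glusterd/glusterd.info, e.g.:

            grep operating-version /var/lib/glusterd/glusterd.info

        though I'm unsure how strict the client/server matching is.)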

        regards,
        John


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
