I guess you can manipulate the peer list and merge the clusters into a single TSP (trusted storage pool), but 
I don't know if that would be 'supported'.
Give it a try on a test setup of VMs: stop all glusterd daemons, merge the peer 
files, and then start them again.
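
For illustration only, a rough and untested sketch of what that merge could look like. /var/lib/glusterd/peers/ is where glusterd keeps one file per known peer; the hostname node2 is just a placeholder for a node from one of the other clusters:

  # on every node of every cluster
  systemctl stop glusterd

  # pull in the peer files of the other clusters' nodes, so that in the end
  # every node has a peer file for every other node of the merged pool
  scp node2:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/

  # start glusterd again everywhere and verify the merged pool
  systemctl start glusterd
  gluster peer status
  gluster pool list

Keep in mind that the peer files reference nodes by UUID and hostname, so all nodes must be able to resolve each other's names before you start glusterd again.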
Best Regards,
Strahil Nikolov

Sent from Yahoo Mail on Android 
 
  On Mon, Aug 23, 2021 at 13:41, Holger Rojahn <[email protected]> wrote:
Hi,
that's not nice, because that means a big downtime for me. I was hoping there is an 
unjoin/join cluster option ...
Thanks for the help. I'll have to see how I handle this situation...
Strahil Nikolov <[email protected]> schrieb am Mo., 23. Aug. 2021, 12:33:

I guess you have to cannibalize the clusters: pick the volume that holds the most data 
as the 'receiving' one and start shrinking the one with the most empty space. Once you 
can't shrink any further, you will need downtime to set the 'donor' volume read-only 
and copy its data over to the 'receiving' volume. Once the data is moved, move the 
remaining bricks to the 'receiving' volume and then rebalance.
Then repeat for the other 'donor' volume.
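
As a rough, untested sketch, the steps per 'donor' volume could look something like this. The volume names donorvol/bigvol, the brick path node3:/bricks/b1 and the mount points are only placeholders; for replicated or dispersed volumes the remove-brick/add-brick calls also need the replica/disperse counts:

  # shrink the donor volume brick by brick while it stays online
  gluster volume remove-brick donorvol node3:/bricks/b1 start
  gluster volume remove-brick donorvol node3:/bricks/b1 status
  gluster volume remove-brick donorvol node3:/bricks/b1 commit

  # downtime window: freeze the donor and copy the remaining data
  gluster volume set donorvol features.read-only on
  rsync -aHAX /mnt/donorvol/ /mnt/bigvol/

  # hand the freed bricks over to the receiving volume and rebalance
  # (a reused brick directory may still carry the old volume-id xattr,
  #  which has to be cleared before add-brick will accept it)
  gluster volume add-brick bigvol node3:/bricks/b1
  gluster volume rebalance bigvol start
  gluster volume rebalance bigvol status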
Best Regards,
Strahil Nikolov

Sent from Yahoo Mail on Android 
 
  On Fri, Aug 20, 2021 at 15:01, Holger Rojahn <[email protected]> wrote:
Hi,

I configured 3 clusters (for several shared homes between different machines). 
So far, no problem.

Now I need a volume that spans these machines, but the nodes are bound 
to their own clusters, so peer probe fails ...

  

How can I build one big cluster with all nodes, but without data loss (and ideally 
without downtime :))?

  

I hope there is some pro here who can help :)

  

Greetings from Germany

  

Holger

  
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
