Hi, that's not nice, because I'd have big downtime then... I hope there is an unjoin/join cluster option ...
Thanks for the help. I'll have to see how I handle this situation...

Strahil Nikolov <[email protected]> wrote on Mon, 23 Aug 2021, 12:33:

> I guess you have to cannibalize the clusters: pick the one that has the
> most data and start shrinking the one with the most empty space.
> Once you can't shrink any more, you will need downtime to set the 'donor'
> volume read-only and start copying the data to the 'receiving' volume.
> Once the data is moved, move the remaining bricks to the 'receiving'
> volume and then rebalance.
>
> Repeat again for the other 'donor' volume.
>
> Best Regards,
> Strahil Nikolov
>
> Sent from Yahoo Mail on Android
>
> On Fri, Aug 20, 2021 at 15:01, Holger Rojahn <[email protected]> wrote:
>
> Hi,
>
> I configured 3 clusters (for several shared homes between different
> machines), so far no problem.
>
> Now I need a volume that spans these machines, but the nodes are bound
> to their own clusters, so peer probe fails ...
>
> How can I build one big cluster with all nodes, but without data loss
> (and ideally without downtime :))
>
> Hope there is some pro who can help :)
>
> Greetings from Germany
>
> Holger
>
> ________
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> [email protected]
> https://lists.gluster.org/mailman/listinfo/gluster-users
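For reference, Strahil's shrink/copy/move procedure maps onto standard Gluster CLI steps. A minimal sketch, assuming a plain distributed 'donor' volume and a 'receiving' volume; the volume names (`vol_donor`, `vol_recv`), brick paths, and host names are hypothetical, not from the thread:

```shell
# Sketch only -- vol_donor, vol_recv, server3, and the brick paths are
# placeholder names. Try this on a lab setup before touching production.

# 1. Shrink the emptier volume: remove a brick and let Gluster migrate
#    its data onto the remaining bricks.
gluster volume remove-brick vol_donor server3:/bricks/b1 start
gluster volume remove-brick vol_donor server3:/bricks/b1 status   # wait for 'completed'
gluster volume remove-brick vol_donor server3:/bricks/b1 commit

# 2. Downtime window: make the donor volume read-only before copying.
gluster volume set vol_donor features.read-only on

# 3. Copy the data from a donor mount to a receiving mount.
rsync -aHAX --progress /mnt/vol_donor/ /mnt/vol_recv/

# 4. Once the freed node has been peer-probed into the receiving cluster,
#    add its brick to the receiving volume and rebalance.
gluster peer probe server3
gluster volume add-brick vol_recv server3:/bricks/b1
gluster volume rebalance vol_recv start
gluster volume rebalance vol_recv status
```

Note that a node can only be peer-probed into the receiving cluster after it has been detached from its old one (`gluster peer detach`), which is why the bricks must be emptied first.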
