I guess you have to cannibalize the clusters: pick the one that holds the most data, and start shrinking the one with the most empty space. Once you can't shrink it any further, you will need downtime to set the 'donor' volume read-only and start copying its data to the 'receiving' volume. Once the data is moved, move the remaining bricks to the 'receiving' volume and then rebalance. Repeat for the other 'donor' volume.

Best Regards,
Strahil Nikolov
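Roughly, with the standard gluster CLI, something like the sketch below (the volume names 'donorvol' and 'receivingvol', the host 'node1', the brick and mount paths are all placeholders; on a replicated volume, remove-brick/add-brick must be done per replica set):

  # 1. Shrink the donor volume, letting gluster migrate data off the brick
  gluster volume remove-brick donorvol node1:/bricks/b1 start
  gluster volume remove-brick donorvol node1:/bricks/b1 status   # wait for 'completed'
  gluster volume remove-brick donorvol node1:/bricks/b1 commit

  # 2. Downtime window: freeze the donor, then copy via client mounts
  gluster volume set donorvol features.read-only on
  rsync -aHAX /mnt/donorvol/ /mnt/receivingvol/

  # 3. Move the freed node into the surviving cluster and grow the volume
  gluster peer detach node1        # run on the donor cluster
  gluster peer probe node1         # run on the receiving cluster
  gluster volume add-brick receivingvol node1:/bricks/b1
  gluster volume rebalance receivingvol start
  gluster volume rebalance receivingvol status

Note that a previously used brick directory has to be wiped (old data, the .glusterfs directory, and the volume-id xattr) before it can be added to another volume.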
On Fri, Aug 20, 2021 at 15:01, Holger Rojahn <[email protected]> wrote:

Hi,

I configured 3 clusters (for several shared homes between different machines); so far, no problem. Now I need a volume that spans these machines, but the nodes are bound to their own clusters, so peer probe fails. How can I build one big cluster with all the nodes, without data loss (and ideally without downtime :))? Hope there is some pro who can help :)

Greets from Germany
Holger
