Hi Harry,
Thanks for taking the time to test 3.3.0qa42.
So this was a big improvement over the previous trial. The only glitches
were the 120 failures (which mean...?)
You can open the log file (/var/lib/glusterfs/gli-rebalance.log) and
search for the reason for the failures. Currently there may
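A quick way to pull the failure reasons out of that log is to grep for error-level lines, which GlusterFS marks with " E " after the timestamp. This sketch uses a synthetic log in /tmp purely for illustration; on a real system you would point grep at the /var/lib/glusterfs/gli-rebalance.log path mentioned above, and the log line contents here are made up.

```shell
# Synthetic stand-in for the rebalance log (real path:
# /var/lib/glusterfs/gli-rebalance.log on the affected node).
cat > /tmp/gli-rebalance.log <<'EOF'
[2012-05-01 10:00:01] I [dht-rebalance.c:123] 0-gli-dht: migrating file
[2012-05-01 10:00:02] E [dht-rebalance.c:456] 0-gli-dht: failed to migrate /data/foo
EOF

# Error-level entries carry " E " after the timestamp; each one should
# name the file and the reason the migration failed.
grep ' E ' /tmp/gli-rebalance.log
```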
I installed the qa42 version on the servers and clients, and under load it
worked as advertised (though of course more slowly than I would have
liked :)) - it removed ~1TB in just under 24 hr (on a DDR-IB connected
4-node set), ~40MB/s overall, though there were a huge number of tiny files.
The remove-brick cl
pbs2ib 8780091379699182236 2994733 in progress
Hi Harry,
Can you please test once again with 'glusterfs-3.3.0qa42' and confirm
the behavior? This seems like a bug (we suspect it to be some overflow
type of bug, but are not sure yet). Please help us by opening a bug report;
in the meantime, we will investigate.
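The actual cause is not confirmed, but the implausibly large figure in the status output (8780091379699182236) is the kind of value you get when a 64-bit unsigned counter wraps. A minimal illustration, assuming a C-style uint64_t counter (this is not GlusterFS code, just a sketch of the failure mode):

```python
# Illustration only: how an unsigned 64-bit counter, decremented below
# zero, wraps around to an absurdly large number (the suspected
# "overflow type" behavior; the real bug is not yet identified).
MASK64 = (1 << 64) - 1  # uint64_t arithmetic is modulo 2**64

def u64_sub(a, b):
    """Subtract the way a C uint64_t would: wrap modulo 2**64."""
    return (a - b) & MASK64

# Subtracting more bytes than were counted yields a huge bogus total.
print(u64_sub(1024, 4096))  # -> 18446744073709548544
```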
I'm running 3.3b3 on a 5-brick/Ubuntu 10.04.4 system with mixed
IPoIB/GbE. It's behaving well other than the current problem. The
gluster filesystem is live and being used lightly by our cluster.
Note that the gli volume has 2 bricks on pbs2ib. I'm trying to clear
the smaller brick in prepara