Hi,
I came across a situation where a few I/Os were being sent to a subvolume
that was no longer available. The situation happens due to the following.
During remove-brick commit, these things happen: the brick is stopped,
the volfile is regenerated, and clients are notified of the volfile change.
The order
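To make the problematic window concrete, here is a minimal, hedged sketch (plain Python, not GlusterFS code; the class and method names are invented for illustration): if the brick is stopped before clients are notified of the new volfile, a client still holding the old graph can route I/O to the dead subvolume.

```python
# Hedged illustration (not actual GlusterFS code): why the step order in
# remove-brick commit matters. Names here are hypothetical.

class Brick:
    def __init__(self, name):
        self.name = name
        self.up = True

class Client:
    """Routes I/O using its current view of the volume (its 'graph')."""
    def __init__(self, bricks):
        self.graph = list(bricks)

    def write(self, brick):
        if brick not in self.graph:
            raise ValueError("client no longer routes to this brick")
        if not brick.up:
            # what a client sees in the window between brick stop
            # and the volfile-change notification
            raise ConnectionError(f"{brick.name}: endpoint not connected")
        return "ok"

b1, b2 = Brick("brick1"), Brick("brick2")
client = Client([b1, b2])

# Problematic order: brick stopped first, client notified later.
b2.up = False                      # step 1: brick stop
try:
    client.write(b2)               # client still has the old graph
except ConnectionError as e:
    print("in-flight I/O failed:", e)
client.graph.remove(b2)            # step 3: notification finally arrives
```

Reversing the order (drop the brick from the client's graph first, then stop it) closes the window, since the client refuses to route to the removed brick before it ever goes down.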
Xavi,
Now that the change has been reverted, we can resume this discussion
and decide on the exact format that considers tier, dht, afr, and ec. People
working on geo-rep/dht/afr/ec had an internal discussion and we all agreed
that this proposal would be a good way forward. I think once we agree on
Hi Kinglong,
I did not hit the same issue on my setup (4 cores + 16 GB RAM).
I am planning to run some tests by limiting the number of cores and the
memory available with cgroups.
Maybe there is an issue when the system has a lower-end configuration.
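As a sketch of the kind of constraint I have in mind (the test script name is illustrative, and the systemd-run properties assume a systemd host with cgroup v2; both commands need root):

```shell
# Run a test under a transient scope limited to ~2 CPUs' worth of time
# and 4 GB of RAM. CPUQuota caps CPU bandwidth; MemoryMax is a hard
# memory limit. ./run-tests.sh is a placeholder for the actual test.
sudo systemd-run --scope -p CPUQuota=200% -p MemoryMax=4G ./run-tests.sh

# Alternatively, pin the process to specific cores:
sudo taskset -c 0,1 ./run-tests.sh
```

This avoids hand-building cgroup hierarchies; systemd creates and cleans up the transient scope itself.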
Regards,
Sanoj
On Thu, Jun 29, 2017 at 6:21 PM, Sanoj Unnik
Ashish, Xavi,
I think it is better to implement this change as a separate
read-after-write caching xlator which we can load between the EC and client
xlators. That way EC will not get a lot more functionality than necessary,
and maybe this xlator can be used elsewhere in the stack if possible
[...truncated 6 lines...]
https://bugzilla.redhat.com/1460620 / access-control: Unable to use CIDR address annotation in nfs.rpc-auth-allow
https://bugzilla.redhat.com/1463365 / core: Changes for Maintainers 2.0
https://bugzilla.redhat.com/1460638 / disperse: ec-data-heal.t fails with brick mux e