I still have to review the full AMv2 meta updates path to see whether there may still be "split brain" due to the extra RPC to a remote server. But I really like the notion of keeping the deployment topology of branch-1 by default.
The fact is that 2.0 is already lagging, and minimizing the set of changes to get a release out earlier is in the best interest of the community.

Enis

On Tue, Jun 6, 2017 at 10:38 AM, Francis Liu <tof...@apache.org> wrote:

> "That doesn't solve the same problem." Agreed; as mentioned, regionserver
> groups only provide user/system region isolation.
>
> "That still means that the most important operations are competing for rpc
> queue time." Given the previous setup, shouldn't meta access contention be
> addressed by higher-priority rpc access?
>
> On Tuesday, June 6, 2017 9:17 AM, Elliott Clark <ecl...@apache.org> wrote:
>
>> That doesn't solve the same problem. Dedicating a remote server for the
>> system tables still means that all the master-to-system-table mutations
>> and reads are done over rpc. That still means that the most important
>> operations are competing for rpc queue time.
>>
>> On Fri, Nov 18, 2016 at 11:37 AM, Francis Liu <tof...@ymail.com.invalid>
>> wrote:
>>
>>> Just some extra bits of information:
>>>
>>> Another way to isolate user regions from meta is to create a
>>> regionserver group (HBASE-6721) dedicated to the system tables. This is
>>> what we do at Y!. If the load on meta gets too high (and it does), we
>>> split meta so the load gets spread across more regionservers
>>> (HBASE-11165); this way availability for any client is not affected.
>>> Though I agree with Stack that something is really broken if
>>> high-priority rpcs cannot get through to meta.
>>>
>>> Does "single writer to meta" refer to the zkless assignment feature? If
>>> so, hasn't that feature been available since 0.98.6 (meta _not_ on
>>> master)? We've been running with it on all our clusters for quite some
>>> time now (with some enhancements, i.e. split meta etc.).
>>>
>>> Cheers,
>>> Francis
>>>
>>> On Wednesday, November 16, 2016 10:47 PM, Stack <st...@duboce.net>
>>> wrote:
>>>
>>>> On Wed, Nov 16, 2016 at 10:44 PM, Stack <st...@duboce.net> wrote:
>>>>
>>>>> On Wed, Nov 16, 2016 at 10:57 AM, Gary Helmling <ghelml...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Do you folks run the meta-carrying-master form G?
>>>>
>>>> Pardon me. I missed a paragraph. I see you folks do deploy this form.
>>>> St.Ack
>>>>
>>>>>> "Is this just because meta had a dedicated server?"
>>>>>>
>>>>>> I'm sure that having dedicated resources for meta helps. But I don't
>>>>>> think that's sufficient. The key is that master writes to meta are
>>>>>> local, and do not have to contend with the user requests to meta.
>>>>>>
>>>>>> It seems premature to be discussing dropping a working implementation
>>>>>> which eliminates painful parts of distributed consensus until we have
>>>>>> a complete working alternative to evaluate. Until then, why are we
>>>>>> looking at dropping features that are in use and work well?
>>>>>
>>>>> How to move forward here? The Pv2 master is almost done. An ITBLL
>>>>> bakeoff of new Pv2-based assign vs a Master that exclusively hosts
>>>>> hbase:meta?
>>>>>
>>>>>> I think that's a necessary test for proving out the new AM
>>>>>> implementation. But remember that we are comparing a feature which is
>>>>>> actively supporting production workloads with a line of active
>>>>>> development. I think there should also be additional testing around
>>>>>> situations of high meta load and end-to-end assignment latency.
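
[Editor's note: for readers following the thread, the regionserver-group isolation Francis describes (HBASE-6721) is enabled through master-side configuration roughly as sketched below. The property and class names are from the rsgroup module as documented for the 1.x/2.x lines; treat this as a sketch and verify against your HBase version's reference guide.]

```
<!-- hbase-site.xml (master): a sketch of enabling the rsgroup feature,
     per HBASE-6721. Verify class/property names for your version. -->
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint</value>
</property>
<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer</value>
</property>
```

With that in place, an operator would (in hbase shell) create a dedicated group, move some servers into it, and move the system tables onto those servers, along the lines of `add_rsgroup 'system'`, `move_servers_rsgroup 'system', ['host1:16020']`, `move_tables_rsgroup 'system', ['hbase:meta']` (the hostname here is a placeholder; command names are from the rsgroup shell commands).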