>
> We've done the work to make sure hbase:meta
> is up before everything else. It has its own WALs so we can split these
> ahead of user-space WALs, and so on. We've not done the work for
> hbase:replication or hbase:namespace, hbase:acl... etc.

If we introduce a new SYSTEM-TABLE-ONLY state for region server startup, then
all system tables (not only hbase:meta) need to own their WALs. Each system
table would have its own WAL, and these would be split ahead of user-space
WALs. The WALs of system tables also do not need replication, so we can start
a region server without replication. Once all system tables are online, the
region server can continue its startup from SYSTEM-TABLE-ONLY to STARTED.

Thanks.

2018-03-16 10:12 GMT+08:00 OpenInx <open...@gmail.com>:

> HBASE-15867 will not be introduced in 2.0.0; I expect to introduce it in the
> 2.1.0 release.
>
> Thanks.
>
> On Fri, Mar 16, 2018 at 12:45 AM, Mike Drob <md...@apache.org> wrote:
>
> > I'm also +1 for splitting RS startup into multiple steps.
> >
> > Looking at the linked JIRA and the parent issue it was not immediately
> > apparent if this is an issue for 2.0 or not - can somebody clarify?
> >
> > On Thu, Mar 15, 2018 at 5:14 AM, 张铎(Duo Zhang) <palomino...@gmail.com>
> > wrote:
> >
> > > I'm +1 on the second solution.
> > >
> > > 2018-03-15 16:59 GMT+08:00 Guanghao Zhang <zghao...@gmail.com>:
> > >
> > > > From a more general perspective, this may be a general problem, as we
> > > > may move more and more data from zookeeper to system tables, or we may
> > > > have more features that create new system tables. But if the RS relies
> > > > on some system table to start up, we will meet a deadlock...
> > > >
> > > > One solution is to let the master serve system tables only. Then the
> > > > cluster startup will have two steps: first start the master to serve
> > > > system tables, then start the region servers. But the problem is that
> > > > the master will have more responsibility and may become a bottleneck.
> > > >
> > > > Another solution is to break the RS startup process into two steps. The
> > > > first step is "serve system tables only"; the second step is "fully
> > > > started, serving any table". It means we will introduce a new state for
> > > > RS startup: an RS's startup progression will be STOPPED ==>
> > > > SYSTEM-TABLE-ONLY ==> STARTED. But this may need more refactoring of
> > > > our RS code.
> > > >
> > > > Thanks.
> > > >
> > > > 2018-03-15 15:57 GMT+08:00 张铎(Duo Zhang) <palomino...@gmail.com>:
> > > >
> > > > > Oh, it should be 'The replication peer related data is small'.
> > > > >
> > > > > 2018-03-15 15:56 GMT+08:00 张铎(Duo Zhang) <palomino...@gmail.com>:
> > > > >
> > > > > > I think this is a bit awkward... A region server does not even need
> > > > > > the meta table to be online when starting, but it needs another
> > > > > > system table when starting...
> > > > > >
> > > > > > I think unless we can make the region server start without
> > > > > > replication and initialize it later, we can not break the tie.
> > > > > > Having a special 'region server' seems like a bad smell to me.
> > > > > > What's the advantage compared to zk?
> > > > > >
> > > > > > BTW, I believe that we only need the ReplicationPeerStorage to be
> > > > > > available when starting a region server, so we can keep this data in
> > > > > > zk, and store the queue related data in the hbase:replication table?
> > > > > > The replication related data is small, so I think this is OK.
> > > > > >
> > > > > > Thanks.
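
If it helps to picture the split described above, here is a simplified sketch.
These interfaces are illustrative stand-ins only, not the actual HBase
ReplicationPeerStorage / queue storage APIs:

// Illustrative only -- simplified stand-ins, not the real HBase interfaces.
interface PeerStorageSketch {
  // Small, peer-level data; kept in zookeeper so it is readable before any
  // table is online, which is all a starting region server needs.
  java.util.List<String> listPeerIds();
  String getPeerClusterKey(String peerId);
}

interface QueueStorageSketch {
  // Larger, per-RS queue data; could live in the hbase:replication table and
  // be read lazily, after that table comes online.
  java.util.List<String> getWALsInQueue(String regionServer, String peerId);
  void addWAL(String regionServer, String peerId, String walName);
}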
> > > > > >
> > > > > > 2018-03-15 14:55 GMT+08:00 OpenInx <open...@gmail.com>:
> > > > > >
> > > > > >> Hi :
> > > > > >>
> > > > > >> (Paste from https://issues.apache.org/jira/browse/HBASE-20166?focusedCommentId=16399886&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16399886)
> > > > > >>
> > > > > >> There's a really big problem here if we use table based
> > > > > >> replication to start an hbase cluster:
> > > > > >>
> > > > > >> The HMaster process works as follows:
> > > > > >> 1. Start active master initialization.
> > > > > >> 2. Master waits for region servers to report in.
> > > > > >> 3. Master assigns the meta region to one of the region servers.
> > > > > >> 4. Master creates the hbase:replication table if it does not exist.
> > > > > >>
> > > > > >> But the RS needs to finish initializing the replication source &
> > > > > >> sink before finishing startup (and the initialization of the
> > > > > >> replication source & sink must finish before opening regions,
> > > > > >> because we need to listen to the WAL events, otherwise our
> > > > > >> replication may lose data). And when initializing the source &
> > > > > >> sink, we need to read the hbase:replication table, which isn't
> > > > > >> available because our master is waiting for the region servers to
> > > > > >> be OK, and the region servers are waiting for hbase:replication to
> > > > > >> be OK ... a dead loop happens again ...
> > > > > >>
> > > > > >> After discussing with Guanghao Zhang offline, I'm considering
> > > > > >> trying to assign all system tables to an RS which only accepts
> > > > > >> system table regions (that RS will skip initializing the
> > > > > >> replication source and sink)...
> > > > > >>
> > > > > >> I've tried to start a mini cluster by setting
> > > > > >> hbase.balancer.tablesOnMaster.systemTablesOnly=true
> > > > > >> & hbase.balancer.tablesOnMaster=true, but it seems not to work,
> > > > > >> because currently we initialize the master logic first, then the
> > > > > >> region server logic, for the HMaster process, and it should be ...
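
For anyone who wants to reproduce this experiment, a rough mini-cluster sketch
with those two settings might look like the following. The class name is made
up and this is not necessarily the exact code that was run:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class TablesOnMasterExperiment {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The two settings mentioned above: keep (only) system tables on the master.
    conf.setBoolean("hbase.balancer.tablesOnMaster", true);
    conf.setBoolean("hbase.balancer.tablesOnMaster.systemTablesOnly", true);

    HBaseTestingUtility util = new HBaseTestingUtility(conf);
    util.startMiniCluster();   // observe where the system table regions land
    // ... run whatever checks are needed, then:
    util.shutdownMiniCluster();
  }
}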
> > > > > >>
> > > > > >>
> > > > > >> Any suggestions?
> > > > > >>
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> ==============================
> Openinx  blog : http://openinx.github.io
>
> TO BE A GREAT HACKER !
> ==============================
>
