On Mon, Mar 6, 2017 at 6:15 PM, Enis Söztutar <enis....@gmail.com> wrote:

> Thanks Stack for the nice writeup.
>
> I think we should shoot for an alpha release sooner than 2 months. It gives
> a test target, and will be a great way to test-drive and push for the
> release vehicles (packaging, hadoop3, license issues, etc.) and also create
> some well-deserved excitement. I can help with that.
>
>
Looking at the list of items in the Core and Tasks (with an eye on the
concurrent thread "A suggestion for releasing major versions faster..."),
it might be time for a branch -- end of next week or, better, the end of
the month? We could push an Alpha soon after?

As I see it, the blockers on hbase2 are:

+ AMv2/Pv2. It's been trickling in for a year or more now. We are close to
throwing the switch to move up onto the new AMv2, cornerstone of the
1M-regions effort and fast assignment. There'll be fall-out but we'll be up
on a more solid intent-log, no-zk basis. Could put this off to hbase-3 I
suppose, but it's all over the code base half-done; it'll rot if we just
leave it.
+ Rolling Restart from branch-1 to branch-2. Has to work. Can't have a
singularity. No work done.
+ Master carrying hbase:meta. Currently it does by default. We have a
running thread on pros and cons still to finish. If master is to carry
hbase:meta, there is work to do. If not, there is work to do.
+ Updating dependencies and shading the critical likely-clashing libs
(netty, guava). No work done.

Other super-important stuff that we should fix (criticals) but that doesn't
warrant holding up the release:

+ Narrative around client operation timeout (Phil Yang doing great work
here rationalizing our timeout mess; see the sketch after this list)
+ Perf (async hdfs client, netty rpcserver, G1GC default, etc.) and
updating defaults.
+ Hadoop3 (EC, etc.)
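
To make the timeout item above concrete, here is a minimal sketch of how
the current client-side knobs layer today (the table name and row key are
made up for illustration; the three property names are the existing client
configs whose interaction this work is rationalizing):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ClientTimeoutSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Cap a single RPC attempt at 10 seconds.
        conf.setInt("hbase.rpc.timeout", 10000);
        // Cap the whole operation, retries included, at 60 seconds.
        conf.setInt("hbase.client.operation.timeout", 60000);
        // Scanners carry their own timeout period.
        conf.setInt("hbase.client.scanner.timeout.period", 60000);
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("t1"))) {
          // Hypothetical read: the operation timeout bounds this call
          // end-to-end; the rpc timeout bounds each attempt inside it.
          Result r = table.get(new Get(Bytes.toBytes("row1")));
          System.out.println(r);
        }
      }
    }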

I make no mention in the above list of criticals that I have confidence
will land in time (in-memory compaction, the offheaping work). I leave
aside criticals that are not getting love (hbase-replication, FS Redo),
though it seems like hbase-spark might see some uptake -- thanks Jerry and
crew.

A major release is an opportunity for big changes. It'd be a pity if we
missed this window to make sequenceid first-class throughout, or to come up
on HLC, at least for new tables, or to split hbase:meta, but as seems to be
the push over in the concurrent thread, these can wait for hbase3.

St.Ack
1.
https://docs.google.com/document/d/1WCsVlnHjJeKUcl7wHwqb4z9iEu_ktczrlKHK8N4SZzs/edit#heading=h.jxxznc91m047



> Enis
>
> On Mon, Mar 6, 2017 at 2:54 PM, Stack <st...@duboce.net> wrote:
>
> > On Mon, Mar 6, 2017 at 2:50 PM, Stack <st...@duboce.net> wrote:
> >
> > > ...
> > > + No recent work on core decision tasks (clean-up narrative around RPC
> > > timeout, hbase:meta on master or not, batch vs partial semantic, etc.)
> > >
> > >
> > Correction. batch vs partial semantic is making good progress
> (HBASE-15484).
> > S
> >
> >
> >
> >
> > > Non-criticals/Ancillaries
> > >
> > > + Async client and C++ client are both making good progress. Not done.
> > > + Backup/Restore is making good progress
> > > + RegionServer-based assignment got a bunch of scrutiny lately and is
> > > now 'done'.
> > > + FileSystem Quotas making good progress.
> > >
> > > I'm seeing another month or two at least before branch and probably
> > > three. See doc [1] for more detail.
> > >
> > > Yours,
> > > St.Ack
> > >
> > > 1. https://docs.google.com/document/d/1WCsVlnHjJeKUcl7wHwqb4z9iEu_ktczrlKHK8N4SZzs/edit#
> > >
> > >
> > >
> > >
> > >
> > > On Sun, Jan 29, 2017 at 9:16 PM, Stack <st...@duboce.net> wrote:
> > >
> > >> On Thu, Jan 26, 2017 at 11:49 PM, ramkrishna vasudevan <
> > >> ramkrishna.s.vasude...@gmail.com> wrote:
> > >>
> > >>> Hi All
> > >>>
> > >>> Thanks Stack. The doc looks great. The offheap write path/read path -
> > >>> I think from the read path perspective we have some good feedback from
> > >>> Alibaba folks.
> > >>>
> > >>
> > >> Agree.
> > >>
> > >>
> > >>> The write path subtasks are all done. We are currently working on
> > >>> some perf results that would help us come up with some docs that
> > >>> suggest best configs and tunings for the offheap write path.
> > >>>
> > >>>
> > >> Thanks Ram. Would be good to hear what configs you are looking to
> > >> implement as default so those of us also starting to test can enable
> > >> them to get you feedback.
> > >>
> > >> Also suggest you fill the above short status into the doc (you are
> > >> keeping up full status elsewhere). I've been trying to add status as I
> > >> see it popping up; e.g. Enis did a nice state-of-the-C++ client
> > >> recently up in JIRA and I added a pointer to the 2.0 doc. Anyone else
> > >> working on 2.0 features: it would be good if you kept a short status
> > >> in this overview doc; just ask for edit perms.
> > >>
> > >> Thanks,
> > >> St.Ack
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>>
> > >>> Regards
> > >>> Ram
> > >>>
> > >>> On Thu, Jan 19, 2017 at 5:37 AM, Andrew Purtell <
> > >>> andrew.purt...@gmail.com>
> > >>> wrote:
> > >>>
> > >>> > I'm interested in both split meta and rsgroups. Good news. I'd
> > >>> > like to help test.
> > >>> >
> > >>> >
> > >>> > > On Jan 18, 2017, at 2:53 PM, Stack <st...@duboce.net> wrote:
> > >>> > >
> > >>> > >> On Wed, Jan 18, 2017 at 2:26 PM, Francis Liu <tof...@apache.org
> >
> > >>> wrote:
> > >>> > >>
> > >>> > >> Hi Stack,
> > >>> > >> I'd like to get split meta (HBASE-11288) in 2.x as well. I can
> > >>> > >> have a 2.x draft up next week. Was working on the 1.x version
> > >>> > >> internally.
> > >>> > >> Also, if you'd like, I can be the owner for rsgroups as well.
> > >>> > >> Thanks, Francis
> > >>> > >>
> > >>> > >>
> > >>> > >>
> > >>> > > I added splitting meta as a possible and had you and me as owners
> > >>> > > on rsgroups (I was going to do a bit of testing and doc for this
> > >>> > > feature).
> > >>> > >
> > >>> > > Would love to see splittable meta show up. Needs to be rolling
> > >>> > > upgradeable though. Let's chat up on the issue.
> > >>> > > St.Ack
> > >>> > >
> > >>> > >
> > >>> > >
> > >>> > >
> > >>> > >>
> > >>> > >>
> > >>> > >>
> > >>> > >>    On Wednesday, January 18, 2017 11:29 AM, Stack <
> > st...@duboce.net
> > >>> >
> > >>> > >> wrote:
> > >>> > >>
> > >>> > >>
> > >>> > >> Done Thiruvel (and thanks Guanghao for adding hbase-replication).
> > >>> > >> St.Ack
> > >>> > >>
> > >>> > >> On Tue, Jan 17, 2017 at 6:11 PM, Thiruvel Thirumoolan <
> > >>> > >> thiru...@yahoo-inc.com.invalid> wrote:
> > >>> > >>
> > >>> > >>> Hi Stack,
> > >>> > >>> I would like to add Favored Nodes to the ancillary section.
> > >>> > >>> HBASE-15531: Favored Nodes Enhancements. Status: Active
> > >>> > >>> development. Owner: Thiruvel.
> > >>> > >>> Thanks! -Thiruvel
> > >>> > >>>
> > >>> > >>>   On Monday, January 16, 2017 2:10 PM, Stack <st...@duboce.net
> >
> > >>> wrote:
> > >>> > >>>
> > >>> > >>>
> > >>> > >>> On Mon, Jan 16, 2017 at 3:01 AM, Guanghao Zhang <
> > >>> zghao...@gmail.com>
> > >>> > >>> wrote:
> > >>> > >>>
> > >>> > >>>> For 6. Significant contribs in master only, there are some
> > >>> > >>>> issues about replication operations routed through master. They
> > >>> > >>>> are sub-tasks of HBASE-10504. And there are other umbrella
> > >>> > >>>> issues for replication, like HBASE-14379 Replication V2 and
> > >>> > >>>> HBASE-15867 Moving HBase Replication tracking from Zookeeper to
> > >>> > >>>> HBase. So I thought we could add a new section named
> > >>> > >>>> hbase-replication to possible 2.0.0s. This will help us track
> > >>> > >>>> the state.
> > >>> > >>>> Thanks.
> > >>> > >>>>
> > >>> > >>>
> > >>> > >>> Thanks Guanghao Zhang. I agree. I made you an editor. If you
> > >>> > >>> want to have a go at a first cut, be my guest. If nothing done
> > >>> > >>> in the next day or so, I'll add this section, Sir.
> > >>> > >>> Thanks,
> > >>> > >>> M
> > >>> > >>>
> > >>> > >>>
> > >>> > >>>
> > >>> > >>>
> > >>> > >>
> > >>> > >>
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> > >>
> > >>
> > >
> >
>
