+1. I think we had some implementation code before that used the
Executors. @Willem
Jiang <[email protected]>, can you recall why we moved the timeout
handling to the alpha server? Is there a particular reason? I can think
of the following:
1) The omega fails to send the cancel message when a timeout happens, due
to a network error, so the compensation will not happen.
2) The omega crashes and cannot recover to send the message when it
restarts.
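Both failure scenarios above argue for keeping a timeout monitor on the
alpha side. A minimal sketch of such a scanner (class and method names are
invented for illustration, not the actual Alpha code):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an alpha-side timeout monitor. Alpha records the
// start time of each global transaction and periodically scans for entries
// whose timeout has expired, so compensation is triggered even if omega
// crashed or its abort message was lost on the network.
public class TimeoutScanner {
  private final Map<String, Instant> txStart = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  public TimeoutScanner(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  public void onTxStarted(String globalTxId) {
    txStart.put(globalTxId, Instant.now());
  }

  public void onTxEnded(String globalTxId) {
    txStart.remove(globalTxId);
  }

  // Collects and removes the transactions that exceeded the timeout; the
  // caller would then send compensate commands to their participants.
  public List<String> expiredTransactions() {
    Instant deadline = Instant.now().minusMillis(timeoutMillis);
    List<String> expired = new ArrayList<>();
    txStart.forEach((id, start) -> {
      if (start.isBefore(deadline) && txStart.remove(id, start)) {
        expired.add(id);
      }
    });
    return expired;
  }
}
```

In alpha this scan would run on a ScheduledExecutorService, e.g.
`scheduler.scheduleWithFixedDelay(() -> expiredTransactions().forEach(alpha::compensate), 0, 500, TimeUnit.MILLISECONDS)`
(the `compensate` callback is likewise hypothetical).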

Willem Jiang <[email protected]> 于2018年11月1日周四 上午11:24写道:

> +1 to using a POC to show us the facts :)
> Now I'm thinking of making Omega smarter[1] by doing the timeout
> monitoring itself, to reduce the complexity of Alpha.
> In this way Alpha just needs to store the messages and respond to the
> requests from Omega.
>
> [1]https://issues.apache.org/jira/browse/SCB-1000
>
> Willem Jiang
>
> Twitter: willemjiang
> Weibo: 姜宁willem
> On Thu, Nov 1, 2018 at 11:13 AM 赵俊 <[email protected]> wrote:
> >
> > We can write a simple demo to prove that reactive or plain Netty can
> > improve throughput using the omega/alpha architecture.
> >
> >
> > > On Nov 1, 2018, at 8:29 AM, Willem Jiang <[email protected]>
> wrote:
> > >
> > > I was thinking of using an actor to do the reactive work, but it looks
> > > like we could make alpha simpler by implementing some logic on the
> > > Omega side, such as the timeout function.
> > >
> > > Willem Jiang
> > >
> > > Twitter: willemjiang
> > > Weibo: 姜宁willem
> > >
> > > On Thu, Nov 1, 2018 at 1:57 AM wjm wjm <[email protected]> wrote:
> > >>
> > >> Async alone is not enough; it would be better to be reactive.
> > >>
> > >> 赵俊 <[email protected]> wrote on Wed, Oct 31, 2018 at 5:07 PM:
> > >>
> > >>> Hi, Willem
> > >>>
> > >>> I think making only the last invocation async is a limitation for
> > >>> performance tuning, since the blocking gRPC invocation already uses
> > >>> the async path internally and only blocks in futureTask.get().
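As background for the point above: a blocking call is typically just the
async call plus a wait on its future. A minimal, self-contained
illustration of that pattern (not the actual grpc-java code; names are
invented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// A "blocking" stub is essentially the async call plus a wait on its
// future, so the blocking style pays for the async machinery and then
// parks the calling thread anyway.
public class BlockingOverAsync {
  // The async primitive: returns immediately with a future.
  static CompletableFuture<String> callAsync(String request) {
    return CompletableFuture.supplyAsync(() -> "ack:" + request);
  }

  // The blocking facade: the same async call, just blocked in get().
  static String callBlocking(String request)
      throws ExecutionException, InterruptedException {
    return callAsync(request).get();
  }
}
```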
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>> On Oct 30, 2018, at 4:51 PM, Willem Jiang <[email protected]>
> > >>> wrote:
> > >>>>
> > >>>> Thanks for the feedback.
> > >>>> I just used one participant to show the simplest form of service
> > >>>> interaction.
> > >>>> I have added some words about the "initial service" and the
> > >>>> "participant service".
> > >>>>
> > >>>> Now we could think about how to reduce the overhead of the
> > >>>> distributed transaction. I think we can make the last invocation
> > >>>> async to speed up the processing, but it could be a challenge for
> > >>>> us to leverage the async remote invocation without introducing the
> > >>>> risk of losing messages.
> > >>>>
> > >>>> Any thoughts?
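One possible answer to the message-loss concern above, sketched with
invented names (not the actual Omega API): persist the event before
sending, delete it only on the ack, and re-send whatever is still pending
after a restart. A real implementation would use a durable store; the map
here is just for the sketch.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Persist-before-send: a crash between the send and the ack leaves the
// event in the pending store, so it can be re-sent instead of being lost.
public class ReliableAsyncSender {
  private final Map<String, String> pending = new ConcurrentHashMap<>();

  // Stand-in for the real transport: completes when the server has acked.
  private CompletableFuture<Void> transportSend(String payload) {
    return CompletableFuture.completedFuture(null);
  }

  public CompletableFuture<Void> sendAsync(String eventId, String payload) {
    pending.put(eventId, payload);                 // 1. persist first
    return transportSend(payload)                  // 2. fire the async send
        .thenRun(() -> pending.remove(eventId));   // 3. drop only on ack
  }

  // On restart, everything still pending is re-sent instead of being lost.
  public void resendPending() {
    pending.forEach(this::sendAsync);
  }

  public int pendingCount() {
    return pending.size();
  }
}
```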
> > >>>>
> > >>>> Willem Jiang
> > >>>>
> > >>>> Twitter: willemjiang
> > >>>> Weibo: 姜宁willem
> > >>>> On Tue, Oct 30, 2018 at 4:37 PM Zheng Feng <[email protected]>
> wrote:
> > >>>>>
> > >>>>> Great work! It would be clearer if you marked the invocation
> > >>>>> arrows with the step numbers. A distributed transaction usually
> > >>>>> has two or more participants, so the sequence diagram should be
> > >>>>> improved to show these actors.
> > >>>>>
> > >>>>> It would also be helpful to describe what the "initial service"
> > >>>>> and the "participant service" are.
> > >>>>>
> > >>>>> Willem Jiang <[email protected]> wrote on Tue, Oct 30, 2018 at 4:23 PM:
> > >>>>>
> > >>>>>> Hi Team,
> > >>>>>>
> > >>>>>> I wrote a page[1] to analyze the overhead that Saga or TCC could
> > >>>>>> introduce.
> > >>>>>> Please check it out and let me know what you think.
> > >>>>>> You can either reply to this mail or just add a comment on the
> > >>>>>> wiki page.
> > >>>>>>
> > >>>>>> [1] https://cwiki.apache.org/confluence/display/SERVICECOMB/Distributed+Transaction+Coordinator+Overhead
> > >>>>>>
> > >>>>>> Willem Jiang
> > >>>>>>
> > >>>>>> Twitter: willemjiang
> > >>>>>> Weibo: 姜宁willem
> > >>>>>>
> > >>>
> > >>>
> >
>
