I totally agree with @bismy, and we should be very careful with these
things.

2018-08-20 11:22 GMT+08:00 bismy <bi...@qq.com>:

> I think async is the problem I mentioned. While the transaction is in
> progress, the transaction log is not actually persisted, and transaction
> recovery may rely on these logs.
>
>
> However, I think this is an implementation problem, and it may need
> complicated algorithms to make sure the really important data is
> persisted.
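The point above — persist the really important data synchronously and defer the rest — could be sketched roughly as below. This is a hypothetical illustration, not Saga Pack's actual API; the class name `EventRouter` and the event-type strings are assumptions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: write critical events (e.g. an abort) to the DB
// synchronously, and queue everything else for an async writer thread.
public class EventRouter {
  private final BlockingQueue<String> asyncQueue = new LinkedBlockingQueue<>();

  // The event names here are illustrative assumptions.
  public boolean isCritical(String eventType) {
    return eventType.equals("TxAbortedEvent") || eventType.equals("TxEndedEvent");
  }

  public void write(String eventType, Runnable syncDbWrite) {
    if (isCritical(eventType)) {
      syncDbWrite.run();           // block until the DB confirms the write
    } else {
      asyncQueue.offer(eventType); // a background thread drains this later
    }
  }

  public int pendingAsyncWrites() {
    return asyncQueue.size();
  }
}
```

The trade-off is that only the events needed for recovery pay the synchronous-write cost.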
>
>
> ------------------ Original Message ------------------
> From: "willem.jiang"<willem.ji...@gmail.com>;
> Sent: Monday, August 20, 2018, 11:15 AM
> To: "dev"<dev@servicecomb.apache.org>;
>
> 主题: Re: Performance tuning of ServiceComb Saga Pack
>
>
>
> We could use an async writer to store the events to the database when
> using redis. The Redis cluster provides persistent storage at the same time.
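The async writer mentioned above might look roughly like this sketch: events are queued after the redis write and a background thread batches them into the DB. The class name and the in-memory list standing in for the database are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of an async DB writer: the caller acks as soon as
// the redis write succeeds, and a background thread drains the queue
// into the database in batches.
public class AsyncDbWriter {
  private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
  private final List<String> persisted = new ArrayList<>(); // stands in for the DB

  public void submit(String event) {
    queue.offer(event); // called right after the redis write succeeds
  }

  // Drain whatever is queued into one DB batch; returns the batch size.
  public int drainOnce() {
    List<String> batch = new ArrayList<>();
    queue.drainTo(batch);
    persisted.addAll(batch); // real code would run one batched INSERT here
    return batch.size();
  }

  public List<String> persisted() {
    return persisted;
  }
}
```

Batching the drain keeps DB round-trips off the request path, which is where the thread's reported latency came from.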
>
>
>
> Willem Jiang
>
> Twitter: willemjiang
> Weibo: 姜宁willem
>
> On Mon, Aug 20, 2018 at 11:08 AM, bismy <bi...@qq.com> wrote:
>
> > One doubt about redis; maybe I am not correct.
> >
> >
> > For transactions, persistence is very important. If we use redis, the
> > transactions may lose the persistence property. Even if you provide a retry
> > mechanism, what happens if the redis instance is restarted and the
> > in-memory data is lost?
> >
> >
> > Do you mean using redis clusters, syncing the in-memory data between
> > cluster nodes, and assuming the cluster is highly available?
> >
> >
> > ------------------ Original Message ------------------
> > From: "willem.jiang"<willem.ji...@gmail.com>;
> > Sent: Thursday, August 16, 2018, 11:00 AM
> > To: "dev"<dev@servicecomb.apache.org>;
> >
> > 主题: Re: Performance tuning of ServiceComb Saga Pack
> >
> >
> >
> > We cannot guarantee updating redis and the DB at the same time; we can
> > only do the retry in our code.
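The "do the retry in our code" idea above could be sketched as a bounded retry loop around either write (redis or the DB). The class name, the attempt limit, and the use of `BooleanSupplier` are illustrative assumptions:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: keep retrying a status write (to redis or the DB)
// up to a bounded number of attempts before surfacing the failure.
public class RetryingWriter {
  private final int maxAttempts;

  public RetryingWriter(int maxAttempts) {
    this.maxAttempts = maxAttempts;
  }

  // Returns true if the write succeeded within maxAttempts tries.
  public boolean writeWithRetry(BooleanSupplier write) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      if (write.getAsBoolean()) {
        return true;
      }
      // real code would back off here (e.g. sleep with jitter)
    }
    return false; // caller must handle this, e.g. let the scanner retry later
  }
}
```

Bounding the attempts matters: an unbounded retry loop on the request path would just move the latency problem around.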
> >
> >
> >
> > Willem Jiang
> >
> > Twitter: willemjiang
> > Weibo: 姜宁willem
> >
> > On Thu, Aug 16, 2018 at 10:29 AM, fu chengeng <oliug...@hotmail.com>
> > wrote:
> >
> > > Hi Willem
> > > Why not just store the 'finished transaction' data to the db in an async
> > > way? Can we guarantee that updates on both
> > > the db and redis succeed at the same time when the transaction is aborted?
> > > From: Willem Jiang
> > > Sent: Thursday, August 16, 09:40
> > > Subject: Re: Performance tuning of ServiceComb Saga Pack
> > > To: dev@servicecomb.apache.org
> > >
> > >
> > > Hi Amos,
> > >
> > > Alpha sends a response to the Omega once the message is updated into
> > > redis; then we just store the transaction events into the database in an
> > > async way (we don't change the states here). The current Redis cluster
> > > provides persistent storage, which could save us a lot of effort. Now we
> > > just use redis as a smaller table for tracking all the unfinished
> > > transaction statuses to get better performance. If the transaction is
> > > aborted, we can update the transaction in the DB and redis at the same
> > > time; if either of those calls fails, I think we just keep trying to
> > > update the status.
> > >
> > > Willem Jiang
> > >
> > > Twitter: willemjiang
> > > Weibo: 姜宁willem
> > >
> > > On Wed, Aug 15, 2018 at 10:48 PM, Zheng Feng wrote:
> > > > Hi Willem,
> > > >
> > > > It makes sense to use redis to store the pending transactions (I assume
> > > > that you mean these are the "HOT" ones). But we should be very careful
> > > > to "write" the transaction status, and it should be stored in the
> > > > database at last. So I think we must make sure the transaction status
> > > > in redis and the DB is consistent, and we SHOULD NOT lose any status of
> > > > the transaction.
> > > >
> > > > How will you use redis and the database when storing the status of a
> > > > transaction?
> > > > 1. Write to redis, and redis will sync to the database later. If it
> > > > failed, roll back the transaction.
> > > > 2. Write to both redis and the database. If either of them failed, roll
> > > > back the transaction.
> > > >
> > > > We need more detail :)
> > > >
> > > > Amos
> > > >
> > > > 2018-08-15 8:48 GMT+08:00 Willem Jiang:
> > > > > Hi,
> > > > >
> > > > > With the help of JuZheng[1][2], we managed to deploy the
> > > > > saga-spring-demo[3] into K8s and start the Jmeter tests for it. By
> > > > > running the test for a while, the DB CPU usage is very high and the
> > > > > response time is up to 2~3 seconds per call.
> > > > >
> > > > > It looks like all the events are stored into the database in the same
> > > > > table and never cleaned. Now we are thinking of using redis to store
> > > > > the hot data (the saga transactions which are not closed), and
> > > > > putting the cold data (which is used for auditing) into the database.
> > > > > In this way it could keep the event data smaller, and the event
> > > > > scanner[4] can just go through the unfinished Saga transactions to
> > > > > fire the timeout event or the compensation event.
> > > > >
> > > > > Any thoughts?
> > > > >
> > > > > [1] https://github.com/apache/incubator-servicecomb-saga/pull/250
> > > > > [2] https://github.com/apache/incubator-servicecomb-saga/pull/252
> > > > > [3] https://github.com/apache/incubator-servicecomb-saga/tree/master/saga-demo/saga-spring-demo
> > > > > [4] https://github.com/apache/incubator-servicecomb-saga/blob/44491f1dcbb9353792cb44d0be60946e0e4d7a1a/alpha/alpha-core/src/main/java/org/apache/servicecomb/saga/alpha/core/EventScanner.java
> > > > >
> > > > > Willem Jiang
> > > > >
> > > > > Twitter: willemjiang
> > > > > Weibo: 姜宁willem
> >
>
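The hot/cold split discussed in the quoted thread — a small store for unfinished sagas that the scanner walks, and the database for audit records of closed ones — could look roughly like this sketch. The class name and the in-memory map/list standing in for redis and the DB are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the hot/cold split: unfinished sagas live in a
// small hot store (redis in the proposal); closing a saga moves it to
// the cold store (the database) kept for auditing.
public class HotColdStore {
  private final Map<String, String> hot = new HashMap<>(); // stands in for redis
  private final List<String> cold = new ArrayList<>();     // stands in for the DB

  public void begin(String globalTxId) {
    hot.put(globalTxId, "IN_PROGRESS");
  }

  public void close(String globalTxId) {
    if (hot.remove(globalTxId) != null) {
      cold.add(globalTxId); // real code would INSERT an audit row here
    }
  }

  // The scanner only has to walk this small hot set for timeouts.
  public int unfinishedCount() {
    return hot.size();
  }

  public int auditedCount() {
    return cold.size();
  }
}
```

Keeping the scanned set small is what addresses the reported symptom: the EventScanner no longer has to scan an ever-growing event table.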
