Hi Amos

Alpha sends the response to Omega once the message is updated in Redis,
then we just store the transaction events into the database asynchronously
(we don't change the states here).
The current Redis cluster provides persistent storage, so it could save us a
lot of effort.
Now we just use Redis as a smaller table for tracking all the unfinished
transaction statuses, to get better performance.
If the transaction is aborted, we can update the transaction in the DB and
Redis at the same time; if either of those calls fails, I think we just keep
retrying the status update.
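
Here is a minimal sketch of that abort path, assuming a Jedis client and a
plain JDBC connection; the class name, the "saga:pending:" key layout and the
tx_event table/columns below are my own placeholders, not the actual
alpha-server code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import redis.clients.jedis.Jedis;

public class TxStatusUpdater {

  private final Jedis redis;
  private final Connection db;

  public TxStatusUpdater(Jedis redis, Connection db) {
    this.redis = redis;
    this.db = db;
  }

  // Mark an aborted transaction in both stores and keep retrying until
  // both writes succeed, as described above.
  public void markAborted(String globalTxId) throws InterruptedException {
    boolean redisDone = false;
    boolean dbDone = false;
    while (!redisDone || !dbDone) {
      if (!redisDone) {
        try {
          // Hot data: the small "pending transactions" record in Redis.
          redis.hset("saga:pending:" + globalTxId, "status", "ABORTED");
          redisDone = true;
        } catch (RuntimeException e) {
          // Jedis signals connection problems with unchecked exceptions;
          // fall through and retry.
        }
      }
      if (!dbDone) {
        try (PreparedStatement ps = db.prepareStatement(
            "UPDATE tx_event SET status = ? WHERE global_tx_id = ?")) {
          // Cold data: persist the final status for auditing.
          ps.setString(1, "ABORTED");
          ps.setString(2, globalTxId);
          ps.executeUpdate();
          dbDone = true;
        } catch (SQLException e) {
          // Fall through and retry on the next loop iteration.
        }
      }
      if (!redisDone || !dbDone) {
        Thread.sleep(500);  // simple fixed backoff before retrying
      }
    }
  }
}

Since both updates are idempotent in this sketch, the loop can simply be
replayed if Alpha restarts in the middle.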




Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem

On Wed, Aug 15, 2018 at 10:48 PM, Zheng Feng <zh.f...@gmail.com> wrote:

> Hi Willem,
>
> It makes sense to use Redis to store the pending transactions (I assume
> that you mean these are the "HOT" ones). But we should be very careful when
> we "write" the transaction status, and it should be stored in the database
> in the end. So I think we must make sure the transaction status in Redis and
> the DB is consistent and we SHOULD NOT lose any status of the transaction.
>
> How will you use Redis and the database when storing the status of the
> transaction?
> 1. Write to Redis, and Redis syncs to the database later. If that fails,
> roll back the transaction.
> 2. Write to both Redis and the database. If either of them fails, roll back
> the transaction.
>
> We need more detail :)
>
> Amos
>
> 2018-08-15 8:48 GMT+08:00 Willem Jiang <willem.ji...@gmail.com>:
>
> > Hi,
> >
> > With the help of JuZheng[1][2], we managed to deploy the saga-spring-demo[3]
> > into K8s and start the JMeter tests for it. After running the test for a
> > while, the DB CPU usage is very high and the response time is up to 2~3
> > seconds per call.
> >
> > It looks like all the events are stored in the same database table and
> > never cleaned up.
> > Now we are thinking of using Redis to store the hot data (the saga
> > transactions which are not closed yet), and putting the cold data (which
> > is used for auditing) into the database. In this way it could keep the
> > event data smaller, and the event scanner[4] can just go through the
> > unfinished Saga transactions to fire the timeout event or the
> > compensation event.
> >
> > Any thought?
> >
> > [1]https://github.com/apache/incubator-servicecomb-saga/pull/250
> > [2]https://github.com/apache/incubator-servicecomb-saga/pull/252
> > [3]
> > https://github.com/apache/incubator-servicecomb-saga/tree/master/saga-demo/saga-spring-demo
> > [4]
> > https://github.com/apache/incubator-servicecomb-saga/blob/44491f1dcbb9353792cb44d0be60946e0e4d7a1a/alpha/alpha-core/src/main/java/org/apache/servicecomb/saga/alpha/core/EventScanner.java
> >
> > Willem Jiang
> >
> > Twitter: willemjiang
> > Weibo: 姜宁willem
> >
>
