One option would be to use a persistent actor as your read side as well:
store the last offset you have seen and stream from after that offset once
the actor has recovered completely. This way you can also provide snapshots
(and possibly delete the actor to replay the entire history into a fresh view).
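A minimal, dependency-free sketch of the "resume from last seen offset" idea described above. In a real Akka application the view would be a persistent actor and the events would arrive from a Persistence Query stream; here an in-memory log stands in for the journal, and all names are my own, not an Akka API:

```scala
// Sketch: a read-side view that remembers the last offset it has seen,
// so that on recovery it only consumes events strictly after that offset.
final case class Event(offset: Long, payload: String)

class ResumableView {
  var lastOffset: Long = 0L // would be persisted together with the view state
  val projected = scala.collection.mutable.ListBuffer.empty[String]

  // Consume only events after the stored offset; replayed events are skipped.
  def consume(journal: Seq[Event]): Unit =
    journal.filter(_.offset > lastOffset).foreach { e =>
      projected += e.payload
      lastOffset = e.offset // advance the offset in the same step as the view
    }
}
```

Calling consume twice with an overlapping journal only applies each event once, which is the property that makes the view safe to restart.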
Hi everyone. Just waking up this thread (once again) with a related
challenge/question.
I used to use PersistentView for my read model before it got
deprecated.
I have a CQRS architecture where both the write and read sides are cluster
singletons. On the write side I use persistent actors to
There is an offset: Long that the query journal can implement in whatever
way is suitable. The easiest is to use a timestamp, or a global sequence
number if you have that luxury (e.g. an SQL database).
The Cassandra plugin uses a timeuuid (a Cassandra data type) column and
therefore supports queries
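To illustrate the two offset representations mentioned above (a global sequence number vs. a timestamp), here is a small sketch; these case classes are my own model of the idea, not the Akka API:

```scala
// Two ways a query journal might represent "how far have I read":
sealed trait ReadOffset { def value: Long }
final case class SequenceOffset(value: Long) extends ReadOffset  // e.g. SQL auto-increment
final case class TimestampOffset(value: Long) extends ReadOffset // e.g. epoch millis

// Resuming a query means asking for events strictly after the stored offset.
def after(events: Seq[(Long, String)], from: ReadOffset): Seq[(Long, String)] =
  events.filter { case (off, _) => off > from.value }
```

Either representation works as long as it imposes a total order the journal can query against.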
Hi Patrik. Thanks for this. I reckon I get it now :-)
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives: https://groups.google.com/group/akka-user
Hi everyone. Just waking up this thread with a related challenge/question.
One of the objectives of Persistence Query was to address the limitation that a
PersistentView was only able to project from a single persistence id. As we all
know, this meant that trying to do CQRS in a DDD based
On Thu, Dec 10, 2015 at 7:53 AM, Alan Johnson wrote:
> I'm new to this space, but yes, it seems to me like the read store should
> store lastProcessEventNr someplace, ideally updated in the same transaction
> that updates the view.
Welcome to the community then :-)
Yup,
Hello,
now I better understand queries like allPersistenceIds() and
eventsByPersistenceId(persistenceId). What can I do if the persistence query
implementation I use does not support a live stream of events? Should I
implement polling myself (schedule a message to self and, after receiving
it, run the query again from the last offset)?
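The polling approach suggested in the question can be sketched without any Akka dependencies; in a real actor the poll() call would be driven by a message scheduled to self via the scheduler, and all names here are illustrative:

```scala
// Sketch: periodically re-run the query from the last seen offset when the
// journal does not offer a live stream.
final case class Evt(offset: Long, data: String)

class PollingProjection(query: Long => Seq[Evt]) {
  var lastOffset: Long = 0L
  val view = scala.collection.mutable.ListBuffer.empty[String]

  // Invoked on each scheduled tick (e.g. a message to self in an actor).
  def poll(): Unit =
    query(lastOffset).foreach { e =>
      view += e.data
      lastOffset = e.offset
    }
}
```

Because every poll starts from the stored offset, ticks that fire when no new events exist are harmless no-ops.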
What you want to do is process all the events coming from the
Persistence Query stream and create a view that can be queried: for example,
transform all the events into normalized data and insert them into
Elasticsearch.
A quick example would look like the following (using
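The quoted example is cut off above, so here is a dependency-free sketch of the same shape: fold the event stream into normalized documents and upsert them into an index. A Map stands in for Elasticsearch; in Akka this would be a Source from Persistence Query run into a sink that does the inserts (all names here are hypothetical):

```scala
// Sketch: normalize events into per-entity documents and upsert them.
final case class UserEvent(userId: String, field: String, value: String)

def project(events: Seq[UserEvent]): Map[String, Map[String, String]] =
  events.foldLeft(Map.empty[String, Map[String, String]]) { (index, e) =>
    // Merge the event into the existing document for this entity (upsert).
    val doc = index.getOrElse(e.userId, Map.empty[String, String]) + (e.field -> e.value)
    index + (e.userId -> doc)
  }
```

Each event updates one document, so the projection is idempotent per (field, value) and order-sensitive only within a single field, mirroring how an index upsert behaves.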
Thanks for that. I'm not very familiar with the Akka Streams API yet, so
any feedback is very welcome.
I guess the cluster singleton could work for my use case; it could just
hand the work off to sharded workers that do the data normalization, so
that it does not become a bottleneck.
On Friday,