Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser

Hi Ashley,

thanks for bringing up these questions. Here are some general comments:

as you already mentioned (in different words), akka-persistence is 
currently optimized around write models rather than read models (= Q in 
CQRS), i.e. it is optimized for fast, scalable persistence and recovery of 
stateful actors (= PersistentActor).


For full CQRS support, the discussions so far (in several other threads) 
make the assumption that both write and read models are backed by the 
same backend store (assuming read models are maintained by a 
PersistentView actor, receiving a stream of events from synthetic or 
physical topics). This is a severe limitation, IMO. As Greg already 
mentioned elsewhere, some read models may be best backed by a graph 
database, for example. Although a graph database may be good for backing 
certain read models, it may have limitations for fast logging of events 
(something Kafka or Cassandra are very good at). Consequently, it 
definitely makes sense to have different backend stores for write and 
read models.


If akka-persistence should have support for CQRS in the future, its 
design/API should be extended to allow different backend stores for 
write and read models (of course, a provider may choose to use the same 
backend store to serve both which may be a reasonable default). This way 
PersistentActors log events to one backend store and PersistentViews (or 
whatever consumer) generate read models from the other backend store. 
Data transfer between these backend stores can be 
implementation-specific for optimization purposes. For example


- Cassandra (for logging events) => Spark (to batch-process logged 
events) => Graph database XY (to store events processed with Spark), or
- Kafka (for logging events) => Spark Streaming (to stream-process 
logged events) => Database XY (to store events processed with Spark 
Streaming)

- ...
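The pipelines above share one shape: an append-only write-side store, a processing step, and a differently-shaped read-side store. A minimal sketch of that shape in plain Python (no Kafka, Cassandra, or Spark APIs are used; all names are illustrative):

```python
from collections import defaultdict

# Write-side store: an append-only event log, optimized for fast writes.
journal = []

def log_event(persistence_id, seq_nr, payload):
    journal.append({"pid": persistence_id, "seq": seq_nr, "payload": payload})

log_event("order-1", 1, "OrderCreated")
log_event("order-1", 2, "ItemAdded")
log_event("order-2", 1, "OrderCreated")

# Read-side store: a denormalized view (here: event count per aggregate),
# populated by batch-processing the log -- the role Spark plays above.
read_model = defaultdict(int)
for event in journal:
    read_model[event["pid"]] += 1
```

In the real pipelines the two stores are different products with different optimization goals; only the transfer step between them is implementation-specific.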

These are just two examples of how read model backend stores can be 
populated in a highly scalable way (both in batch and streaming mode). 
Assuming akka-persistence provides an additional plugin API for storage 
backends on the read model side (XY in the examples above), a wide range 
of CQRS applications could be covered, with whatever scalability and/or 
ordering requirements are needed by the respective applications. In case 
you want to read more about it, take a look at akka-analytics 
https://github.com/krasserm/akka-analytics (it is very much work in 
progress as I'm waiting for Spark to upgrade to Akka 2.3 and Kafka to 
Scala 2.11)


WDYT?

Cheers,
Martin

On 19.08.14 04:52, Ashley Aitken wrote:


I'm keen to hear other people's thoughts on the choice of an event 
store for Akka Persistence for doing CQRS.


As mentioned in my other post, I believe that Akka Persistence only 
provides part of the story for CQRS (but a very important part) and 
that other stores will most likely be needed for query models (both 
SQL and NOSQL stores).


Since they are project specific I would like to focus here on what 
store is best for Akka Persistence for CQRS.


Right now my leading contenders are Kafka and Event Store (but I 
haven't thought too much about Cassandra or MongoDB etc).  My 
knowledge of all of these is limited so please excuse and correct me 
if any of my statements are wrong.



KAFKA: Apache Kafka is publish-subscribe messaging rethought as a 
distributed commit log.

- Persistent topics for publishing and subscribing
- Highly scalable and distributed
- Need to manually create topics for projections
- Each topic has its own copy of events
- Writing to multiple topics is not atomic
- Allows logs to be kept for different amounts of time
- Battle-tested technology from LinkedIn
- Not generally used as a lifetime store for events

http://kafka.apache.org
https://github.com/krasserm/akka-persistence-kafka/
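The "each topic has its own copy of events" and "writing to multiple topics is not atomic" points can be illustrated with a toy model of Kafka-style topics (plain Python, no Kafka API; names are made up):

```python
from collections import defaultdict

# Topics as named, persistent logs.
topics = defaultdict(list)

def persist(aggregate_topic, projection_topic, event):
    # A projection over several aggregates needs its own topic holding a
    # second copy of each event; the two appends are separate operations,
    # so if the second one fails the topics silently diverge.
    topics[aggregate_topic].append(event)   # write 1
    topics[projection_topic].append(event)  # write 2, not atomic with write 1

persist("order-1", "all-orders", "OrderCreated")
persist("order-2", "all-orders", "OrderCreated")
```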


EVENT STORE: The open-source, functional database with Complex Event 
Processing in JavaScript.

- Built specifically for storing and projecting events
- Stores each event once and creates virtual projection streams
- Journal plugin at an early stage of development
- Projections are still beta but finalising soon
- JSON serialisation (which has +ve and -ve points)
- JavaScript for projection stream specification
- Atom interface helps with debugging
- Not as distributed or scalable?
- Includes temporal criteria for streams

http://geteventstore.com
https://github.com/EventStore/EventStore.Akka.Persistence
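The "store each event once, create virtual projection streams" point contrasts with the per-topic copies in the Kafka model. A sketch of the idea in plain Python (Event Store itself specifies projections in JavaScript, which is not shown here):

```python
# Every event is stored exactly once in a single log.
all_events = [
    {"stream": "order-1", "type": "OrderCreated"},
    {"stream": "order-1", "type": "ItemAdded"},
    {"stream": "customer-7", "type": "CustomerRegistered"},
]

def virtual_stream(predicate):
    # A "projection stream" is just a predicate evaluated over the log,
    # producing a virtual stream without a second stored copy.
    return [e for e in all_events if predicate(e)]

order_events = virtual_stream(lambda e: e["stream"].startswith("order-"))
creations = virtual_stream(lambda e: e["type"] == "OrderCreated")
```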

Personally, I like the potential Kafka has to be the event store / log 
for CQRS but also the store for logs in big data processing and 
analytics.  However, the fact that events need to be manually 
replicated to different topics, and the problems that would be caused if 
this weren't done consistently, is a worry.


On the other hand, Event Store has been specifically designed and 
built for event storage and projection processing by a leader in the 
field of CQRS.  However, it uses a unique set of technologies and I am 
not sure if it has been battle tested by many, or of its long-term viability.


What do 

Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Patrik Nordwall
On Tue, Aug 19, 2014 at 8:51 AM, Martin Krasser krass...@googlemail.com
wrote:

  Hi Ashley,

 thanks for bringing up these questions. Here are some general comments:

 as you already mentioned (in different words), akka-persistence is
 currently optimized around write models rather than read models (= Q in
 CQRS), i.e. it is optimized for fast, scalable persistence and recovery of
 stateful actors (= PersistentActor).

 For full CQRS support, the discussions so far (in several other threads)
 make the assumption that both write and read models are backed by the same
 backend store (assuming read models are maintained by PersistentView actor,
 receiving a stream of events from synthetic or physical topics).


That is not my view of it, at all. PersistentView is a way to replicate the
events to the read side, which typically will store a denormalized
representation optimized for queries. That query store is typically not the
same as the event store, because the requirements are very different.

Some simple read models may keep this representation in-memory but that is
not what I see as the most common case.

/Patrik


Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Ashley Aitken

Thank you Greg, I hadn't thought of the Event Store JVM Client for the read 
model https://github.com/EventStore/EventStore.JVM

So I assume one would generally have a ConnectionActor for each custom 
event stream that is required to keep a particular query store up-to-date 
on the read side, and these could be in the same or different applications?

It would be nice if PersistentView could just specify an identifier for a 
custom event stream (e.g. from Event Store) and process those events 
appropriately (start after previous last event, restart as needed, etc.)
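A toy model of what such a view could look like conceptually: it is configured with a stream identifier and remembers the last sequence number processed, so a restart resumes where it left off (hypothetical names; this is not the PersistentView API):

```python
class StreamView:
    """Consumes a named event stream, remembering the last sequence
    number processed so a restart resumes after the last event."""

    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.last_seq_nr = 0
        self.processed = []

    def update(self, stream):
        # Deliver only events newer than the last one seen (restart-safe).
        for seq_nr, payload in stream:
            if seq_nr > self.last_seq_nr:
                self.processed.append(payload)
                self.last_seq_nr = seq_nr

stream = [(1, "a"), (2, "b"), (3, "c")]
view = StreamView("custom-stream-42")
view.update(stream[:2])
view.update(stream)  # full replay of the stream; only "c" is new
```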

Also, I still can't see a solution for sagas that can maintain their state 
over crashes, e.g. as a PersistentActor, but also track or replay events 
after a particular time from another PersistentActor(s).  But this is on 
the write side.


On Tuesday, 19 August 2014 13:18:35 UTC+8, Greg Young wrote:

 everything you list here is available today via akka.persistence + event 
 store adapter, plus a durable subscription (akka event store client) on the 
 read model side.

 On Monday, August 18, 2014 12:01:36 PM UTC-4, Ashley Aitken wrote:

 Hi Roland (and everyone),


 Welcome back Roland - I hope you had a great vacation.


 Thank you for your post.  



 Here’s my response summary:


 I believe Akka needs to allow actors to:


 (i) persist events with as much information as possible, as efficiently as 
 possible, on the write side, so that the store can facilitate the read side 
 extracting them according to whatever criteria are needed,


 (ii) persist events that don’t relate to a change in state of the actor 
 per se, which I assume is already achievable since an actor can just ignore 
 them on replay, 


 (iii) read from (and replay) streams of events on the read and write side 
 according to a range of criteria supported and defined within the store or 
 via the store API (e.g. using a DSL), and


 (iv) reliably (at least once) deliver information to other read side 
 store(s) and systems above and beyond the store used for persisting the 
 events.


 I believe each of these is readily achievable with Akka but:


 (i) doesn’t mean explicitly persisting the events to specific topics as 
 you suggest in your (1) (although this may be how some stores implement the 
 required functionality on the read side). Instead it means transparently 
 including information like the actorId, event type, actor type, probably 
 the time and possibly information to help with causal ordering (see my next 
 post).


 (iii) with (i) would enable the read side (if the store supports it) to 
 read all events from a particular actor(s), of particular event types, to 
 read events from a particular type(s) of actors, and to read all events. 
  It would also need to allow the read side to read from where it last 
 finished reading, from now, and from the start again.  (iv) is necessary 
 for projections.  
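Points (i) and (iii) together amount to persisting each event inside a metadata envelope the read side can query. A sketch under that assumption (field names are illustrative, not an actual akka-persistence schema):

```python
# Each persisted event transparently carries actor id, actor type,
# event type and a timestamp alongside the payload.
store = [
    {"actor_id": "order-1", "actor_type": "Order",
     "event_type": "OrderCreated", "ts": 100},
    {"actor_id": "order-2", "actor_type": "Order",
     "event_type": "OrderCreated", "ts": 110},
    {"actor_id": "cust-7", "actor_type": "Customer",
     "event_type": "CustomerRegistered", "ts": 120},
]

# The read-side selections from (iii), expressed against the metadata:
def by_actor(actor_id):
    return [e for e in store if e["actor_id"] == actor_id]

def by_actor_type(actor_type):
    return [e for e in store if e["actor_type"] == actor_type]

def by_event_type(event_type):
    return [e for e in store if e["event_type"] == event_type]

def since(ts):
    # "read from where it last finished reading" / "from now"
    return [e for e in store if e["ts"] >= ts]
```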



 If you are interested, here’s my detailed explanation:


 I think some of the confusion surrounding these issues is caused by the 
 fact that we seem to be discussing (and, if I may suggest, Akka appears to 
 be trying to implement) three quite different (but also somewhat related) 
 pieces of functionality within this domain.  These are:


 A. Actor Persistence


 The ability to persist actor state changes incrementally (or wholly) and 
 reconstruct that state at a later time, which we know as event sourcing.  I 
 think Akka provides a great distributed and scalable mechanism for doing 
 this with the current akka.persistence.


 B. Publish/Subscribe to Persistent Queues/Topics


 This functionality would allow actors to write data/events/messages to 
 one (or more) topics and to subscribe to receive similar from one or more 
 topics.  These differ from normal publish/subscribe queues in that they are 
 persistent and the consumer can reread from the topic.
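The persistent topic described above can be sketched as follows: unlike a classic pub/sub queue, the log retains everything and each consumer tracks its own offset, so it can re-read from the beginning (plain Python model; LogProducer/LogConsumer are the post's conceptual names, not an API):

```python
class PersistentTopic:
    """A topic that retains all messages; each consumer keeps its own
    offset and may re-read from any earlier position."""

    def __init__(self):
        self.log = []

    def publish(self, msg):
        self.log.append(msg)

    def read(self, offset):
        # Reading never consumes: the log is left intact.
        return self.log[offset:]

topic = PersistentTopic()
for e in ["e1", "e2", "e3"]:
    topic.publish(e)

consumer_offset = 0
first = topic.read(consumer_offset)      # everything so far
consumer_offset += len(first)
topic.publish("e4")
newer = topic.read(consumer_offset)      # only what arrived since
replay = topic.read(0)                   # re-read from the beginning
```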


 This is what I think of as the LogProducer and LogConsumer, of which 
 PersistentActor and PersistentView can be thought of as specialisations, 
 i.e. a single topic for each actor.  The current and popular example of a 
 store for this sort of functionality, as you know, is Kafka. 


 C. CQRS with Event Sourcing


 And finally, there is CQRS with Event Sourcing, which I believe is much 
 more than (A) and (B), and particularly doesn't necessarily require (B) for 
 all event stores.  So if Akka were to implement (B), which I think would be 
 very useful for other reasons, it would not specifically be for CQRS.


 Please consider this diagram overviewing CQRS with Event Sourcing:


 
 https://www.dropbox.com/s/z2iu0xi4ki42sl7/annotated_cqrs_architecture.jpg
 


 adapted from 


 http://www.gridshore.nl/wp-content/uploads/cqrs_architecture.jpg


 As I understand it, CQRS separates the write model and store from one or 
 *more* read models and stores, with each model and store being optimised 
 for their particular role.  CQRS says nothing specific about the types of 
 store (e.g. SQL or NOSQL, event sourced or not) and how 

[akka-user] akka app design questions

2014-08-19 Thread Fej
Hello,

I'm trying to learn akka and need a clarification on how to design an 
application. Let's imagine a simple HTTP CRUD app which handles some 
documents stored in mongodb. For each of these operations I'd need at least 
one actor processing the request, fetching the db record and formatting the 
output? When do I create actors? Is the idea to create a new actor for each 
new http request? Actors can be started and stopped, but when should I do 
so in the context of this simple app?

To handle blocking calls to an RDBMS the documentation suggests creating a 
dedicated actor and making sure to configure a thread pool which is either 
dedicated for this purpose or sufficiently sized. Is there any example 
implementation?
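For reference, the pattern the documentation describes is usually expressed as a dedicated dispatcher in application.conf; a sketch with illustrative names and pool sizes (not copied from any project):

```hocon
# Illustrative application.conf fragment: a thread pool reserved for
# actors that make blocking JDBC/Mongo calls, so they cannot starve
# the default dispatcher.
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 8
    core-pool-size-factor = 2.0
    core-pool-size-max = 16
  }
  throughput = 1
}
```

The blocking actor is then created with `Props[...].withDispatcher("blocking-io-dispatcher")` so only that pool is tied up while a call blocks.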

Thanks!

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 10:00, Patrik Nordwall wrote:




On Tue, Aug 19, 2014 at 8:51 AM, Martin Krasser 
krass...@googlemail.com wrote:


Hi Ashley,

thanks for bringing up these questions. Here are some general
comments:

as you already mentioned (in different words), akka-persistence is
currently optimized around write models rather than read models (=
Q in CQRS), i.e. it is optimized for fast, scalable persistence and
recovery of stateful actors (= PersistentActor).

For full CQRS support, the discussions so far (in several other
threads) make the assumption that both write and read models are
backed by the same backend store (assuming read models are
maintained by PersistentView actor, receiving a stream of events
from synthetic or physical topics).


That is not my view of it, at all. PersistentView is a way to 
replicate the events to the read side, which typically will store a 
denormalized representation optimized for queries. That query store is 
typically not the same as the event store, because the requirements 
are very different.


I agree, but recent discussions were about how to join events from 
several topics/streams that a PersistentView receives (e.g. all events 
of an aggregate type, or based on a user-defined join/query). Stores 
(journals) that are optimized for high write throughput are not 
necessarily the best choice for serving these joins/queries in an 
efficient way. Furthermore, why should I maintain a read model datastore 
via

journal -> akka actor(s) -> read model datastore

when I can do this much more efficiently via

journal -> spark -> read model datastore

directly, for example?



Some simple read models may keep this representation in-memory but 
that is not what I see as the most common case.


/Patrik


Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread ahjohannessen

On Tuesday, August 19, 2014 9:53:46 AM UTC+1, Martin Krasser wrote:

I agree, but recent discussions were about how to join events from several 
 topics/streams that a PersistentView receives (e.g. all events of an 
 aggregate type or based on a user-defined join/query)...


I think the most realistic approach is to limit joining of events to 
persistent actors with the same topic (e.g. type), and not arbitrary 
combinations of several topics/streams, because that can be
done much better with a read model datastore.



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Ashley Aitken


On Tuesday, 19 August 2014 14:51:42 UTC+8, Martin Krasser wrote:

For full CQRS support, the discussions so far (in several other threads) 
 make the assumption that both write and read models are backed by the same 
 backend store (assuming read models are maintained by PersistentView actor, 
 receiving a stream of events from synthetic or physical topics). This is 
 a severe limitation, IMO. As Greg already mentioned elsewhere, some read 
 models may be best backed by a graph database, for example. Although a 
 graph database may be good for backing certain read models, it may have 
 limitations for fast logging of events (something where Kafka or Cassandra 
 are very good at). Consequently, it definitely makes sense to have 
 different backend stores for write and read models. 


Yes, I agree.  This is mentioned in my (long and poorly formatted) post: 
https://groups.google.com/d/msg/akka-user/SL5vEVW7aTo/ybeJKoayd_8J
 

 If akka-persistence should have support for CQRS in the future, its 
 design/API should be extended to allow different backend stores for write 
 and read models (of course, a provider may choose to use the same backend 
 store to serve both which may be a reasonable default). This way 
 PersistentActors log events to one backend store and PersistentViews (or 
 whatever consumer) generate read models from the other backend store. Data 
 transfer between these backend stores can be implementation-specific for 
 optimization purposes. 


I personally cannot see why Akka Persistence has to extend that far.  I 
think it may be able to stop at reliable (at-least-once) delivery to 
another actor connecting to a query store on the read side.  I think it 
may only need to cover [1] and [2] in this diagram:

https://www.dropbox.com/s/z2iu0xi4ki42sl7/annotated_cqrs_architecture.jpg

without forgetting sagas ;-)

BTW, can a PersistentView do AtLeastOnceDelivery?  I don't think so as ALOD 
seems to need a PersistentActor to maintain its delivery state.  But then 
how can a PersistentView reliably deliver events to an actor representing a 
query store?

It seems one needs a PersistentView that can read from a real (or 
synthetic) persistent event stream but also have its own persistence 
journal to maintain its delivery state.  Is this possible with some mixture 
of extend/mixins?
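The delivery bookkeeping described here can be modelled in a few lines: a view that records which deliveries were confirmed and redelivers the rest after a restart (plain Python sketch of the idea, not the AtLeastOnceDelivery trait itself):

```python
class ReliableView:
    """Tracks unconfirmed deliveries so that, after a crash and
    recovery, anything not yet confirmed is delivered again."""

    def __init__(self):
        self.next_delivery_id = 0
        # delivery_id -> message; a real implementation would persist
        # this map (the "own persistence journal" mentioned above) so
        # the state survives a restart.
        self.unconfirmed = {}

    def deliver(self, msg):
        self.next_delivery_id += 1
        self.unconfirmed[self.next_delivery_id] = msg
        return self.next_delivery_id

    def confirm(self, delivery_id):
        self.unconfirmed.pop(delivery_id, None)

    def redeliver(self):
        # Everything not yet confirmed goes out again (at-least-once).
        return list(self.unconfirmed.values())

view = ReliableView()
id1 = view.deliver("projection-update-1")
id2 = view.deliver("projection-update-2")
view.confirm(id1)
```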
 

 For example

 - Cassandra (for logging events) => Spark (to batch-process logged events) 
 => Graph database XY (to store events processed with Spark), or
 - Kafka (for logging events) => Spark Streaming (to stream-process logged 
 events) => Database XY (to store events processed with Spark Streaming)
 - ...

 These are just two examples of how read model backend stores can be populated 
 in a highly scalable way (both in batch and streaming mode). Assuming 
 akka-persistence provides an additional plugin API for storage backends on 
 the read model side (XY in the examples above) a wide range of CQRS 
 applications could be covered with whatever scalability and/or ordering 
 requirements needed by the respective applications. In case you want to 
 read more about it, take a look at akka-analytics 
 https://github.com/krasserm/akka-analytics (it is very much work in 
 progress as I'm waiting for Spark to upgrade to Akka 2.3 and Kafka to Scala 
 2.11)

 WDYT?


That sounds very interesting, thank you for explaining. I will read up on 
Akka-Analytics.

I guess though for simpler systems the read side application could use Akka 
Persistence to write to various query stores (as I mentioned above) and 
also handle queries from the clients (send query to query store, process 
response, repackage for client).  

Finally, how do you see Cassandra comparing to Event Store in providing 
synthetic streams for the read side (i.e. can it)?

Cheers,
Ashley.




Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Ashley Aitken


On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:


 journal -> akka actor(s) -> read model datastore 

 when I can do this much more efficiently via 

 journal -> spark -> read model datastore

 directly, for example

 
I am confused: are you suggesting that Spark talks to the journal data 
store directly, without any involvement of Akka / Akka Persistence?  If so, 
it sounds like a great solution, but why would that require an extension to 
the Akka Persistence design/API?






Re: [akka-user] Difference between ClusterReceptionistExtension(system).registerService and DistributedPubSubMediator.Put

2014-08-19 Thread Akka Team
I don't think that it's necessary to register it twice. You can just go with 
registerService. Right now they would just overwrite each other in the 
DistributedPubSubMediator.

B/

On 19 August 2014 at 06:51:55, JasonQ (quqingm...@gmail.com) wrote:

Thanks Björn.

So if I have an Actor which needs to receive messages from both a Mediator and 
a ClusterClient, should I call both Put and registerService in case there's 
some change in the future, or is calling only registerService enough? 

On Monday, August 18, 2014 8:16:44 PM UTC+8, Björn Antonsson wrote:
Hi,

The registerService call right now only uses DistributedPubSubMediator.Put. 
There is no difference at all at the moment. There might be in the future 
though.

B/

On 18 August 2014 at 07:10:31, JasonQ (quqin...@gmail.com) wrote:

Hello, 

Per my understanding after going through the Akka documentation, if I want to 
use a mediator to send messages to some actor in the cluster, I need to use 
DistributedPubSubMediator.Put.  When using ClusterClient.send, there's sample 
code in the doc using registerService instead of DistributedPubSubMediator.Put. 
Function-wise, is there any difference between 
ClusterReceptionistExtension(system).registerService and 
DistributedPubSubMediator.Put? 
-- 
Björn Antonsson
Typesafe – Reactive Apps on the JVM
twitter: @bantonsson




Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Roland Kuhn

On 19 Aug 2014, at 11:28, Ashley Aitken amait...@gmail.com wrote:

 
 
 On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:
 
  journal -> akka actor(s) -> read model datastore 
  
  when I can do this much more efficiently via 
  
  journal -> spark -> read model datastore
  
  directly, for example
  
 I am confused, are you suggesting that spark is talking to the journal data 
 store directly, without any involvement of Akka / Akka Persistence?  If so, 
 it sounds like a great solution but why would that require an extension to 
 the Akka Persistence design/API?

Well, another comment is that spark uses Akka actors in its implementation, so 
I don’t see why it would magically be “much more efficient”. I think we are 
mixing up two concerns here, will reply later when I can type properly again.

Regards, 

Roland

 
 
 
 
 



Dr. Roland Kuhn
Akka Tech Lead
Typesafe – Reactive apps on the JVM.
twitter: @rolandkuhn




Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 11:28, Ashley Aitken wrote:



On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:


journal -> akka actor(s) -> read model datastore

when I can do this much more efficiently via

journal -> spark -> read model datastore


directly, for example

I am confused, are you suggesting that spark is talking to the journal 
data store directly, without any involvement of Akka / Akka Persistence?


Yes. Why loop it through journal actors when something like the 
spark-cassandra-connector, for example, is able to do this in a highly 
scalable way? To get the same read scalability from an akka-persistence 
plugin, one would have to partly re-implement what the 
spark-cassandra-connector already does.


If so, it sounds like a great solution but why would that require an 
extension to the Akka Persistence design/API?


Because transformed/joined/... event streams in the read-side backend 
store must be consumable by PersistentViews (for creating read models). I 
still expect this backend store to maintain changes (= 
transformed/joined/... events) rather than current state.
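[Editorial sketch, plain Java rather than the Akka API, all names invented: two write-side event logs are joined into one derived stream on the read side, and a view-style consumer then folds that derived stream into a read model. This is the shape of pipeline Martin describes, not an actual plugin implementation.]

```java
import java.util.*;
import java.util.stream.*;

// Illustrative only: a "transformation" step merges two per-entity logs
// into one derived, re-sequenced stream; a "view" step consumes it.
public class DerivedStream {
    public record Event(String persistenceId, long seqNr, String payload) {}

    // The transformation step (what Spark would do at scale): merge two
    // logs by sequence number and re-number as one gapless derived stream.
    public static List<Event> join(List<Event> a, List<Event> b) {
        List<Event> merged = new ArrayList<>(a);
        merged.addAll(b);
        merged.sort(Comparator.comparingLong(Event::seqNr));
        return IntStream.range(0, merged.size())
                .mapToObj(i -> new Event("derived", i + 1, merged.get(i).payload()))
                .collect(Collectors.toList());
    }

    // The view step: fold the derived stream into a simple read model.
    public static Map<String, Long> countByPayload(List<Event> events) {
        return events.stream().collect(
                Collectors.groupingBy(Event::payload, Collectors.counting()));
    }
}
```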









--
Martin Krasser

blog:http://krasserm.blogspot.com
code:http://github.com/krasserm
twitter: http://twitter.com/mrt1nz



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 13:00, Roland Kuhn wrote:


19 aug 2014 kl. 11:28 skrev Ashley Aitken amait...@gmail.com 
mailto:amait...@gmail.com:





On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:


journal -> akka actor(s) -> read model datastore

when I can do this much more efficiently via

journal -> spark -> read model datastore


directly, for example

I am confused, are you suggesting that spark is talking to the 
journal data store directly, without any involvement of Akka / Akka 
Persistence?  If so, it sounds like a great solution but why would 
that require an extension to the Akka Persistence design/API?


Well, another comment is that spark uses Akka actors in its 
implementation, so I don’t see why it would magically be “much more 
efficient”. I think we are mixing up two concerns here, will reply 
later when I can type properly again.


This is a misunderstanding. As mentioned in my previous message, scaling 
reads through a single journal actor doesn't work; it's not that I see a 
general performance issue with Akka actors.




Regards,

Roland













--
Martin Krasser

blog:http://krasserm.blogspot.com
code:http://github.com/krasserm
twitter: http://twitter.com/mrt1nz



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Patrik Nordwall
On Tue, Aug 19, 2014 at 1:35 PM, Martin Krasser krass...@googlemail.com
wrote:


 On 19.08.14 13:00, Roland Kuhn wrote:


  19 aug 2014 kl. 11:28 skrev Ashley Aitken amait...@gmail.com:



 On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:


 journal -> akka actor(s) -> read model datastore

 when I can do this much more efficiently via

 journal -> spark -> read model datastore


 directly, for example


 I am confused, are you suggesting that spark is talking to the journal
 data store directly, without any involvement of Akka / Akka Persistence?
  If so, it sounds like a great solution but why would that require an
 extension to the Akka Persistence design/API?


  Well, another comment is that spark uses Akka actors in its
 implementation, so I don’t see why it would magically be “much more
 efficient”. I think we are mixing up two concerns here, will reply later
 when I can type properly again.


 This is a misunderstanding. As mentioned in my previous message, scaling
 reads through a single journal actor doesn't work; it's not that I see a
 general performance issue with Akka actors.



I think the integration akka persistence -> kafka -> spark -> whatever
looks great, but not everybody has that infrastructure; therefore we
provide PersistentView as a simple way to replicate events to the read
side, where a de-normalized representation can then be stored in whatever
makes sense for the queries.

Martin, what do you suggest? Removing PersistentView altogether?

/Patrik






-- 

Patrik Nordwall
Typesafe http://typesafe.com/ -  Reactive apps on the JVM
Twitter: @patriknw



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 13:40, Patrik Nordwall wrote:




On Tue, Aug 19, 2014 at 1:35 PM, Martin Krasser 
krass...@googlemail.com mailto:krass...@googlemail.com wrote:



On 19.08.14 13:00, Roland Kuhn wrote:


19 aug 2014 kl. 11:28 skrev Ashley Aitken amait...@gmail.com
mailto:amait...@gmail.com:




On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser wrote:


journal -> akka actor(s) -> read model datastore

when I can do this much more efficiently via

journal -> spark -> read model datastore


directly, for example

I am confused, are you suggesting that spark is talking to the
journal data store directly, without any involvement of Akka /
Akka Persistence?  If so, it sounds like a great solution but
why would that require an extension to the Akka Persistence
design/API?


Well, another comment is that spark uses Akka actors in its
implementation, so I don’t see why it would magically be “much
more efficient”. I think we are mixing up two concerns here, will
reply later when I can type properly again.


This is a misunderstanding. As mentioned in my previous message,
scaling reads through a single journal actor doesn't work; it's
not that I see a general performance issue with Akka actors.



I think the integration akka persistence -> kafka -> spark -> 
whatever looks great, but not everybody has that infrastructure, and 
therefore we provide PersistentView as a simple way to replicate 
events to the read side, where a de-normalized representation can 
then be stored in whatever makes sense for the queries.


Of course, that should be possible too, and as I already said, backend 
store providers can choose to use the very same backend for both 
plugins. There is absolutely no need for applications to use Spark as 
part of their infrastructure. But if it is needed in large-scale 
applications, a second plugin API for the read side would make things 
much more flexible. Users who just want a single backend store would 
only have to configure one additional line (plugin) in their 
application.conf.
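[Editorial sketch of what that "one additional line" could look like. This is hypothetical: akka-persistence 2.3.x has only the single journal plugin setting; the read-side key below does not exist and merely illustrates the proposal.]

```hocon
# Hypothetical application.conf fragment -- the read-side key is invented.
akka.persistence {
  journal.plugin = "kafka-journal"        # write side: fast event logging
  # proposed addition: a separate plugin serving read models
  read-side.plugin = "my-read-journal"    # hypothetical key and plugin id
}
```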




Martin, what do you suggest? Removing PersistentView altogether?


No, not at all. With an additional plugin, PersistentViews would have 
the option to read transformed/joined/... streams from a backend store 
that is optimized for that.





Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Patrik Nordwall
On Tue, Aug 19, 2014 at 1:46 PM, Martin Krasser krass...@googlemail.com
wrote:




  I think the integration akka persistence -> kafka -> spark -> whatever
  looks great, but not everybody has that infrastructure; therefore we
  provide PersistentView as a simple way to replicate events to the read
  side, where a de-normalized representation can then be stored in whatever
  makes sense for the queries.


  Of course, that should be possible too, and as I already said, backend
  store providers can choose to use the very same backend for both plugins.
  There is absolutely no need for applications to use Spark as part of
  their infrastructure. But if it is needed in large-scale applications, a
  second plugin API for the read side would make things much more flexible.
  Users who just want a single backend store would only have to configure
  one additional line (plugin) in their application.conf.



  Martin, what do you suggest? Removing PersistentView altogether?


  No, not at all. With an additional plugin, PersistentViews would have the
  option to read transformed/joined/... streams from a backend store that
  is optimized for that.


ah, now I understand what you mean. That makes sense.
/Patrik





Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread delasoul
and therefore we provide PersistentView as a simple way to replicate 
events to the read side, and then a de-normalized representation can be 
stored...

If I understand this right, this means:
PersistentActor persists an event;
PersistentView queries the event store (e.g. every second) and forwards 
new events to the read side, e.g. an EventListener, which then updates 
the read model.

What is the advantage of using PersistentView here, instead of just 
emitting the event to the read side from the PersistentActor directly?

thanks,

michael 
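[Editorial sketch of the trade-off being asked about, in plain Java rather than the Akka API, with all names invented: the journal is the durable source of truth, so a journal-polling consumer (PersistentView-style) can rebuild its read model by replaying, while a directly-notified listener simply misses every event delivered while it was down.]

```java
import java.util.*;

// Illustrative only: compare a replayable journal consumer with a
// listener that receives events by direct emission.
public class ReplayVsEmit {
    static final List<String> journal = new ArrayList<>();
    static final List<String> directListener = new ArrayList<>();

    public static void persist(String event, boolean listenerUp) {
        journal.add(event);                        // always durable
        if (listenerUp) directListener.add(event); // lost while listener is down
    }

    // PersistentView-style consumer: can always catch up from any offset.
    public static List<String> replayFrom(int offset) {
        return new ArrayList<>(journal.subList(offset, journal.size()));
    }
}
```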


Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Roland Kuhn

19 aug 2014 kl. 13:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 Martin, what do you suggest? Removing PersistentView altogether? 
 
  No, not at all. With an additional plugin, PersistentViews would have the 
  option to read transformed/joined/... streams from a backend store that 
  is optimized for that. 
 
 ah, now I understand what you mean. That makes sense.

I’m not completely there yet: in which way does this require changes to Akka 
Persistence? The only thing we need is to support multiple journals in the same 
ActorSystem, and a way for PersistentView and PersistentActor to select between 
them. Is this what you mean? Or do you mean that the read side would be a new 
kind of plugin?

OTOH this would not solve the read-side concerns raised by Greg: building a 
View on top of an incoming event stream is precisely not what he wants, unless 
I got that wrong. The idea behind CQRS/ES is that the events from the 
write-side drive updates of the read-side which is then queried (i.e. actively 
asked instead of passively updating) in whatever way is appropriate (e.g. graph 
searches).

Regards,

Roland


Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Roland Kuhn

18 aug 2014 kl. 16:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn goo...@rkuhn.info wrote:
 
 18 aug 2014 kl. 10:27 skrev Patrik Nordwall patrik.nordw...@gmail.com:
 
 Hi Roland,
 
 A few more questions for clarification...
 
 
 On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon vver...@shiftmethod.com 
 wrote:
 
 
 On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:
 Dear hakkers,
 
 unfortunately it took me a long time to catch up with akka-user to this 
 point after the vacation, but on the other hand this made for a very 
 interesting and stimulating read, thanks for this thread!
 
 If I may, here’s what I have understood so far:
 In order to support not only actor persistence but also full CQRS we need to 
 adjust our terminology: events are published to topics, where each 
 persistenceId is one such topic but others are also allowed.
 Common use-cases of building projections or denormalized views require the 
 ability to query the union of a possibly large number of topics in such a 
 fashion that no events are lost. This union can be viewed as a synthetic or 
 logical topic, but issues arise in that true topics provide total ordering 
 while these synthetic ones have difficulties doing so.
 Constructing Sagas is hard.
 
 AFAICS 3. is not related to the other two, the mentions in this thread have 
 only alluded to the problems so I assume that the difficulty is primarily to 
 design a process that has the right eventual consistency properties (i.e. 
 rollbacks, retries, …). This is an interesting topic but let’s concentrate 
 on the original question first.
 
 The first point is a rather simple one, we just need to expose the necessary 
 API for writing to a given topic instead of the local Actor’s persistenceId; 
 I’d opt for adding variants of the persist() methods that take an additional 
 String argument. Using the resulting event log is then done as for the 
 others (i.e. Views and potentially queries should just work).
 
 Does that mean that a PersistentActor can emit events targeted to its 
 persistenceId and/or targeted to an external topic and it is only the events 
 targeted to the persistenceId that will be replayed during recovery of that 
 PersistentActor?
 
 Yes.
 
 Both these two types of events can be replayed by a PersistentView.
 
 Yes; they are not different types of events, just how they get to the Journal 
 is slightly different.
 
  
 The only concern is that the Journal needs to be prepared to receive events 
 concurrently from multiple sources instead of just the same Actor, but since 
 each topic needs to be totally ordered this will not be an additional hassle 
 beyond just routing to the same replica, just like for persistenceIds.
 
 Replica as in data store replica, or as in journal actor? 
 
 The Journal must implement this in whatever way is suitable for the back-end. 
 A generic solution would be to shard the topics as Actors across the cluster 
 (internal to the Journal), or the Journal could talk to the replicated 
 back-end store such that a topic always is written to one specific node (if 
 that helps).
 
 What has been requested is all events for an Aggregate type, e.g. all 
 shopping carts, and this will not scale. It can still be useful, and 
 with some careful design you could partition things when scalability is 
 needed. I'm just saying that it is a big gun, that can be pointed in the 
 wrong direction.

Mixed-up context: #1 is about predefined topics to which events are emitted, 
not queries. We need to strictly keep these separate.

 
  
 
  
 
 Is point one for providing a sequence number from a single ordering source?
 
 Yes, that is also what I was wondering. Do we need such a sequence number? A 
 PersistentView should be able to define a replay starting point. (right now 
 I think that is missing, it is only supported by saving snapshots)
  
 Or do you mean topic in the sense that I cover above with EntitiesRef? In 
 other words, what is the String argument and how does it work?  If you would 
 show a few sample persist() APIs that might help clarify. And if you are 
 referring to a global ordering sequence, who must maintain it? Is it the 
 store implementation or the developer? 
 
 #1 is not about sequence numbers per se (although it has consequences of that 
 kind): it is only about allowing persistenceIds that are not bound to a 
 single PersistentActor and that all PersistentActors can publish to. Mock 
 code:
 
 def apply(evt: Event) = state = evt(state)
 
 def receiveCommand = {
   case c: Cmd =>
     if (isValid(c)) {
       persist(Event1(c))(apply)
       persistToTopic(myTopic, Event2(c)) { evt =>
         apply(evt)
         sender() ! Done
       }
     }
 }
 
 
 Looks good, but to make it clear, there is no transaction that spans over 
 these two persist calls.

Of course.

  
 Everyone who listens to myTopic will then (eventually) get Event2.
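 Such a listener could be an ordinary PersistentView pointed at the topic rather than at a single actor's persistenceId. A rough sketch, assuming the proposed topic support lands (Event2 is from the mock code above; the view/class names here are invented):
 
 ```scala
 import akka.persistence.PersistentView
 
 // Sketch only: assumes the proposed feature, i.e. that a shared topic is
 // exposed to views exactly like a persistenceId-backed event log.
 class MyTopicView extends PersistentView {
   override def persistenceId: String = "myTopic"     // the shared topic
   override def viewId: String       = "myTopic-view" // this view's own identity
 
   private var count = 0L
 
   def receive = {
     case evt: Event2 if isPersistent =>
       count += 1 // fold replayed and live topic events into the read model
   }
 }
 ```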
 
  
 
 The second point is the 

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Gary Malouf
For CQRS specifically, a lot of what people call scalability is in its
ability to easily model multiple read views to make queries very fast off
the same event data.

In the cases where a true global ordering is truly necessary, one often
does not need to handle hundreds of thousands of writes per second.  I
think the ideal is to have the global ordering property for events by
default, and have to disable that if you feel a need to do more writes per
second than a single writer can handle.

Once the global ordering property is enforced, solving many of the
publisher ordering issues (and supporting sagas) becomes significantly
easier to achieve.
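A single writer enforcing global order can be as simple as one sequencer through which every event funnels. A toy sketch in plain Scala (all names invented) of why this yields total order and is also exactly the single-writer throughput cap discussed above:

```scala
// Toy global sequencer: one writer assigns a strictly increasing sequence
// number to every event, giving a total order across all topics -- and
// forming the single-writer bottleneck that limits writes per second.
final case class Sequenced[A](seqNr: Long, event: A)

final class GlobalSequencer[A] {
  private var nextSeqNr = 1L
  private var log = Vector.empty[Sequenced[A]]

  // All writes funnel through this one synchronized point.
  def append(event: A): Sequenced[A] = synchronized {
    val entry = Sequenced(nextSeqNr, event)
    nextSeqNr += 1
    log :+= entry
    entry
  }

  // Readers can resume from any offset, as a durable subscription would.
  def readFrom(seqNr: Long): Vector[Sequenced[A]] =
    synchronized(log.dropWhile(_.seqNr < seqNr))
}
```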
On Aug 19, 2014 8:49 AM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 16:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 10:27 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 Hi Roland,

 A few more questions for clarification...


 On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon vver...@shiftmethod.com
 wrote:


 On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:

 Dear hakkers,

 unfortunately it took me a long time to catch up with akka-user to this
 point after the vacation, but on the other hand this made for a very
 interesting and stimulating read, thanks for this thread!

 If I may, here’s what I have understood so far:

1. In order to support not only actor persistence but also full
CQRS we need to adjust our terminology: events are published to topics,
where each persistenceId is one such topic but others are also allowed.
2. Common use-cases of building projections or denormalized views
require the ability to query the union of a possibly large number of 
 topics
in such a fashion that no events are lost. This union can be viewed as a
synthetic or logical topic, but issues arise in that true topics provide
total ordering while these synthetic ones have difficulties doing so.
3. Constructing Sagas is hard.


 AFAICS 3. is not related to the other two, the mentions in this thread
 have only alluded to the problems so I assume that the difficulty is
 primarily to design a process that has the right eventual consistency
 properties (i.e. rollbacks, retries, …). This is an interesting topic but
 let’s concentrate on the original question first.

 The first point is a rather simple one, we just need to expose the
 necessary API for writing to a given topic instead of the local Actor’s
 persistenceId; I’d opt for adding variants of the persist() methods that
 take an additional String argument. Using the resulting event log is then
 done as for the others (i.e. Views and potentially queries should just
 work).


 Does that mean that a PersistentActor can emit events targeted to its
 persistenceId and/or targeted to an external topic and it is only the
 events targeted to the persistenceId that will be replayed during recovery
 of that PersistentActor?


 Yes.

 Both these two types of events can be replayed by a PersistentView.


 Yes; they are not different types of events, just how they get to the
 Journal is slightly different.



  The only concern is that the Journal needs to be prepared to receive
 events concurrently from multiple sources instead of just the same Actor,
 but since each topic needs to be totally ordered this will not be an
 additional hassle beyond just routing to the same replica, just like for
 persistenceIds.


 Replica as in data store replica, or as in journal actor?


 The Journal must implement this in whatever way is suitable for the
 back-end. A generic solution would be to shard the topics as Actors across
 the cluster (internal to the Journal), or the Journal could talk to the
 replicated back-end store such that a topic always is written to one
 specific node (if that helps).


 What has been requested is all events for an Aggregate type, e.g. all
 shopping carts, and this will not scale. It can still be useful, and
 with some careful design you could partition things when scalability is
 needed. I'm just saying that it is a big gun, that can be pointed in the
 wrong direction.


 Mixed-up context: #1 is about predefined topics to which events are
 emitted, not queries. We need to strictly keep these separate.








 Is point one for providing a sequence number from a single ordering
 source?


 Yes, that is also what I was wondering. Do we need such a sequence
 number? A PersistentView should be able to define a replay starting point.
 (right now I think that is missing, it is only supported by saving
 snapshots)


 Or do you mean topic in the sense that I cover above with EntitiesRef?
 In other words, what is the String argument and how does it work?  If you
 would show a few sample persist() APIs that might help clarify. And if you
 are referring to a global ordering sequence, who must maintain it?
 Is it the store implementation or the developer?


 #1 is not 

Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 14:37, Roland Kuhn wrote:


19 aug 2014 kl. 13:49 skrev Patrik Nordwall patrik.nordw...@gmail.com 
mailto:patrik.nordw...@gmail.com:


On Tue, Aug 19, 2014 at 1:46 PM, Martin Krasser 
krass...@googlemail.com mailto:krass...@googlemail.com wrote:



On 19.08.14 13:40, Patrik Nordwall wrote:




On Tue, Aug 19, 2014 at 1:35 PM, Martin Krasser
krass...@googlemail.com mailto:krass...@googlemail.com wrote:


On 19.08.14 13:00, Roland Kuhn wrote:


19 aug 2014 kl. 11:28 skrev Ashley Aitken
amait...@gmail.com mailto:amait...@gmail.com:




On Tuesday, 19 August 2014 16:53:46 UTC+8, Martin Krasser
wrote:


journal - akka actor(s) - read model datastore

when I can do this much more efficiently via

journal - spark - read model datastore


directly, for example

I am confused, are you suggesting that spark is talking to
the journal data store directly, without any involvement
of Akka / Akka Persistence?  If so, it sounds like a great
solution but why would that require an extension to the
Akka Persistence design/API?


Well, another comment is that spark uses Akka actors in its
implementation, so I don’t see why it would magically be
“much more efficient”. I think we are mixing up two
concerns here, will reply later when I can type properly again.


This is a misunderstanding. As mentioned in my previous
message, scaling reads through a single journal actor
doesn't work; it's not that I see a general
performance issue with Akka actors.



I think the integration akka persistence - kafka - spark -
whatever looks great, but not everybody has that
infrastructure, and therefore we provide PersistentView as a
simple way to replicate events to the read side, and then a
de-normalized representation can be stored in whatever makes
sense for the queries.


Of course, that should be possible too, and as I already said,
backend store providers can choose to use the very same backend
for both plugins. There is absolutely no need for applications
to use Spark as part of their infrastructure.
But if it is needed in large-scale applications, a second plugin
API on the read side would make things much more flexible.
Users who just want a single backend store only have to
configure one additional line (plugin) in their
application conf.




Martin, what do you suggest? Removing PersistentView altogether?


No, not at all, with an additional plugin, PersistentViews should
have the option to read transformed/joined/... streams from a
backend store that is optimized for that.


ah, now I understand what you mean. That makes sense.


I’m not completely there yet: in which way does this require changes 
to Akka Persistence? The only thing we need is to support multiple 
Journals in the same ActorSystem, and a way for PersistentView and 
PersistentActor to select between them, is this what you mean?


This would go into the right direction, except that I wouldn't call the 
plugin that serves PersistentViews a journal because it only provides 
an interface for reading. Furthermore, this plugin could additionally 
offer an API for passing backend-specific query statements for 
joining/transforming/... streams on the fly (if supported/wanted).
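A hypothetical shape for such a read-side plugin SPI; every name and signature below is invented purely to illustrate the separation from the write-side journal, nothing like this exists in akka-persistence today:

```scala
import akka.actor.ActorRef

// Hypothetical read-side plugin SPI (all names invented for illustration).
trait ReadStorePlugin {
  // Replay a (possibly transformed/joined) event stream from a known
  // offset, mirroring the write-side journal's replay contract.
  def replayStream(streamId: String, fromSequenceNr: Long, max: Long,
                   receiver: ActorRef): Unit

  // Optional backend-specific query pass-through, e.g. a graph query or
  // a streaming join, if the backing store supports it.
  def executeQuery(statement: String, receiver: ActorRef): Unit
}
```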



Or do you mean that the read-side would be a new kind of plugin?


Yes, see above



OTOH this would not solve the read-side concerns raised by Greg: 
building a View on top of an incoming event stream is precisely not 
what he wants, unless I got that wrong. The idea behind CQRS/ES is 
that the events from the write-side drive updates of the read-side 
which is then queried (i.e. actively asked instead of passively 
updating) in whatever way is appropriate (e.g. graph searches).




I cannot see how my proposal is in contradiction with that. Can you 
please explain?



Regards,

Roland



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Roland Kuhn

18 aug 2014 kl. 18:01 skrev Ashley Aitken amait...@gmail.com:

 Hi Roland (and everyone),
 
 Welcome back Roland - I hope you had a great vacation.
 
 Thank you for your post.  
 
 
 Here’s my response summary:
 
 I believe Akka needs to allow actors to:
 
 (i) persist events with as much information, as efficiently as possible, on the 
 write side to allow the store to facilitate the read side extracting them 
 according to what criteria is needed,
 

This is a convoluted way of saying that Events must be self-contained, right? 
In that case: check!

 
 (ii) persist events that don’t relate to a change in state of the actor per 
 se, which I assume is already achievable since an actor can just ignore them 
 on replay, 
 

Yes, the actor chooses which effect an Event has on its state. Check!

 
 (iii) read from (and replay) streams of events on the read and write side 
 according to a range of criteria supported and defined within the store or 
 via the store API (e.g. using a DSL), and
 

This is the unclear point: who defines the query and when? What are the 
consistency guarantees for the generated event stream?

 
 (iv) reliably (at least once) deliver information to other read side store(s) 
 and systems above and beyond the store used for persisting the events.
 

This is PersistentView, so “check!” (As argued previously “reliably” translates 
to “persistent”.)

 
 I believe each of these is readily achievable with Akka but:
 
 (i) doesn’t mean explicitly persisting the events to specific topics as you 
 suggest in your (1) (although this may be how some stores implement the 
 required functionality on the read side). Instead it means transparently 
 including information like the actorId, event type, actor type, probably the 
 time and possibly information to help with causal ordering (see my next post).
 

No, again we need to strictly keep Topics and Queries separate, they are very 
different features. Topics are defined up-front and explicitly written to, 
Queries are constructed later based on the existing event log contents. Marking 
events within the store with timestamps of some kind might help achieving a 
pseudo-deterministic behavior, but it is by no means a guarantee. Causal 
ordering is out of scope, and it also does not help in achieving the desired 
ability to replay Queries from some given point in the past.

 
 (iii) with (i) would enable the read side (if the store supports it) to read 
 all events from a particular actor(s), of particular event types, to read 
 events from a particular type(s) of actors, and to read all events.  It would 
 also need to allow the read side to read from where it last finished reading, 
 from now, and from the start again.  (iv) is necessary for projections.  
 
 
 If you are interested, here’s my detailed explanation:
 
 I think some of the confusion surrounding these issues is caused by the fact 
 that we seem to be discussing and, if I may suggest, Akka appears to be 
 trying to implement three quite different (but also somewhat related) pieces 
 of functionality within this domain.

Just anecdotally: the goal of Akka Persistence is to achieve at-least-once 
processing semantics for persistent actors. We’ll see how far a stretch it is 
to incorporate all that is needed for effective CQRS/ES.

  These are:
 
 A. Actor Persistence
 
 The ability to persist actor state changes incrementally (or wholly) and 
 reconstruct that state at a later time, which we know as event sourcing.  I 
 think Akka provides a great distributed and scalable mechanism for doing this 
 with the current akka.persistence.
 
 B. Publish/Subscribe to Persistent Queues/Topics
 
 This functionality would allow actors to write data/events/messages to one 
 (or more) topics and to subscribe to receive similar from one or more topics. 
  These differ from normal publish/subscribe queues in that they are 
 persistent and the consumer can reread from the topic.
 
 This is what I think of as the LogProducer and LogConsumer, of which 
 PersistentActor and PersistentView can be thought of as specialisations, i.e. 
 a single topic for each actor.  The current and popular example of a store 
 for this sort of functionality, as you know, is Kafka. 
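 The LogProducer/LogConsumer idea above could be sketched as a pair of traits (the names are from this post; the signatures are invented). PersistentActor and PersistentView then become the special case where topic == persistenceId:
 
 ```scala
 // Sketch only: invented signatures for the LogProducer/LogConsumer idea.
 trait LogProducer[A] {
   def publish(topic: String, event: A): Unit
 }
 
 trait LogConsumer[A] {
   // A persistent topic lets a consumer (re)read from any earlier offset,
   // unlike a transient publish/subscribe channel.
   def subscribe(topic: String, fromOffset: Long)(handler: (Long, A) => Unit): Unit
 }
 ```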
 

Agreed; this moved into focus thanks to your initiating this discussion!

 
 C. CQRS with Event Sourcing
 
 And finally, there is CQRS with Event Sourcing, which I believe is much more 
 than (A) and (B) and particularly doesn’t necessarily require (B) for all 
 event stores.  So if Akka were to implement (B), which I think would be very 
 useful for other reasons, it would not specifically be for CQRS.
 
 Please consider this diagram overviewing CQRS with Event Sourcing:
 
 https://www.dropbox.com/s/z2iu0xi4ki42sl7/annotated_cqrs_architecture.jpg
 
 adapted from 
 
 http://www.gridshore.nl/wp-content/uploads/cqrs_architecture.jpg
 
 As I understand it, CQRS separates the write model and store from one or 
 *more* read models and stores, with 

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Roland Kuhn

19 aug 2014 kl. 07:18 skrev Greg Young gregoryyou...@gmail.com:

 everything you list here is available today via akka.persistence + event 
 store adapter  a durable subscription (akka event store client) on the read 
 model side.

This sounds like the best candidate for a way forward at this point. “Durable 
subscription” is a tough nut to crack, though, for a distributed storage 
system, especially if the underlying Query is supposed to be created on the 
live system instead of up-front.

Regards,

Roland

 
 On Monday, August 18, 2014 12:01:36 PM UTC-4, Ashley Aitken wrote:
 Hi Roland (and everyone),
 
 Welcome back Roland - I hope you had a great vacation.
 
 Thank you for your post.  
 
 
 Here’s my response summary:
 
 I believe Akka needs to allow actors to:
 
 (i) persist events with as much information, as efficiently as possible, on the 
 write side to allow the store to facilitate the read side extracting them 
 according to what criteria is needed,
 
 (ii) persist events that don’t relate to a change in state of the actor per 
 se, which I assume is already achievable since an actor can just ignore them 
 on replay, 
 
 (iii) read from (and replay) streams of events on the read and write side 
 according to a range of criteria supported and defined within the store or 
 via the store API (e.g. using a DSL), and
 
 (iv) reliably (at least once) deliver information to other read side store(s) 
 and systems above and beyond the store used for persisting the events.
 
 I believe each of these is readily achievable with Akka but:
 
 (i) doesn’t mean explicitly persisting the events to specific topics as you 
 suggest in your (1) (although this may be how some stores implement the 
 required functionality on the read side). Instead it means transparently 
 including information like the actorId, event type, actor type, probably the 
 time and possibly information to help with causal ordering (see my next post).
 
 (iii) with (i) would enable the read side (if the store supports it) to read 
 all events from a particular actor(s), of particular event types, to read 
 events from a particular type(s) of actors, and to read all events.  It would 
 also need to allow the read side to read from where it last finished reading, 
 from now, and from the start again.  (iv) is necessary for projections.  
 
 
 If you are interested, here’s my detailed explanation:
 
 I think some of the confusion surrounding these issues is caused by the fact 
 that we seem to be discussing and, if I may suggest, Akka appears to be 
 trying to implement three quite different (but also somewhat related) pieces 
 of functionality within this domain.  These are:
 
 A. Actor Persistence
 
 The ability to persist actor state changes incrementally (or wholly) and 
 reconstruct that state at a later time, which we know as event sourcing.  I 
 think Akka provides a great distributed and scalable mechanism for doing this 
 with the current akka.persistence.
 
 B. Publish/Subscribe to Persistent Queues/Topics
 
 This functionality would allow actors to write data/events/messages to one 
 (or more) topics and to subscribe to receive similar from one or more topics. 
  These differ from normal publish/subscribe queues in that they are 
 persistent and the consumer can reread from the topic.
 
 This is what I think of as the LogProducer and LogConsumer, of which 
 PersistentActor and PersistentView can be thought of as specialisations, i.e. 
 a single topic for each actor.  The current and popular example of a store 
 for this sort of functionality, as you know, is Kafka. 
 
 C. CQRS with Event Sourcing
 
 And finally, there is CQRS with Event Sourcing, which I believe is much more 
 than (A) and (B) and particularly doesn’t necessarily require (B) for all 
 event stores.  So if Akka were to implement (B), which I think would be very 
 useful for other reasons, it would not specifically be for CQRS.
 
 Please consider this diagram overviewing CQRS with Event Sourcing:
 
 https://www.dropbox.com/s/z2iu0xi4ki42sl7/annotated_cqrs_architecture.jpg
 
 adapted from 
 
 http://www.gridshore.nl/wp-content/uploads/cqrs_architecture.jpg
 
 As I understand it, CQRS separates the write model and store from one or 
 *more* read models and stores, with each model and store being optimised for 
 their particular role.  CQRS says nothing specific about the types of store 
 (e.g. SQL or NOSQL, event sourced or not) and how consistency is achieved.
 
 As you know, when using event sourcing the changes to the write model 
 entities (e.g. Aggregate Roots) are stored as events and the write model is 
 reconstructed by replaying those events.  This is (A) above and what 
 akka.persistence has achieved very well in a distributed and scalable way.  
 
 This is the dashed area labelled [1] in the diagram.
 
 Further, CQRS uses commands to initiate changes to the write model and 
 signals these changes with events (whether the events are used for event 
 sourcing or 

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Roland Kuhn

19 aug 2014 kl. 14:57 skrev Gary Malouf malouf.g...@gmail.com:

 For CQRS specifically, a lot of what people call scalability is in its 
 ability to easily model multiple read views to make queries very fast off the 
 same event data.
 
 In the cases where a true global ordering is truly necessary, one often does 
 not need to handle hundreds of thousands of writes per second.  I think the 
 ideal is to have the global ordering property for events by default, and have 
 to disable that if you feel a need to do more writes per second than a single 
 writer can handle.
 

Unfortunately it is not only the number of writes per second, the sheer data 
volume can drive the need for a distributed, partitioned storage mechanism. 
There is only so much you can fit within a single machine and once you go 
beyond that you quickly run into CAP (if you want your guarantees to hold 100% 
at all times). The way forward then necessitates that you must compromise on 
something, either Availability or Determinism (in this case).

Regards,

Roland

 Once the global ordering property is enforced, solving many of the publisher 
 ordering issues (and supporting sagas) becomes significantly easier to 
 achieve. 
 
 On Aug 19, 2014 8:49 AM, Roland Kuhn goo...@rkuhn.info wrote:
 
 18 aug 2014 kl. 16:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:
 
 On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn goo...@rkuhn.info wrote:
 
 18 aug 2014 kl. 10:27 skrev Patrik Nordwall patrik.nordw...@gmail.com:
 
 Hi Roland,
 
 A few more questions for clarification...
 
 
 On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon vver...@shiftmethod.com 
 wrote:
 
 
 On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:
 Dear hakkers,
 
 unfortunately it took me a long time to catch up with akka-user to this 
 point after the vacation, but on the other hand this made for a very 
 interesting and stimulating read, thanks for this thread!
 
 If I may, here’s what I have understood so far:
 In order to support not only actor persistence but also full CQRS we need 
 to adjust our terminology: events are published to topics, where each 
 persistenceId is one such topic but others are also allowed.
 Common use-cases of building projections or denormalized views require the 
 ability to query the union of a possibly large number of topics in such a 
 fashion that no events are lost. This union can be viewed as a synthetic or 
 logical topic, but issues arise in that true topics provide total ordering 
 while these synthetic ones have difficulties doing so.
 Constructing Sagas is hard.
 
 AFAICS 3. is not related to the other two, the mentions in this thread have 
 only alluded to the problems so I assume that the difficulty is primarily 
 to design a process that has the right eventual consistency properties 
 (i.e. rollbacks, retries, …). This is an interesting topic but let’s 
 concentrate on the original question first.
 
 The first point is a rather simple one, we just need to expose the 
 necessary API for writing to a given topic instead of the local Actor’s 
 persistenceId; I’d opt for adding variants of the persist() methods that 
 take an additional String argument. Using the resulting event log is then 
 done as for the others (i.e. Views and potentially queries should just 
 work).
 
 Does that mean that a PersistentActor can emit events targeted to its 
 persistenceId and/or targeted to an external topic and it is only the 
 events targeted to the persistenceId that will be replayed during recovery 
 of that PersistentActor?
 
 Yes.
 
 Both these two types of events can be replayed by a PersistentView.
 
 Yes; they are not different types of events, just how they get to the 
 Journal is slightly different.
 
  
 The only concern is that the Journal needs to be prepared to receive events 
 concurrently from multiple sources instead of just the same Actor, but 
 since each topic needs to be totally ordered this will not be an additional 
 hassle beyond just routing to the same replica, just like for 
 persistenceIds.
 
 Replica as in data store replica, or as in journal actor? 
 
 The Journal must implement this in whatever way is suitable for the 
 back-end. A generic solution would be to shard the topics as Actors across 
 the cluster (internal to the Journal), or the Journal could talk to the 
 replicated back-end store such that a topic always is written to one 
 specific node (if that helps).
 
 What has been requested is all events for an Aggregate type, e.g. all 
 shopping carts, and this will not scale. It can still be useful, and 
 with some careful design you could partition things when scalability is 
 needed. I'm just saying that it is a big gun, that can be pointed in the 
 wrong direction.
 
 Mixed-up context: #1 is about predefined topics to which events are emitted, 
 not queries. We need to strictly keep these separate.
 
 
  
 
  
 
 Is point one for providing a sequence number from a single ordering source?
 
 Yes, that 

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread √iktor Ҡlang
The decision whether scale is needed cannot be implicit, as that lures
people into the non-scalable world, and when they find out it is too
late.


On Tue, Aug 19, 2014 at 3:20 PM, Roland Kuhn goo...@rkuhn.info wrote:


 19 aug 2014 kl. 14:57 skrev Gary Malouf malouf.g...@gmail.com:

 For CQRS specifically, a lot of what people call scalability is in its
 ability to easily model multiple read views to make queries very fast off
 the same event data.

 In the cases where a true global ordering is truly necessary, one often
 does not need to handle hundreds of thousands of writes per second.  I
 think the ideal is to have the global ordering property for events by
 default, and have to disable that if you feel a need to do more writes per
 second than a single writer can handle.


 Unfortunately it is not only the number of writes per second, the sheer
 data volume can drive the need for a distributed, partitioned storage
 mechanism. There is only so much you can fit within a single machine and
 once you go beyond that you quickly run into CAP (if you want your
 guarantees to hold 100% at all times). The way forward then necessitates
 that you must compromise on something, either Availability or Determinism
 (in this case).

 Regards,

 Roland

 Once the global ordering property is enforced, solving many of the
 publisher ordering issues (and supporting sagas) becomes significantly
 easier to achieve.
 On Aug 19, 2014 8:49 AM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 16:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 10:27 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 Hi Roland,

 A few more questions for clarification...


 On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon vver...@shiftmethod.com
  wrote:


  On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:

 Dear hakkers,

 unfortunately it took me a long time to catch up with akka-user to
 this point after the vacation, but on the other hand this made for a very
 interesting and stimulating read, thanks for this thread!

 If I may, here’s what I have understood so far:

   1. In order to support not only actor persistence but also full
   CQRS we need to adjust our terminology: events are published to topics,
   where each persistenceId is one such topic but others are also allowed.
   2. Common use-cases of building projections or denormalized views
   require the ability to query the union of a possibly large number of topics
   in such a fashion that no events are lost. This union can be viewed as a
   synthetic or logical topic, but issues arise in that true topics provide
   total ordering while these synthetic ones have difficulties doing so.
   3. Constructing Sagas is hard.


 AFAICS 3. is not related to the other two, the mentions in this thread
 have only alluded to the problems so I assume that the difficulty is
 primarily to design a process that has the right eventual consistency
 properties (i.e. rollbacks, retries, …). This is an interesting topic but
 let’s concentrate on the original question first.

 The first point is a rather simple one, we just need to expose the
 necessary API for writing to a given topic instead of the local Actor’s
 persistenceId; I’d opt for adding variants of the persist() methods that
 take an additional String argument. Using the resulting event log is then
 done as for the others (i.e. Views and potentially queries should just
 work).
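
If it helps to make the proposal concrete, here is a hypothetical sketch of such a persist() variant (the String-taking overload does not exist in Akka Persistence today, and AddItem/ItemAdded/Ack are made-up placeholder types; this only illustrates the shape of the idea):

```
// Hypothetical sketch only: a PersistentActor that, besides persisting
// under its own persistenceId, emits an event to an external topic via
// a proposed persist() overload taking an additional String argument.
class CartActor extends PersistentActor {
  override def persistenceId: String = "cart-42"

  def receiveCommand: Receive = {
    case AddItem(item) =>
      // Written under this actor's persistenceId: replayed on recovery.
      persist(ItemAdded(item)) { evt =>
        // Proposed variant, written to an external topic instead:
        // NOT replayed when this actor recovers, but visible to Views.
        persist("cart-events", evt) { _ => sender() ! Ack }
      }
  }

  def receiveRecover: Receive = {
    case ItemAdded(item) => // rebuild state from own events only
  }
}
```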


 Does that mean that a PersistentActor can emit events targeted to its
 persistenceId and/or targeted to an external topic and it is only the
 events targeted to the persistenceId that will be replayed during recovery
 of that PersistentActor?


 Yes.

 Both these two types of events can be replayed by a PersistentView.


 Yes; they are not different types of events, just how they get to the
 Journal is slightly different.



  The only concern is that the Journal needs to be prepared to receive
 events concurrently from multiple sources instead of just the same Actor,
 but since each topic needs to be totally ordered this will not be an
 additional hassle beyond just routing to the same replica, just like for
 persistenceIds.


 Replica as in data store replica, or as in journal actor?


 The Journal must implement this in whatever way is suitable for the
 back-end. A generic solution would be to shard the topics as Actors across
 the cluster (internal to the Journal), or the Journal could talk to the
 replicated back-end store such that a topic always is written to one
 specific node (if that helps).


 What has been requested is all events for an Aggregate type, e.g. all
 shopping carts, and this will not scale. It can still be useful, and
 with some careful design you could partition things when scalability is
 needed. I'm just saying that it is a big gun that can be pointed in the
 wrong direction.


 Mixed-up context: #1 

[akka-user] Re: Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Juan José Vázquez Delgado
Hi guys, really interesting thread. However, it follows from this 
discussion that Akka Persistence is not currently 100% ready for a full 
CQRS/ES implementation. A little bit frustrating, but, to be honest, it's 
true that it's still an experimental feature. As users, we accept this.

Anyway, thinking about how to solve the Query part, what do you think 
about using a distributed in-memory data grid solution such as Hazelcast 
or GridGain?

Regards, 

Juanjo

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Gary Malouf
So how does one handle combining events from different streams? A global
sequence number is the most straightforward approach.

Also, not everything needs to scale on the write side to that degree.
On Aug 19, 2014 9:24 AM, √iktor Ҡlang viktor.kl...@gmail.com wrote:

 The decision of whether scale is needed cannot be implicit, as that lures
 people into the non-scalable world, and by the time they find out it is too
 late.


 On Tue, Aug 19, 2014 at 3:20 PM, Roland Kuhn goo...@rkuhn.info wrote:


 19 aug 2014 kl. 14:57 skrev Gary Malouf malouf.g...@gmail.com:

 For CQRS specifically, a lot of what people call scalability is in its
 ability to easily model multiple read views to make queries very fast off
 the same event data.

 In the cases where a true global ordering is truly necessary, one often
 does not need to handle hundreds of thousands of writes per second.  I
 think the ideal is to have the global ordering property for events by
 default, and have to disable that if you feel a need to do more writes per
 second than a single writer can handle.


 Unfortunately it is not only the number of writes per second, the sheer
 data volume can drive the need for a distributed, partitioned storage
 mechanism. There is only so much you can fit within a single machine and
 once you go beyond that you quickly run into CAP (if you want your
 guarantees to hold 100% at all times). The way forward then necessitates
 that you must compromise on something, either Availability or Determinism
 (in this case).

 Regards,

 Roland

 Once the global ordering property is enforced, solving many of the
 publisher ordering issues (and supporting sagas) becomes significantly
 easier to achieve.
 On Aug 19, 2014 8:49 AM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 16:49 skrev Patrik Nordwall patrik.nordw...@gmail.com:

 On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn goo...@rkuhn.info wrote:


 18 aug 2014 kl. 10:27 skrev Patrik Nordwall patrik.nordw...@gmail.com
 :

 Hi Roland,

 A few more questions for clarification...


 On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon 
 vver...@shiftmethod.com wrote:


  On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:

 Dear hakkers,

 unfortunately it took me a long time to catch up with akka-user to
 this point after the vacation, but on the other hand this made for a very
 interesting and stimulating read, thanks for this thread!

 If I may, here’s what I have understood so far:

   1. In order to support not only actor persistence but also full
   CQRS we need to adjust our terminology: events are published to topics,
   where each persistenceId is one such topic but others are also allowed.
   2. Common use-cases of building projections or denormalized views
   require the ability to query the union of a possibly large number of topics
   in such a fashion that no events are lost. This union can be viewed as a
   synthetic or logical topic, but issues arise in that true topics provide
   total ordering while these synthetic ones have difficulties doing so.
   3. Constructing Sagas is hard.


 AFAICS 3. is not related to the other two, the mentions in this
 thread have only alluded to the problems so I assume that the difficulty 
 is
 primarily to design a process that has the right eventual consistency
 properties (i.e. rollbacks, retries, …). This is an interesting topic but
 let’s concentrate on the original question first.

 The first point is a rather simple one, we just need to expose the
 necessary API for writing to a given topic instead of the local Actor’s
 persistenceId; I’d opt for adding variants of the persist() methods that
 take an additional String argument. Using the resulting event log is then
 done as for the others (i.e. Views and potentially queries should just
 work).


 Does that mean that a PersistentActor can emit events targeted to its
 persistenceId and/or targeted to an external topic and it is only the
 events targeted to the persistenceId that will be replayed during recovery
 of that PersistentActor?


 Yes.

 Both these two types of events can be replayed by a PersistentView.


 Yes; they are not different types of events, just how they get to the
 Journal is slightly different.



  The only concern is that the Journal needs to be prepared to receive
 events concurrently from multiple sources instead of just the same Actor,
 but since each topic needs to be totally ordered this will not be an
 additional hassle beyond just routing to the same replica, just like for
 persistenceIds.


 Replica as in data store replica, or as in journal actor?


 The Journal must implement this in whatever way is suitable for the
 back-end. A generic solution would be to shard the topics as Actors across
 the cluster (internal to the Journal), or the Journal could talk to the
 replicated back-end store such that a topic always is written to one
 specific node (if that helps).


 What has been requested is all events for an Aggregate type, 

Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Roland Kuhn

19 aug 2014 kl. 15:39 skrev Juan José Vázquez Delgado jvazq...@tecsisa.com:

 Hi guys, really interesting thread. However, it follows from this discussion 
 that Akka Persistence is not currently 100% ready for a full CQRS/ES 
 implementation. A little bit frustrating, but, to be honest, it's true that 
 it's still an experimental feature. As users, we accept this.

Akka Persistence is about persistent actors, using Event Sourcing to achieve 
this goal. This makes it a perfect fit for the C in CQRS. The Q, on the other 
hand, does not actually need to have anything to do with Akka or actors at all, 
per se. If we can provide nice things then we will, of course :-)

 Anyway, and thinking about how to solve the Query part, what do you think 
 about using some distributed in-memory data grid solution such as Hazelcast 
 or GridGain?.

As I see it you should be able to use whatever fits your use-case for the Query 
side, in particular since the requirements for its structure are domain 
specific. Beware that none of the solutions are built on magic, though, and 
that things which sound too good to be true usually are.

Regards,

Roland

 
 Regards, 
 
 Juanjo
 



Dr. Roland Kuhn
Akka Tech Lead
Typesafe – Reactive apps on the JVM.
twitter: @rolandkuhn




Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Patrik Nordwall
On Tue, Aug 19, 2014 at 2:33 PM, delasoul michael.ham...@gmx.at wrote:

 and therefore we provide PersistentView as a simple way to replicate
 events to the read side, and then a de-normalized representation can be
 stored...

 If I understand this right, this means:
 PersistentActor persists an event;
 PersistentView queries the EventStore (e.g. every second) and forwards new
 events to the read side, e.g. an EventListener which then updates the
 ReadModel.

 What is the advantage of using the PersistentView here, instead of just
 emitting the event to the read side from the PersistentActor directly?


The PersistentView (read side) can process the events at its own pace; it
is decoupled from the write side. It can be down without affecting the
write side, and it can be started later and catch up.
Also, you can have multiple PersistentView instances consuming the same
stream of events, maybe doing different things with them.
/Patrik
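
A minimal sketch of what Patrik describes, using the Akka 2.3 PersistentView API (the read-model update itself is a placeholder; in practice it would write to a separate, query-optimized store):

```
import akka.persistence.PersistentView

// Sketch: a view that consumes one PersistentActor's events at its own
// pace and updates a query-optimized read model (placeholder here).
class CartDenormalizer extends PersistentView {
  override def persistenceId: String = "cart-42"       // the write side's id
  override def viewId: String = "cart-42-denormalizer" // this view's own id

  def receive: Receive = {
    case evt if isPersistent => updateReadModel(evt) // replayed journal event
    case _                   => // other messages (queries, ticks, ...)
  }

  // Placeholder: e.g. an upsert into a separate database.
  private def updateReadModel(evt: Any): Unit = ()
}
```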



 thanks,

 michael




[akka-user] [akka-stream] : Streaming large file from 1 server to 1 client

2014-08-19 Thread Xavier Bucchiotty
Hello HakKers,

I need to transfer large files (~GB) between two distant VMs. I first used 
the akka.io module, which works great.
I am currently taking a look at the akka-stream-experimental module to benefit 
from its precious asynchronous backpressure.

But when I create a Flow from a Stream[ByteString], it keeps a reference to 
it, and because of Scala Stream memoization this leads to an OutOfMemoryError.
I tried with an iterator instead, but then my integration tests fail. It 
seems that some chunks have disappeared, or there is some concurrent access to 
the data source.

From reading the documentation of Flow in the scaladsl package, I am beginning 
to think that streaming a file from one point to another is not a use case 
covered by reactive streams. Am I correct?
Can I expect some improvement on this in the next release?


To help, here is the code:

// Streamer implements Iterable[Byte] and reads the file byte by byte.
val byteStream = new Streamer(buffStream)

// toStream is needed here to go from Iterator[Iterable[Byte]] to
// Iterable[Iterable[Byte]], because we need a flow of an Iterable[ByteString].
val bytes = byteStream.grouped(chunkSize).toStream.map(_.toArray)
  .map(ByteString.fromArray)

Flow(bytes).produceTo(materializer, client.outputStream)

Thanks for your help



Re: [akka-user] [akka-stream] : Streaming large file from 1 server to 1 client

2014-08-19 Thread √iktor Ҡlang
On Tue, Aug 19, 2014 at 4:08 PM, Xavier Bucchiotty xbucchio...@xebia.fr
wrote:

 Hello HakKers,

 I need to transfer large files (~GB) between two distant VMs. I first used
 the akka.io module, which works great.
 I am currently taking a look at the akka-stream-experimental module to
 benefit from its precious asynchronous backpressure.

 But when I create a Flow from a Stream[ByteString], it keeps a reference
 to it, and because of Scala Stream memoization this leads to an
 OutOfMemoryError.
 I tried with an iterator instead, but then my integration tests fail. It
 seems that some chunks have disappeared, or there is some concurrent access
 to the data source.

 From reading the documentation of Flow in the scaladsl package, I am
 beginning to think that streaming a file from one point to another is not a
 use case covered by reactive streams. Am I correct?
 Can I expect some improvement on this in the next release?


 To help, here is the code:

  // Streamer implements Iterable[Byte] and reads the file byte by byte.
  val byteStream = new Streamer(buffStream)

  // toStream is needed here to go from Iterator[Iterable[Byte]] to
  // Iterable[Iterable[Byte]], because we need a flow of an Iterable[ByteString].
  val bytes = byteStream.grouped(chunkSize).toStream.map(_.toArray)
  .map(ByteString.fromArray)

 Flow(bytes).produceTo(materializer, client.outputStream)


Hi Xavier,

Flow can take an iterator:

  def apply[T](iterator: Iterator[T]): Flow[T]

(see
http://doc.akka.io/api/akka-stream-and-http-experimental/0.4/akka/stream/scaladsl/Flow.html)

Start a new flow from the given Iterator.


So something like the following should work (warning, I have not compiled
this):

Flow((new Streamer(buffStream)).grouped(chunkSize).map {
  iteratorOfBytes =>
    val b = new ByteStringBuilder()
    b.sizeHint(chunkSize)
    b ++= iteratorOfBytes
    b.result()
})
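
For the memoization issue itself, a small pure-Scala sketch (no Akka dependency; `chunks` and the byte source are illustrative stand-ins) shows why an Iterator-based pipeline does not retain earlier chunks the way a memoizing Stream does:

```scala
// Sketch: chunk a byte source lazily with an Iterator, so no chunk is
// retained after it has been consumed (unlike a memoizing Stream,
// which keeps every chunk it has produced reachable from its head).
object ChunkDemo {
  def chunks(bytes: Iterator[Byte], chunkSize: Int): Iterator[Array[Byte]] =
    bytes.grouped(chunkSize).map(_.toArray)

  def main(args: Array[String]): Unit = {
    val source = Iterator.range(0, 10).map(_.toByte) // stand-in for a file
    val cs = chunks(source, 4).toList
    println(cs.map(_.length)) // chunk sizes: full chunks, then a remainder
    println(cs.map(_.toList))
  }
}
```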



 Thanks for your help





-- 
Cheers,
√



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Greg Young
I am not responding to just this one post; this is a reply towards the end, 
and I will discuss a few posts from earlier.

To start, I have to agree with some of the posters that premature scaling 
can cause many issues. This actually reminds me of the CQRS Journey, which 
people mentioned earlier. One of the main criticisms of the CQRS Journey is 
that it prematurely took on scaling constraints, which causes the code to be 
much, much more complex than it needs to be. This was partially due to it 
being a sample app of something larger and partially due to the p&p team 
also showing Azure at the same time. Because they wanted to distribute and 
show Azure at the same time, the team took cloud constraints as a given. 
This forced, for instance, every handler in the system to be 
idempotent. While seemingly a small constraint, this actually adds a 
significant amount of complexity to the system.

The same problem exists in what is being discussed today. For 95+% of 
systems it is totally reasonable that when I write a projection I expect my 
events to have assured ordering. As Vaughn mentioned, a few hundred 
events/second covers the vast majority of systems. Systems like these can be 
completely linearized, and ordering assurances are not an issue. This 
removes a LOT of complexity in projection code, as you don't have to handle 
hundreds to thousands of edge cases in your read models where you get 
events out of order. Saying that ordering assurances are not needed and 
everyone should use causal consistency is really saying we don't care 
about the bottom 95% of users.




RKuhn had mentioned doing joins. You are correct that this is how we do it 
now. We offer historically perfect joins, but in live mode there is no way to 
do a perfect live join via queries. We do, however, support another mechanism 
for this that will assure that your live join always matches your 
historical one: we allow you to precalculate and save the results of the join. 
This produces a stream full of stream links, which can then be replayed 
(perfectly) as many times as you want.


There was some discussion above about using precalculated topics to handle 
projections. I believe the terminology was tags. The general idea, if 
I can repeat it, is to write an event FooOccurred and to include on it 
some tags (foo, bar, baz) which map it to topics that can then be 
replayed as a whole. At the outset this seems like a good idea, but it will 
not work well in production. The place where it will run into a problem is 
that I cannot know, when writing the events, all the mappings that any future 
projections may wish to have. Tomorrow my business could ask me for a 
report that partitions the events in a completely new way, and I would 
be unable to produce it.
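
Greg's objection can be shown with a tiny pure-Scala sketch (all names hypothetical): tags are attached at write time, so a replay can only select along partitions that were anticipated when the events were written.

```scala
// Hypothetical sketch of write-time tagging: events carry tags chosen
// when they are written, and topic replay can only filter on those tags.
final case class TaggedEvent(payload: String, tags: Set[String])

object TagDemo {
  // "Replay" a synthetic topic: every event that was tagged with `tag`.
  def replayTopic(log: Seq[TaggedEvent], tag: String): Seq[String] =
    log.filter(_.tags.contains(tag)).map(_.payload)

  def main(args: Array[String]): Unit = {
    val log = Seq(
      TaggedEvent("FooOccurred-1", Set("foo", "bar")),
      TaggedEvent("FooOccurred-2", Set("foo")),
      TaggedEvent("BazOccurred-1", Set("baz"))
    )
    println(replayTopic(log, "foo")) // partitions chosen at write time work
    // A new report over a partition nobody tagged finds nothing: the
    // mapping did not exist when the events were written.
    println(replayTopic(log, "by-customer"))
  }
}
```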


As I mentioned previously in a quick comment, what is being asked for today 
is actually already supported with akka-persistence, providing you are using 
Event Store as your backend (for those interested, today is the release of 
the final RC of 3.0, which has all of the support for the akka-persistence 
client; binaries are for win/linux/mac). Basically, what you would do is 
run akka-persistence on your write side but *not* use it for supporting 
your read models. Instead, when dealing with your read models, you would use 
a catch-up subscription for what you are interested in. I do not see anything 
inherently wrong with this way of doing things, and it begs the question of 
whether this is actually a more appropriate way to deal with event-sourced 
systems using akka-persistence, e.g. use native storage directly if it 
supports it.
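
The catch-up subscription pattern Greg describes can be sketched with a toy in-memory log (all names hypothetical; a real client must additionally handle the race between the replay phase and the switch to live events, plus persistence of the checkpoint):

```scala
// Hypothetical, in-memory sketch of a catch-up subscription: a read
// model starts from its last checkpoint, replays the historical events
// it missed, then continues receiving live events.
import scala.collection.mutable.ArrayBuffer

final class InMemoryLog {
  private val events = ArrayBuffer.empty[String]
  private val subscribers = ArrayBuffer.empty[String => Unit]

  def append(e: String): Unit = { events += e; subscribers.foreach(_(e)) }

  // Catch up from `checkpoint` (number of events already processed),
  // then switch to the live phase. Not thread-safe; sketch only.
  def subscribeFrom(checkpoint: Int)(handler: String => Unit): Unit = {
    events.drop(checkpoint).foreach(handler) // historical replay
    subscribers += handler                   // live phase
  }
}

object CatchUpDemo {
  def main(args: Array[String]): Unit = {
    val log = new InMemoryLog
    log.append("e1"); log.append("e2"); log.append("e3")
    val seen = ArrayBuffer.empty[String]
    log.subscribeFrom(checkpoint = 1)(seen += _) // "e1" already processed
    log.append("e4")                             // live event
    println(seen.toList) // List(e2, e3, e4)
  }
}
```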

Cheers,

Greg
On Tuesday, August 19, 2014 9:24:10 AM UTC-4, √ wrote:

 The decision of whether scale is needed cannot be implicit, as that lures 
 people into the non-scalable world, and by the time they find out it is too 
 late.


 On Tue, Aug 19, 2014 at 3:20 PM, Roland Kuhn goo...@rkuhn.info wrote:


 19 aug 2014 kl. 14:57 skrev Gary Malouf malou...@gmail.com:

 For CQRS specifically, a lot of what people call scalability is in its 
 ability to easily model multiple read views to make queries very fast off 
 the same event data.

 In the cases where a true global ordering is truly necessary, one often 
 does not need to handle hundreds of thousands of writes per second.  I 
 think the ideal is to have the global ordering property for events by 
 default, and have to disable that if you feel a need to do more writes per 
 second than a single writer can handle. 


 Unfortunately it is not only the number of writes per second, the sheer 
 data volume can drive the need for a distributed, partitioned storage 
 mechanism. There is only so much you can fit within a single machine and 
 once you go beyond that you quickly run into CAP (if you want your 
 guarantees to hold 100% at all times). The way forward then necessitates 
 that you must compromise on something, either Availability or Determinism 
 (in this case).

 Regards,

 Roland

 Once the 

Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Greg Young


On Tuesday, August 19, 2014 9:44:10 AM UTC-4, rkuhn wrote:


 19 aug 2014 kl. 15:39 skrev Juan José Vázquez Delgado jvaz...@tecsisa.com:

 Hi guys, really interesting thread. However, it follows from this 
 discussion that Akka Persistence is not currently 100% ready for a full 
 CRQS/ES implementation. A little bit frustrating but, to be honest, it's 
 true that it's still an experimental feature. As users, we're assuming this.


 Akka Persistence is about persistent actors, using Event Sourcing to 
 achieve this goal. This makes it a perfect fit for the C in CQRS. The Q, on 
 the other hand, does not actually need to have anything to do with Akka or 
 actors at all, per se. If we can provide nice things then we will, of 
 course :-)



Yes, it is very good at that now. Just need some way of supporting the Q 
side efficiently (without bringing in massive infrastructure) and it's 
probably good :)
 


 Anyway, and thinking about how to solve the Query part, what do you think 
 about using some distributed in-memory data grid solution such as Hazelcast 
 or GridGain?.


 As I see it you should be able to use whatever fits your use-case for the 
 Query side, in particular since the requirements for its structure are 
 domain specific. Beware that none of the solutions are built on magic, 
 though, and that things which sound too good to be true usually are.

 Regards,

 Roland


 Regards, 

 Juanjo









Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Patrik Nordwall
On Tue, Aug 19, 2014 at 5:07 PM, delasoul michael.ham...@gmx.at wrote:

 Then the PersistentView is not used as a middle-man to replicate events
 to the read side, but it is the read side (meaning if a client sends a query,
 a PersistentView creates the response)?
 That's how I understood PersistentViews until now - but maybe that was
 wrong, so I'm asking...


That is one possible way of using a PersistentView, but not how I would use
it in a large system. I would use it to consume the events and save a
representation that is optimized for the queries in a separate database (or
other type of product). Queries go directly (or via some other actor) to
the database.

/Patrik




Thanks for your answer.

 On Tuesday, 19 August 2014 16:11:33 UTC+2, Patrik Nordwall wrote:




 On Tue, Aug 19, 2014 at 2:33 PM, delasoul michael...@gmx.at wrote:

 and therefore we provide PersistentView as a simple way to replicate
 events to the read side, and then a de-normalized representation can be
 stored...

  If I understand this right, this means:
  PersistentActor persists an event;
  PersistentView queries the EventStore (e.g. every second) and forwards new
  events to the read side, e.g. an EventListener which then updates the
  ReadModel.

  What is the advantage of using the PersistentView here, instead of just
  emitting the event to the read side from the PersistentActor directly?


  The PersistentView (read side) can process the events at its own pace; it
  is decoupled from the write side. It can be down without affecting the
  write side, and it can be started later and catch up.
  Also, you can have multiple PersistentView instances consuming the same
  stream of events, maybe doing different things with them.
 /Patrik



 thanks,

 michael






-- 

Patrik Nordwall
Typesafe http://typesafe.com/ -  Reactive apps on the JVM
Twitter: @patriknw



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Ashley Aitken


On Tuesday, 19 August 2014 19:33:55 UTC+8, Martin Krasser wrote:

 If so, it sounds like a great solution but why would that require an 
 extension to the Akka Persistence design/API?
  

 Because transformed/joined/... event streams in the backend store on the read 
 side must be consumable by PersistentViews (for creating read models). I 
 still see this backend store as maintaining changes (= transformed/joined/... 
 events) instead of current state.


I am sorry I still don't see this.

This suggests to me that Spark is talking directly to the read model 
datastore (e.g. graph database, MongoDB, SQL database).

So are you suggesting:

1. journal -> Spark -> Akka actors (like PersistentView) -> read model data 
store

or

2. journal -> Spark -> read model data store (like graph database, MongoDB, 
SQL database) -> Akka actors -> queries

I see PersistentViews (for generalised topics) as the glue between the Akka 
journal (write store) and read stores (option 1).

Thanks for your patience.

Cheers,
Ashley.



Re: [akka-user] WebSocket + AKKA + Redis pub/sub

2014-08-19 Thread gitted
This seems like a fairly common pattern/problem domain that people will be 
facing when they want to incorporate real-time updates on websites.

Is there anything open source that can help guide me on how to implement 
this? I was thinking of using Redis too, but RabbitMQ would be very 
similar; just the backing queue is different.

On Monday, September 3, 2012 12:10:45 PM UTC-4, Jeremy Pierre wrote:

 Apologies, I should have jumped in earlier - my team and I are using 
 websockets via Netty in Akka 2.0.x with RabbitMQ.  Looks like Redis 
 pub/sub is a bit simpler but maybe I can help. 

 Can't give a lot of details about what specifically we're working on 
 but if you have general questions about setup/approach I'll do my 
 best. 

 Jeremy 

 On Mon, Sep 3, 2012 at 5:21 AM, Viktor Klang viktor...@gmail.com 
 javascript: wrote: 
  Hi Jason, 
  
  Apparently no one wants to admit to having done that :-) 
  
  Cheers, 
  √ 
  
  On Tue, Aug 21, 2012 at 12:36 PM, Jason arvi...@gmail.com javascript: 
 wrote: 
  I wonder if anyone has tried to use WebSockets to send messages to Redis 
 pub/sub? 
  
  




Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 17:41, Ashley Aitken wrote:



On Tuesday, 19 August 2014 19:33:55 UTC+8, Martin Krasser wrote:


If so, it sounds like a great solution but why would that require
an extension to the Akka Persistence design/API?


Because transformed/joined/... event streams in the backend store on
the read side must be consumable by PersistentViews (for creating
read models). I still see this backend store as maintaining changes
(= transformed/joined/... events) rather than current state.


I am sorry I still don't see this.

This suggests to me that Spark is talking directly to the read model 
datastore (e.g. a graph database, MongoDB, or a SQL database).


So are you suggesting:

1. journal -> Spark -> Akka actors (like PersistentView) -> read model 
data store


or

2. journal -> Spark -> read model data store (like a graph database, 
MongoDB, SQL database) -> Akka actors -> queries


I was suggesting 2.



I see PersistentView (for generalised topics) as the glue between the 
Akka journal (write store) and the read stores (option 1.).


Thanks for your patience.

Cheers,
Ashley.



--
Martin Krasser

blog:http://krasserm.blogspot.com
code:http://github.com/krasserm
twitter: http://twitter.com/mrt1nz



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread delasoul
OK, thanks. What confused me was "a simple way to replicate events to the 
read side", which I misunderstood as sending events, but you meant 
something else.
If a PersistentView is only involved in writing the read model, is it not 
harder to achieve a consistent read model (you have to make sure that the 
PersistentView is alive to update it)?


On Tuesday, 19 August 2014 17:15:57 UTC+2, Patrik Nordwall wrote:




 On Tue, Aug 19, 2014 at 5:07 PM, delasoul michael...@gmx.at javascript:
  wrote:

 Then the PersistentView is not used as a middle-man to replicate 
 events to the read side, but it is the read side (meaning if a client sends 
 a query, a PersistentView creates the response)?
 That's how I understood PersistentViews until now - but maybe that was 
 wrong, so I am asking...


 That is one possible way of using a PersistentView, but not how I would 
 use it in a large system. I would use it to consume the events and save a 
 representation that is optimized for the queries in a separate database (or 
 other type of product). Queries go directly (or via some other actor) to 
 the database.

 /Patrik
  

  

 thanks for your answer

 On Tuesday, 19 August 2014 16:11:33 UTC+2, Patrik Nordwall wrote:




 On Tue, Aug 19, 2014 at 2:33 PM, delasoul michael...@gmx.at wrote:

 and therefore we provide PersistentView as a simple way to replicate 
 events to the read side, and then a de-normalized representation can be 
 stored...

 If I understand this right, this means:
 the PersistentActor persists an event;
 the PersistentView queries the event store (e.g. every second) and forwards new 
 events to the read side, e.g. an EventListener, which then updates the 
 read model.

 What is the advantage of using the PersistentView here, instead of just 
 emitting the event to the read side from the PersistentActor directly?


 The PersistentView (read side) can process the events at its own pace; 
 it is decoupled from the write side. It can be down without affecting the 
 write side, and it can be started later and catch up.
 Also, you can have multiple PersistentView instances consuming the same 
 stream of events, maybe doing different things with them.
 /Patrik
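[Editorial note: Patrik's point - that the view consumes events at its own pace and can catch up after being down - can be sketched with a toy, stdlib-only model. This is illustrative Java, not the Akka API; `Journal` and `View` are made-up names.]

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: the write side appends to a journal, and a view tracks its
// own read offset, so it can lag behind, stop, and later catch up.
class Journal {
    private final List<String> events = new ArrayList<>();
    synchronized void append(String event) { events.add(event); }
    synchronized List<String> readFrom(int offset) {
        int from = Math.min(offset, events.size());
        return new ArrayList<>(events.subList(from, events.size()));
    }
}

class View {
    private int offset = 0;
    final List<String> readModel = new ArrayList<>();
    void poll(Journal journal) {
        for (String e : journal.readFrom(offset)) {
            readModel.add(e.toUpperCase()); // view keeps its own representation
            offset++;
        }
    }
}
```

Because the view only remembers an offset, several independent views can consume the same journal and do different things with the events.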
  


 thanks,

 michael 






 -- 

 Patrik Nordwall
 Typesafe http://typesafe.com/ -  Reactive apps on the JVM
 Twitter: @patriknw

  



Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread delasoul
As I am no Spark expert: will it be used only as a kind of 
messaging (streaming) middleware to sync the write and read stores, or also to 
somehow change/merge/filter the events it gets/pulls from the write store? Or 
is this all done via the plugin for PersistentViews?
(I guess it has to be like this, otherwise using only one backend store 
could not be supported?)

thanks,

michael

On Tuesday, 19 August 2014 17:48:16 UTC+2, Martin Krasser wrote:

  
 On 19.08.14 17:41, Ashley Aitken wrote:
  


 On Tuesday, 19 August 2014 19:33:55 UTC+8, Martin Krasser wrote: 

   If so, it sounds like a great solution but why would that require an 
 extension to the Akka Persistence design/API?
  

 Because transformed/joined/... event streams in the backend store on the read 
 side must be consumable by PersistentViews (for creating read models). I 
 still see this backend store as maintaining changes (= transformed/joined/... 
 events) rather than current state. 
  

  I am sorry I still don't see this.

  This suggests to me that Spark is talking directly to the read model 
 datastore (e.g. a graph database, MongoDB, or a SQL database).

  So are you suggesting: 

  1. journal -> Spark -> Akka actors (like PersistentView) -> read model 
 data store 

  or

  2. journal -> Spark -> read model data store (like a graph database, 
 MongoDB, SQL database) -> Akka actors -> queries
  

 I was suggesting 2.

  
  I see PersistentView (for generalised topics) as the glue between the 
 Akka journal (write store) and the read stores (option 1.).
  
  Thanks for your patience.

  Cheers,
 Ashley.



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Ashley Aitken


On Tuesday, 19 August 2014 21:14:17 UTC+8, rkuhn wrote:


 18 aug 2014 kl. 18:01 skrev Ashley Aitken amai...@gmail.com javascript:
 :

 I believe Akka needs to allow actors to:


 (i) persist events with as much information as efficiently possible on the 
 write side to allow the store to facilitate the read side extracting them 
 according to what criteria is needed,

 This is a convoluted way of saying that Events must be self-contained, 
 right? In that case: check!


No, I don't think so.  As I understand it now, the only thing the event 
store knows about each event is the persistenceId and a chunk of opaque 
data. It doesn't know the type of the event, the type of the message, any 
time information, any causal dependency, etc.  I guess what I am saying is 
that the events need to include as much metadata as possible so that the 
event store can provide the necessary synthetic streams if they are 
requested by the read side.  As I mentioned later, some event stores (like 
Kafka) may replicate the events into separate topics based on this 
information; others (like Event Store) may use this information later to 
form streams of links to the original events.  
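[Editorial note: a minimal sketch of what such a metadata-carrying envelope might look like. All names here are hypothetical - akka-persistence at the time stored only the persistenceId plus an opaque payload, as the post explains.]

```java
import java.time.Instant;

// Hypothetical event envelope (not the akka-persistence wire format).
// Carrying metadata alongside the still-opaque payload is what would let
// a store build synthetic streams by event type, time, or causality.
final class EventEnvelope {
    final String persistenceId;
    final long sequenceNr;
    final String eventType;   // e.g. a fully-qualified class name
    final Instant timestamp;
    final byte[] payload;     // still opaque to the store

    EventEnvelope(String persistenceId, long sequenceNr, String eventType,
                  Instant timestamp, byte[] payload) {
        this.persistenceId = persistenceId;
        this.sequenceNr = sequenceNr;
        this.eventType = eventType;
        this.timestamp = timestamp;
        this.payload = payload;
    }
}
```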

(iii) read from (and replay) streams of events on the read and write side 
 according to a range of criteria supported and defined within the store or 
 via the store API (e.g. using a DSL), and

 This is the unclear point: who defines the query and when? What are the 
 consistency guarantees for the generated event stream?


I suggest the developers of the read side specify the queries directly to 
the event store, though this may be after the events have initially been 
persisted.  The event store produces the query stream (if it can) and a 
PersistentView can be set up to read from that named query.  With regards to 
consistency guarantees, my understanding is that these streams are used to 
eventually guarantee that the query model will be consistent with the write 
model, i.e. all the events will get across.  With regards to ordering, I 
think the event store does the best it can to provide consistent ordering, 
e.g. total ordering if there was no distribution and causal ordering, where 
possible, if there was distribution.  The developer would need to understand 
the limitations of how the query store is configured and queried.
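[Editorial note: the idea of a store serving a named query stream can be sketched with a toy, stdlib-only model. This is illustrative only - real stores such as Event Store implement this very differently, e.g. via streams of links - and `ToyStore` is a made-up name.]

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Toy "synthetic stream": the store evaluates a registered query over
// already-persisted events and serves the matching sub-stream under a
// name, which a view could then replay from.
class ToyStore {
    private final List<String> log = new ArrayList<>();
    private final Map<String, Predicate<String>> queries = new HashMap<>();

    void persist(String event) { log.add(event); }
    void defineQuery(String name, Predicate<String> q) { queries.put(name, q); }
    List<String> stream(String name) {
        return log.stream().filter(queries.get(name)).collect(Collectors.toList());
    }
}
```

Note that the query can be registered after the events were persisted, which is the point being made above.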

  

 (iv) reliably (at least once) deliver information to other read side 
 store(s) and systems above and beyond the store used for persisting the 
 events.

 This is PersistentView, so “check!” (As argued previously “reliably” 
 translates to “persistent”.)


As I asked in another thread (I think), I am not sure how PersistentView can 
do this when PersistentActor is the one that can mix in AtLeastOnceDelivery.

I think we need a PersistentView that can guarantee AtLeastOnceDelivery to 
an actor representing a query store.  This would seem to require a 
PersistentViewActor ;-) that can read from a persistent query and also 
persist its state to provide guaranteed delivery.
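[Editorial note: the at-least-once behaviour being asked for can be sketched without Akka. This is a toy, stdlib-only model with made-up names; Akka's actual AtLeastOnceDelivery mixin additionally persists its delivery state and redelivers on a timer.]

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy at-least-once delivery: unconfirmed deliveries are kept until
// acknowledged and can be re-sent, so the destination may see duplicates
// but never silently misses a message.
class AtLeastOnce {
    private long nextId = 1;
    private final Map<Long, String> unconfirmed = new LinkedHashMap<>();
    final List<String> outbox = new ArrayList<>(); // stands in for actual sends

    long deliver(String msg) {
        long id = nextId++;
        unconfirmed.put(id, msg);
        outbox.add(msg);
        return id;
    }
    void confirm(long id) { unconfirmed.remove(id); }
    void redeliverUnconfirmed() { outbox.addAll(unconfirmed.values()); }
}
```

The duplicate sends are why the receiving side (the query store here) needs to be idempotent.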

My lack of knowledge of Scala and Akka may be showing here.

I believe each of these is readily achievable with Akka but:


 (i) doesn’t mean explicitly persisting the events to specific topics as 
 you suggest in your (1) (although this may be how some stores implement the 
 required functionality on the read side). Instead it means transparently 
 including information like the actorId, event type, actor type, probably 
 the time and possibly information to help with causal ordering (see my next 
 post).

 No, again we need to strictly keep Topics and Queries separate, they are 
 very different features. Topics are defined up-front and explicitly written 
 to, Queries are constructed later based on the existing event log contents. 
 Marking events within the store with timestamps of some kind might help 
 achieving a pseudo-deterministic behavior, but it is by no means a 
 guarantee. Causal ordering is out of scope, and it also does not help in 
 achieving the desired ability to replay Queries from some given point in 
 the past.


I think we do agree somewhere in there, but I don't think, as was suggested 
(by you earlier?), that creating topics up-front, whether a fixed set or 
arbitrary tags, will work.  I feel that how (and how far) the store supports 
the queries is up to the store (e.g. creating separate topics or synthetic 
topics), so I would argue against using topics for CQRS.  As I mention 
below, topics would be great for Pub/Sub to persistent topics, but not for 
CQRS.

 C. CQRS with Event Sourcing


 And finally, there is CQRS with Event Sourcing, which I believe is much 
 more than (A) and (B), and particularly doesn't necessarily require (B) for 
 all event stores.  So if Akka were to implement (B), which I think would be 
 very useful for other reasons, it would not specifically be for CQRS.


 Please consider this diagram overviewing CQRS with Event Sourcing:


 

[akka-user] Using stash with become

2014-08-19 Thread Luis Medina
Hi all,

I'm working on implementing retry functionality in one of my actors, which 
will retry writing a message to RabbitMQ if the original write fails. This 
looks something like this:

private String retryMessage;

private void processMessage(String message) {
    try {
        writer.write(message);
    } catch (IOException e) {
        LOGGER.warn("Unable to write message to RabbitMQ. Retrying...");
        retryMessage = message;
        context().become(retry);
        sendTimeout();
    }
}

private PartialFunction<Object, BoxedUnit> retry = ReceiveBuilder
    .match(String.class, message -> stash())
    .match(ReceiveTimeout.class, receiveTimeout -> retryMessage())
    .build();

private void sendTimeout() {
    long waitTime = getWaitTime();

    context().setReceiveTimeout(Duration.Undefined());
    context().setReceiveTimeout(Duration.create(waitTime,
        TimeUnit.MILLISECONDS));
}

private void retryMessage() {
    try {
        writer.write(retryMessage);
        context().setReceiveTimeout(Duration.Undefined());
        context().unbecome();
        unstashAll();
    } catch (IOException e) {
        LOGGER.warn("Unable to write message to RabbitMQ. Retrying...");
        sendTimeout();
    }
}

As you can see, when the write fails, the actor switches to a retry mode. 
In this retry mode, it will stash any incoming messages that are meant to 
be written until it can successfully write the original message that it 
failed on. To do a retry, I set a ReceiveTimeout whose duration 
increases exponentially with each failure. When the actor receives a 
ReceiveTimeout, it simply tries the write again. Now the part that I'm 
curious about has to do with unstashing all of the messages and unbecoming 
the retry mode.

In the retryMessage() method, if the write is successful, the 
ReceiveTimeout will be shut off, the actor will revert back to its original 
state, and any messages that were stashed during this time will be 
unstashed. In terms of reverting back to its original state and unstashing 
messages, is this the right order in which to do this? In all of the 
examples that I've seen, I noticed that the unstashAll() always came before 
the context().unbecome(), which is what I originally had. After giving it 
some thought, however, I started to wonder if that order would cause me to 
lose any messages if they arrived between the unstashAll() and 
context().unbecome() operations, thus causing them to get stashed but never 
unstashed again. This is why I ended up reversing the order. Is 
my thinking correct?
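[Editorial note: one way to reason about this is a toy single-threaded mailbox model, sketched below with made-up names (not Akka internals). Because an actor processes one message at a time, both unstashAll() and unbecome() complete inside the current handler before any new message is dequeued, so no message can "arrive between" the two calls - in either order.]

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy actor: messages queue in the mailbox while a handler runs, so the
// effects of a single handler (unbecome + unstashAll) are atomic with
// respect to message delivery.
class ToyActor {
    private final Deque<String> mailbox = new ArrayDeque<>();
    private final Deque<String> stash = new ArrayDeque<>();
    private boolean retrying = true;
    final List<String> written = new ArrayList<>();

    void tell(String msg) { mailbox.addLast(msg); }

    void processOne() {
        String msg = mailbox.pollFirst();
        if (msg == null) return;
        if (retrying) {
            if (msg.equals("timeout")) {
                // the order of the next two lines cannot lose messages:
                // nothing is dequeued until this handler returns
                retrying = false;                                        // "unbecome"
                while (!stash.isEmpty()) mailbox.addFirst(stash.pollLast()); // "unstashAll"
            } else {
                stash.addLast(msg);                                      // "stash"
            }
        } else {
            written.add(msg);
        }
    }
}
```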

Luis

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Event Stores for Akka Persistence for CQRS?

2014-08-19 Thread Martin Krasser


On 19.08.14 18:48, delasoul wrote:
As I am no Spark expert: will it be used only as a kind of 
messaging (streaming) middleware to sync the write and read stores, or 
also to somehow change/merge/filter the events it gets/pulls from the 
write store


Usually, to process (transform/aggregate/filter/...) these events.
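[Editorial note: a stand-in for what such a Spark job does conceptually, using plain java.util.stream rather than the Spark API; the event shape and job name are made up.]

```java
import java.util.List;
import java.util.stream.Collectors;

// Conceptual read-model job: read logged events, filter and transform
// them, and emit records destined for the read-model store.
record Event(String persistenceId, String type, int amount) {}

class ReadModelJob {
    static List<String> run(List<Event> logged) {
        return logged.stream()
                .filter(e -> e.type().equals("OrderPlaced"))    // filter
                .map(e -> e.persistenceId() + ":" + e.amount()) // transform
                .collect(Collectors.toList());
    }
}
```

A real Spark job would do the same shape of work distributed over partitions of the event log.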


or is this all done via the plugin for PersistentViews?
(I guess it has to be like this, otherwise using only one backend 
store could not be supported?)


thanks,

michael

On Tuesday, 19 August 2014 17:48:16 UTC+2, Martin Krasser wrote:


On 19.08.14 17:41, Ashley Aitken wrote:



On Tuesday, 19 August 2014 19:33:55 UTC+8, Martin Krasser wrote:


If so, it sounds like a great solution but why would that
require an extension to the Akka Persistence design/API?


Because transformed/joined/... event streams in the backend store
on the read side must be consumable by PersistentViews (for
creating read models). I still see this backend store as
maintaining changes (= transformed/joined/... events) rather than
current state.


I am sorry I still don't see this.

This suggests to me that Spark is talking directly to the read
model datastore (e.g. a graph database, MongoDB, or a SQL database).

So are you suggesting:

1. journal -> Spark -> Akka actors (like PersistentView) -> read
model data store

or

2. journal -> Spark -> read model data store (like a graph
database, MongoDB, SQL database) -> Akka actors -> queries


I was suggesting 2.



I see PersistentView (for generalised topics) as the glue between
the Akka journal (write store) and the read stores (option 1.).

Thanks for your patience.

Cheers,
Ashley.



--
Martin Krasser

blog:http://krasserm.blogspot.com
code:http://github.com/krasserm
twitter: http://twitter.com/mrt1nz



[akka-user] Akka FSM and Akk persistence with Java (8)

2014-08-19 Thread Barak Cohen
Hi,

We have just started to build our new application with Akka using Java 8.
We started by using the FSM actor, but when we read about the persistent 
actors we wanted to use them as well.

I saw that in the Scala documentation of the persistence module there is a 
reference on how to combine the two, 
but I couldn't find any reference to it in the Java version or in the code 
base.

Is there a way to combine the two in java as well?

Thanks,
Barak  

-- 
This message may contain confidential and/or privileged information. 
If you are not the addressee or authorized to receive this on behalf of the 
addressee you must not use, copy, disclose or take action based on this 
message or any information herein. 
If you have received this message in error, please advise the sender 
immediately by reply email and delete this message. Thank you.



Re: [akka-user] Akka FSM and Akk persistence with Java (8)

2014-08-19 Thread Konrad 'ktoso' Malawski
Hi Barak,
It’s currently not possible to use FSM with PersistentActors.
We do want to provide this before persistence goes stable, here’s the ticket 
for tracking it: https://github.com/akka/akka/issues/15279

I was already playing around with implementing it, but sadly it requires a bit 
more work.


-- 
Konrad 'ktoso' Malawski
hAkker @ typesafe
http://akka.io



[akka-user] Akka shell?

2014-08-19 Thread John Antypas
A bit off topic, but I hope the Typesafe gurus are ahead of us on this 
:-)

Has anyone ever thought of a UNIX shell built on the Scala REPL with 
Akka on the backend?  

As we are moving to distributed clusters, it seems to me that a UNIX shell 
with the REPL, and maybe the Akka microkernel behind it, would allow me to 
write Akka-style scripts in AkkaScript (a Scala DSL?) and connect multiple 
machines across an Akka cluster.  Something like Erlang and the BEAM VM.

Literally, you could run Akka programs under the Akka shell, or Akka script 
and connect to the cluster where multiple programs/scripts would use the 
actors space underneath it all to pass messages back and forth.   

It seems to me we're almost there.   We have clustering, we have the Scala 
DSLs, we have Akka.   We'd need routers to serve as name servers for zone 
space, and we'd need lots of Akka/Scala libraries for system tasks, but you 
could write a distributed, actor-aware shell.

Thoughts?

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] akka app design questions

2014-08-19 Thread Justin du coeur
On Tue, Aug 19, 2014 at 4:51 AM, Fej ifelset...@gmail.com wrote:

 I'm trying to learn Akka and need a clarification on how to design an
 application. Let's imagine a simple HTTP CRUD app which handles some
 documents stored in MongoDB. For each of these operations I'd need at least
 one actor processing the request, fetching the db record and formatting the
 output? When do I create actors? Is the idea to create a new actor for each
 new HTTP request? Actors can be started and stopped, but when should I do so
 in the context of this simple app?


As so often in programming, there's no one-size-fits-all answer, and many
ways this could be done.  But yes, if I were going to do something this
simple, I'd likely do it with one Actor per request, possibly with
different Actor classes for different request types.  Basically, if you
think of an Actor as your unit of concurrency / parallelism, you're often
not too far off.

As for when to start and stop them, that depends on how concerned you are
with the fine details of efficiency.  You *could* maintain a pool of Actors
and assign them to requests, and save a few cycles that way, but I'd
usually say that's overkill -- simply spinning up an Actor for each request
as it comes in isn't horribly expensive, and lets you be very confident
that each one is nice and clean when the request begins...



[akka-user] Re: WebSocket + AKKA + Redis pub/sub

2014-08-19 Thread Ryan Tanner
I've never used it in production, but way back I built a proof-of-concept 
based on one of the Play Framework sample apps, using Redis pub/sub to feed 
a WebSocket-based chat room.

https://github.com/ryantanner/websocketchat-redis

It does extremely little; really it just round-trips messages across Redis 
pub/sub rather than having messages be sent directly out to other 
subscribers.

On Tuesday, August 21, 2012 4:36:39 AM UTC-6, Jason wrote:

 I wonder if anyone has tried to use WebSockets to send messages to 
 Redis pub/sub?






[akka-user] modifying log level at run-time

2014-08-19 Thread Adam
Hi,

I know Akka's configuration does not get reloaded at run-time (see here: 
https://groups.google.com/forum/#!msg/akka-user/bQA1zJP3yOY/fBVmGfRvYUYJ).
It is, however, quite a common use case for logger settings to be reloadable 
at run-time.
Is there any way to achieve this?



[akka-user] Re: Cassandra Journal v2.0.3?

2014-08-19 Thread Matthew Howard
Hi - no, we haven't implemented this yet... we wound up getting sidetracked 
by some other priorities, but we are going to need to implement it at some 
point. It's been a while since I looked into the details, but as I remember 
there were two key changes needed for C* 1.2:


   1. C* 2.0 introduced the IF NOT EXISTS clause, so in CassandraStatements.scala
      (https://github.com/krasserm/akka-persistence-cassandra/blob/master/src/main/scala/akka/persistence/cassandra/journal/CassandraStatements.scala,
      in both the snapshot and journal files) that clause would need to be
      removed... which obviously means those statements will fail if the
      keyspaces/column families already exist. I was thinking of just catching
      the exception in CassandraJournal.scala
      (https://github.com/krasserm/akka-persistence-cassandra/blob/f6c885fe943aaa898a3d191083368ecab6e1ddc0/src/main/scala/akka/persistence/cassandra/journal/CassandraJournal.scala#L27);
      the same would also be needed in CassandraSnapshotStore.scala.
   2. The Datastax 2.0 driver doesn't support C* 1.2, so it would need to be
      downgraded to the 1.0 driver. I believe this means the inserts currently
      using the BatchStatement will need to change... this will probably be the
      tricky part, although depending on your throughput needs it might be fine
      to just execute the prepared statements without batching. See
      http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0
      for a description of the new BatchStatement in the Datastax driver. By
      looking at the old PreparedStatement you can see the types of changes
      that would need to be made.
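To illustrate the first point, the kind of DDL change involved might look roughly like this (a hypothetical, abbreviated schema sketched from memory; the real statements live in CassandraStatements.scala):

```sql
-- C* 2.0+: idempotent creation, as the plugin does today
CREATE TABLE IF NOT EXISTS akka.messages (
  processor_id text,
  partition_nr bigint,
  sequence_nr bigint,
  marker text,
  message blob,
  PRIMARY KEY ((processor_id, partition_nr), sequence_nr, marker)
);

-- C* 1.2: no IF NOT EXISTS, so the statement throws if the table already
-- exists, and the caller has to catch/ignore the AlreadyExistsException
CREATE TABLE akka.messages ( ... );
```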

Ignore my comments above regarding the composite keys of processor_id and 
partition_nr... Cassandra 1.2 should support the composite PK as defined in 
this journal. 

So at first glance the changes for C* 1.2 seem not too bad - I would only 
be worried about any hidden gotchas that I haven't noticed regarding the 
1.0 driver, and about the potential performance hit of not being able to 
use the BatchStatement. 

I don't know when we will take this on, but I'll post back to this thread 
when we do. 



On Thursday, August 14, 2014 12:35:27 AM UTC-4, ratika...@razorthink.net 
wrote:

 Hi Mathew,

 We also need to use the akka-persistence journal plugin for an older version 
 of Cassandra, v1.2.x; however, the available plugin works for version 2.0.3 
 or higher. Came across your post: did you happen to implement/tweak the 
 journal for an older version of Cassandra? If yes, would you share it with us 
 or let us know what tweaks were required? Thanks for your help.

 --Ratika

 On Tuesday, May 6, 2014 12:51:25 AM UTC+5:30, Matthew Howard wrote:

 Has anyone implemented an akka persistence journal for older versions of 
 Cassandra? I see the current journal is dependent on C* v2.0.3 or higher (
 https://github.com/krasserm/akka-persistence-cassandra) but my app is 
 currently on 1.1.9 and we are only actively planning to upgrade to v1.2 
 (just found this out - I thought we were moving to 2). 

 I'm guessing there isn't one already out there, but thought I'd ask 
 before attempting to implement one. Assuming I would need to implement it 
 (probably a question for Martin directly) any warnings or recommendations? 
 At first glance I'd obviously need to tweak the create 
 keyspace/columnfamily commands (and change the driver), but I'm not seeing 
 anything that appears to be too wildly dependent on C* v2.0.3 features. The 
 handling of the partition_nr seems to be the biggest issue - I'm thinking 
 we could just create the rowkey as a concatenation of the processor_id and 
 partition_nr (e.g. myprocessor-0, myprocessor-1, etc... ). But I 
 think/hope? otherwise the composite columns should work the same and I'm 
 not going to get myself into a rabbit hole... 

 Thanks in advance,
 Matt Howard
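The rowkey concatenation Matt describes above could be sketched in plain Scala like this (hypothetical helper names; the partition size is an assumed configuration value, not the plugin's actual constant):

```scala
// Hypothetical sketch: emulate the (processor_id, partition_nr) composite
// partition key on older Cassandra versions by concatenating the two values.
object RowKeys {
  // e.g. "myprocessor-0", "myprocessor-1", ...
  def rowKey(processorId: String, partitionNr: Long): String =
    s"$processorId-$partitionNr"

  // Derive the partition number from a 1-based sequence number, assuming a
  // fixed number of events per row (maxRowSize is an assumed parameter).
  def partitionNr(sequenceNr: Long, maxRowSize: Long): Long =
    (sequenceNr - 1L) / maxRowSize
}
```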





Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Greg Young
please forgive the typo.

It still adds a ton of complexity that is unnecessary for the vast majority 
of systems. We support it in Event Store, but most users don't use it. 

On Tuesday, August 19, 2014 11:08:14 AM UTC-4, Martin Krasser wrote:


 On 19.08.14 16:27, Greg Young wrote:
  
I am not responding to this one post alone; this is a general reply towards 
the end of the thread, and I will discuss a few posts from earlier. 

 To start, I have to agree with some of the posters that premature scaling 
 can cause many issues. This actually reminds me of the CQRS Journey, which 
 people mentioned earlier. One of the main criticisms of the CQRS Journey is 
 that it prematurely took on scaling constraints, which caused the code to be 
 much more complex than it needs to be. This was partially due to it being a 
 sample app of something larger and partially due to the p&p (patterns & 
 practices) team also showing Azure at the same time. Because they wanted to 
 distribute and show Azure at the same time, the team took cloud constraints 
 as a given. This caused, for instance, every handler in the system to need 
 to be idempotent. While seemingly a small constraint, this actually adds a 
 significant amount of complexity to the system.

  The same problem exists in what is being discussed today. For 95+% of 
 systems it is totally reasonable that when I write a projection I expect my 
 events to have assured ordering. As Vaughn mentioned a few hundred 
 events/second is the vast majority of systems. Systems like these can be 
 completely linearized and ordering assurances are not an issue. This 
 removes a LOT of complexity in projections code as you don't have to handle 
 hundreds to thousands of edge cases in your read models where you get 
 events out of order. Saying that ordering assurances are not needed and 
 everyone should use casual consistency is really saying we don't care 
 about the bottom 95% of users.

  
   
 Can you please enlighten me as to what you mean by casual consistency? Past 
 discussions were always about causal consistency 
 http://en.wikipedia.org/wiki/Causal_consistency. If implemented, it 
 would add additional ordering to events in akka-persistence compared to the 
 ordering that is given right now. Today, only the ordering of events with 
 the same persistenceId is defined. Events with different persistenceId are 
 currently considered concurrent by akka-persistence. Causal consistency 
 would additionally introduce ordering of events across persistenceIds if 
 they are causally related (i.e. have a happens-before relationship). Those 
 events that don't have such a relationship are truly concurrent. Causal 
 consistency is not trivial to implement but has the advantage that it 
 doesn't prevent scalability (see also this paper 
 http://www.cs.berkeley.edu/%7Ealig/papers/bolt-on-causal-consistency.pdf, 
 for example). It is weaker than sequential consistency, though. 

  
  
  RKuhn had mentioned doing joins. You are correct in this is how we do it 
 now. We offer historically perfect joins but in live there is no way to do 
 a live perfect join via queries. We do however support another mechanism 
 for this that will assure that your live join will always match your 
 historical. We allow you to precalculate and save the results of the join. 
 This produces a stream full of stream links which can then be replayed as 
 many times (perfectly) as you want.

  
  There was some discussion above about using precalculated topics to 
 handle projections. I believe the terminology was called tags. The general 
 idea if I can repeat it is to write an event FooOccurred and to include 
 upon it some tags (foo, bar, baz) which would map it to topics that could 
 then be replayed as a whole. This at the outset seems like a good idea but 
 will not work well in production. The place where it will run into a 
 problem is that I cannot know when writing the events all mappings that any 
 future projections may wish to have. Tomorrow my business could ask me for 
 a report that looks at a completely new way of partitioning the events and 
 I will be unable to do it.

  
  As I mentioned previously in a quick comment, what is being asked for 
 today is actually already supported with akka-persistence, provided you are 
 using Event Store as your backend (for those interested, today is the 
 release of the final RC of 3.0, which has all of the support for the 
 akka-persistence client; binaries are for win/linux/mac). Basically what 
 you would do is run akka-persistence on your write side but *not* use it 
 for supporting your read models. Instead, when dealing with your read 
 models, you would use a catch-up subscription for what you are interested 
 in. I do not see anything inherently wrong with this way of doing things, 
 and it raises the question of whether this is actually a more appropriate 
 way to deal with event-sourced systems using akka-persistence, e.g. use 
 native storage directly if it supports it.

  Cheers,

  Greg
 On Tuesday, August 19, 2014 9:24:10 

Re: [akka-user] modifying log level at run-time

2014-08-19 Thread Will Sargent
Yes, you can use SLF4JLogger, then cast to Logback and change the log level
there.

http://stackoverflow.com/a/3838108/5266
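In code, that approach might look roughly like this (a sketch, assuming logback-classic is on the classpath and SLF4J is the configured Akka logger):

```scala
import org.slf4j.LoggerFactory
import ch.qos.logback.classic.{Level, Logger => LogbackLogger}

// SLF4J itself only exposes a read-only logging API, so cast down to the
// Logback implementation class in order to mutate the level at run-time.
val root = LoggerFactory
  .getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME)
  .asInstanceOf[LogbackLogger]

root.setLevel(Level.DEBUG) // takes effect immediately, no restart required
```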

Will Sargent
Consultant, Professional Services
Typesafe http://typesafe.com, the company behind Play Framework
http://www.playframework.com, Akka http://akka.io and Scala
http://www.scala-lang.org/


On Tue, Aug 19, 2014 at 12:20 PM, Adam adamho...@gmail.com wrote:

 Hi,

 I know Akka's configuration does not get reloaded at run-time (see here
 https://groups.google.com/forum/#!msg/akka-user/bQA1zJP3yOY/fBVmGfRvYUYJ
 ).
 It is however quite a common use case for logger settings to be
 re-loadable at run-time.
 Is there any way to achieve this?





Re: [akka-user] modifying log level at run-time

2014-08-19 Thread Will Sargent
Or, if you just want a reloadable runtime, you can tell Logback to watch
for changes to the logging file using autoScan:

http://logback.qos.ch/manual/configuration.html#autoScan
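For reference, the relevant bit of a logback.xml might look like this (a hypothetical minimal config; the scan/scanPeriod attributes are what enable the automatic reload described at the link above):

```xml
<!-- scan="true" makes Logback re-read this file periodically, so level
     changes here take effect at run-time without restarting the JVM -->
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```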


Will Sargent
Consultant, Professional Services
Typesafe http://typesafe.com, the company behind Play Framework
http://www.playframework.com, Akka http://akka.io and Scala
http://www.scala-lang.org/


On Tue, Aug 19, 2014 at 2:06 PM, Will Sargent will.sarg...@typesafe.com
wrote:

 Yes, you can use SLF4JLogger, then cast to Logback and change the log
 level there.

 http://stackoverflow.com/a/3838108/5266

 Will Sargent
 Consultant, Professional Services
 Typesafe http://typesafe.com, the company behind Play Framework
 http://www.playframework.com, Akka http://akka.io and Scala
 http://www.scala-lang.org/


 On Tue, Aug 19, 2014 at 12:20 PM, Adam adamho...@gmail.com wrote:

 Hi,

 I know Akka's configuration does not get reloaded at run-time (see here
 https://groups.google.com/forum/#!msg/akka-user/bQA1zJP3yOY/fBVmGfRvYUYJ
 ).
 It is however quite a common use case for logger settings to be
 re-loadable at run-time.
 Is there any way to achieve this?







Re: [akka-user] modifying log level at run-time

2014-08-19 Thread √iktor Ҡlang
or:

def setLogLevel(level: Logging.LogLevel): Unit
(LogLevel: http://doc.akka.io/api/akka/2.3.5/akka/event/Logging$$LogLevel.html,
Unit: http://www.scala-lang.org/api/2.10.4/index.html#scala.Unit)

Change log level: default loggers (i.e. from configuration file) are
subscribed/unsubscribed as necessary so that they listen to all levels
which are at least as severe as the given one. See object Logging for more
information.

NOTE: if the StandardOutLogger is configured also as normal logger, it will
not participate in the automatic management of log level subscriptions!


http://doc.akka.io/api/akka/2.3.5/?_ga=1.45167204.1579561034.1353497989#akka.event.LoggingBus


On Tue, Aug 19, 2014 at 11:09 PM, Will Sargent will.sarg...@typesafe.com
wrote:

 Or, if you just want a reloadable runtime, you can tell Logback to watch
 for changes to the logging file using autoScan:

 http://logback.qos.ch/manual/configuration.html#autoScan


 Will Sargent
 Consultant, Professional Services
 Typesafe http://typesafe.com, the company behind Play Framework
 http://www.playframework.com, Akka http://akka.io and Scala
 http://www.scala-lang.org/


 On Tue, Aug 19, 2014 at 2:06 PM, Will Sargent will.sarg...@typesafe.com
 wrote:

 Yes, you can use SLF4JLogger, then cast to Logback and change the log
 level there.

 http://stackoverflow.com/a/3838108/5266

 Will Sargent
 Consultant, Professional Services
 Typesafe http://typesafe.com, the company behind Play Framework
 http://www.playframework.com, Akka http://akka.io and Scala
 http://www.scala-lang.org/


 On Tue, Aug 19, 2014 at 12:20 PM, Adam adamho...@gmail.com wrote:

 Hi,

 I know Akka's configuration does not get reloaded at run-time (see here
 https://groups.google.com/forum/#!msg/akka-user/bQA1zJP3yOY/fBVmGfRvYUYJ
 ).
 It is however quite a common use case for logger settings to be
 re-loadable at run-time.
 Is there any way to achieve this?








-- 
Cheers,
√



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Martin Krasser


On 19.08.14 21:57, Greg Young wrote:

please forgive the typo.

It still adds a ton of complexity that is unnecessary for the vast 
majority of systems.


I don't see that. The complexity should only be on plugin providers, not 
application code (see also the research paper I linked in a previous 
post). It is the provider's responsibility (in collaboration with 
PersistentActor and PersistentView) to ensure causal ordering in event 
streams. Properly implemented, there's no additional complexity for 
applications. They can just rely on the stricter ordering guarantees.
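To make the happens-before notion concrete, here is a minimal plain-Scala 
sketch of how a provider could compare vector clocks attached to events 
(illustrative only; akka-persistence exposes no such API today, and the type 
and method names are hypothetical):

```scala
// A vector clock maps persistenceIds to per-source counters.
// a happens-before b iff a is pointwise <= b and a != b.
object Causality {
  type VClock = Map[String, Long]

  def happensBefore(a: VClock, b: VClock): Boolean =
    a != b && (a.keySet ++ b.keySet).forall { k =>
      a.getOrElse(k, 0L) <= b.getOrElse(k, 0L)
    }

  // Events with no happens-before relation either way are truly concurrent,
  // and a causally consistent store may deliver them in any order.
  def concurrent(a: VClock, b: VClock): Boolean =
    !happensBefore(a, b) && !happensBefore(b, a)
}
```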



We support it in Event Store, but most users don't use it.


Can you please share any pointers that describe how causal consistency 
is supported/implemented by Event Store?




On Tuesday, August 19, 2014 11:08:14 AM UTC-4, Martin Krasser wrote:


On 19.08.14 16:27, Greg Young wrote:

I am not responding to this one post alone; this is a general reply
towards the end of the thread, and I will discuss a few posts from earlier.

To start, I have to agree with some of the posters that
premature scaling can cause many issues. This actually reminds
me of the CQRS Journey, which people mentioned earlier. One of
the main criticisms of the CQRS Journey is that it prematurely
took on scaling constraints, which caused the code to be much
more complex than it needs to be. This was partially due to it
being a sample app of something larger and partially due to
the p&p (patterns & practices) team also showing Azure at the
same time. Because they wanted to distribute and show Azure at
the same time, the team took cloud constraints as a given. This
caused, for instance, every handler in the system to need to be
idempotent. While seemingly a small constraint, this actually
adds a significant amount of complexity to the system.

The same problem exists in what is being discussed today. For
95+% of systems it is totally reasonable that when I write a
projection I expect my events to have assured ordering. As
Vaughn mentioned a few hundred events/second is the vast
majority of systems. Systems like these can be completely
linearized and ordering assurances are not an issue. This
removes a LOT of complexity in projections code as you don't
have to handle hundreds to thousands of edge cases in your
read models where you get events out of order. Saying that
ordering assurances are not needed and everyone should use
casual consistency is really saying we don't care about the
bottom 95% of users.



Can you please enlighten me as to what you mean by casual consistency?
Past discussions were always about causal consistency
http://en.wikipedia.org/wiki/Causal_consistency. If implemented,
it would add additional ordering to events in akka-persistence
compared to the ordering that is given right now. Today, only the
ordering of events with the same persistenceId is defined. Events
with different persistenceId are currently considered concurrent
by akka-persistence. Causal consistency would additionally
introduce ordering of events across persistenceIds if they are
causally related (i.e. have a happens-before relationship). Those
events that don't have such a relationship are truly concurrent.
Causal consistency is not trivial to implement but has the
advantage that it doesn't prevent scalability (see also this paper
http://www.cs.berkeley.edu/%7Ealig/papers/bolt-on-causal-consistency.pdf,
for example). It is weaker than sequential consistency, though.



RKuhn had mentioned doing joins. You are correct in this is
how we do it now. We offer historically perfect joins but in
live there is no way to do a live perfect join via queries. We
do however support another mechanism for this that will assure
that your live join will always match your historical. We
allow you to precalculate and save the results of the join.
This produces a stream full of stream links which can then be
replayed as many times (perfectly) as you want.


There was some discussion above about using precalculated
topics to handle projections. I believe the terminology was
called tags. The general idea if I can repeat it is to write
an event FooOccurred and to include upon it some tags (foo,
bar, baz) which would map it to topics that could then be
replayed as a whole. This at the outset seems like a good idea
but will not work well in production. The place where it will
run into a problem is that I cannot know when writing the
events all mappings that any future projections may wish to
have. Tomorrow my business could ask me for a report that
looks at a completely new way of partitioning the events and I
will be unable to do it.