Re: [akka-user] Akka Persistence - Views with multiple processors

2014-04-20 Thread Patrik Nordwall
Hi Olger,

What if you keep the sharded event sourced actors (10k+), but let them also 
send the events to one or a few processors? Then you can connect the 
views/streams to these processors.

If you don't like storing the events twice, you can instead store some meta-data 
(processor-id, seq-no, timestamp) and have a view that creates sub-views on 
demand from the replayed meta-data. The sub-views would forward to the parent 
aggregated view.

/Patrik

 On 19 Apr 2014, at 20:46, Olger Warnier ol...@spectare.nl wrote:
 
 
 Hi Martin, 
 
 Had to think about it a little; hereby my follow-up. (Hope you don't mind the 
 continued discussion, it helps me a lot in defining the right approach, 
 thanks for that)
 
 On Saturday, April 19, 2014 7:11:23 AM UTC+2, Martin Krasser wrote:
 Hi Olger,
 
 installing 10k views/producers won't scale, at least not with the current 
 implementation. Here are some alternatives:
 Interesting, what would need to change to make it scale?
 (Idea is to have the eventsourcedprocessors reflect a DDD style Aggregate 
 Root instance and have those distributed using cluster sharding) 
  
 
 - Maybe a custom journal plugin is what you need: a plugin that delegates 
 all write/read requests to the actual journal actor and that additionally 
 updates a database with the events to be written. This essentially installs 
 a single listener per ActorSystem (this is to some extent comparable to a 
 database trigger that executes additional commands. If the backend datastore 
 supports that directly, I recommend implementing the trigger there, if 
 possible).
 
 I am not sure if I understand it.. the basic idea is to have the 'events' 
 stored via the eventsourcedprocessor published to 'n' views. The actual 
 number of views that need to listen to these events is not known up front 
 (people can add their own views... at system startup it will be clear). 
 As every eventsourced actor is actually an AggregateRoot (in DDD terms) and 
 thereby something of an instance with its own state, the changes in these 
 states need to be aggregated (that can be done with the streaming as you 
 mention) and published to the views that are interested (subscribed). 
 Doing this by hand in the aggregate root actor is not a problem: write 
 your own listener actor and it will populate a view data store. Still 
 I have the feeling that the actual 'View' (or ViewProducer) could be 
 implemented in such a way that it's done by the view.
  
 
 - Instead of having thousands of processors, what speaks against combining 
 them into a single processor (or only a few) per node?
 This would mean that I'd have all my aggregate root instances running in one 
 processor, meaning that I need to reconstruct state per aggregate root 
 instance in some way. Using EventsourcedProcessor, I'd expect that I need to 
 replay everything for all instances and pick the one that I need for 
 processing at that moment. (This can of course be optimized with snapshots 
 and something like memcached.) This feels like a performance hit to me. 
  
 
 Further comments inline ...
 
 On 18.04.14 16:10, Olger Warnier wrote:
 Hi Martin, 
 
 
 I'm currently working on view composition using the brand new akka-stream 
 module. Basic idea is to make views stream producers and to use the 
 akka-stream DSL to merge message/event streams from several producers into 
 whatever you need. See also 
 https://twitter.com/mrt1nz/status/457120534111981569 for a first running 
 example.
 
 WDYT?
 
 First of all: nice stuff! I think this is useful for the system at my 
 hands (real-time patient monitoring based on medical data). 
 I've seen the streams announcements but did not dive into them yet. Looking 
 at your code, StreamExample.scala more or less 'clicks' in concept. (and 
 hopefully in the right way)
 
 From a 'View' perspective as currently available in akka-persistence, 
 every producing actor needs a view attached to it in order to push the 
 events to the streams producer, right? (When I look at the 
 ViewProducer.scala code, this is what is done.)
 
 PersistentFlow.fromProcessor(p1).toProducer(materializer)
 Now, I have a sharding cluster with EventsourcedProcessors (expect tens of 
 thousands of these EventsourcedProcessor actor instances), so I'll need to 
 create a line like this for every EventsourcedProcessor in order to get the 
 stream of events together. Thereafter, I need to merge them together to get 
 a single stream of events. (At least that is one of the features of using 
 the streams.)
 
 Every processor instance itself could create such a producer during start 
 and send it to another actor that merges received producers.
 That would not allow me to implement 'View' (as is known in the persistence 
 package) in order to listen to events within my cluster of aggregate root 
 instances; I'll need to build something additional for that (as View is more 
 used for the collection of those events and thereafter 

Re: [akka-user] Akka Persistence - Views with multiple processors

2014-04-20 Thread Olger Warnier
Hi Patrik, 

Sounds like an interesting approach; storing some meta-data at the view may 
help to check/show the reliability of the system. 

At this moment the events are sent to a processor per node that publishes 
the event (distributed pub-sub). 
When you talk about a view, is that the akka-persistence View? 
So, more or less, the sub-processors could send messages to the View and, 
when there is a Persist() around it, it will be stored. 

Is that a correct understanding?

Kind regards, 

Olger


On Sunday, April 20, 2014 2:32:07 PM UTC+2, Patrik Nordwall wrote:

 Hi Olger,

 What if you keep the sharded event sourced actors (+10k), but let them 
 also send the events to one or a few processors. Then you can connect the 
 views/streams to these processors.

 If you don't like storing the events twice you can instead store some 
 meta-data (processor-id, seq-no,timestamp) and have a view that creates 
 sub-views on demand from the replayed meta-data. The sub-views would 
 forward to the parent aggregated view.

 /Patrik

Re: [akka-user] Akka Persistence - Views with multiple processors

2014-04-20 Thread Patrik Nordwall
On Sun, Apr 20, 2014 at 2:47 PM, Olger Warnier ol...@spectare.nl wrote:

 Hi Patrick,

 Sounds like an interesting approach, storing some meta-data at the view
 may help to check / show the reliability of the system.

 At this moment the events are sent to a processor per node that publishes
 the event (distributed pub sub)


That sounds good, as well.


 When you talk about view, that's the akka-persistence view ?


Yes, persistence.View and persistence.Processor


 So more or less, the sub processors could send messages to the View and
 when there is a Persist() around it, it will be stored.


I'm not sure I understand what you mean here. Let me clarify my proposal
with an example. Let's say we have a User aggregate root with some profile
information that can be updated. The user is represented by a User
EventsourcedProcessor actor, which is sharded. On the query side we want to
be able to search users by first and last name, i.e. we want to store all
users in a relational database table on the query side.

The User actor persists FirstNameChanged, and inside the persist block it
sends a Persistent(FirstNameChanged) message to the AllUsers Processor. On
the query side we have an AllUsersView connected to that processor. When
AllUsersView receives FirstNameChanged it updates the db table.
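A minimal sketch of this flow, using the akka-persistence 2.3 API (the command, event, and actor names, and the stubbed-out table update, are illustrative assumptions, not code from the thread):

```scala
import akka.actor._
import akka.persistence._

case class ChangeFirstName(first: String)                  // command
case class FirstNameChanged(userId: String, first: String) // event

// Sharded aggregate root: persists its own event, then forwards a copy
// to the shared AllUsers processor from inside the persist block.
class User(userId: String, allUsers: ActorRef) extends EventsourcedProcessor {
  override def processorId = s"user-$userId"
  private var firstName = ""

  def receiveCommand: Receive = {
    case ChangeFirstName(f) =>
      persist(FirstNameChanged(userId, f)) { evt =>
        firstName = evt.first
        allUsers ! Persistent(evt) // journaled a second time by AllUsers
      }
  }
  def receiveRecover: Receive = {
    case FirstNameChanged(_, f) => firstName = f
  }
}

// Single processor on the query side; Persistent messages it receives are
// written to its journal before they reach the receive block.
class AllUsers extends Processor {
  override def processorId = "allUsers"
  def receive = {
    case Persistent(payload, seqNr) => // nothing else to do here
  }
}

// View attached to the AllUsers processor; updates the query-side table.
class AllUsersView extends View {
  def processorId = "allUsers" // must match the AllUsers processor id
  def receive = {
    case Persistent(FirstNameChanged(id, f), _) =>
      // update the relational db table here (stubbed out)
  }
}
```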

To handle lost messages between User and AllUsers you might want to send an
acknowledgement from AllUsers to User, and have a retry mechanism in User.
I would implement that myself in User, but PersistentChannel might be an
alternative.

That is the most straightforward solution. The drawback is that
FirstNameChanged is stored twice. Therefore I suggested the meta-data
alternative. User sends Persistent(UserChangedNotification(processorId))
to the AllUsers Processor. When AllUsersView receives
UserChangedNotification it creates a child actor, a View for the
processorId in the UserChangedNotification, if it doesn't already have such
a child. That view would replay all events of the User and can update the
database table. It must keep track of how far it has replayed/stored in the
db, i.e. the seqNr must be stored in the db. The child View can be stopped
when it becomes inactive.
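The meta-data alternative could be sketched like this (again a hedged illustration: `UserChangedNotification` is the name used above, while the other names and the db stubs are assumptions):

```scala
import akka.actor._
import akka.persistence._

case class UserChangedNotification(processorId: String) // meta-data only

// Child view replaying one User's events into the db table. The seqNr of
// the last row written must also be stored in the db as a high-water mark,
// so a restarted child knows how far it already got (not shown here).
class UserDbView(override val processorId: String) extends View {
  def receive = {
    case Persistent(payload, seqNr) =>
      // upsert the row for this user and record seqNr (stubbed out)
  }
}

// Aggregated view: creates a child View per notified processorId on demand.
class AllUsersView extends View {
  def processorId = "allUsers"
  def receive = {
    case Persistent(UserChangedNotification(pid), _) =>
      val name = s"child-$pid"
      // reuse an existing child, or spawn one that replays the User's events
      context.child(name).getOrElse(
        context.actorOf(Props(new UserDbView(pid)), name))
    // an inactivity timeout could stop idle children (not shown)
  }
}
```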

That alternative is more complicated, and I'm not sure it is worth it.

Cheers,
Patrik






[akka-user] [persistence] in memory journal for testing

2014-04-20 Thread Tim Pigden
I'm using 2.3
I eventually stumbled across this
(more or less)
akka.persistence.journal.plugin = akka.persistence.journal.inmem

from the reference configuration file for snapshot 2.4

Is there any particular reason this is hidden away with no mention in the 
docs?
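
For reference, one way to use it from a test (a sketch; `InmemJournalTest` is a made-up name, and this assumes akka-persistence 2.3 on the classpath) is to pass the setting programmatically when creating the test ActorSystem:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object InmemJournalTest {
  // The inmem journal keeps events in memory only, so they live exactly
  // as long as the ActorSystem -- fine for tests, not for production.
  val config = ConfigFactory.parseString(
    "akka.persistence.journal.plugin = \"akka.persistence.journal.inmem\"")

  def main(args: Array[String]): Unit = {
    val system = ActorSystem("test", config)
    // ... exercise persistent actors against the in-memory journal ...
    system.shutdown()
  }
}
```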

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka Persistence - Views with multiple processors

2014-04-20 Thread Olger Warnier


On Sunday, April 20, 2014 4:59:22 PM UTC+2, Patrik Nordwall wrote:


 On Sun, Apr 20, 2014 at 2:47 PM, Olger Warnier ol...@spectare.nl wrote:

 Hi Patrick, 

 Sounds like an interesting approach, storing some meta-data at the view 
 may help to check / show the reliability of the system. 

 At this moment the events are sent to a processor per node that publishes 
 the event (distributed pub sub) 


 That sounds good, as well.
  

 When you talk about view, that's the akka-persistence view ? 


 Yes, persistence.View and persistence.Processor
  

 So more or less, the sub processors could send messages to the View and 
 when there is a Persist() around it, it will be stored. 


 I'm not sure I understand what you mean here. Let me clarify my proposal 
 with an example. Let's say we have a User aggregate root with some profile 
 information that can be updated. The user is represented by a User 
 EventsourcedProcessor actor, which is sharded. On the query side we want to 
 be able to search users by first and last name, i.e. we want to store all 
 users in a relational database table on the query side.

Yup, great sample.  


 The User actor persist FirstNameChanged, and inside the persist block it 
 sends a Persistent(FirstNameChanged) message to the AllUsers Processor. On 
 the query side we have a AllUsersView connected to that processor. When 
 AllUsersView receives FirstNameChanged it updates the db table.

Indeed. In what way is the AllUsersView connected to that Processor? (in a 
distributed pub-sub situation) (Although, I first have to understand how 
'inside the persist block' is to be interpreted.) 
 


 To handle lost messages between User and AllUsers you might want to send 
 an acknowledgement from AllUsers to User, and have a retry mechanism in 
 User. I would implement that myself in User, but PersistentChannel might be 
 an alternative.

Is it possible to do persistent channels with the distributed pub-sub stuff 
that's available in akka? 


 That is the most straight forward solution. The drawback is that 
 FirstNameChanged is stored twice. Therefore I suggested the meta-data 
 alternative. User sends Persistent(UserChangedNotification(processorId))) 
 to the AllUsers Processor. When AllUsersView receives 
 UserChangedNotification it creates a child actor, a View for the 
 processorId in the UserChangedNotification, if it doesn't already have such 
 a child. That view would replay all events of the User and can update the 
 database table. It must keep track of how far it has replayed/stored in db, 
 i.e. seqNr must be stored in the db. The child View can be stopped when it 
 becomes inactive.

Will that work with a sharded cluster? (where a 'View' may be running on 
another node)
 


 That alternative is more complicated, and I'm not sure it is worth it.

From a solution perspective, using the distributed pub-sub, maybe with 
persistent channels, is what will do. 

Most of my questions have to do with using akka-persistence as a full 
fledged DDD framework. That is not too hard without the sharding (although a 
view for every aggregate root instance seems not to fit when you want to use 
it for database connectivity that contains a view model). With the sharding 
it is more complicated, and a good structure for building a view that is on 
'some' node, listening for events and doing its thing, would be a handy 
part. 

Cheers, 
Olger
 


 Cheers,
 Patrik


  



Re: [akka-user] Best way to communicate with an Actor?

2014-04-20 Thread Chris Toomey
Curious: how would you recommend accomplishing that in this example (getting 
messages to /project/someactor containing ActorRefs for 
actors /project/users/user-X, /project/users/user-Y, /project/users/user-Z)?

Chris

On Wednesday, April 16, 2014 12:15:15 PM UTC-7, √ wrote:

 Hi Chanan,

 I'd say the recommendation is to minimize shared knowledge (actor xy lives 
 at path a/b/c) and exchange ActorRefs inside messages.
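
That recommendation might look like this in code (a sketch with made-up message names, not from the thread): the user actor introduces itself with its ActorRef, and the interested actor keeps the ref instead of a path.

```scala
import akka.actor._

case class UserOnline(userId: String, ref: ActorRef) // ref travels in the message
case class NotifyUser(userId: String, text: String)

class SomeActor extends Actor {
  private var users = Map.empty[String, ActorRef]

  def receive = {
    case UserOnline(id, ref) =>
      users += (id -> ref)
      context.watch(ref)                // clean up when the user stops
    case Terminated(ref) =>
      users = users.filterNot { case (_, r) => r == ref }
    case NotifyUser(id, text) =>
      users.get(id).foreach(_ ! text)   // no path lookup, no ActorSelection
  }
}
```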


 On Wed, Apr 16, 2014 at 7:51 PM, Chanan Braunstein 
 chanan.b...@pearson.com wrote:

 I asked a somewhat similar question before, but this time it's more focused.

 I have a hierarchy like so:

 /project/users/user-userid

 There is just one project, one users, and many user-userid actors.

 If I want to send a message from a different actor outside of 
 this hierarchy to one or more users, for example from /project/someactor, 
 it seems to me I have 3 basic ways of doing it (forgetting about EventBus, 
 which is a fourth):

 1. I always have an ActorRef for /project. I can send it a message which 
 it will forward to /project/users who will grab the correct actor from its 
 children collection and forward the message to it.

 2. I can construct an ActorSelection from /project/someactor and send 
 message to the ActorSelection directly.

 3. I can store, in /project/someactor, all the /project/users/user-userid 
 ActorRefs in a HashMap or some other data structure, grab them from the 
 map when I need to send one a message, and send the message directly to 
 the ActorRef.

 So, my questions are: what is the preferred way to communicate with an 
 actor in this case? Realizing that there might not be a preferred way - 
 what is a bad idea to do if any? What are the pros and cons that people see 
 in each way? Lastly, does anyone have a way I missed?

 Thanks,
 Chanan





 -- 
 Cheers,
 √
  



[akka-user] Re: [persistence] in memory journal for testing

2014-04-20 Thread Todd Nist
I don't believe it is hidden, well not completely at least.  There is a 
reference in the Akka documentation 
(http://doc.akka.io/docs/akka/snapshot/scala/persistence.html) to the 
Community Contributions page (http://akka.io/community/), which lists the 
journals provided, including the in-memory one as well as the others.


On Sunday, April 20, 2014 11:59:19 AM UTC-4, Tim Pigden wrote:

 I'm using 2.3
 I eventually stumbled across this
 (more or less)
 akka.persistence.journal.plugin = akka.persistence.journal.inmem

 from the reference configuration file for snapshot 2.4

 Is there any particular reason this is hidden away with no mention in the 
 docs?




[akka-user] Re: Akka FSM actor and round-robin routing

2014-04-20 Thread Stefano Rocco
I think a possible solution might look like this:

import scala.collection.mutable.Queue
import scala.concurrent.duration._

import akka.actor._
import akka.routing._

object State extends Enumeration {
  val Unknown, Created, Sent = Value
}

trait Action

case class Unknown() extends Action
case class Init() extends Action
case class Check() extends Action
case class Send() extends Action

case class Enqueue(actor: (ActorContext) => ActorRef, message: Action)

class QueueRouter(val size: Int) extends Actor {

  private val waiting: Queue[((ActorContext) => ActorRef, Any)] = Queue.empty
  private var running: Int = 0

  override def receive = {
    case command: Enqueue =>
      println("OnEnqueue - QueueRouter(waiting: %s, running: %s)"
        .format(waiting.size, running))
      waiting.enqueue((command.actor, command.message))
      if (running < size) {
        self ! Dequeue()
      }
    case command: Dequeue =>
      println("OnDequeue - QueueRouter(waiting: %s, running: %s)"
        .format(waiting.size, running))
      if (waiting.nonEmpty) {
        running += 1
        val (actor, message) = waiting.dequeue
        val actorRef = actor(context)
        context.watch(actorRef)
        actorRef ! message
      }
    case command: Terminated =>
      println("OnTerminated - QueueRouter(waiting: %s, running: %s)"
        .format(waiting.size, running - 1))
      running -= 1
      context.unwatch(command.actor)
      self ! Dequeue()
  }

  override val supervisorStrategy = {
    // TODO: Decide your supervision strategy
    OneForOneStrategy(maxNrOfRetries = 1, withinTimeRange = 1 minute) {
      case t: Throwable => SupervisorStrategy.Stop
    }
  }

  private case class Dequeue()
}

class DummyActor(id: Int) extends Actor with FSM[State.Value, Action] {

  startWith(State.Unknown, Unknown())

  when(State.Unknown) {
    case Event(current: Unknown, previous) =>
      stay
    case Event(current: Init, previous) =>
      self ! Check()
      goto(State.Created)
  }

  when(State.Created) {
    case Event(current: Check, previous) =>
      self ! Send()
      goto(State.Sent)
  }

  when(State.Sent) {
    case Event(current: Send, previous) =>
      //println("Sent(%s): send start".format(id))
      Thread.sleep(3000)
      println("Sent(%s): send stop".format(id))
      stop
  }

  initialize
}

object TestQueueRouter {

  def factory(props: Props): (ActorContext) => ActorRef = {

    def delegate(context: ActorContext) = {
      context.actorOf(props)
    }

    delegate _
  }

  def main(args: Array[String]) = {
    implicit val system = ActorSystem("main")
    val router = Props(new QueueRouter(2))
      .withRouter(RoundRobinRouter(nrOfInstances = 2))
    val pooler = system.actorOf(router)
    for (i <- 0 until 10) {
      pooler ! Enqueue(factory(Props(new DummyActor(i))), Init())
    }
    println("OK")
  }
}

On Thursday, December 5, 2013 7:24:41 PM UTC+1, Eugene Dzhurinsky wrote:

 Hello!

 I want to convert some set of actors into an FSM using Akka FSM. Currently 
 the system is designed in such a way that every actor knows what to do with 
 the results of its action and which actor is next in the sequence of 
 processing.

 Now I want to have some dedicated actors, which do only the things they 
 should know about (and do not know about the entire message routing), and a 
 central FSM, which knows how to route messages and process the 
 transformation flow.

 So overall idea is pictured at http://i.imgur.com/Sxib6dN.png

 A client sends some request to the FSM actor; the FSM actor, on transition 
 to the next state, sends a message to some actor in the onTransition block. 
 That actor replies to the sender with some message, which is processed 
 inside the FSM state somehow until the request is finished.

 So far everything looks good; however, I'm not sure what will happen if 
 multiple clients start interacting with the FSM actor. Will the workflow 
 be recorded somewhere, so flows from different clients won't collide at 
 some point (like the FSM actor receiving a message from another client 
 instead of the originating one)?

 Is it safe to have, say, 10 FSM actors and a round-robin router, or do I 
 need to create a new FSM actor on every request from a client, and then 
 kill it once finished?

 Thanks!

