[akka-user] How to tune failure-detector on high load cluster

2014-07-15 Thread Xingrun CHEN
We found that the ClusterSystem-scheduler-1 thread is very busy and consumes 
50% CPU (but never more than 50%).
We also hit a 'node unreachable' issue every day.
Here are our configs:

akka.cluster {
  failure-detector {
    acceptable-heartbeat-pause = 6 s   # default 3 s
    threshold = 12.0                   # default 8.0
  }
  scheduler {
    # keep this smaller than the system scheduler's tick-duration so a separate cluster scheduler is started
    tick-duration = 9 ms               # default 33 ms
    ticks-per-wheel = 512              # default 512
  }
}

akka.remote {
  transport-failure-detector {
    heartbeat-interval = 30 s          # default 4 s
    acceptable-heartbeat-pause = 12 s  # default 10 s
  }
}



[akka-user] Re: Passivate

2014-07-15 Thread delasoul
Hello Ashley,

I guess you answered your question(s) yourself:

- in-between receiving Passivate and Terminated the Manager will buffer all 
incoming messages for the passivating Aggregate
- when receiving Terminated it will flush the buffer for the Aggregate, 
which can result in activation again.

So, the answer to both questions is yes - but the buffer for passivated 
actors is limited; if the limit is reached, the messages will go to 
DeadLetters.
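
For illustration, here is a minimal sketch of that handshake from the entry
actor's side, using the Akka 2.3 contrib ShardRegion API (the Aggregate name
and the receive-timeout trigger are just placeholders):

import akka.actor.{Actor, PoisonPill, ReceiveTimeout}
import akka.contrib.pattern.ShardRegion
import scala.concurrent.duration._

class Aggregate extends Actor {
  context.setReceiveTimeout(2.minutes)

  def receive = {
    case ReceiveTimeout =>
      // ask the parent shard (the "Manager") to stop us; messages arriving
      // in the meantime are buffered and delivered to a fresh incarnation
      context.parent ! ShardRegion.Passivate(stopMessage = PoisonPill)
    case msg =>
      // ... handle domain messages ...
  }
}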

hth,

michael



On Monday, 14 July 2014 17:28:42 UTC+2, Ashley Aitken wrote:


 A couple of quick questions about passivate and PersistentActors:

 Can other actors still send messages to persistent actors that have been 
 passivated?

 Will these messages cause the persistent actor to be reactivated?

 I am asking about this in single node and clustered context.

 I saw elsewhere that Patrik has written this in the cluster/sharding 
 context:

 - all messages are sent via the Manager actor, which creates child 
 Aggregate instances on demand
 - when receiving a message the Manager extract the Aggregate identifier 
 from the message
 - the Manager creates a new child Aggregate actor if it doesn't exist, 
 and then forwards the message to the Aggregate
 - the Aggregate can passivate itself by sending a Passivate message to 
 the parent Manager, which then sends PoisonPill to the Aggregate
 - in-between receiving Passivate and Terminated the Manager will buffer 
 all incoming messages for the passivating Aggregate
 - when receiving Terminated it will flush the buffer for the Aggregate, 
 which can result in activation again
 The PoisonPill can be replaced with some other custom stop message if the 
 Aggregate needs to do further interactions with other actors before 
 stopping.


 Thanks in advance for any answers.

 Cheers,
 Ashley.






Re: [akka-user] How to tune failure-detector on high load cluster

2014-07-15 Thread √iktor Ҡlang
Can you send us a stack trace of the scheduler thread?


On Tue, Jul 15, 2014 at 7:45 AM, Xingrun CHEN cowboy...@gmail.com wrote:

 We found that ClusterSystem-scheduler-1 thread is very busy and consume
 50% cpu (but not more than 50%).
 And we meet 'node unreachable' issue every day.
 Here's our configs:

 akka.cluster {
 failure-detector {
   acceptable-heartbeat-pause = 6 s # default 3 s
   threshold = 12.0# default 8.0
 }
 scheduler {
   # make it less than system's tick-duration to force start a new one
   tick-duration = 9 ms # default 33ms
   ticks-per-wheel = 512 # default 512
 }
 }

 akka.remote {
 transport-failure-detector {
   heartbeat-interval = 30 s   # default 4s
   acceptable-heartbeat-pause = 12 s  # default 10s
 }
 }





-- 
Cheers,
√



Re: [akka-user] How to tune failure-detector on high load cluster

2014-07-15 Thread Xingrun CHEN

We have about 100k actors that send a heartbeat to the client side every few 
seconds (using system.scheduler.scheduleOnce).
We found that this may delay the cluster heartbeats, so we changed the config 
`akka.cluster.scheduler.tick-duration` to 9 ms, after which the Akka cluster 
created a second scheduler called 'ClusterSystem-cluster-scheduler-15'. The 
unreachable issue still occurred after that.
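
For context, the per-actor scheduling looks roughly like this (a simplified
sketch, not our real code; SessionActor and the tick/heartbeat messages are
made-up names):

import akka.actor.{Actor, ActorRef}
import scala.concurrent.duration._

class SessionActor(client: ActorRef) extends Actor {
  import context.dispatcher // ExecutionContext used by the scheduled task

  private def rearm(): Unit =
    context.system.scheduler.scheduleOnce(5.seconds, self, "tick")

  override def preStart(): Unit = rearm()

  def receive = {
    case "tick" =>
      client ! "heartbeat" // application-level heartbeat to the client side
      rearm()              // one-shot timer, so schedule the next tick
  }
}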

Here is the stack trace of the system scheduler:

ClusterSystem-scheduler-1 prio=10 tid=0x7f9648a7fd60 nid=0x2f37 
runnable [0x7f96084c4000]
   java.lang.Thread.State: RUNNABLE
at 
scala.concurrent.forkjoin.ForkJoinPool.execute(ForkJoinPool.java:2955)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool.execute(AbstractDispatcher.scala:381)
at 
akka.dispatch.ExecutorServiceDelegate$class.execute(ThreadPoolBuilder.scala:212)
at 
akka.dispatch.Dispatcher$LazyExecutorServiceDelegate.execute(Dispatcher.scala:43)
at akka.dispatch.Dispatcher.executeTask(Dispatcher.scala:76)
at 
akka.dispatch.MessageDispatcher.unbatchedExecute(AbstractDispatcher.scala:145)
at 
akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:113)
at 
akka.dispatch.MessageDispatcher.execute(AbstractDispatcher.scala:85)
at 
akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:722)

On Tuesday, July 15, 2014 4:30:36 PM UTC+8, √ wrote:

 Can you send us a stack trace of the scheduler thread?


 On Tue, Jul 15, 2014 at 7:45 AM, Xingrun CHEN cowb...@gmail.com wrote:

 We found that ClusterSystem-scheduler-1 thread is very busy and consume 
 50% cpu (but not more than 50%).
 And we meet 'node unreachable' issue every day.
 Here's our configs:

 akka.cluster {
 failure-detector {
   acceptable-heartbeat-pause = 6 s # default 3 s
   threshold = 12.0# default 8.0
 }
 scheduler {
   # make it less than system's tick-duration to force start a new one
   tick-duration = 9 ms # default 33ms
   ticks-per-wheel = 512 # default 512
 }
 }

 akka.remote {
 transport-failure-detector {
   heartbeat-interval = 30 s   # default 4s
   acceptable-heartbeat-pause = 12 s  # default 10s
 }
 }





 -- 
 Cheers,
 √
  



[akka-user] maximum-frame-size and Akka performance

2014-07-15 Thread Asterios Katsifodimos
Hello,

I was wondering whether there are any performance implications if one sets 
the maximum frame size to a ridiculously large value.

In my case, I may have to exchange some large files. So if I set, e.g., 
maximum-frame-size to 200 MiB instead of the rather low default, should I 
expect any performance degradation?
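
(For reference, a minimal sketch of the setting I mean, assuming the default
Netty TCP transport; the buffer-size lines are my guess at what may also need
raising, not a verified recommendation:)

akka.remote.netty.tcp {
  maximum-frame-size  = 200 MiB   # default 128000b
  # the socket buffers may also need to be at least as large as a frame
  send-buffer-size    = 200 MiB
  receive-buffer-size = 200 MiB
}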

Cheers,
Asterios

p.s. I know that I should not be using actors to move files around - this is 
a temporary solution, and I will design a blob transfer service in the next 
month or so.



[akka-user] Re: maximum-frame-size and Akka performance

2014-07-15 Thread Asterios Katsifodimos
To make it clearer: will there be any performance degradation for small 
messages (a couple of kilobytes)?

On Tuesday, July 15, 2014 11:54:41 AM UTC+2, Asterios Katsifodimos wrote:

 Hello,

 I was wondering whether there is any performance implication in case one 
 sets the maximum frame size to a ridiculously large size. 

 In my case, I may have some large files exchange. So, if I set e.g. 
 max-frame-size to 200MiB instead of the rather-low default one, should I 
 expect any performance degradation?

 Cheers,
 Asterios

 p.s. I know that I should not be using actors to move files around - this 
 is a temporary solution and then I will design a Blob transfer service next 
 month or so. 




Re: [akka-user] maximum-frame-size and Akka performance

2014-07-15 Thread Konrad 'ktoso' Malawski
Hello Asterios,
It's not a good idea to send such huge messages over Akka's remote actor 
messaging. The reason: you'll delay all other messages until the huge one has 
been pumped through the network.

Actor messages should ideally be in kilobyte ranges.

1) In your case I would store these files in a distributed filesystem (do you 
have HDFS in your cluster, for example?) that is optimised for such data 
(akka-actor is optimised for fast transfer of small messages, not huge files) 
and send only the file's location to the receiver, so it can decide to pull 
the work (the data) when it's ready to do so.

Wins: You don’t clog the actor system; The receiver is free to pull the data 
when it’s ready; and you use an optimised storage system for this kind of data.

2) If you want to stick to actor messages, you could chunk up the data so you 
don't clog the connection (a rough sketch follows below). See: 
http://www.eaipatterns.com/MessageSequence.html

3) One other approach would be to use akka-io to deal with the file transfer. 
The receiving end would need more work here.
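
As a sketch of option 2 (not an Akka API - Chunk, transferId and chunkSize
are made-up names; the receiver would reassemble by transferId once it has
seen `total` chunks):

import akka.actor.ActorRef

object ChunkedSend {
  case class Chunk(transferId: String, seqNr: Int, total: Int, bytes: Array[Byte])

  def sendChunked(transferId: String, payload: Array[Byte],
                  receiver: ActorRef, chunkSize: Int = 64 * 1024): Unit = {
    val chunks = payload.grouped(chunkSize).toVector
    chunks.zipWithIndex.foreach { case (bytes, i) =>
      receiver ! Chunk(transferId, i, chunks.size, bytes) // small, ordered pieces
    }
  }
}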


-- 
Konrad 'ktoso' Malawski
hAkker @ typesafe
http://akka.io



Re: [akka-user] maximum-frame-size and Akka performance

2014-07-15 Thread Asterios Katsifodimos
Hi Konrad, thanks for the suggestions! I will soon implement a blob service 
(yes, we have HDFS in our environment - that's a good choice).

But my question remains unanswered: does the maximum-frame-size, if set 
ridiculously high, play any role in the performance of small (a couple of 
kilobytes), normal messages?

Thanks again,
Asterios


On Tuesday, July 15, 2014 12:16:05 PM UTC+2, Konrad Malawski wrote:

 Hello Asterios,
 It’s not a very good idea to use actor messaging to send such huge 
 messages using akka’s remote messaging.
 Reason being - you’ll delay other messages until the huge one has been 
 pumped through the network.

 Actor messages should ideally be in kilobyte ranges.

 1) In your case I would store these files in a distributed filesystem (do 
 you have HDFS in your cluster for example?) that is optimised to deal with 
 such data (because akka-actor is optimised for fast small message transfer, 
 not huge files) and send the location to the receiver, so it can decide to 
 pull the work (the data) when it’s ready to do so. 

 Wins: You don’t clog the actor system; The receiver is free to pull the 
 data when it’s ready; and you use an optimised storage system for this kind 
 of data.

 2) If you want to stick to actor messages, you could chunk up the data so 
 you don’t clog the connection. See: 
 http://www.eaipatterns.com/MessageSequence.html

 3) One other approach would be to use akka-io to deal with the file 
 transfer. The receiving end would need more work here.


 -- 
 Konrad 'ktoso' Malawski
 hAkker @ typesafe
 http://akka.io




Re: [akka-user] maximum-frame-size and Akka performance

2014-07-15 Thread Konrad 'ktoso' Malawski
Yes, it will degrade, because you'll clog the connection between system `A` 
and system `B` while pumping through the LargeMessage, and the other, waiting 
SmallMessages won't get through while the large one is being written.

-- 
Konrad 'ktoso' Malawski
hAkker @ typesafe
http://akka.io



Re: [akka-user] Migrating from Persistent Channels to PersistentActorWithAtLeastOnceDelivery

2014-07-15 Thread Konrad Malawski
Hello John,

Just store the confirmations too:

  case c @ Confirmation(deliveryId) =>
    persist(c) { _ => confirmDelivery(deliveryId) }

I also recommend watching our ScalaDays talk (Patrik at the 3/4 mark speaks
about the AtLeastOnceDelivery trait):
http://www.parleys.com/play/53a7d2c3e4b0543940d9e53b/chapter71/about

Happy hakking!


On Tue, Jul 15, 2014 at 2:11 PM, John Dugo john.d...@gmail.com wrote:

 I'm migrating code that previously used Persistent Channels over to using
 PersistentActoryWithAtleastOnceDelivery.  With the channel implementation,
 I avoided re-sending messages on recovery by using message confirmations
 (when a message was confirmed it wrote the confirmation to the journal and
 those confirmed messages were filtered out during recovery).

 I'm trying to replicate that functionality, but the confirmDelivery method
 doesn't trigger any calls into the journal and only stores confirmations in
 memory.  Is there a way for me to persist these confirmations for existing
 messages (or is there another recommended approach)?


 Thanks

 - John





-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



[akka-user] Re: Migrating from Persistent Channels to PersistentActorWithAtLeastOnceDelivery

2014-07-15 Thread Michael Pisula
Hi John,

When using PersistentActorWithAtLeastOnceDelivery you have to persist 
everything yourself. You call the persist method for each event you want 
replayed after the actor restarts. For at-least-once delivery you should at 
least persist the event that will be used in deliver as well as the 
confirmation. Make sure you trigger the delivery and the confirmDelivery 
from both receive methods (receiveCommand and receiveRecover). For more 
information see the code example here: 
http://doc.akka.io/docs/akka/2.3.4/java/persistence.html#At-Least-Once_Delivery
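
A condensed sketch of that pattern in the Scala API (the event and message
names here are illustrative, not fixed types from Akka):

import akka.actor.ActorPath
import akka.persistence.{AtLeastOnceDelivery, PersistentActor}

case class Deliver(deliveryId: Long, payload: String)
case class Confirm(deliveryId: Long)
case class MsgSent(payload: String)
case class MsgConfirmed(deliveryId: Long)

class MySender(destination: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
  override def persistenceId = "my-sender"

  // commands: persist both the "sent" event and the confirmation
  def receiveCommand: Receive = {
    case payload: String     => persist(MsgSent(payload))(updateState)
    case Confirm(deliveryId) => persist(MsgConfirmed(deliveryId))(updateState)
  }

  // recovery: replay the same events so deliver/confirmDelivery are re-applied
  def receiveRecover: Receive = {
    case evt: MsgSent      => updateState(evt)
    case evt: MsgConfirmed => updateState(evt)
  }

  def updateState(evt: Any): Unit = evt match {
    case MsgSent(payload)         => deliver(destination, id => Deliver(id, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}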

Cheers,
Michael

On Tuesday, 15 July 2014 at 14:11:16 UTC+2, John Dugo wrote:

 I'm migrating code that previously used Persistent Channels over to using 
 PersistentActoryWithAtleastOnceDelivery.  With the channel implementation, 
 I avoided re-sending messages on recovery by using message confirmations 
 (when a message was confirmed it wrote the confirmation to the journal and 
 those confirmed messages were filtered out during recovery).

 I'm trying to replicate that functionality, but the confirmDelivery method 
 doesn't trigger any calls into the journal and only stores confirmations in 
 memory.  Is there a way for me to persist these confirmations for existing 
 messages (or is there another recommended approach)?  


 Thanks

 - John




[akka-user] Re: Migrating from Persistent Channels to PersistentActorWithAtLeastOnceDelivery

2014-07-15 Thread John Dugo
I completely glazed over that section of the docs.

Thanks!

On Tuesday, July 15, 2014 8:40:27 AM UTC-4, Michael Pisula wrote:

 Hi John,

 When using the PersistentActorWithAtLeastOnceDelivery you have to persist 
 everything yourself. You call the persist method for each event you want 
 replayed after the actor restarts. For at least once delivery you should at 
 least persist the event that will be used in deliver as well as the 
 confirmation. Make sure you trigger the delivery and the confirmDeliver 
 from both receive methods. For more information see the code example here: 
 http://doc.akka.io/docs/akka/2.3.4/java/persistence.html#At-Least-Once_Delivery

 Cheers,
 Michael

 On Tuesday, 15 July 2014 at 14:11:16 UTC+2, John Dugo wrote:

 I'm migrating code that previously used Persistent Channels over to using 
 PersistentActoryWithAtleastOnceDelivery.  With the channel implementation, 
 I avoided re-sending messages on recovery by using message confirmations 
 (when a message was confirmed it wrote the confirmation to the journal and 
 those confirmed messages were filtered out during recovery).

 I'm trying to replicate that functionality, but the confirmDelivery 
 method doesn't trigger any calls into the journal and only stores 
 confirmations in memory.  Is there a way for me to persist these 
 confirmations for existing messages (or is there another recommended 
 approach)?  


 Thanks

 - John





Re: [akka-user] Re: Questions about akka persistence and scalability

2014-07-15 Thread Konrad Malawski
Hello Alexandre,
First things first: there *must not* be multiple persistent actors in the
system with the same persistenceId.
There must be exactly one persistent actor writing to the journal with a
given persistence id.

Otherwise we can't guarantee any ordering: you could start 2 separate
systems, start actors with the same id on them, and write random,
conflicting things to the journal.
Instead, when creating persistent actors, you should use some kind of
repository that manages which actors live where.

This is where patterns like ClusterSharding and ClusterSingleton help you
to spread out these actors.
Please check the docs here:
http://doc.akka.io/docs/akka/2.3.4/contrib/cluster-sharding.html
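
A rough sketch of the wiring against the 2.3 contrib API (Cart and AddItem
are made-up domain names; every message goes through the shard region, which
guarantees a single Cart instance per cartId somewhere in the cluster):

import akka.actor.{ActorSystem, Props}
import akka.contrib.pattern.{ClusterSharding, ShardRegion}
import akka.persistence.PersistentActor

case class AddItem(cartId: String, item: String)

class Cart extends PersistentActor {
  override def persistenceId = "Cart-" + self.path.name
  var items = List.empty[String]
  def receiveCommand: Receive = { case AddItem(_, item) => persist(item)(i => items = i :: items) }
  def receiveRecover: Receive = { case item: String => items = item :: items }
}

object ShardingExample extends App {
  val idExtractor: ShardRegion.IdExtractor = {
    case msg @ AddItem(cartId, _) => (cartId, msg)
  }
  val shardResolver: ShardRegion.ShardResolver = {
    case AddItem(cartId, _) => (math.abs(cartId.hashCode) % 30).toString
  }

  val system = ActorSystem("ClusterSystem")
  val cartRegion = ClusterSharding(system).start(
    typeName = "Cart",
    entryProps = Some(Props[Cart]),
    idExtractor = idExtractor,
    shardResolver = shardResolver)

  cartRegion ! AddItem("cart-42", "a book") // routed to the one Cart for "cart-42"
}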


And yes, when a PersistentActor is moved around the cluster it will be
replayed - it will get all its previous persisted events (or a snapshot),
so your logic can simply assume it is in the state it should be and take
new writes - while being oblivious that a migration has just happened.


On Mon, Jul 14, 2014 at 11:49 AM, Alexandre Delegue 
alexandre.dele...@serli.com wrote:

 Thanks, I've watched the talk and it was pretty good.
 What happens when a new persistent actor starts on a shard? Does it receive
 all the messages from the journal in the receiveRecover method, or just a
 subset filtered by the shardResolver?

 Thanks,
 Alex


 On Saturday, 12 July 2014 at 19:04:13 UTC+2, Alexandre Delegue wrote:

 Hi,

 I have some questions about akka persistence and scalability, and how to
 implement a certain use case.

 The idea is to have an application that I can clone depending on the
 load. With akka persistence (event sourcing), if I clone an app I will have
 N persistent actors with the same persistent id:
 - If the journal is distributed each instance of persistent actor will
 write on the same journal. No problem if the order of the events stays the
 same.
 - If the state is persisted on DB, each node will share the same state.
 There is a problem when the persistent actor on each node starts and replays
 the events from the journal: the state will be updated from different
 sources and will be wrong. I can't keep the state in memory if I have too
 many elements to deal with.

 What is the best way to deal with this problem ?

 For example, if I have a shopping cart and the events are the actions the
 users can perform (create cart, add item, order ...), how can I scale that
 with akka persistence and event sourcing?

 Thanks,
 Alex





-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



Re: [akka-user] Re: Passivate

2014-07-15 Thread Konrad Malawski
Just like Michael (quoting Patrik) said :-)

Also, please note that Passivate is *not* a feature of akka-persistence;
it's a feature of cluster sharding:
http://doc.akka.io/docs/akka/2.3.4/contrib/cluster-sharding.html

It's true however that it (and the entire cluster sharding) plays very well
with persistence :-)


On Tue, Jul 15, 2014 at 9:19 AM, delasoul michael.ham...@gmx.at wrote:

 Hello Ashey,

 I guess you answered your question(s) yourself:


 - in-between receiving Passivate and Terminated the Manager will buffer
 all incoming messages for the passivating Aggregate
 - when receiving Terminated it will flush the buffer for the Aggregate,
 which can result in activation again.

 So, the answer to both questions is yes, but the buffer for passivated
 actors is limited, if the limit is reached all messages will go to
 DeadLetters.

 hth,

 michael




 On Monday, 14 July 2014 17:28:42 UTC+2, Ashley Aitken wrote:


 A couple of quick questions about passivate and PersistentActors:

 Can other actors still send messages to persistent actors that have been
 passivated?

 Will these messages cause the persistent actor to be reactivated?

 I am asking about this in single node and clustered context.

 I saw elsewhere that Patrik has written this in the cluster/sharding
 context:

 - all messages are sent via the Manager actor, which creates child
 Aggregate instances on demand
 - when receiving a message the Manager extract the Aggregate identifier
 from the message
 - the Manager creates a new child Aggregate actor if it doesn't exist,
 and then forwards the message to the Aggregate
 - the Aggregate can passivate itself by sending a Passivate message to
 the parent Manager, which then sends PoisonPill to the Aggregate
 - in-between receiving Passivate and Terminated the Manager will buffer
 all incoming messages for the passivating Aggregate
 - when receiving Terminated it will flush the buffer for the Aggregate,
 which can result in activation again
 The PoisonPill can be replaced with some other custom stop message if
 the Aggregate needs to do further interactions with other actors before
 stopping.


 Thanks in advance for any answers.

 Cheers,
 Ashley.






-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



Re: [akka-user] akka-persistence Processor emulation

2014-07-15 Thread Pavel Zalunin
Hi,

I revised my implementation and ended up with
https://gist.github.com/whiter4bbit/22cd3b0909bb390e80db. For my particular
case there is a relatively long window between requests, and in many cases
the Append and Remove counts will match (it is also possible to call
confirmDelivery for messages that have not been delivered within some timeout).
So I settled on checking whether numberOfUnconfirmed is zero and, in that
case, removing all messages from the journal - I think it is the same as
tracking the seqNr of each message (but I can't find a way to do that; I can
only get the number after the message has been stored).
I also need to store a snapshot before removing all messages; this is needed
to adjust the deliveryId during actor recovery.
The disadvantage of this approach is:

case SaveSnapshotSuccess(meta) => {
  log.info("Snapshot stored: {}", meta)
  // if deleteMessages fails, then the deliveryId will diverge
  compactToSeqNr.map(deleteMessages(_, true))
  compactToSeqNr = None
}

(The same applies when removing the messages first and then saving the snapshot.)

But I think it can be avoided by using a custom id generator and adding those
ids as parameters to Append and Remove (as I did in my first
implementation).

Pavel.


2014-07-14 19:27 GMT+03:00 Pavel Zalunin wr4b...@gmail.com:

 Hi,

 Thanks for explaining delivery semantics, actually it is in scala doc at
 both trait and method description, missed it, maybe some example can fit
 well in documentation:). And thanks for persistence module at all - with
 2.3.2 our app works well for last few months!

 Regarding deliveryIds. Two things:
 1) I need to tag 'Append' with unique id, because I can send two same
 messages and both should be delivered, without id I can end with:
 Append(Request(example.com/r2))
 Append(Request(example.com/r1))
 Append(Request(example.com/r2))

 Remove(Request(example.com/r2))

 And last remove is confusing - I don't know which append it corresponds to
 - 1st or 3rd, it can matter when receiving actor state depends on messages
 ordering, thus I added id field here (uuid not a good solution probably,
 but ok for testing)

 2) When I receive command I need to:
   a) persist corresponding Append,
   b) make a delivery call
   c) when actor responds with Confirm, I need to persist 'Remove', that
 corresponding to 'Append' persisted at a) - that is I need to know which id
 has this append, as delivery id generated at b) I can't rely on it.

 About pruning log. As you pointed, I can't rely on pattern
 Append/Remove/Append/Remove/Append/Remove in journal, that's why I
 mentioned removing all messages and storing just sequence of appends, that
 I got after removing Appends with corresponding Removes after
 RecoveryCompleted. Regarding removing messages based on seqId, I think it
 is trickly because I need to know exact seqIds for Append/Remove in logs,
 but lastSeqenceNr I can get just after successful persist event (but I'm
 not sure, maybe it incremented asynchronously), so when I'm storing Append
 I need to pass exact instance to persist, but I don't know seqId where it
 can be stored
 persist(Append(...))(_ => // here maybe I can get the seqId)


 Pavel.


 2014-07-14 17:05 GMT+03:00 Konrad Malawski ktos...@gmail.com:

 This thread motivated me to improve the documentation on these methods a
 bit, progress can be tracked in this issue:
 https://github.com/akka/akka/issues/15538

 -- k







[akka-user] Direct FN Integration platform

2014-07-15 Thread Darshana Samanpura
Dear Akka Leads and Team,

Direct FN is a software solution provider for the financial industry, 
specializing in market data and trading back ends. Last year we started a 
project called the Direct FN Integration Platform, in order to integrate 
systems for B2B services and real-time business intelligence.

We use Akka and Camel to write the core part of the platform. The project has 
been really successful, and the first live application will launch this month 
at Riyadh Bank in Saudi Arabia; many more clients are in the queue for the 
same product stack.

I would like to thank the people on the Akka team for developing such a great 
framework and making it freely available for use.

Cheers 
Darshana



[akka-user] Actor terminated vs queued messages that it sent but that are not yet delivered

2014-07-15 Thread Guylain Lavoie
Hello,

I would like to understand what happens to queued, not-yet-delivered messages 
that were sent by an actor that has just been terminated.
For example, actor A sends message M to B. M is queued because B is currently 
busy, and then A is terminated. Will message M be delivered to B? If so, is 
there a way to safely ignore this message when it is later delivered?

Thanks!
Guylain



Re: [akka-user] Kafka journal

2014-07-15 Thread Ashley Aitken

I think this is a fantastic development (if I understand it correctly).

From my reading and basic understanding, I had concerns about the need for 
two event infrastructures: 1) to implement more complex view models in CQRS, 
because currently Views can't follow more than one PersistentActor, and 2) to 
integrate events with non-Akka-based systems.

Please correct me if I am wrong, but having one powerful event 
infrastructure (like Kafka) as the event store across applications would 
enable (2), and possibly (1), both now and in the future, perhaps with 
akka-streams - particularly as Kafka provides publish-subscribe 
functionality.

Event stores and streams seem so central to many contemporary systems. 


On Monday, 14 July 2014 23:08:08 UTC+8, Martin Krasser wrote:

  Thanks Heiko, really hope to get some user feedback ...

 On 14.07.14 17:04, Heiko Seeberger wrote:
  
 Fantastic! 

  Great work, Martin. Keep it coming!

  Heiko

  On 14 Jul 2014, at 16:36, Martin Krasser kras...@googlemail.com wrote:

  There's now a first release of the Kafka journal. Details at 
 https://github.com/krasserm/akka-persistence-kafka

 On Sunday, 13 July 2014 at 16:19:55 UTC+2, Martin Krasser wrote: 

  
 On 13.07.14 16:03, Richard Rodseth wrote:
  
 Thanks for the detailed reply. I might have been forgetting that Akka 
 persistence can be used for more than persisting DDD aggregates. I had also 
 forgotten that the event store and snapshot store can be different.


 You can even use Kafka to implement a snapshot store. You just need to 
 enable log compaction 
 http://kafka.apache.org/documentation.html#compaction which will 
 always keep the last snapshot (entry) for each persistent actor (key). I 
 also plan to implement a snapshot store backed by Kafka but I'm not sure at 
 the moment how well Kafka supports large log entries.

  

 On Sun, Jul 13, 2014 at 12:51 AM, Martin Krasser kras...@googlemail.com 
 wrote:

  Hi Richard,

 when using the Kafka journal with default/typical retention times, your 
 application is responsible for storing snapshots at intervals that are 
 significantly smaller than the retention time (for example, with a 
 retention time of 7 days, you may want to take snapshots of your persistent 
 actors every 3 days or so). Alternatively, configure Kafka to keep messages 
 forever (i.e. set the retention time to the maximum value) if needed. I 
 don't go into Kafka partitioning details here but it is possible to 
 implement the journal driver in a way that both a single persistent actor's 
 data are partitioned *and* kept in order. However, with the initial 
 implementation, all data for a single persistent actor must fit on a single 
 Kafka node (different persistent actors are of course distributed over a 
 Kafka cluster). Hence, deleting old data after a few weeks and taking 
 snapshots at regular interval is the way to go (which is good enough for 
 many applications I think).

 The real value of the Kafka journal IMO comes with the many external 
 integrations it supports. For example, you can use it as an input 
 source for Spark streaming 
 http://spark.apache.org/docs/latest/streaming-programming-guide.html 
 and can do (scalable) stream processing of events generated by persistent 
 actors i.e. you can easily create Akka - Kafka - Spark Streaming 
 pipelines. This is an alternative to Akka's PersistentView and even allows 
 processing of events generated by several/all persistent actors with a 
 single consumer such as a single Spark DStream (which is currently a 
 limitation https://github.com/akka/akka/issues/15004 when using 
 PersistentViews). 

 I just see this as a starting point for what akka-persistence may 
 require from all journal implementations in later releases: provide a 
 persistent event stream generated by several persistent actors in a scalable 
 way. This stream could then be consumed with akka-streams or Spark 
 Streaming, using a generic connector rather than a 
 journal-backend-specific one, for example. 

 Initially I just wanted to implement the Kafka integration as 
 interceptor for journal commands so that events are stored in Kafka in 
 addition to another journal backend. This may be ok for some projects, 
 others may think that operational complexity gets too high when you have to 
 administer a Kafka/Zookeeper cluster in addition to a Cassandra or MongoDB 
 cluster, for example.

 Hope that clarifies things a bit.

 Cheers,
 Martin 


 On 12.07.14 15:35, Richard Rodseth wrote:
   
  I saw a tweet from Martin Krasser that he was working on an Akka 
 Persistence journal plug-in for Kafka. This puzzled me a bit since Kafka 
 messages are durable rather than persistent - they are stored for a 
 configurable time. 

  Could anyone comment on a typical usage? Assuming that your persistent 
 actor is going to get recovered before the Kafka topic expires seems odd.
  
  While the Akka/Kafka combination seems great, I always pictured it 
 would