Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread Sean Zhong
My application.conf

akka {
  loglevel = INFO

  extensions = [com.romix.akka.serialization.kryo.KryoSerializationExtension$]

  actor {
    provider = "akka.remote.RemoteActorRefProvider"

    serializers {
      kryo = "com.romix.akka.serialization.kryo.KryoSerializer"
    }

    serialization-bindings {
      "java.lang.String" = kryo
    }

    default-dispatcher {
      throughput = 1024
      fork-join-executor {
        parallelism-max = 4
      }
    }

    default-mailbox {
      mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
    }
  }

  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      port = 0
    }
  }
}



On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:

 Hi Michael,

 I used Wireshark to capture the traffic. I found that for each message sent 
 (the message is sent with option noSender), there is an extra cost for the 
 ActorPath. 

 For example, in the following message, the payload length is 100 bytes, but 
 there is also a target actor path of 221 bytes, which is much bigger than 
 the message itself.
 Can the actorPath overhead be reduced?

 akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.tcp/2120193a-e10b-474e-bccb-8ebc4b3a0247@192.168.1.53:48948/remote/akka.tcp/cluster@192.168.1.54:43676/user/master/Worker1/app_0_executor_0/group_1_task_0#-768886794

 1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120



 On Thursday, August 7, 2014 1:34:42 AM UTC+8, Michael Frank wrote:

  sean- how are you measuring your network throughput?  does 140MB/s 
 include TCP, IP, ethernet header data?  are you communicating across a local 
 network or across the internet?  the greater the distance your packets have 
 to travel (specifically the number of hops), the higher the chance that they 
 will get dropped and retransmitted, or fragmented.  a tool like Wireshark, 
 tcpdump, or Scapy could help you differentiate utilization at different 
 protocol layers and also identify network instability.

 -Michael

 On 08/06/14 08:46, Sean Zhong wrote:
  
 Konrad, thanks. 

  After enabling the debug flag, 

  I saw that system messages like Terminate are using JavaSerializer; is 
 this expected?

  [DEBUG] [08/06/2014 22:19:11.836] 
 [0e6fb647-7893-4328-a335-5e26e2ab080c-akka.actor.default-dispatcher-4] 
 [akka.serialization.Serialization(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] 
 Using serializer [akka.serialization.JavaSerializer] for message 
 [akka.dispatch.sysmsg.Terminate]


  Besides, as my message is a String, I cannot find a related serialization 
 log entry for type java.lang.String. How can I know for sure that protobuf 
 is used for String?

  Are you sure you're not CPU-bound, or bound on something else? Should you 
 be able to saturate the network?


  What I mean is that 140MB/s of network bandwidth is occupied, but the 
 source message throughput is only 60MB/s, so there is 80MB/s of bandwidth I 
 cannot explain.

  
 On Wednesday, August 6, 2014 11:30:49 PM UTC+8, Konrad Malawski wrote: 

   Hi Sean,

 On the wire:
 You can look into 
 https://github.com/akka/akka/tree/master/akka-remote/src/main/protobuf 
 for what exactly we pass around on the wire.

 Which serializer is used:
 enable debug logging; we log this info in Serialization.scala: 
 log.debug("Using serializer [{}] for message [{}]", ser.getClass.getName, clazz.getName)

 Other hints:
 Are you sure you're not CPU-bound, or bound on something else? Should you 
 be able to saturate the network?
 Have you played around with the number of threads etc.?
 What hardware are you benchmarking on?
  

 On Wed, Aug 6, 2014 at 5:19 PM, Sean Zhong cloc...@gmail.com wrote:

  Thanks, Konrad,


 Are there other akka data/attributes attached to a remote message being 
 sent? Or just serialized(msg) + actorRef?

  Which part of the akka source code can I check for the answer?

  Besides, are there log flags I can enable so that I can check which 
 serialization framework is in effect?

  In my experiment, I cannot explain half of the network bandwidth used by 
 akka remote messaging. My message throughput is 60MB/s, but the network 
 bandwidth used is 140MB/s. How can I troubleshoot this?

  
 On Wednesday, August 6, 2014 7:53:39 PM UTC+8, Konrad Malawski wrote:

  Hello Sean, 
 actual overhead in terms of how many bytes – depends on your 
 serialiser.

  1) Yes, a message includes the sender. There are not many optimisations 
 there currently, although we have nice ideas on what we can do within a 
 cluster (short handles instead of full addresses).
 2) Refer to: 
 http://doc.akka.io/docs/akka/snapshot/scala/serialization.html 
 Strings are a byte array, the envelope is protobuf (you can find it in 
 `akka-remote`), other kinds of payload are whatever you configured – 
 defaults to java serialisation.
 3) Of the entire message? It depends on your serialiser.
  
  Hope this helps, happy hakking :-)
  

  On Wed, Aug 6, 2014 at 1:43 PM, Sean Zhong cloc...@gmail.com wrote:

  Suppose I want to transfer a msg to another machine:

  otherActor ! "hello"

  what is the wire message overhead?

  1. The wire message needs to include information about the ActorRef. Will 
 the ActorRef be attached to every message, or will it be cached on the 
 target machine, so the sender only needs to attach an id?
  2. How is the string "hello" serialized on the wire, with java 
 serialization? Or protobuf? Can this be configured?
  3. What is the other overhead for this message?
[akka-user] Re: where can I get akka 2.3.1 for scala 2.11

2014-08-07 Thread Jim Newsham

Nevermind.  I tested akka 2.3.4 and apparently the issue in question was 
fixed.

Regards,
Jim

On Wednesday, August 6, 2014 2:03:58 PM UTC-10, Jim Newsham wrote:


 I can't seem to find the akka 2.3.1 artifacts targeting scala 2.11.  Are 
 these available somewhere?  We'd like to upgrade to scala 2.11, but we 
 can't upgrade to a later version of akka due to a blocker issue (#15109 -- 
 unless I'm mistaken this hasn't been fixed in an akka release).  

 The specific artifacts we need are akka-actor, akka-slf4j, akka-remote, 
 and akka-testkit.

 Thanks!
 Jim




-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread Sean Zhong
Is it possible to reduce the average message overhead?

200 bytes of extra cost per remote message doesn't look good...

Re: [akka-user] Connecting DistributedPubSubMediators between different ActorSystems

2014-08-07 Thread Konrad Malawski
Hi Brian,
No, PubSub works within a cluster – it needs to know which nodes to send
messages to, right?
However you could have a subscriber that will mediate the messages to the
other cluster via Cluster Client –
http://doc.akka.io/docs/akka/2.3.4/contrib/cluster-client.html
Would that help in your case?

I'm interested why you need separate clusters – is it that they're local
to some resource or something like that?
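For illustration, a minimal sketch of the Cluster Client approach described above, against the akka-contrib API of that era. The system name, host/port, and the `subscriberProxy` service path are placeholders; the B-side actor is assumed to be registered with `ClusterReceptionistExtension` on ClusterSystemB.

```scala
import akka.actor.{ActorSelection, ActorSystem}
import akka.contrib.pattern.ClusterClient

object ClusterClientSketch extends App {
  val system = ActorSystem("ClusterSystemA")

  // Initial contact points: receptionists on ClusterSystemB (placeholder address).
  val initialContacts: Set[ActorSelection] = Set(
    system.actorSelection("akka.tcp://ClusterSystemB@host1:2552/user/receptionist"))

  val client = system.actorOf(ClusterClient.props(initialContacts), "client")

  // Ask a B-side actor (registered with the receptionist) to subscribe on our
  // behalf; it can then forward mediator events back across the cluster boundary.
  client ! ClusterClient.Send("/user/subscriberProxy", "subscribe-me", localAffinity = true)
}
```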


On Thu, Aug 7, 2014 at 5:52 AM, Brian Dunlap bdunla...@gmail.com wrote:

 Is it possible to connect DistributedPubSub mediators between
  **different** ActorSystems?

 In ClusterSystemA we'd like to subscribe to events from the ClusterSystemB
 mediator.

 We need **separate** clusters - that's why we can't use the same cluster name.


 Thanks!
 Brian -





-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



Re: [akka-user] Re: where can I get akka 2.3.1 for scala 2.11

2014-08-07 Thread Konrad Malawski
Hello Jim,
yes, as indicated by the issue: https://github.com/akka/akka/issues/15109
it's resolved.

Related answer – we have not published akka 2.3.1 for scala 2.11 because at
that time scala 2.11 was not yet available as a stable release.
We do not plan to back-release 2.3.1 and instead suggest using 2.3.4 :-)

Happy hakking!


-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



[akka-user] Synchronising responses sent from parents and children

2014-08-07 Thread Lawrence Wagerfield
I have a problem that involves synchronising outbound messages from a parent 
actor and its child actor. This particular problem concerns forwarding 
failure messages to clients. 

Here is the example: 

I have a service actor that receives a request from a client actor.

The service actor creates a new child transaction actor to deal with said 
request, which then responds directly to the client actor after performing 
the work.

If the transaction actor fails, it is stopped by the service actor, which 
then sends a failure report to the client actor.

The problem is that the client actor must now support receiving failures 
after receiving the response it is actually interested in - otherwise the 
potential 'post-workload' failures from the transaction actor may be 
dead-lettered, or worse, be misinterpreted by the client actor (e.g. as a 
failure for a subsequent transaction).

I have considered an approach whereby the client actor must wait for the 
transaction 
actor to terminate before safely continuing, since after that point, it can 
be guaranteed that no more messages will be received.

Is there a common solution to this problem?
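The "wait for termination" approach described above can be sketched with deathwatch: the client watches the transaction actor, and since no further messages from a watched actor are delivered after its Terminated notification, the client knows when it is safe to continue. The `Response`/`FailureReport` message types are hypothetical placeholders.

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Hypothetical protocol for illustration.
case class Response(result: String)
case class FailureReport(reason: String)

class Client extends Actor {
  def receive = {
    case transactionActor: ActorRef =>
      // Start watching: Terminated is the last thing we can observe
      // from this actor, so it marks the safe point to move on.
      context.watch(transactionActor)
    case Response(_) =>
      // remember the result, but keep waiting for Terminated
    case FailureReport(_) =>
      // failure forwarded by the service actor
    case Terminated(_) =>
      // guaranteed: no more messages from the transaction actor can
      // arrive now, so the next transaction cannot be confused with it
  }
}
```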



[akka-user] Re: Multiple Futures inside Actor's receive

2014-08-07 Thread Michael Pisula
Instead of mutating state from within the future I would use the pipeTo 
pattern. Using pipeTo you can send the result of a future to an actor (e.g. 
to self). There you can safely change state, as you are in 
single-threaded-illusion-land again...

HTH

Cheers,
Michael
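A minimal sketch of the pipeTo pattern described above; the `loadFromRedis` stub and the message names are made up for illustration, the point is only that the Future's result re-enters the actor as an ordinary message.

```scala
import akka.actor.{Actor, ActorLogging}
import akka.pattern.pipe
import scala.concurrent.Future

// Hypothetical message carrying the async result back to the actor.
case class StateUpdate(values: Seq[String])

class StatefulActor extends Actor with ActorLogging {
  import context.dispatcher
  var state: Set[String] = Set.empty

  // Placeholder for the real async call (e.g. a Redis read).
  def loadFromRedis(): Future[Seq[String]] = Future.successful(Seq("a", "b"))

  def receive = {
    case "update" =>
      // Run the async call, then deliver its result to self as a message.
      loadFromRedis().map(StateUpdate).pipeTo(self)
    case StateUpdate(vs) =>
      // Back inside the actor's single-threaded context: safe to mutate.
      state ++= vs
  }
}
```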

On Thursday, 7 August 2014 07:25:05 UTC+2, Soumya Simanta wrote:

 I'm cross posting this here for better coverage. 


 http://stackoverflow.com/questions/25174504/multiple-future-calls-in-an-actors-receive-method


 I'm trying to make two external calls (to a Redis database) inside an 
 Actor's receive method. Both calls return a Future and I need the result 
 of the first Future inside the second. I'm wrapping both calls inside a 
 Redis transaction to avoid anyone else from modifying the value in the 
 database while I'm reading it.

 The internal state of the actor is updated based on the value of the 
 second Future.

 Here is what my current code looks like, which I think is incorrect because 
 I'm updating the internal state of the actor inside a Future.onComplete 
 callback.

 I cannot use the pipeTo pattern because both Futures have to be in a 
 transaction. If I use Await for the first Future then my receive 
 method will *block*. Any idea how to fix this?

 My *second question* is related to how I'm using Futures. Is the usage 
 of Futures below correct? Is there a better way of dealing with multiple 
 Futures in general? Imagine if there were 3 or 4 Futures, each depending on 
 the previous one.

 import akka.actor.{Actor, ActorLogging, Props}
 import akka.util.ByteString
 import redis.RedisClient

 import scala.concurrent.Future
 import scala.util.{Failure, Success}

 object GetSubscriptionsDemo extends App {
   val akkaSystem = akka.actor.ActorSystem("redis-demo")
   val actor = akkaSystem.actorOf(
     Props(new SimpleRedisActor("localhost", "dummyzset")), name = "simpleactor")
   actor ! UpdateState
 }

 case object UpdateState

 class SimpleRedisActor(ip: String, key: String) extends Actor with ActorLogging {

   // mutable state that is updated on a periodic basis
   var mutableState: Set[String] = Set.empty

   // execution context required by Future callbacks
   implicit val ctx = context.dispatcher

   val rClient = RedisClient(ip)(context.system)

   def receive = {
     case UpdateState =>
       log.info("Start of UpdateState ...")

       val tran = rClient.transaction()

       val zf: Future[Long] = tran.zcard(key) // FIRST Future
       zf.onComplete {
         case Success(z) =>
           // SECOND Future, depends on result of FIRST Future
           val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
           rf.onComplete {
             case Success(x) =>
               // convert ByteString to UTF-8 String
               val v = x.map(_.utf8String)
               log.info(s"Updating state with $v")
               // update actor's internal state inside callback for a Future
               // IS THIS CORRECT ?
               mutableState ++= v
             case Failure(e) =>
               log.warning("ZRANGE future failed: {}", e)
           }
         case Failure(f) => log.warning("ZCARD future failed: {}", f)
       }
       tran.exec()
   }
 }

 This compiles, but when I run it, it gets stuck.

 2014-08-07 INFO [redis-demo-akka.actor.default-dispatcher-3] 
 a.e.s.Slf4jLogger - Slf4jLogger started
 2014-08-07 04:38:35.106UTC INFO [redis-demo-akka.actor.default-dispatcher-3] 
 e.c.s.e.a.g.SimpleRedisActor - Start of UpdateState ...
 2014-08-07 04:38:35.134UTC INFO [redis-demo-akka.actor.default-dispatcher-8 
 ...



[akka-user] Re: Multiple Futures inside Actor's receive

2014-08-07 Thread Michael Pisula
Sorry, still early. Missed the part where you said that you don't want to 
use pipeTo because of the transaction. Not sure that is a problem at all, 
though. From what I see, you use the transaction to make sure nothing 
happens to the values between your zcard and zrange calls; afterwards it's 
only modification of the internal state. If you just pipe that to a 
separate actor containing the state, I would expect things to work fine. Or 
do you want the transaction to ensure that updates to the internal state 
are synced with the reads from Redis? Then I am not sure it will work as 
you implemented it.

Cheers



Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread Endre Varga
On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor.kl...@gmail.com
wrote:

 Or add compression.


This is the Akka wire level envelope, cannot be directly controlled by
users (unless someone writes a new transport of course).

-Endre


 On Aug 7, 2014 9:52 AM, Endre Varga endre.va...@typesafe.com wrote:

 Hi Sean,

 Unfortunately there is no way to reduce this overhead without changing
 the wire layer format, which we cannot do now. As you correctly see,
 practically all the overhead comes from the path of the destination and
 sender actor. In the future we have plans to implement a scheme which
 allows the sender to abbreviate the most common paths used to a single
 number, but this needs a new protocol.

 So the answer currently is that you cannot reduce this overhead without
 introducing some batching scheme yourself: instead of sending MyMessage you
 can send Array[MyMessage], so the cost of the recipient path is only
 suffered once for the batch, but not for the individual messages -- i.e.
 you can amortize the overhead.

 -Endre
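A minimal sketch of the batching scheme Endre describes, so the per-message path overhead is paid once per batch. `MyMessage`, the actor, and the batch-size parameter are hypothetical, purely for illustration.

```scala
import akka.actor.{Actor, ActorRef}

// Hypothetical payload and batch envelope.
case class MyMessage(payload: String)
case class MessageBatch(messages: Vector[MyMessage])

// Buffers outgoing messages and ships them in one remote send, so the
// ~200-byte recipient/sender path cost is amortized over the whole batch.
class BatchingSender(target: ActorRef, maxBatchSize: Int) extends Actor {
  private var buffer = Vector.empty[MyMessage]

  def receive = {
    case m: MyMessage =>
      buffer :+= m
      if (buffer.size >= maxBatchSize) flush()
  }

  private def flush(): Unit = {
    target ! MessageBatch(buffer) // one envelope, one actor path, many payloads
    buffer = Vector.empty
  }
}
```

A production version would also flush on a timer so a partially filled buffer does not sit forever.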



Re: [akka-user] Re: Akka Remoting fails for large messages

2014-08-07 Thread Endre Varga
Hi Syed,

As the very first step, can you tell us what is the Akka version you are
using? If it is not Akka 2.3.4, please try to upgrade to 2.3.4 and see if
the issue still remains.

-Endre


On Thu, Aug 7, 2014 at 12:12 AM, Ryan Tanner ryan.tan...@gmail.com wrote:

 When those large messages are received, what's supposed to happen?  It's
 possible that whatever processing is triggered by that message is so CPU
 intensive that Akka's internal remoting can't get any time for its own
 messages.


 On Wednesday, August 6, 2014 12:08:20 PM UTC-6, Syed Ahmed wrote:

 Hello,
 I'm new to Akka / Akka Remoting.

 We need to send large messages on the order of 500-600MB. After viewing and 
 checking some mailing list posts I found that I have to change the Akka 
 remote netty tcp settings, which I did; my application.conf has the following...

  remote {
    transport = akka.remote.netty.NettyRemoteTransport
    netty.tcp {
      hostname = ip address of host
      port = 2553
      maximum-frame-size = 524288000
      send-buffer-size = 5
      receive-buffer-size = 5
    }
  }

 I created a client and a remote server app using akka-remoting, using 
 protobuf message serialization -- the client sends a large message to the 
 server and the server acknowledges with the timestamp of when it received 
 it, after deserializing it.

 With the above settings things work fine for messages up to 200MB. When I 
 try a slightly larger message (250MB), the server shows the following:

 ---
 [INFO] [08/05/2014 23:41:39.302] 
 [RemoteNodeApp-akka.remote.default-remote-dispatcher-4]
 [akka.tcp://RemoteNodeApp@remote-ip:2552/system/transports/
 akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%
 2FRemoteNodeApp%40remote_ip%3A36922-1] No response from remote.
 Handshake timed out or transport failure detector triggered.

 ---

 and the client shows the following

 [WARN] [08/05/2014 23:41:39.326] 
 [LocalNodeApp-akka.remote.default-remote-dispatcher-6]
 [akka.tcp://LocalNodeApp@client_ip:2553/system/endpointManager/
 reliableEndpointWriter-akka.tcp%3A%2F%2FRemoteNodeApp%4010.194.188.97%3A2552-0]
 Association with remote system [akka.tcp://RemoteNodeApp@server_ip:2552]
 has failed, address is now gated for [5000] ms. Reason is: [Disassociated].

 and the message is not sent at all..  Any idea what else is missing?

 BTW -- in the above I don't see any OOM errors or anything and the
 client/remote app are still up and running.

 thx
 -Syed



  --
  Read the docs: http://akka.io/docs/
  Check the FAQ:
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
 ---
 You received this message because you are subscribed to the Google Groups
 Akka User List group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to akka-user+unsubscr...@googlegroups.com.
 To post to this group, send email to akka-user@googlegroups.com.
 Visit this group at http://groups.google.com/group/akka-user.
 For more options, visit https://groups.google.com/d/optout.




Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread √iktor Ҡlang
You can do wire-level compression.


On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga endre.va...@typesafe.com
wrote:




 On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor.kl...@gmail.com
 wrote:

 Or add compression.


 This is the Akka wire level envelope, cannot be directly controlled by
 users (unless someone writes a new transport of course).

 -Endre


 On Aug 7, 2014 9:52 AM, Endre Varga endre.va...@typesafe.com wrote:

 Hi Sean,

 Unfortunately there is no way to reduce this overhead without changing
 the wire layer format, which we cannot do now. As you correctly see,
 practically all the overhead comes from the path of the destination and
 sender actor. In the future we have plans to implement a scheme which
 allows the sender to abbreviate the most common paths used to a single
 number, but this needs a new protocol.

 So the answer currently is that you cannot reduce this overhead without
 introducing some batching scheme yourself: instead of sending MyMessage you
 can send Array[MyMessage], so the cost of the recipient path is only
 suffered once for the batch, but not for the individual messages -- i.e.
 you can amortize the overhead.
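 Endre's batching idea can be sketched roughly as below; the class and
 method names are hypothetical, and the 100-byte payload / 221-byte path
 figures come from Sean's capture earlier in this thread:

```java
// Hedged sketch: amortize the per-send ActorPath overhead by batching.
public class BatchOverhead {
    // rough bytes on the wire per logical message, given the payload size,
    // the per-send path overhead, and how many messages share one envelope
    public static double perMessageBytes(int payload, int pathOverhead, int batchSize) {
        return payload + (double) pathOverhead / batchSize;
    }
}
```

 With batches of 100 messages, the per-message cost drops from about 321
 bytes to about 102 bytes: the envelope overhead is paid once per batch.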

 -Endre


 On Thu, Aug 7, 2014 at 8:11 AM, Sean Zhong clock...@gmail.com wrote:

 Is it possible to reduce the average message overhead?

 200 bytes of extra cost per remote message doesn't look good...


 On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:

 Hi Michael,

 I used Wireshark to capture the traffic. I found that for each message
 sent (the message is sent with noSender), there is the extra cost of an
 ActorPath.

 For example, in the following message the payload length is 100 bytes
 (bold), but there is also a target actor path of 221 bytes (red), which
 is much bigger than the message itself.
 Can the ActorPath overhead be reduced?
 .

 akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.
 tcp/2120193a-e10b-474e-bccb-8ebc4b3a0247@192.168.1.53:
 48948/remote/akka.tcp/cluster@192.168.1.54:43676/user/
 master/Worker1/app_0_executor_0/group_1_task_0#-768886794o
 h
 *1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120*
 :



 On Thursday, August 7, 2014 1:34:42 AM UTC+8, Michael Frank wrote:

  sean- how are you measuring your network throughput?  does 140MB/s
 include TCP, IP, ethernet header data?  are you communicating across local
 network or across the internet?  the greater the distance your packets have
 to travel (specifically the number of hops), the higher the chance that they
 will get dropped and retransmitted, or fragmented.  a tool like Wireshark,
 tcpdump, or Scapy could help you differentiate utilization at different
 protocol layers and also identify network instability.

 -Michael

 On 08/06/14 08:46, Sean Zhong wrote:

 Konrad, thanks.

  After enabling the debug flag,

  I saw that system messages like Terminate are using JavaSerializer;
 is this expected?

  [DEBUG] [08/06/2014 22:19:11.836] [0e6fb647-7893-4328-a335-
 5e26e2ab080c-akka.actor.default-dispatcher-4] [akka.serialization.
 Serialization(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] Using
 serializer[*akka.serialization.JavaSerializer*] for message
 [akka.dispatch.sysmsg.Terminate]


  Besides, as my message is a String, I cannot find a related
 serialization log entry for java.lang.String. How can I know for sure
 protobuf is used for String?

  Are you sure you’re not CPU or something else -bound? And you
 should be able to stuff the network?


  What I mean is that 140 MB/s of network bandwidth is occupied, but the
 source message throughput is only 60 MB/s, so there is 80 MB/s of
 bandwidth I cannot explain.


 On Wednesday, August 6, 2014 11:30:49 PM UTC+8, Konrad Malawski
 wrote:

   Hi Sean,

 On the wire:
 You can look into https://github.com/akka/akka/
 tree/master/akka-remote/src/main/protobuf for what exactly we pass
 around on the wire.

 Which serializer is used:
 enable debug logging, we log this info in Serialization.scala 
 log.debug(Using
 serializer[{}] for message [{}], ser.getClass.getName, clazz.getName)
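 As a sketch, with the default logger setup the following config should
 surface those lines (assuming nothing else raises the log threshold):

```hocon
akka {
  # serializer selection is logged with log.debug in Serialization.scala,
  # so the log level must be DEBUG for the lines to appear
  loglevel = DEBUG
}
```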

 Other hints:
 Are you sure you’re not CPU or something else -bound? And you should
 be able to stuff the network?
 Have you played around with the number of threads etc?
 What hardware are you benchmarking on?
 ​


 On Wed, Aug 6, 2014 at 5:19 PM, Sean Zhong cloc...@gmail.com
 wrote:

  Thanks, Konrad,


 Are there other Akka data/attributes attached to a remotely sent
 message? Or just serialized(msg) + actorRef?

  Which part of the Akka source code can I check for the answer?

  Besides, are there log flags I can enable so that I can check
 which serialization framework is in effect?

  In my experiment, I cannot explain half of the network bandwidth usage
 with Akka remote messaging. My message throughput is 60 MB/s, but the
 network bandwidth used is 140 MB/s. How can I troubleshoot this?


 On Wednesday, August 6, 2014 7:53:39 PM UTC+8, Konrad Malawski
 wrote:

  Hello Sean,
 actual overhead in terms of 

[akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
Hi, 

When I upgraded from Akka 2.2.3 to Akka 2.3.4, I found that the message 
throughput dropped by about 30%.

My benchmark looks like this:
4 machines; each machine has 1 source actor and 1 target actor. Each source 
actor randomly delivers a 100-byte message at a time to any target 
actor.

I use the default configuration. Are there default configuration changes in 
the Akka reference.conf that could result in this behavior?



Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread Sean Zhong


 compressed link/interface


Is this configuration inside the Akka conf? I cannot find the documentation 
for it; do you have a pointer to this?
 

On Thursday, August 7, 2014 4:58:05 PM UTC+8, √ wrote:

 Hi Sean,


 On Thu, Aug 7, 2014 at 10:49 AM, Sean Zhong cloc...@gmail.com 
 javascript: wrote:

 Hi Viktor,

 About wire-compression, do you mean this?

 akka {
  remote {
  compression-scheme = zlib # Options: zlib (lzf to come), leave out 
 for no compression
  zlib-compression-level = 6 # Options: 0-9 (1 being fastest and 9 being 
 the most compressed), default is 6
  }
 }



 No, that's from a legacy version of Akka. I mean using a compressed 
 link/interface.
  

 Will it do compression at the message level, or at a batch level (sharing 
 the same source machine and target machine)? 



 Hi Endre,

 This is the Akka wire level envelope, cannot be directly controlled by 
 users (unless someone writes a new transport of course).


 Which part of the source code can I look at to write a new transport?  






 On Thursday, August 7, 2014 4:22:16 PM UTC+8, √ wrote:

 You can do wire-level compression.


 On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga endre...@typesafe.com 
 wrote:




 On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor...@gmail.com wrote:

 Or add compression.


 This is the Akka wire level envelope, cannot be directly controlled by 
 users (unless someone writes a new transport of course).
  
 -Endre
  

 On Aug 7, 2014 9:52 AM, Endre Varga endre...@typesafe.com wrote:

  Hi Sean,

 Unfortunately there is no way to reduce this overhead without changing the 
 wire layer format, which we cannot do now. As you correctly see, 
 practically all the overhead comes from the path of the destination and 
 sender actor. In the future we have plans to implement a scheme which 
 allows the sender to abbreviate the most common paths used to a single 
 number, but this needs a new protocol.

 So the answer currently is that you cannot reduce this overhead without 
 introducing some batching scheme yourself: instead of sending MyMessage you 
 can send Array[MyMessage], so the cost of the recipient path is only 
 suffered once for the batch, but not for the individual messages -- i.e. 
 you can amortize the overhead.

 -Endre


 On Thu, Aug 7, 2014 at 8:11 AM, Sean Zhong cloc...@gmail.com wrote:

 Is it possible to reduce the average message overhead?

  200 bytes of extra cost per remote message doesn't look good...


 On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:

 Hi Michael,

 I used Wireshark to capture the traffic. I found that for each message sent 
 (the message is sent with noSender), there is the extra cost of an ActorPath. 

 For example, in the following message the payload length is 100 bytes 
 (bold), but there is also a target actor path of 221 bytes (red), which 
 is much bigger than the message itself.
 Can the ActorPath overhead be reduced?
 . 

 akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.tcp/
 2120193a-e10b-474e-bccb-8ebc4b3a0247@192.168.1.53:48948/
 remote/akka.tcp/cluster@192.168.1.54:43676/user/master/
 Worker1/app_0_executor_0/group_1_task_0#-768886794o
 h
 *1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120*
 :



 On Thursday, August 7, 2014 1:34:42 AM UTC+8, Michael Frank wrote:

   sean- how are you measuring your network throughput?  does 140MB/s 
  include TCP, IP, ethernet header data?  are you communicating across local 
  network or across the internet?  the greater the distance your packets have 
  to travel (specifically the number of hops), the higher the chance that they 
  will get dropped and retransmitted, or fragmented.  a tool like Wireshark, 
  tcpdump, or Scapy could help you differentiate utilization at different 
  protocol layers and also identify network instability.

 -Michael

 On 08/06/14 08:46, Sean Zhong wrote:
  
 Konrad, thanks. 

  After enabling the debug flag, 

  I saw that system messages like Terminate are using JavaSerializer; is 
 this expected?

  [DEBUG] [08/06/2014 22:19:11.836] [0e6fb647-7893-4328-a335-5e26e
 2ab080c-akka.actor.default-dispatcher-4] [akka.serialization.Serializat
 ion(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] Using serializer[
 *akka.serialization.JavaSerializer*] for message [akka.dispatch.sysmsg.
 Terminate]


  Besides, as my message is a String, I cannot find a related serialization 
 log entry for java.lang.String. How can I know for sure protobuf is used for 
 String?

  Are you sure you’re not CPU or something else -bound? And you should be 
 able to stuff the network?


  What I mean is that 140 MB/s of network bandwidth is occupied, but the source 
 message throughput is only 60 MB/s, so there is 80 MB/s of bandwidth I cannot explain.

  
 On Wednesday, August 6, 2014 11:30:49 PM UTC+8, Konrad Malawski wrote: 

   Hi Sean,

 On the wire:
 You can look into https://github.com/akka/akka/t
 ree/master/akka-remote/src/main/protobuf for what exactly we pass around 
 on the wire.

 Which 

[akka-user] Re: Actors behaving unexpectedly with Circuit breaker

2014-08-07 Thread Jasper
I tried it and this looks very promising, since all processors now go 
into the Open state. However, without the reader I'm deep in encoding hell 
because my files are in US-ASCII and my DB is in UTF-8:

invalid byte sequence for encoding UTF8: 0x00
And I can't just sanitize the files beforehand... Anyway, I'm aware it's not 
really the place for this, so unless anyone has the solution, thanks for 
your help!
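On the encoding error itself: PostgreSQL text values cannot contain NUL
(0x00) bytes, which is exactly what that message reports. A hedged sketch of
filtering the bytes in flight rather than sanitizing the files on disk
(`CopySanitizer` is a hypothetical name, not part of any library):

```java
import java.io.ByteArrayOutputStream;

// Hedged sketch: drop 0x00 bytes from each chunk before handing it to the copy.
public class CopySanitizer {
    public static byte[] stripNuls(byte[] chunk) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(chunk.length);
        for (byte b : chunk) {
            if (b != 0) out.write(b);   // keep every byte except NUL
        }
        return out.toByteArray();
    }
}
```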


Le jeudi 7 août 2014 07:27:04 UTC+2, Brett Wooldridge a écrit :

 It appears that you are using the PostgreSQL CopyManager, correct? 
  Looking at QueryExecutorImpl, it appears that rollback() is trying to 
 obtain a lock that was not released by the CopyManager.  I recommend using 
 the CopyManager.copyIn() method that returns a CopyIn object, rather than 
 using the convenience method that takes a reader.  Use writeToCopy() to 
 pump the data in, and be sure to catch SQLException.  If you get an 
 SQLException, call cancelCopy() and retry or whatever your recovery 
 scenario is; otherwise call endCopy().  I would have expected PostgreSQL 
 to handle the severing of a Connection in the middle of a bulk copy better, 
 but that is probably a question for the PostgreSQL group.

 Just my armchair diagnosis.
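The flow Brett describes can be sketched as below. `CopySink` here is a
hypothetical stand-in for org.postgresql.copy.CopyIn (writeToCopy,
cancelCopy, and endCopy are the method names on the real interface); the
in-memory fake only exercises the error-handling shape, not a real bulk copy:

```java
import java.sql.SQLException;
import java.util.Iterator;

public class SafeCopy {
    // hypothetical stand-in for the parts of org.postgresql.copy.CopyIn used here
    public interface CopySink {
        void writeToCopy(byte[] buf, int off, int len) throws SQLException;
        void cancelCopy() throws SQLException;
        long endCopy() throws SQLException;   // the real endCopy() returns the row count
    }

    // Pump all chunks in; on failure cancel the copy (releasing the
    // server-side lock) and report -1 so the caller can retry.
    public static long copyAll(CopySink sink, Iterator<byte[]> chunks) {
        try {
            while (chunks.hasNext()) {
                byte[] c = chunks.next();
                sink.writeToCopy(c, 0, c.length);
            }
            return sink.endCopy();
        } catch (SQLException e) {
            try {
                sink.cancelCopy();
            } catch (SQLException ignored) {
                // nothing more to do; the connection is likely broken
            }
            return -1;
        }
    }
}
```

The point of the cancelCopy() branch is exactly the lock from the stack
trace below: without it, rollback() blocks on the lock the copy still holds.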

 On Wednesday, August 6, 2014 11:04:13 PM UTC+9, Jasper wrote:


  Sys-akka.actor.pinned-dispatcher-6 [WAITING]
  java.lang.Object.wait()Object.java:503
  org.postgresql.core.v3.QueryExecutorImpl.waitOnLock()QueryExecutorImpl.java:91
  org.postgresql.core.v3.QueryExecutorImpl.execute(Query, ParameterList, ResultHandler, int, int, int)QueryExecutorImpl.java:228
  org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(Query)AbstractJdbc2Connection.java:808
  org.postgresql.jdbc2.AbstractJdbc2Connection.rollback()AbstractJdbc2Connection.java:861
  com.zaxxer.hikari.proxy.ConnectionProxy.resetConnectionState()ConnectionProxy.java:192
  com.zaxxer.hikari.proxy.ConnectionProxy.close()ConnectionProxy.java:305
  java.lang.reflect.Method.invoke(Object, Object[])Method.java:606
  xx..util.Cleaning$.using(Object, Function1)Cleaning.scala:14
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp()FileProcessor.scala:75
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply()FileProcessor.scala:56
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply()FileProcessor.scala:56
  akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply()CircuitBreaker.scala:135
  akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply()CircuitBreaker.scala:135
  akka.pattern.CircuitBreaker$State$class.callThrough(CircuitBreaker$State, Function0)CircuitBreaker.scala:296
  akka.pattern.CircuitBreaker$Closed$.callThrough(Function0)CircuitBreaker.scala:345
  akka.pattern.CircuitBreaker$Closed$.invoke(Function0)CircuitBreaker.scala:354
  akka.pattern.CircuitBreaker.withCircuitBreaker(Function0)CircuitBreaker.scala:113
  akka.pattern.CircuitBreaker.withSyncCircuitBreaker(Function0)CircuitBreaker.scala:135
  xx..actors.FileProcessor$$anonfun$receive$1.applyOrElse(Object, Function1)FileProcessor.scala:55
  akka.actor.Actor$class.aroundReceive(Actor, PartialFunction, Object)Actor.scala:
  ...





Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread √iktor Ҡlang
That would be completely outside of Akka.


On Thu, Aug 7, 2014 at 11:01 AM, Sean Zhong clock...@gmail.com wrote:

 compressed link/interface


 Is this configuration inside the Akka conf? I cannot find the documentation
 for it; do you have a pointer to this?


 On Thursday, August 7, 2014 4:58:05 PM UTC+8, √ wrote:

 Hi Sean,


 On Thu, Aug 7, 2014 at 10:49 AM, Sean Zhong cloc...@gmail.com wrote:

 Hi Viktor,

 About wire-compression, do you mean this?

 akka {
  remote {
  compression-scheme = zlib # Options: zlib (lzf to come), leave out
 for no compression
  zlib-compression-level = 6 # Options: 0-9 (1 being fastest and 9 being
 the most compressed), default is 6
  }
 }



 No, that's from a legacy version of Akka. I mean using a compressed
 link/interface.


 Will it do compression at the message level, or at a batch level (sharing
 the same source machine and target machine)?



 Hi Endre,

 This is the Akka wire level envelope, cannot be directly controlled by
 users (unless someone writes a new transport of course).


 Which part of the source code can I look at to write a new transport?






 On Thursday, August 7, 2014 4:22:16 PM UTC+8, √ wrote:

 You can do wire-level compression.


 On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga endre...@typesafe.com
 wrote:




 On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor...@gmail.com
 wrote:

 Or add compression.


 This is the Akka wire level envelope, cannot be directly controlled by
 users (unless someone writes a new transport of course).

 -Endre


 On Aug 7, 2014 9:52 AM, Endre Varga endre...@typesafe.com wrote:

  Hi Sean,

 Unfortunately there is no way to reduce this overhead without changing
 the wire layer format, which we cannot do now. As you correctly see,
 practically all the overhead comes from the path of the destination and
 sender actor. In the future we have plans to implement a scheme which
 allows the sender to abbreviate the most common paths used to a single
 number, but this needs a new protocol.

 So the answer currently is that you cannot reduce this overhead without
 introducing some batching scheme yourself: instead of sending MyMessage you
 can send Array[MyMessage], so the cost of the recipient path is only
 suffered once for the batch, but not for the individual messages -- i.e.
 you can amortize the overhead.

 -Endre


 On Thu, Aug 7, 2014 at 8:11 AM, Sean Zhong cloc...@gmail.com wrote:

 Is it possible to reduce the average message overhead?

 200 bytes of extra cost per remote message doesn't look good...


 On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:

 Hi Michael,

 I used Wireshark to capture the traffic. I found that for each message
 sent (the message is sent with noSender), there is the extra cost of an
 ActorPath.

 For example, in the following message the payload length is 100 bytes
 (bold), but there is also a target actor path of 221 bytes (red), which
 is much bigger than the message itself.
 Can the ActorPath overhead be reduced?
 .

 akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.tcp/
 2120193a-e10b-474e-bccb-8ebc4b3a0247@192.168.1.53:48948/remo
 te/akka.tcp/cluster@192.168.1.54:43676/user/master/Worker1/
 app_0_executor_0/group_1_task_0#-768886794o
 h
 *1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120*
 :



 On Thursday, August 7, 2014 1:34:42 AM UTC+8, Michael Frank wrote:

  sean- how are you measuring your network throughput?  does 140MB/s
 include TCP, IP, ethernet header data?  are you communicating across local
 network or across the internet?  the greater the distance your packets have
 to travel (specifically the number of hops), the higher the chance that they
 will get dropped and retransmitted, or fragmented.  a tool like Wireshark,
 tcpdump, or Scapy could help you differentiate utilization at different
 protocol layers and also identify network instability.

 -Michael

 On 08/06/14 08:46, Sean Zhong wrote:

 Konrad, thanks.

  After enabling the debug flag,

  I saw that system messages like Terminate are using JavaSerializer; is
 this expected?

  [DEBUG] [08/06/2014 22:19:11.836] [0e6fb647-7893-4328-a335-5e26e
 2ab080c-akka.actor.default-dispatcher-4] [akka.serialization.Serializat
 ion(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] Using serializer[
 *akka.serialization.JavaSerializer*] for message [akka.dispatch.sysmsg.
 Terminate]


  Besides, as my message is a String, I cannot find a related serialization
 log entry for java.lang.String. How can I know for sure protobuf is used for
 String?

  Are you sure you’re not CPU or something else -bound? And you should be
 able to stuff the network?


  What I mean is that 140 MB/s of network bandwidth is occupied, but the source
 message throughput is only 60 MB/s, so there is 80 MB/s of bandwidth I cannot explain.


 On Wednesday, August 6, 2014 11:30:49 PM UTC+8, Konrad Malawski wrote:

   Hi Sean,

 On the wire:
 You can look into https://github.com/akka/akka/t
 ree/master/akka-remote/src/main/protobuf for 

Re: [akka-user] Re: Actors behaving unexpectedly with Circuit breaker

2014-08-07 Thread √iktor Ҡlang
:( encoding hell


On Thu, Aug 7, 2014 at 11:10 AM, Jasper lme...@excilys.com wrote:

 I tried it and this looks very promising, since all processors now go
 into the Open state. However, without the reader I'm deep in encoding hell
 because my files are in US-ASCII and my DB is in UTF-8:

 invalid byte sequence for encoding UTF8: 0x00
 And I can't just sanitize the files beforehand... Anyway, I'm aware it's
 not really the place for this, so unless anyone has the solution, thanks
 for your help!


 Le jeudi 7 août 2014 07:27:04 UTC+2, Brett Wooldridge a écrit :

 It appears that you are using the PostgreSQL CopyManager, correct?
  Looking at QueryExecutorImpl, it appears that rollback() is trying to
 obtain a lock that was not released by the CopyManager.  I recommend using
 the CopyManager.copyIn() method that returns a CopyIn object, rather
 than using the convenience method that takes a reader.  Use
 writeToCopy() to pump the data in, and be sure to catch SQLException.
  If you get an SQLException, call cancelCopy() and retry or whatever
 your recovery scenario is; otherwise call endCopy().  I would have
 expected PostgreSQL to handle the severing of a Connection in the middle of
 a bulk copy better, but that is probably a question for the PostgreSQL
 group.

 Just my armchair diagnosis.

 On Wednesday, August 6, 2014 11:04:13 PM UTC+9, Jasper wrote:


  Sys-akka.actor.pinned-dispatcher-6 [WAITING]
  java.lang.Object.wait()Object.java:503
  org.postgresql.core.v3.QueryExecutorImpl.waitOnLock()QueryExecutorImpl.java:91
  org.postgresql.core.v3.QueryExecutorImpl.execute(Query, ParameterList, ResultHandler, int, int, int)QueryExecutorImpl.java:228
  org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(Query)AbstractJdbc2Connection.java:808
  org.postgresql.jdbc2.AbstractJdbc2Connection.rollback()AbstractJdbc2Connection.java:861
  com.zaxxer.hikari.proxy.ConnectionProxy.resetConnectionState()ConnectionProxy.java:192
  com.zaxxer.hikari.proxy.ConnectionProxy.close()ConnectionProxy.java:305
  java.lang.reflect.Method.invoke(Object, Object[])Method.java:606
  xx..util.Cleaning$.using(Object, Function1)Cleaning.scala:14
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp()FileProcessor.scala:75
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply()FileProcessor.scala:56
  xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply()FileProcessor.scala:56
  akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply()CircuitBreaker.scala:135
  akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply()CircuitBreaker.scala:135
  akka.pattern.CircuitBreaker$State$class.callThrough(CircuitBreaker$State, Function0)CircuitBreaker.scala:296
  akka.pattern.CircuitBreaker$Closed$.callThrough(Function0)CircuitBreaker.scala:345
  akka.pattern.CircuitBreaker$Closed$.invoke(Function0)CircuitBreaker.scala:354
  akka.pattern.CircuitBreaker.withCircuitBreaker(Function0)CircuitBreaker.scala:113
  akka.pattern.CircuitBreaker.withSyncCircuitBreaker(Function0)CircuitBreaker.scala:135
  xx..actors.FileProcessor$$anonfun$receive$1.applyOrElse(Object, Function1)FileProcessor.scala:55
  akka.actor.Actor$class.aroundReceive(Actor, PartialFunction, Object)Actor.scala:
  ...





-- 
Cheers,
√



Re: [akka-user] Sharding problem when restarting Cluster

2014-08-07 Thread Morten Kjetland
Hi,

Turns out there was a bug in our homebrew jdbc-snapshot implementation.

The loaded SelectedSnapshot was populated with Option(state) instead of
just the state, so the following lines in ShardCoordinator were not executed:

 case SnapshotOffer(_, state: State) ⇒
  log.debug("receiveRecover SnapshotOffer {}", state)
  persistentState = state

The snapshot was therefore never applied, so when it started receiving
events with a sequenceNr after the snapshot, it blew up.
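The failure mode can be illustrated with a small stand-alone sketch
(simplified stand-in types, not the real Akka classes; java.util.Optional
plays the role of Scala's Option here):

```java
public class SnapshotRecovery {
    public static class State {
        final String shards;
        public State(String shards) { this.shards = shards; }
    }

    // mirrors `case SnapshotOffer(_, state: State) => persistentState = state`:
    // the snapshot is applied only when the payload IS a State
    public static State recover(Object snapshotPayload) {
        if (snapshotPayload instanceof State) {
            return (State) snapshotPayload;
        }
        return null;   // an Option-wrapped State falls through; snapshot ignored
    }
}
```

A snapshot store must hand back the raw payload it was given; wrapping it
changes the runtime type, and every type-based match downstream quietly
stops firing.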

Thanks a lot for helping me in the right direction.

Best regards,
Morten Kjetland


On Wed, Aug 6, 2014 at 2:12 PM, Morten Kjetland m...@kjetland.com wrote:

 Thanks for the response,

 We are using a homebrew jdbc journal.

 I checked the journal and ShardRegionProxyRegistered is written to it.
 But I was unable to reproduce the problem now.
 It might be a problem related to snapshotting in combination with a bug in
 our jdbc journal.
 I'll try to reproduce it later and check the db again.

 I just saw that https://github.com/dnvriend/akka-persistence-jdbc was
 worked on during the summer, so I'll try to use that one instead of our
 own, and see if the problem goes away.

 Best regards,
 Morten Kjetland


 On Wed, Aug 6, 2014 at 12:40 PM, Konrad Malawski kt...@typesafe.com
 wrote:

 Hi Morten,
 thanks for reporting!
 Which journal plugin are you using?

 It looks like during replay it gets a ShardHomeAllocated without getting a
 ShardRegionProxyRegistered first - which makes it blow up (we must first
 register, then allocate the shard).

 One reason could be that the persist of ShardRegionProxyRegistered never
 succeeded...?
 Would you be able to verify whether your journal contains these events
 (or if SRPR is missing)?
 It would be great to track this down to its root. It *could*
 be a bug on our side, but hard to pinpoint exactly yet.

 --
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com







Re: [akka-user] Sharding problem when restarting Cluster

2014-08-07 Thread Konrad Malawski
Great to hear you've found the problem!
We'll provide a TCK for journal plugins with the next (minor) release, so I
suggest running your custom plugin through it to see if it's really valid :-)

Happy hakking!


On Thu, Aug 7, 2014 at 11:30 AM, Morten Kjetland m...@kjetland.com wrote:

 Hi,

 Turns out there was a bug in our homebrew jdbc-snapshot implementation.

 The loaded SelectedSnapshot was populated with Option(state) instead of
 just the state, so the following lines in ShardCoordinator were not executed:

  case SnapshotOffer(_, state: State) ⇒
   log.debug("receiveRecover SnapshotOffer {}", state)
   persistentState = state

 The snapshot was therefore never applied, so when it started receiving
 events with a sequenceNr after the snapshot, it blew up.

 Thanks a lot for helping me in the right direction.

 Best regards,
 Morten Kjetland


 On Wed, Aug 6, 2014 at 2:12 PM, Morten Kjetland m...@kjetland.com wrote:

 Thanks for the response,

 We are using a homebrew jdbc journal.

 I checked the journal and ShardRegionProxyRegistered is written to it.
 But I was unable to reproduce the problem now.
 It might be a problem related to snapshotting in combination with a bug in
 our jdbc journal.
 I'll try to reproduce it later and check the db again.

 I just saw that https://github.com/dnvriend/akka-persistence-jdbc was
 worked on during the summer, so I'll try to use that one instead of our
 own, and see if the problem goes away.

 Best regards,
 Morten Kjetland


 On Wed, Aug 6, 2014 at 12:40 PM, Konrad Malawski kt...@typesafe.com
 wrote:

 Hi Morten,
 thanks for reporting!
 Which journal plugin are you using?

 It looks like during replay it gets a ShardHomeAllocated without
 getting a ShardRegionProxyRegistered first – which makes it blow up (we must
 first register, then allocate the shard).

 One reason could be that the persist of ShardRegionProxyRegistered never
 succeeded...?
 Would you be able to verify whether your journal contains these events
 (or if SRPR is missing)?
 It would be great to track down the root of this problem. It *could*
 be a bug on our side, but it is hard to pinpoint exactly yet.

 --
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com








-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
I will try to get a minimal code set to reproduce this and will post updates
here.

On Thursday, August 7, 2014 5:47:51 PM UTC+8, Patrik Nordwall wrote:

 Could you please share the benchmark source code?
 /Patrik


 On Thu, Aug 7, 2014 at 11:45 AM, Endre Varga endre...@typesafe.com wrote:

 Hi Sean,

 This is interesting, we actually measured an increase with 2.3.4; in fact 
 the relevant changes in that version were directly targeted to increasing 
 performance somewhat. One important difference is that 2.3.4 prioritizes 
 internal Akka messages over user messages to improve stability. Can it be 
 the case that you have a lot of system message traffic between your 
 systems? Do you have lots of remote deployed actors maybe?
  
 -Endre


 On Thu, Aug 7, 2014 at 10:53 AM, Sean Zhong cloc...@gmail.com wrote:

 Hi, 

 When I upgraded from akka 2.2.3 to akka 2.3.4, I found that message
 throughput drops by about 30%.

 My benchmark looks like this:
 4 machines, each machine has 1 source actor and 1 target actor. Each
 source actor will randomly deliver a 100-byte message at a time to any
 target actor.

 I use default configuration. Are there default configuration changes in 
 akka reference.conf which result in this behavior?







 -- 

 Patrik Nordwall
 Typesafe http://typesafe.com/ -  Reactive apps on the JVM
 Twitter: @patriknw

  



Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong


 Can it be the case that you have a lot of system message traffic between 
 your systems? Do you have lots of remote deployed actors maybe?


All actors (4 source, 4 target) are created via remote deployment.


On Thursday, August 7, 2014 5:45:34 PM UTC+8, drewhk wrote:

 Hi Sean,

 This is interesting, we actually measured an increase with 2.3.4; in fact the 
 relevant changes in that version were directly targeted to increasing 
 performance somewhat. One important difference is that 2.3.4 prioritizes 
 internal Akka messages over user messages to improve stability. Can it be 
 the case that you have a lot of system message traffic between your 
 systems? Do you have lots of remote deployed actors maybe?

 -Endre


 On Thu, Aug 7, 2014 at 10:53 AM, Sean Zhong cloc...@gmail.com wrote:

 Hi, 

 When I upgraded from akka 2.2.3 to akka 2.3.4, I found that message
 throughput drops by about 30%.

 My benchmark looks like this:
 4 machines, each machine has 1 source actor and 1 target actor. Each
 source actor will randomly deliver a 100-byte message at a time to any
 target actor.

 I use default configuration. Are there default configuration changes in 
 akka reference.conf which result in this behavior?







Re: [akka-user] Akka Cluster is shutting down after upgrading to Play 2.3.2

2014-08-07 Thread √iktor Ҡlang
java.lang.NoSuchMethodError:
com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V

NSME is essentially always a sign of classpath issues: either having the
wrong version of the lib on the classpath, having the wrong version first on the
classpath, or not having the dependency on the classpath at all.
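A quick way to see which jar actually supplies the offending class at runtime is to ask its code source. A diagnostic sketch (the Guava class name is taken from the stack trace above; the printed path will depend on your own classpath):

```scala
// Print where a class was loaded from - useful for spotting the
// "wrong version first on the classpath" case.
def whichJar(className: String): String =
  try {
    val src = Class.forName(className).getProtectionDomain.getCodeSource
    if (src == null) "bootstrap classpath" else src.getLocation.toString
  } catch {
    case _: ClassNotFoundException => "not on classpath"
  }

println(whichJar("com.google.common.io.Closeables"))
```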


On Thu, Aug 7, 2014 at 1:21 PM, Manikandan Kaliyaperumal rkman...@gmail.com
 wrote:

 Hi,

 I have been running my Jobs using a distributed Worker with Master in Play
 2.2.3 without any issues until now.

 But it started causing issues after upgrading the Play version to 2.3.2.

 It starts up initially, but after a while it stops accepting any work
 and later shuts down automatically.

 Not sure what is causing this issue. I have changed only the Play
 version. The rest of the stack (and code base) remains the same - Akka 2.3.4
 with Persistence and remoting (2.3.4).


 [INFO] [08/06/2014 23:03:48.353]
 [QuoteClusterSystem-akka.actor.default-dispatcher-18]
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with
 UserTick:
 [INFO] [08/06/2014 23:03:48.353]
 [QuoteClusterSystem-akka.actor.default-dispatcher-18]
 [akka://QuoteClusterSystem/user/producer] Produced work: 755
 [INFO] [08/06/2014 23:03:53.375]
 [WorkersSystem-akka.actor.default-dispatcher-7]
 [akka://WorkersSystem/user/worker] No ack from master, retrying
 (00b5d3fe-d906-4b05-a9e9-74fbd248b630 -
 323649e0-33b5-49b4-8734-ce02eb383ad1)
 [INFO] [08/06/2014 23:03:53.376]
 [QuoteClusterSystem-akka.actor.default-dispatcher-3]
 [akka://QuoteClusterSystem/user/master/active] Work
 323649e0-33b5-49b4-8734-ce02eb383ad1 not in progress, reported as done by
 worker 00b5d3fe-d906-4b05-a9e9-74fbd248b630
 [INFO] [08/06/2014 23:03:53.378]
 [QuoteClusterSystem-akka.actor.default-dispatcher-18]
 [akka://QuoteClusterSystem/user/master/active] Work
 9d0376cb-b65e-499d-8787-4cc9ef51afbc not in progress, reported as done by
 worker 7ba10a7b-c7d8-41ad-a291-5f740fa3e422
 [INFO] [08/06/2014 23:03:53.381]
 [QuoteClusterSystem-akka.actor.default-dispatcher-14]
 [akka://QuoteClusterSystem/user/master/active] Work
 635ea450-e1f0-47dd-8ef3-5bca9069b217 not in progress, reported as done by
 worker 831637b1-e3a2-4cd1-bb70-0db0f4745156
 [INFO] [08/06/2014 23:03:53.421]
 [WorkersSystem-akka.actor.default-dispatcher-6]
 [akka://WorkersSystem/user/worker] No ack from master, retrying
 (33dab9fa-a8b8-44af-b637-5c9ea89cd174 -
 2eb29b49-953f-40ac-a70f-d10b810da6c6)
 [INFO] [08/06/2014 23:03:53.422]
 [QuoteClusterSystem-akka.actor.default-dispatcher-3]
 [akka://QuoteClusterSystem/user/master/active] Work
 2eb29b49-953f-40ac-a70f-d10b810da6c6 not in progress, reported as done by
 worker 33dab9fa-a8b8-44af-b637-5c9ea89cd174
 User  == 1000272547
 User  == 1000272120
 User  == 1000273974
 User  == 1000275489
 User  == 1000270351
 User  == 1000273884
 User  == 1000270803
 User  == 1000273539
 [INFO] [08/06/2014 23:03:56.384]
 [QuoteClusterSystem-akka.actor.default-dispatcher-7]
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with
 UserTick:

 [INFO] [08/06/2014 23:03:56.384]
 [QuoteClusterSystem-akka.actor.default-dispatcher-7]
 [akka://QuoteClusterSystem/user/producer] Produced work: 756
 [ERROR] [08/06/2014 23:03:56.400]
 [QuoteClusterSystem-akka.actor.default-dispatcher-16]
 [ActorSystem(QuoteClusterSystem)] Uncaught error from thread
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] shutting down JVM
 since 'akka.jvm-exit-on-fatal-error' is enabled
 java.lang.NoSuchMethodError:
 com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
 at org.iq80.leveldb.impl.MMapLogWriter.close(MMapLogWriter.java:83)
 at org.iq80.leveldb.impl.DbImpl.makeRoomForWrite(DbImpl.java:832)
 at org.iq80.leveldb.impl.DbImpl.writeInternal(DbImpl.java:658)
 at org.iq80.leveldb.impl.DbImpl.write(DbImpl.java:647)
 at
 akka.persistence.journal.leveldb.LeveldbStore$class.withBatch(LeveldbStore.scala:89)
 at
 akka.persistence.journal.leveldb.SharedLeveldbStore.withBatch(LeveldbStore.scala:127)
 at
 akka.persistence.journal.leveldb.LeveldbStore$class.writeMessages(LeveldbStore.scala:44)
 at
 akka.persistence.journal.leveldb.SharedLeveldbStore.writeMessages(LeveldbStore.scala:127)
 at
 akka.persistence.journal.leveldb.SharedLeveldbStore$$anonfun$receive$1.applyOrElse(LeveldbStore.scala:131)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at
 akka.persistence.journal.leveldb.SharedLeveldbStore.aroundReceive(LeveldbStore.scala:127)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at
 

Re: [akka-user] Synchronising responses sent from parents and children

2014-08-07 Thread Konrad Malawski
Hi Lawrence,
In general, exactly one entity in a distributed system should be
responsible for deciding about success / failure,
otherwise there always will be a race of some kind.

In your case though, the problem arises because the service actor does not
know if the transaction actor has completed the work,
so how about sending the response back through the transaction actor?

Also, in your case, can the transaction actor fail after sending its
response to the client actor? How would that happen (with a NonFatal
exception)?
I'd expect it to do `client ! stuff; context stop self`, is that not the
case?
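The `client ! stuff; context stop self` shape Konrad expects can be sketched as follows (actor and message names are invented for illustration): the transaction actor alone decides success or failure, replies to the client, and then stops itself so nothing can race the reply.

```scala
import akka.actor.{Actor, ActorRef, Status}
import scala.util.control.NonFatal

// Sketch only: the transaction actor is the single decision point.
// It replies to the client and then stops itself, so no further
// message from it can follow the response.
class TransactionActor(client: ActorRef) extends Actor {
  def receive = {
    case work =>
      try client ! s"done: $work"                       // stand-in for the real workload
      catch { case NonFatal(e) => client ! Status.Failure(e) }
      finally context stop self
  }
}
```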



On Thu, Aug 7, 2014 at 8:59 AM, Lawrence Wagerfield 
lawre...@dmz.wagerfield.com wrote:

 I have problem that involves synchronising outbound messages from a parent
 actor and its child actor. This particular problem is with regards to
 forwarding failure messages to clients.

 Here is the example:

 I have a service actor that receives a request from a client actor.

 The service actor creates a new child transaction actor to deal with said
 request, which then responds directly to the client actor after
 performing the work.

 If the transaction actor fails, it is stopped by the service actor which
 then sends a failure report to the client actor.

 The problem is that the client actor must now support receiving failures
 after receiving the response it is actually interested in - otherwise the
 potential 'post-workload' failures from the transaction actor may be
 deadlettered, or worse, misinterpreted by the client actor (i.e. as a
 failure for a subsequent transaction).

 I have considered an approach whereby the client actor must wait for the 
 transaction
 actor to terminate before safely continuing, since after that point, it
 can be guaranteed that no more messages will be received.

 Is there a common solution to this problem?
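The wait-for-termination idea maps onto Akka's deathwatch. A rough sketch (names invented): the client watches the transaction actor and treats `Terminated` as the safe point after which no further messages from it can arrive.

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Rough sketch: once Terminated for the watched transaction actor
// arrives, that actor can send nothing further, so the client can
// safely interpret whatever it has (or has not) received.
class ClientActor(transaction: ActorRef) extends Actor {
  context watch transaction
  var result: Option[Any] = None

  def receive = {
    case Terminated(`transaction`) =>
      println(s"transaction finished; final outcome = $result")
    case msg =>
      result = Some(msg) // response or failure report
  }
}
```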





-- 
Cheers,
Konrad 'ktoso' Malawski
hAkker @ Typesafe

http://typesafe.com



Re: [akka-user] Akka Cluster is shutting down after upgrading to Play 2.3.2

2014-08-07 Thread Manikandan Kaliyaperumal
Hi,

Thanks for the details. I am looking into this. But the shutdown starts
well ahead of this, running with no ack for some time before closing.
I will definitely look into this NSME issue, but could this cluster shutdown
be caused by something else? The same code runs fine in Play 2.2.3.

Thanks,
Mani

On Thursday, August 7, 2014 7:24:26 PM UTC+8, √ wrote:

 java.lang.NoSuchMethodError: 
 com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V

 NSME is essentially always a sign of classpath issues: either having the 
 wrong version of the lib on the classpath, having the wrong version first on the 
 classpath, or not having the dependency on the classpath at all.


 On Thu, Aug 7, 2014 at 1:21 PM, Manikandan Kaliyaperumal 
 rkma...@gmail.com wrote:

 Hi,

 I have been running my Jobs using a distributed Worker with Master in Play 
 2.2.3 without any issues until now.

 But it started causing issues after upgrading the Play version to 
 2.3.2.

 It starts up initially, but after a while it stops accepting any 
 work and later shuts down automatically.

 Not sure what is causing this issue. I have changed only the Play 
 version. The rest of the stack (and code base) remains the same - Akka 2.3.4 
 with Persistence and remoting (2.3.4).


 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:
 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 755
 [INFO] [08/06/2014 23:03:53.375] 
 [WorkersSystem-akka.actor.default-dispatcher-7] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (00b5d3fe-d906-4b05-a9e9-74fbd248b630 - 
 323649e0-33b5-49b4-8734-ce02eb383ad1)
 [INFO] [08/06/2014 23:03:53.376] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 323649e0-33b5-49b4-8734-ce02eb383ad1 not in progress, reported as done by 
 worker 00b5d3fe-d906-4b05-a9e9-74fbd248b630
 [INFO] [08/06/2014 23:03:53.378] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 9d0376cb-b65e-499d-8787-4cc9ef51afbc not in progress, reported as done by 
 worker 7ba10a7b-c7d8-41ad-a291-5f740fa3e422
 [INFO] [08/06/2014 23:03:53.381] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-14] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 635ea450-e1f0-47dd-8ef3-5bca9069b217 not in progress, reported as done by 
 worker 831637b1-e3a2-4cd1-bb70-0db0f4745156
 [INFO] [08/06/2014 23:03:53.421] 
 [WorkersSystem-akka.actor.default-dispatcher-6] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (33dab9fa-a8b8-44af-b637-5c9ea89cd174 - 
 2eb29b49-953f-40ac-a70f-d10b810da6c6)
 [INFO] [08/06/2014 23:03:53.422] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 2eb29b49-953f-40ac-a70f-d10b810da6c6 not in progress, reported as done by 
 worker 33dab9fa-a8b8-44af-b637-5c9ea89cd174
 User  == 1000272547
 User  == 1000272120
 User  == 1000273974
 User  == 1000275489
 User  == 1000270351
 User  == 1000273884
 User  == 1000270803
 User  == 1000273539
 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:

 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 756
 [ERROR] [08/06/2014 23:03:56.400] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] 
 [ActorSystem(QuoteClusterSystem)] Uncaught error from thread 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] shutting down JVM 
 since 'akka.jvm-exit-on-fatal-error' is enabled
 java.lang.NoSuchMethodError: 
 com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
 at 
 org.iq80.leveldb.impl.MMapLogWriter.close(MMapLogWriter.java:83)
 at org.iq80.leveldb.impl.DbImpl.makeRoomForWrite(DbImpl.java:832)
 at org.iq80.leveldb.impl.DbImpl.writeInternal(DbImpl.java:658)
 at org.iq80.leveldb.impl.DbImpl.write(DbImpl.java:647)
 at 
 akka.persistence.journal.leveldb.LeveldbStore$class.withBatch(LeveldbStore.scala:89)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore.withBatch(LeveldbStore.scala:127)
 at 
 akka.persistence.journal.leveldb.LeveldbStore$class.writeMessages(LeveldbStore.scala:44)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore.writeMessages(LeveldbStore.scala:127)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore$$anonfun$receive$1.applyOrElse(LeveldbStore.scala:131)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at 
 

Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong





https://lh5.googleusercontent.com/-HoimNVHLEVs/U-NbcKNJOYI/D7Q/kESkTGWcAdQ/s1600/001.png

Cluster: 4 machines, each machine has 1 source actor and 1 target actor,
all actors started remotely by a master.
Test scenario: each source actor will randomly deliver a 100-byte message
at a time to any target actor.

Test result:

akka 2.2.3: 500K 100-byte messages/second
akka 2.3.4: 320K 100-byte messages/second

The network bandwidth occupation reflects the message throughput: network
bandwidth is about 3.2x the message throughput (as the actorPath overhead is 221
bytes, i.e. about 2.2 times the message size in my test).

Network bandwidth usage ratio: akka 2.2.3 / akka 2.3.4 ~= 1.5

But the CPU ratio akka 2.2.3 / akka 2.3.4 = 40/15 ~= 2.7.

akka 2.2.3 uses much more CPU for the same throughput - does this mean akka
2.3.4 is more efficient? But then why does the message throughput drop on
akka 2.3.4?

I dumped the effective akka conf for 2.2.3 and 2.3.4 in the attachment.
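The 3.2x bandwidth-to-payload figure is consistent with the per-message framing reported earlier in the thread. A quick back-of-the-envelope check (byte counts from the wireshark capture; TCP/IP headers ignored):

```scala
val payloadBytes   = 100   // message body observed in the benchmark
val actorPathBytes = 221   // serialized target ActorPath observed on the wire
val wireBytes      = payloadBytes + actorPathBytes

// bandwidth / payload-throughput ratio
val ratio = wireBytes.toDouble / payloadBytes  // 3.21, matching the observed ~3.2x
```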


On Thursday, August 7, 2014 6:05:54 PM UTC+8, Sean Zhong wrote:

 Can it be the case that you have a lot of system message traffic between 
 your systems? Do you have lots of remote deployed actors maybe?


 All actors (4 source, 4 target) are created via remote deployment.


 On Thursday, August 7, 2014 5:45:34 PM UTC+8, drewhk wrote:

 Hi Sean,

 This is interesting, we actually measured an increase with 2.3.4; in fact 
 the relevant changes in that version were directly targeted to increasing 
 performance somewhat. One important difference is that 2.3.4 prioritizes 
 internal Akka messages over user messages to improve stability. Can it be 
 the case that you have a lot of system message traffic between your 
 systems? Do you have lots of remote deployed actors maybe?

 -Endre


 On Thu, Aug 7, 2014 at 10:53 AM, Sean Zhong cloc...@gmail.com wrote:

 Hi, 

 When I upgraded from akka 2.2.3 to akka 2.3.4, I found that message 
 throughput drops by about 30%.

 My benchmark looks like this:
 4 machines, each machine has 1 source actor and 1 target actor. Each 
 source actor will randomly deliver a 100-byte message at a time to any 
 target actor.

 I use default configuration. Are there default configuration changes in 
 akka reference.conf which result in this behavior?







akka.2.3.4.conf.json
Description: Binary data


akka2.2.3.conf.json
Description: Binary data


[akka-user] Re: Akka Cluster is shutting down after upgrading to Play 2.3.2

2014-08-07 Thread Michael Pisula
Hi Mani,

I had the same issue (NSME in the leveldb code) when I bumped an 
application to akka 2.3.4 and guava 17. As soon as I dropped the guava 
dependency everything was fine again. 
I just checked the pom and Play 2.3 seems to have a dependency on Guava 16, 
while the leveldb in akka-persistence 2.3.4 has a dependency on Guava 12 
(?). 
There seems to be an issue with backwards compatibility in guava...

I did not find a direct dependency on Guava in Play 2.2.3, so that might 
explain why everything goes haywire after the version bump. As for a 
possible solution, perhaps someone with more sbt knowledge than myself can 
suggest something... ;-) Worst case, I guess you could switch to a 
different journal implementation that does not depend on such an outdated 
guava version.
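If both sides can tolerate a single Guava version, sbt's dependency overrides are one way to force it. An untested sketch — the pinned version number is a guess and must be checked against what Play and the leveldb journal each actually require:

```scala
// build.sbt - force one Guava version across the whole dependency graph
// (sbt 0.13 syntax). Verify the resolved tree first, e.g. with the
// sbt-dependency-graph plugin, before settling on a version.
dependencyOverrides += "com.google.guava" % "guava" % "15.0" // hypothetical pin
```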

Cheers,
Michael

Am Donnerstag, 7. August 2014 13:21:12 UTC+2 schrieb Manikandan 
Kaliyaperumal:

 Hi,

 I have been running my Jobs using a distributed Worker with Master in Play 
 2.2.3 without any issues until now.

 But it started causing issues after upgrading the Play version to 2.3.2.

 It starts up initially, but after a while it stops accepting any work 
 and later shuts down automatically.

 Not sure what is causing this issue. I have changed only the Play 
 version. The rest of the stack (and code base) remains the same - Akka 2.3.4 
 with Persistence and remoting (2.3.4).


 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:
 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 755
 [INFO] [08/06/2014 23:03:53.375] 
 [WorkersSystem-akka.actor.default-dispatcher-7] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (00b5d3fe-d906-4b05-a9e9-74fbd248b630 - 
 323649e0-33b5-49b4-8734-ce02eb383ad1)
 [INFO] [08/06/2014 23:03:53.376] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 323649e0-33b5-49b4-8734-ce02eb383ad1 not in progress, reported as done by 
 worker 00b5d3fe-d906-4b05-a9e9-74fbd248b630
 [INFO] [08/06/2014 23:03:53.378] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 9d0376cb-b65e-499d-8787-4cc9ef51afbc not in progress, reported as done by 
 worker 7ba10a7b-c7d8-41ad-a291-5f740fa3e422
 [INFO] [08/06/2014 23:03:53.381] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-14] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 635ea450-e1f0-47dd-8ef3-5bca9069b217 not in progress, reported as done by 
 worker 831637b1-e3a2-4cd1-bb70-0db0f4745156
 [INFO] [08/06/2014 23:03:53.421] 
 [WorkersSystem-akka.actor.default-dispatcher-6] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (33dab9fa-a8b8-44af-b637-5c9ea89cd174 - 
 2eb29b49-953f-40ac-a70f-d10b810da6c6)
 [INFO] [08/06/2014 23:03:53.422] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 2eb29b49-953f-40ac-a70f-d10b810da6c6 not in progress, reported as done by 
 worker 33dab9fa-a8b8-44af-b637-5c9ea89cd174
 User  == 1000272547
 User  == 1000272120
 User  == 1000273974
 User  == 1000275489
 User  == 1000270351
 User  == 1000273884
 User  == 1000270803
 User  == 1000273539
 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:

 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 756
 [ERROR] [08/06/2014 23:03:56.400] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] 
 [ActorSystem(QuoteClusterSystem)] Uncaught error from thread 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] shutting down JVM 
 since 'akka.jvm-exit-on-fatal-error' is enabled
 java.lang.NoSuchMethodError: 
 com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
 at org.iq80.leveldb.impl.MMapLogWriter.close(MMapLogWriter.java:83)
 at org.iq80.leveldb.impl.DbImpl.makeRoomForWrite(DbImpl.java:832)
 at org.iq80.leveldb.impl.DbImpl.writeInternal(DbImpl.java:658)
 at org.iq80.leveldb.impl.DbImpl.write(DbImpl.java:647)
 at 
 akka.persistence.journal.leveldb.LeveldbStore$class.withBatch(LeveldbStore.scala:89)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore.withBatch(LeveldbStore.scala:127)
 at 
 akka.persistence.journal.leveldb.LeveldbStore$class.writeMessages(LeveldbStore.scala:44)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore.writeMessages(LeveldbStore.scala:127)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore$$anonfun$receive$1.applyOrElse(LeveldbStore.scala:131)
 at 

[akka-user] Re: Akka Cluster is shutting down after upgrading to Play 2.3.2

2014-08-07 Thread Michael Pisula
Just verified: the method Closeables.closeQuietly had been deprecated since at 
least Guava 14, and was removed in Guava 16. Guava 17 then reintroduced two 
closeQuietly methods, which have different parameter types though...

Cheers,
Michael

Am Donnerstag, 7. August 2014 13:42:46 UTC+2 schrieb Michael Pisula:

 Hi Mani,

 I had the same issue (NSME in the leveldb code) when I bumped an 
 application to akka 2.3.4 and guava 17. As soon as I dropped the guava 
 dependency everything was fine again. 
 I just checked the pom and Play 2.3 seems to have a dependency on Guava 
 16, while the leveldb in akka-persistence 2.3.4 has a dependency on Guava 
 12 (?). 
 There seems to be an issue with backwards compatibility in guava...

 I did not find a direct dependency on Guava in Play 2.2.3, so that might 
 explain why everything goes haywire after the version bump. As for a 
 possible solution, perhaps someone with more sbt knowledge than myself can 
 suggest something... ;-) Worst case, I guess you could switch to a 
 different journal implementation that does not depend on such an outdated 
 guava version.

 Cheers,
 Michael

 Am Donnerstag, 7. August 2014 13:21:12 UTC+2 schrieb Manikandan 
 Kaliyaperumal:

 Hi,

 I have been running my Jobs using a distributed Worker with Master in Play 
 2.2.3 without any issues until now.

 But it started causing issues after upgrading the Play version to 
 2.3.2.

 It starts up initially, but after a while it stops accepting any 
 work and later shuts down automatically.

 Not sure what is causing this issue. I have changed only the Play 
 version. The rest of the stack (and code base) remains the same - Akka 2.3.4 
 with Persistence and remoting (2.3.4).


 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:
 [INFO] [08/06/2014 23:03:48.353] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 755
 [INFO] [08/06/2014 23:03:53.375] 
 [WorkersSystem-akka.actor.default-dispatcher-7] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (00b5d3fe-d906-4b05-a9e9-74fbd248b630 - 
 323649e0-33b5-49b4-8734-ce02eb383ad1)
 [INFO] [08/06/2014 23:03:53.376] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 323649e0-33b5-49b4-8734-ce02eb383ad1 not in progress, reported as done by 
 worker 00b5d3fe-d906-4b05-a9e9-74fbd248b630
 [INFO] [08/06/2014 23:03:53.378] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-18] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 9d0376cb-b65e-499d-8787-4cc9ef51afbc not in progress, reported as done by 
 worker 7ba10a7b-c7d8-41ad-a291-5f740fa3e422
 [INFO] [08/06/2014 23:03:53.381] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-14] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 635ea450-e1f0-47dd-8ef3-5bca9069b217 not in progress, reported as done by 
 worker 831637b1-e3a2-4cd1-bb70-0db0f4745156
 [INFO] [08/06/2014 23:03:53.421] 
 [WorkersSystem-akka.actor.default-dispatcher-6] 
 [akka://WorkersSystem/user/worker] No ack from master, retrying 
 (33dab9fa-a8b8-44af-b637-5c9ea89cd174 - 
 2eb29b49-953f-40ac-a70f-d10b810da6c6)
 [INFO] [08/06/2014 23:03:53.422] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-3] 
 [akka://QuoteClusterSystem/user/master/active] Work 
 2eb29b49-953f-40ac-a70f-d10b810da6c6 not in progress, reported as done by 
 worker 33dab9fa-a8b8-44af-b637-5c9ea89cd174
 User  == 1000272547
 User  == 1000272120
 User  == 1000273974
 User  == 1000275489
 User  == 1000270351
 User  == 1000273884
 User  == 1000270803
 User  == 1000273539
 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produce.onReceive is called with 
 UserTick:

 [INFO] [08/06/2014 23:03:56.384] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-7] 
 [akka://QuoteClusterSystem/user/producer] Produced work: 756
 [ERROR] [08/06/2014 23:03:56.400] 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] 
 [ActorSystem(QuoteClusterSystem)] Uncaught error from thread 
 [QuoteClusterSystem-akka.actor.default-dispatcher-16] shutting down JVM 
 since 'akka.jvm-exit-on-fatal-error' is enabled
 java.lang.NoSuchMethodError: 
 com.google.common.io.Closeables.closeQuietly(Ljava/io/Closeable;)V
 at 
 org.iq80.leveldb.impl.MMapLogWriter.close(MMapLogWriter.java:83)
 at org.iq80.leveldb.impl.DbImpl.makeRoomForWrite(DbImpl.java:832)
 at org.iq80.leveldb.impl.DbImpl.writeInternal(DbImpl.java:658)
 at org.iq80.leveldb.impl.DbImpl.write(DbImpl.java:647)
 at 
 akka.persistence.journal.leveldb.LeveldbStore$class.withBatch(LeveldbStore.scala:89)
 at 
 akka.persistence.journal.leveldb.SharedLeveldbStore.withBatch(LeveldbStore.scala:127)
 at 
 

Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
I made a diff and tried to use the old akka 2.2.3 config when running with akka 
2.3.4.

Here is the diff:

--- akka.2.3.4.conf.json Thu Aug  7 19:33:37 2014
 +++ akka2.2.3.conf.json Thu Aug  7 19:32:35 2014
 @@ -13,13 +13,10 @@
},
default-dispatcher: {
  attempt-teamwork: on,
 -default-executor: {
 -  fallback: fork-join-executor
 -},
 -executor: default-executor,
 +executor: fork-join-executor,
  fork-join-executor: {
parallelism-factor: 3,
 -  parallelism-max: 4,
 +  parallelism-max: 64,
parallelism-min: 8
  },
  mailbox-capacity: -1,
 @@ -40,14 +37,14 @@
task-queue-size: -1,
task-queue-type: linked
  },
 -throughput: 1024,
 +throughput: 5,
  throughput-deadline-time: 0ms,
  type: Dispatcher
},
default-mailbox: {
  mailbox-capacity: 1000,
  mailbox-push-timeout-time: 10s,
 -mailbox-type: 
 akka.dispatch.SingleConsumerOnlyUnboundedMailbox,
 +mailbox-type: akka.dispatch.UnboundedMailbox,
  stash-capacity: -1
},
default-stash-dispatcher: {
 @@ -62,7 +59,6 @@
resizer: {
  backoff-rate: 0.1,
  backoff-threshold: 0.3,
 -enabled: off,
  lower-bound: 1,
  messages-per-resize: 10,
  pressure-threshold: 1,
 @@ -110,29 +106,12 @@
},
provider: akka.remote.RemoteActorRefProvider,
reaper-interval: 5s,
 -  router: {
 -type-mapping: {
 -  balancing-pool: akka.routing.BalancingPool,
 -  broadcast-group: akka.routing.BroadcastGroup,
 -  broadcast-pool: akka.routing.BroadcastPool,
 -  consistent-hashing-group: 
 akka.routing.ConsistentHashingGroup,
 -  consistent-hashing-pool: akka.routing.ConsistentHashingPool,
 -  from-code: akka.routing.NoRouter,
 -  random-group: akka.routing.RandomGroup,
 -  random-pool: akka.routing.RandomPool,
 -  round-robin-group: akka.routing.RoundRobinGroup,
 -  round-robin-pool: akka.routing.RoundRobinPool,
 -  scatter-gather-group: 
 akka.routing.ScatterGatherFirstCompletedGroup,
 -  scatter-gather-pool: 
 akka.routing.ScatterGatherFirstCompletedPool,
 -  smallest-mailbox-pool: akka.routing.SmallestMailboxPool
 -}
 -  },
serialization-bindings: {
  [B: bytes,
 -akka.actor.ActorSelectionMessage: akka-containers,
 +akka.actor.SelectionPath: akka-containers,
  akka.remote.DaemonMsgCreate: daemon-create,
 -com.google.protobuf.GeneratedMessage: proto,
 -java.io.Serializable: java,
 +com.google.protobuf_spark.GeneratedMessage: proto,
 +java.io.Serializable: java
},
serialize-creators: off,
serialize-messages: off,
 @@ -149,13 +128,9 @@
unstarted-push-timeout: 10s
  },
  daemonic: off,
 -event-handler-startup-timeout: 5s,
 -event-handlers: [
 -  akka.event.Logging$DefaultLogger
 -],
 -extensions: [
 -  com.romix.akka.serialization.kryo.KryoSerializationExtension$
 -],
 +event-handler-startup-timeout: -1s,
 +event-handlers: [],
 +extensions: [],
  home: ,
  io: {
default-backlog: 1000,
 @@ -228,37 +203,18 @@
  gremlin: akka.remote.transport.FailureInjectorProvider,
  trttl: akka.remote.transport.ThrottlerProvider
},
 -  backoff-interval: 5 ms,
 -  backoff-remote-dispatcher: {
 -executor: fork-join-executor,
 -fork-join-executor: {
 -  parallelism-max: 2,
 -  parallelism-min: 2
 -},
 -type: Dispatcher
 -  },
 +  backoff-interval: 0.01 s,
command-ack-timeout: 30 s,
 -  default-remote-dispatcher: {
 -executor: fork-join-executor,
 -fork-join-executor: {
 -  parallelism-max: 2,
 -  parallelism-min: 2
 -},
 -type: Dispatcher
 -  },
enabled-transports: [
  akka.remote.netty.tcp
],
flush-wait-on-shutdown: 2 s,
 -  gremlin: {
 -debug: off
 -  },
 -  initial-system-message-delivery-timeout: 3 m,
 -  log-buffer-size-exceeding: 5,
 +  gate-invalid-addresses-for: 60 s,
log-frame-size-exceeding: off,
log-received-messages: off,
log-remote-lifecycle-events: on,
log-sent-messages: off,
 +  maximum-retries-in-window: 3,
netty: {
  ssl: {
applied-adapters: [],
 @@ -360,26 +316,29 @@
write-buffer-low-water-mark: 0b
  }
},
 -  prune-quarantine-marker-after: 5 d,
 +  quarantine-systems-for: 60s,
require-cookie: off,
 -  resend-interval: 2 s,
 -  retry-gate-closed-for: 5 s,
 +  resend-interval: 1 s,
 +  retry-gate-closed-for: 0 s,
 + 

Re: [akka-user] Concept for the pressure queue

2014-08-07 Thread Prakhyat Mallikarjun
Hi Konrad,

I appreciate your response.

There are two approaches to work processing: pull or push. Pull seems to be 
the better approach for processing work 
(http://blog.goconspire.com/post/64901258135/akka-at-conspire-part-5-the-importance-of-pulling).

Note: pull-based work processing sample 
code: https://github.com/derekwyatt/akka-worker-pull.

In the pull-based approach, if the producer is creating more work, we can implement 
logic to add more worker actors to the system. These additional workers 
will take the surge of work created by a fast producer. This maintains the 
system's capacity to handle sudden load.
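
The pull idea above can be sketched without Akka (all names here are illustrative, 
not from the linked sample): workers take the next item only when they are ready, so 
"adding more workers" is just starting more pull loops against the shared queue.

```scala
import java.util.concurrent.{ConcurrentLinkedQueue, Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

// Shared work queue: the producer only enqueues; it never pushes to a worker.
val work = new ConcurrentLinkedQueue[Integer]()
(1 to 100).foreach(i => work.add(i))

val done = new AtomicInteger(0)
val workerCount = 4                       // "add more workers" = raise this
val pool = Executors.newFixedThreadPool(workerCount)

(1 to workerCount).foreach { _ =>
  pool.execute(new Runnable {
    def run(): Unit = {
      var item = work.poll()              // pull: ask for work when ready
      while (item != null) {
        done.incrementAndGet()            // "process" the item
        item = work.poll()
      }
    }
  })
}

pool.shutdown()
pool.awaitTermination(5, TimeUnit.SECONDS)
println(s"processed ${done.get()} items")
```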

Nothing against akka streams, but its back pressure will force the producer 
to push work at a slower rate.

What are your thoughts on taking the pull-based processing approach, with the 
capability of adding more workers or worker nodes on the fly?   

-Prakhyat M M

On Wednesday, 6 August 2014 15:32:04 UTC+5:30, Akka Team wrote:

 Hi Prakhyat,
 Just creating more actors can help with throughput, but if you don't 
 signal to a faster producer that it should slow down producing, it will 
 eventually fill up all inboxes anyway.

 This is one of the reasons we're working on reactive-streams 
 http://reactive-streams.org which make dealing with back-pressure a 
 built-in thing (which is not true for Actors – they're a very generic tool).

 -- konrad


 On Wed, Aug 6, 2014 at 9:36 AM, Prakhyat Mallikarjun prakh...@gmail.com 
 wrote:

 Hi Derek,

 Instead of PressureQueue, why you did not choose approach of creating 
 more actors to take flood of requests from client? 


 On Wednesday, 25 April 2012 17:55:25 UTC+5:30, Derek Wyatt wrote:

 Don't sweat it.  It's just a concept and I'm not in any rush to use it 
 any time soon.  I'm glad you remembered, though.

 Cheers,
 Derek

 On 2012-04-25, at 8:16 AM, Viktor Klang wrote:

 Hey Derek,


 sorry for the delayed response on this, everyone's been busy with 
 ScalaDays.
 I'll get to this as soon as my backlog decreases.

 Thanks for staying on top of things!

 Cheers,
 √


 On Mon, Apr 16, 2012 at 1:52 PM, Derek Wyatt de...@derekwyatt.org 
 wrote:

 Hi guys,

 A few days ago I talked about a queue that slows down its clients when 
 it starts to get full.  I decided to throw together a quick concept 
 example 
 so that I could get some feedback.

 The code can be found at: https://github.com/derekwyatt/PressureQueue-Concept

 If you've got some time, I'd appreciate people taking a look at it to 
 see what they think.  I'll summarize the purpose here:

 When your Actor is working with an unfriendly API or a naturally 
 limited system that doesn't deal well with tons of load, it would be nice 
 if we can slow the clients down.  If the client wants an answer within the 
 next 10 seconds, but the queue's size dictates that it's going to take 
 three hours anyway, then there's no point in letting the enqueue occur.  
 In 
 fact, letting it occur means that the work it's going to have to do 
 because 
 of that will be useless - the client is going to retry in 20 seconds 
 anyway, or it's going to ignore the result.  This sort of thing can lead 
 to 
 load death, or at the very least load spikes that aren't desirable.

 By purposely blocking the clients (yes, this is a blocking API) we get 
 a natural throttling.  If things are coded correctly, then the slowing 
 down of the enqueue call propagates all the way back to the clients…

 As long as the WorkingActor has threads to do its work, the clients can 
 block and not starve the WorkingActor.  As the WorkingActor clears out its 
 Mailbox, the enqueues start to speed up again, and slow down as the queue 
 fills up.  Smooth load curves instead of sharp spikes.  That's the idea.
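
 As a rough illustration of that idea (a toy sketch, not Derek's actual 
 implementation -- see the repo linked above for that), here is an enqueue whose 
 blocking time grows with queue depth:

```scala
import java.util.concurrent.ConcurrentLinkedQueue

// Toy sketch: enqueue sleeps in proportion to how full the queue is,
// so a fast producer is throttled harder as the backlog grows.
class PressureQueue[T](softCapacity: Int, maxDelayMs: Long) {
  private val q = new ConcurrentLinkedQueue[T]()

  def enqueue(item: T): Unit = {
    val fill = math.min(q.size.toDouble / softCapacity, 1.0)
    val delayMs = (fill * maxDelayMs).toLong
    if (delayMs > 0) Thread.sleep(delayMs)  // the "pressure": the caller blocks
    q.add(item)
  }

  def dequeue(): Option[T] = Option(q.poll())
  def size: Int = q.size
}

val pq = new PressureQueue[String](softCapacity = 10, maxDelayMs = 50)
(1 to 5).foreach(i => pq.enqueue(s"job-" + i))  // nearly instant while near-empty
println(pq.size)
```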

 Comments and flames are welcome :)



 -- 
 You received this message because you are subscribed to the Google 
 Groups Akka User List group.
 To post to this group, send email to akka...@googlegroups.com.
 To unsubscribe from this group, send email to akka-user+...@googlegroups.com.

 For more options, visit this group at http://groups.google.com/group/akka-user?hl=en.


  -- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
 --- 
 You received this message because you are subscribed to the Google Groups 
 Akka User List group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to akka-user+...@googlegroups.com.

 To post to this group, send email to akka...@googlegroups.com.
 Visit this group at http://groups.google.com/group/akka-user.
 For more options, visit https://groups.google.com/d/optout.




 -- 
 Akka Team
 Typesafe - The software stack for applications that scale
 Blog: letitcrash.com
 Twitter: @akkateam
  

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 

Re: [akka-user] [akka-stream] One Consumer with multiple Producers

2014-08-07 Thread 09goral
Some kind of intermediate actor that would merge incoming messages and 
produce one stream from them? 

On Thursday, August 7, 2014 13:15:29 UTC+2, √ wrote:

 Hi!

 A Consumer (in the future, Subscriber) can only be connected to one 
 producer, and as such you need to merge multiple producers (in the future, 
 Publisher) into one. Choose between merge/concat/zip depending on desired 
 semantics.
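
 For intuition on the semantics of those three combinators (shown here with plain 
 Scala collections rather than the stream API, so this is only an analogy):

```scala
// concat: all of a, then all of b (order preserved across sources)
// zip:    pairwise tuples, stopping at the shorter source
// merge:  has no eager-collection analogue -- elements from both sources
//         interleave in whatever order they arrive
val a = List(1, 2, 3)
val b = List(10, 20, 30)

val concatenated = a ++ b
val zipped       = a.zip(b)
println(concatenated)
println(zipped)
```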


 On Thu, Aug 7, 2014 at 1:11 PM, Christian Douven chdo...@gmail.com 
 wrote:

 Hello,

 is it possible to connect multiple producers to a single consumer?


 I have the following use case:

 Multiple producers create events which I want to be in a single event 
 stream.



 Currently I create my Consumer like this:


 val (eventConsumer, eventProducer) = Duct[MyEvent].build(FlowMaterializer(MaterializerSettings()))


 Then I attach a consumer to the event producer to have all events 
 consumed, since I don't need any Events that get produced when there is no 
 one listening: 

 eventProducer.produceTo(new DevNullConsumer())


 Now that what fails:

 eventProducer.produceTo(testConsumer)

 Flow(List[MyEvent](event1)).toProducer(FlowMaterializer(MaterializerSettings())).produceTo(eventConsumer)
 Flow(List[MyEvent](event2)).toProducer(FlowMaterializer(MaterializerSettings())).produceTo(eventConsumer)





 TestConsumer will receive only the event1.


 Are Consumer[T] not meant to be fed by multiple producers?

 I somehow have the feeling, that the first Flow completes the whole 
 thing.



 Best regards

 Christian








 -- 
 Cheers,
 √
  

-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups Akka 
User List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Re: Multiple Futures inside Actor's receive

2014-08-07 Thread Soumya Simanta
Michael, 

Thank you for your response. 
Here is what I'm struggling with. 

In order to use the pipeTo pattern I'll need access to the transaction (tran) and 
the FIRST Future (zf) in the actor that I'm piping the Future to, because 
the SECOND Future depends on the value (z) of the FIRST. How can I do that? 

//SECOND Future, depends on result of FIRST Future 
  val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z) 
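
One way to sidestep the nesting, sketched with plain scala.concurrent Futures (the 
zcard/zrange stand-ins below are hypothetical placeholders, not the redis client 
API): flatMap the dependent calls into a single Future, and pipe only that combined 
Future to the actor.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-ins for tran.zcard / tran.zrange (illustrative only).
def zcard(key: String): Future[Long] = Future(3L)
def zrange(key: String, from: Long, to: Long): Future[Seq[String]] =
  Future(Seq(s"item-$from", s"item-$to"))

// The SECOND call uses the FIRST call's result, but the caller
// only ever sees one combined Future.
val combined: Future[Seq[String]] =
  zcard("dummyzset").flatMap(z => zrange("dummyzset", z - 1, z))

// Inside an actor you would write `combined pipeTo self` instead of awaiting.
println(Await.result(combined, 1.second))
```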



On Thursday, August 7, 2014 3:51:17 AM UTC-4, Michael Pisula wrote:

 Sorry, still early. Missed the part where you said that you don't want to 
 use pipeTo because of the transaction. Not sure if that is a problem at all, 
 though. From what I see, you use the transaction to make sure nothing 
 happens with the values between your zcard and zrange calls; afterwards it's 
 only modification of the internal state. If you just pipe that to a 
 separate actor containing the state, I would expect things to work fine. Or 
 do you want the transaction to ensure that updates to the internal state are 
 synced with the reads from Redis? Then I am not sure that it will work like 
 you implemented it.

 Cheers

 On Thursday, August 7, 2014 09:04:13 UTC+2, Michael Pisula wrote:

 Instead of mutating state from within the future I would use the pipeTo 
 pattern. Using pipeTo you can send the result of a future to an actor (e.g. 
 to self). There you can safely change state, as you are in 
 single-threaded-illusion-land again...

 HTH

 Cheers,
 Michael

 On Thursday, August 7, 2014 07:25:05 UTC+2, Soumya Simanta wrote:

 I'm cross posting this here for better coverage. 


 http://stackoverflow.com/questions/25174504/multiple-future-calls-in-an-actors-receive-method


 I'm trying to make two external calls (to a Redis database) inside an 
 Actor's receive method. Both calls return a Future and I need the 
 result of the first Future inside the second. I'm wrapping both calls 
 inside a Redis transaction to avoid anyone else from modifying the value in 
 the database while I'm reading it.

 The internal state of the actor is updated based on the value of the 
 second Future.

 Here is what my current code looks like, which I know is incorrect because I'm 
 updating the internal state of the actor inside a Future.onComplete
 callback.

 I cannot use the pipeTo pattern because both Futures have to 
 be in a transaction. If I use Await for the first Future then my 
 receive method will *block*. Any idea how to fix this?

 My *second question* is related to how I'm using Futures. Is this usage 
 of Futures below correct? Is there a better way of dealing with 
 multiple Futures in general? Imagine if there were 3 or 4 Futures, each 
 depending on the previous one.

 import akka.actor.{Props, ActorLogging, Actor}
 import akka.util.ByteString
 import redis.RedisClient
 import scala.concurrent.Future
 import scala.util.{Failure, Success}

 object GetSubscriptionsDemo extends App {
   val akkaSystem = akka.actor.ActorSystem("redis-demo")
   val actor = akkaSystem.actorOf(Props(new SimpleRedisActor("localhost",
     "dummyzset")), name = "simpleactor")
   actor ! UpdateState
 }

 case object UpdateState

 class SimpleRedisActor(ip: String, key: String) extends Actor with
   ActorLogging {

   //mutable state that is updated on a periodic basis
   var mutableState: Set[String] = Set.empty

   //required by Future
   implicit val ctx = context.dispatcher

   var rClient = RedisClient(ip)(context.system)

   def receive = {
     case UpdateState => {
       log.info("Start of UpdateState ...")

       val tran = rClient.transaction()

       val zf: Future[Long] = tran.zcard(key)  //FIRST Future
       zf.onComplete {

         case Success(z) => {
           //SECOND Future, depends on result of FIRST Future
           val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
           rf.onComplete {
             case Success(x) => {
               //convert ByteString to UTF8 String
               val v = x.map(_.utf8String)
               log.info(s"Updating state with $v")
               //update actor's internal state inside callback for a Future
               //IS THIS CORRECT ?
               mutableState ++ v
             }
             case Failure(e) => {
               log.warning("ZRANGE future failed ...", e)
             }
           }
         }
         case Failure(f) => log.warning("ZCARD future failed ...", f)
       }
       tran.exec()

     }
   }
 }

 This compiles, but when I run it, it gets stuck.

 2014-08-07 INFO [redis-demo-akka.actor.default-dispatcher-3] a.e.s.Slf4jLogger - Slf4jLogger started
 2014-08-07 04:38:35.106UTC INFO [redis-demo-akka.actor.default-dispatcher-3] e.c.s.e.a.g.SimpleRedisActor - Start of UpdateState ...
 2014-08-07 04:38:35.134UTC INFO [redis-demo-akka.actor.default-dispatcher-8]

 ...



-- 
  Read the docs: http://akka.io/docs/
  Check the FAQ: 
 http://doc.akka.io/docs/akka/current/additional/faq.html
  Search the archives: 

Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
I finally narrowed down the config item: akka.actor.remote.use-dispatcher

The default setting for akka 2.3.4 remote is 
akka.actor.remote.use-dispatcher = akka.remote.default-remote-dispatcher; 
when I change it to akka.actor.remote.use-dispatcher = "", the 
performance is the same as or better than akka 2.2.3.




On Thursday, August 7, 2014 7:57:28 PM UTC+8, √ wrote:

 I hope you didn't try to run the config verbatim as it works differently 
 between the versions, but what happened when you updated to use the 2.2.3 
 values for the options that were still there in 2.3.4?


 On Thu, Aug 7, 2014 at 1:55 PM, Sean Zhong cloc...@gmail.com 
 wrote:

 I made a diff and tried to use the old akka 2.2.3 config when running with 
 akka 2.3.4.

 Here is the diff:

 [... config diff quoted earlier in this thread, snipped ...]

Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
The default-remote-dispatcher config is:
### Default dispatcher for the remoting subsystem

default-remote-dispatcher {
  type = Dispatcher
  executor = fork-join-executor
  fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 2
parallelism-max = 2
  }
}

Is it possible that the values set here are too conservative, which 
impacts performance? I will tune this and see what happens...
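
If tuning, an override of the defaults above would go in application.conf along 
these lines (the values here are guesses to experiment with, not recommendations):

```
# Hypothetical application.conf override: raise the remote dispatcher's
# thread cap (the 2.3.x default shown above is parallelism-max = 2).
akka.remote.default-remote-dispatcher {
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 2.0
    parallelism-max = 10
  }
}
```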



On Thursday, August 7, 2014 8:35:50 PM UTC+8, Sean Zhong wrote:

 I finally narrowed down the config item: akka.actor.remote.use-dispatcher

 The default setting for akka 2.3.4 remote is 
 akka.actor.remote.use-dispatcher = akka.remote.default-remote-dispatcher; 
 when I change it to akka.actor.remote.use-dispatcher = "", the 
 performance is the same as or better than akka 2.2.3.




 On Thursday, August 7, 2014 7:57:28 PM UTC+8, √ wrote:

 I hope you didn't try to run the config verbatim as it works differently 
 between the versions, but what happened when you updated to use the 2.2.3 
 values for the options that were still there in 2.3.4?


 On Thu, Aug 7, 2014 at 1:55 PM, Sean Zhong cloc...@gmail.com wrote:

 I made a diff and tried to use the old akka 2.2.3 config when running with 
 akka 2.3.4.

 Here is the diff:

 [... config diff quoted earlier in this thread, snipped ...]

Re: [akka-user] [akka-stream] One Consumer with multiple Producers

2014-08-07 Thread Christian Douven
Something like that.

This somehow looks like I have to know right from the beginning how many 
streams there are to merge.

Unfortunately my components come and go. They don't know any Flow to merge 
with, they just know the consumer.

They do some work, emit some events and then eventually die...

So what I would really like is a consumer that can consume from many 
producers.


Maybe I'm on the wrong track.


On Thursday, August 7, 2014 14:23:31 UTC+2, √ wrote:

 Like this:

 val mat = FlowMaterializer(MaterializerSettings())
 Flow(List[MyEvent](event1)).merge(Flow(List[MyEvent](event2)).toProducer(mat)).produceTo(eventConsumer, mat)


 On Thu, Aug 7, 2014 at 2:15 PM, 09goral gor...@gmail.com 
 wrote:

 Some kind of intermediate actor that would merge incoming messages and 
 produce one stream from them? 

 On Thursday, August 7, 2014 13:15:29 UTC+2, √ wrote:

 Hi!

 A Consumer (in the future, Subscriber) can only be connected to one 
 producer, and as such you need to merge multiple producers (in the future, 
 Publisher) into one. Choose between merge/concat/zip depending on desired 
 semantics.


 On Thu, Aug 7, 2014 at 1:11 PM, Christian Douven chdo...@gmail.com 
 wrote:

 Hello,

 is it possible to connect multiple producers to a single consumer?


 I have the following use case:

 Multiple producers create events which I want to be in a single event 
 stream.



 Currently I create my Consumer like this:


 val (eventConsumer, eventProducer) = Duct[MyEvent].build(FlowMaterializer(MaterializerSettings()))


 Then I attach a consumer to the event producer to have all events 
 consumed, since I don't need any Events that get produced when there is no 
 one listening: 

 eventProducer.produceTo(new DevNullConsumer())


 Now that what fails:

 eventProducer.produceTo(testConsumer)

 Flow(List[MyEvent](event1)).toProducer(FlowMaterializer(MaterializerSettings())).produceTo(eventConsumer)
 Flow(List[MyEvent](event2)).toProducer(FlowMaterializer(MaterializerSettings())).produceTo(eventConsumer)





 TestConsumer will receive only the event1.


 Are Consumer[T] not meant to be fed by multiple producers?
  
 I somehow have the feeling, that the first Flow completes the whole 
 thing.



 Best regards

 Christian








 -- 
 Cheers,
 √
  




 -- 
 Cheers,
 √
  



Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Sean Zhong
I finally found out it was all about the remote dispatcher parallelism setting! When 
I changed parallelism-max to 10, the performance greatly improved. 

  default-remote-dispatcher {
    executor = fork-join-executor
    fork-join-executor {
      parallelism-max = 10
      parallelism-factor = 2.0
      parallelism-min = 2
    }
  }
  use-dispatcher = akka.remote.default-remote-dispatcher

Sorry for taking so long to find the answer; thanks for your help!

On Thursday, August 7, 2014 8:47:26 PM UTC+8, Sean Zhong wrote:

 The default-remote-dispatcher config is:
 ### Default dispatcher for the remoting subsystem

 default-remote-dispatcher {
   type = Dispatcher
   executor = fork-join-executor
   fork-join-executor {
 # Min number of threads to cap factor-based parallelism number to
 parallelism-min = 2
 parallelism-max = 2
   }
 }

 


 Is it possible that the values set here are too conservative, which 
 impacts performance? I will tune this and see what happens...



 On Thursday, August 7, 2014 8:35:50 PM UTC+8, Sean Zhong wrote:

 I finally narrowed down the config item: akka.actor.remote.use-dispatcher

 The default setting for Akka 2.3.4 remote is 
 akka.actor.remote.use-dispatcher = akka.remote.default-remote-dispatcher; 
 when I change it to akka.actor.remote.use-dispatcher = "" (empty), the 
 performance is the same as or better than Akka 2.2.3.




 On Thursday, August 7, 2014 7:57:28 PM UTC+8, √ wrote:

 I hope you didn't try to run the config verbatim as it works differently 
 between the versions, but what happened when you updated to use the 2.2.3 
 values for the options that were still there in 2.3.4?


 On Thu, Aug 7, 2014 at 1:55 PM, Sean Zhong cloc...@gmail.com wrote:

 I made a diff, and try to use old akka 2.2.3 config when running with 
 akka 2.3.4

 Here is the diff:

 --- akka.2.3.4.conf.json Thu Aug  7 19:33:37 2014
 +++ akka2.2.3.conf.json Thu Aug  7 19:32:35 2014
 @@ -13,13 +13,10 @@
},
default-dispatcher: {
  attempt-teamwork: on,
 -default-executor: {
 -  fallback: fork-join-executor
 -},
 -executor: default-executor,
 +executor: fork-join-executor,
  fork-join-executor: {
parallelism-factor: 3,
 -  parallelism-max: 4,
 +  parallelism-max: 64,
parallelism-min: 8
  },
  mailbox-capacity: -1,
 @@ -40,14 +37,14 @@
task-queue-size: -1,
task-queue-type: linked
  },
 -throughput: 1024,
 +throughput: 5,
  throughput-deadline-time: 0ms,
  type: Dispatcher
},
default-mailbox: {
  mailbox-capacity: 1000,
  mailbox-push-timeout-time: 10s,
 -mailbox-type: akka.dispatch.SingleConsumerOnlyUnboundedMailbox,
 +mailbox-type: akka.dispatch.UnboundedMailbox,
  stash-capacity: -1
},
default-stash-dispatcher: {
 @@ -62,7 +59,6 @@
resizer: {
  backoff-rate: 0.1,
  backoff-threshold: 0.3,
 -enabled: off,
  lower-bound: 1,
  messages-per-resize: 10,
  pressure-threshold: 1,
 @@ -110,29 +106,12 @@
},
provider: akka.remote.RemoteActorRefProvider,
reaper-interval: 5s,
 -  router: {
 -type-mapping: {
 -  balancing-pool: akka.routing.BalancingPool,
 -  broadcast-group: akka.routing.BroadcastGroup,
 -  broadcast-pool: akka.routing.BroadcastPool,
 -  consistent-hashing-group: akka.routing.ConsistentHashingGroup,
 -  consistent-hashing-pool: akka.routing.ConsistentHashingPool,
 -  from-code: akka.routing.NoRouter,
 -  random-group: akka.routing.RandomGroup,
 -  random-pool: akka.routing.RandomPool,
 -  round-robin-group: akka.routing.RoundRobinGroup,
 -  round-robin-pool: akka.routing.RoundRobinPool,
 -  scatter-gather-group: akka.routing.ScatterGatherFirstCompletedGroup,
 -  scatter-gather-pool: akka.routing.ScatterGatherFirstCompletedPool,
 -  smallest-mailbox-pool: akka.routing.SmallestMailboxPool
 -}
 -  },
serialization-bindings: {
  [B: bytes,
 -akka.actor.ActorSelectionMessage: akka-containers,
 +akka.actor.SelectionPath: akka-containers,
  akka.remote.DaemonMsgCreate: daemon-create,
 -com.google.protobuf.GeneratedMessage: proto,
 -java.io.Serializable: java,
 +com.google.protobuf_spark.GeneratedMessage: proto,
 +java.io.Serializable: java
},
serialize-creators: off,
serialize-messages: off,
 @@ -149,13 +128,9 @@
unstarted-push-timeout: 10s
  },
  daemonic: off,
 -event-handler-startup-timeout: 5s,
 -event-handlers: [
 -  akka.event.Logging$DefaultLogger
 -],
 -extensions: [
 -  

Re: [akka-user] performance drops when upgrade from akka 2.2.3 to 2.3.4 with default config

2014-08-07 Thread Endre Varga
Sean,

Yes, the separate dispatcher might be the cause. This default was added to
protect the remoting subsystem from load in user space (learning from some
past problems). Feel free to reconfigure it to anything that works for you
-- the default might indeed be conservative.
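
For reference, such an override could look like this in application.conf (the numbers below mirror those quoted in this thread; they are illustrative, not a recommendation):

```
akka.remote.default-remote-dispatcher {
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 2.0
    parallelism-max = 10
  }
}
```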

-Endre


On Thu, Aug 7, 2014 at 2:47 PM, Sean Zhong clock...@gmail.com wrote:

 The default-remote-dispatcher config is:
 ### Default dispatcher for the remoting subsystem

 default-remote-dispatcher {
   type = Dispatcher
   executor = fork-join-executor
   fork-join-executor {
 # Min number of threads to cap factor-based parallelism number to
 parallelism-min = 2
 parallelism-max = 2
   }
 }

 Is it possible that the values set here are too conservative, which
 impacts performance? I will tune this and see what happens...



 On Thursday, August 7, 2014 8:35:50 PM UTC+8, Sean Zhong wrote:

 I finally narrowed down the config item: akka.actor.remote.use-dispatcher

 The default setting for Akka 2.3.4 remote is akka.actor.remote.use-dispatcher
 = akka.remote.default-remote-dispatcher; when I change it
 to akka.actor.remote.use-dispatcher = "" (empty), the performance is the same
 as or better than Akka 2.2.3.




 On Thursday, August 7, 2014 7:57:28 PM UTC+8, √ wrote:

 I hope you didn't try to run the config verbatim as it works differently
 between the versions, but what happened when you updated to use the 2.2.3
 values for the options that were still there in 2.3.4?


 On Thu, Aug 7, 2014 at 1:55 PM, Sean Zhong cloc...@gmail.com wrote:

 I made a diff, and try to use old akka 2.2.3 config when running with
 akka 2.3.4

 Here is the diff:

 --- akka.2.3.4.conf.json Thu Aug  7 19:33:37 2014
 +++ akka2.2.3.conf.json Thu Aug  7 19:32:35 2014
 @@ -13,13 +13,10 @@
},
default-dispatcher: {
  attempt-teamwork: on,
 -default-executor: {
 -  fallback: fork-join-executor
 -},
 -executor: default-executor,
 +executor: fork-join-executor,
  fork-join-executor: {
parallelism-factor: 3,
 -  parallelism-max: 4,
 +  parallelism-max: 64,
parallelism-min: 8
  },
  mailbox-capacity: -1,
 @@ -40,14 +37,14 @@
task-queue-size: -1,
task-queue-type: linked
  },
 -throughput: 1024,
 +throughput: 5,
  throughput-deadline-time: 0ms,
  type: Dispatcher
},
default-mailbox: {
  mailbox-capacity: 1000,
  mailbox-push-timeout-time: 10s,
 -mailbox-type: akka.dispatch.SingleConsumerOnlyUnboundedMailbox,
 +mailbox-type: akka.dispatch.UnboundedMailbox,
  stash-capacity: -1
},
default-stash-dispatcher: {
 @@ -62,7 +59,6 @@
resizer: {
  backoff-rate: 0.1,
  backoff-threshold: 0.3,
 -enabled: off,
  lower-bound: 1,
  messages-per-resize: 10,
  pressure-threshold: 1,
 @@ -110,29 +106,12 @@
},
provider: akka.remote.RemoteActorRefProvider,
reaper-interval: 5s,
 -  router: {
 -type-mapping: {
 -  balancing-pool: akka.routing.BalancingPool,
 -  broadcast-group: akka.routing.BroadcastGroup,
 -  broadcast-pool: akka.routing.BroadcastPool,
 -  consistent-hashing-group: akka.routing.ConsistentHashingGroup,
 -  consistent-hashing-pool: akka.routing.ConsistentHashingPool,
 -  from-code: akka.routing.NoRouter,
 -  random-group: akka.routing.RandomGroup,
 -  random-pool: akka.routing.RandomPool,
 -  round-robin-group: akka.routing.RoundRobinGroup,
 -  round-robin-pool: akka.routing.RoundRobinPool,
 -  scatter-gather-group: akka.routing.ScatterGatherFirstCompletedGroup,
 -  scatter-gather-pool: akka.routing.ScatterGatherFirstCompletedPool,
 -  smallest-mailbox-pool: akka.routing.SmallestMailboxPool
 -}
 -  },
serialization-bindings: {
  [B: bytes,
 -akka.actor.ActorSelectionMessage: akka-containers,
 +akka.actor.SelectionPath: akka-containers,
  akka.remote.DaemonMsgCreate: daemon-create,
 -com.google.protobuf.GeneratedMessage: proto,
 -java.io.Serializable: java,
 +com.google.protobuf_spark.GeneratedMessage: proto,
 +java.io.Serializable: java
},
serialize-creators: off,
serialize-messages: off,
 @@ -149,13 +128,9 @@
unstarted-push-timeout: 10s
  },
  daemonic: off,
 -event-handler-startup-timeout: 5s,
 -event-handlers: [
 -  akka.event.Logging$DefaultLogger
 -],
 -extensions: [
 -  com.romix.akka.serialization.kryo.KryoSerializationExtension$
 -],
 +event-handler-startup-timeout: -1s,
 +event-handlers: [],
 +extensions: [],
  home: ,
  io: {
default-backlog: 1000,
 @@ -228,37 

Re: [akka-user] Concept for the pressure queue

2014-08-07 Thread Prakhyat Mallikarjun
Hello,

Thanks for the detailed reply; it clarified my thoughts a lot. This conversation 
has shed more light on reactive streams, and on how reactive streams and 
pull mode fit better in a design/architecture.

On Thursday, 7 August 2014 18:07:31 UTC+5:30, Konrad Malawski wrote:

 Hello again,
 replies in-line:

 [-- cut --]  

 In Pull based approach, if producer is creating more work, we can 
 implement a logic to add more worker actors to the system. These additional 
 workers will take the surge of work created by fast producer. This 
 maintain's balance or capability in the system to handle sudden load.

 Sure, that's standard tactics for scaling on-demand.
 Also, it assumes you are able to do this fast enough, which simply may not 
 be true.

  

 Nothing against akka streams but it's back pressure will force the 
 producer to push the work at slow rate.

 No, because the protocol used by reactive streams can adapt: in the 
 case of subscribers faster than publishers, they can signal far larger demand 
 than the publisher is producing at.
 We call it dynamic push-pull, and the protocol behaves quite 
 well. This means that subscribers can signal demand quite rarely, as in this 
 example:

 We can signal much more demand, if the subscriber is fast or has large 
 buffers:
 S: DEMAND 10.000
 P: SEND
 P: SEND
 P: SEND  
 ...   # 100 MORE SENDS
 S: DEMAND 10.000
 P: SEND   # SEND NO. 104

  This effectively looks like push for a while (P can push 10.000 elements 
 without waiting for any more demand). 
 Signalling demand can be interleaved between signalling data, which can 
 lead to the publisher never having to wait with publishing.

 If demand is depleted, it looks like pull again, because the publisher 
 can't publish to this subscriber.

 vs. the naive implementation (which we avoid, but which uses the same protocol; 
 it would suck because of the overheads, but perhaps that's exactly what 
 a subscriber needs, because it can't cope with more than 1 element at a time (has 
 no buffer, is slow, whatever)):
 S: DEMAND 1
 P: SEND
 S: DEMAND 1
 P: SEND
 S: DEMAND 1
 P: SEND
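
 The bookkeeping behind both variants can be sketched as a simple demand counter (illustrative only, not the actual Reactive Streams implementation):

```scala
// The publisher may push while outstanding demand is positive ("push" phase)
// and must wait once demand is depleted ("pull" phase).
final class DemandAccount {
  private var outstanding: Long = 0L

  // Subscriber signals demand: 1 in the naive variant, 10000 in the buffered one.
  def request(n: Long): Unit = outstanding += n

  // Publisher attempts to emit one element; returns whether it was allowed to.
  def tryPublish(): Boolean =
    if (outstanding > 0) { outstanding -= 1; true }
    else false
}
```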
  

 What are your thoughts on taking the approach of pull based processing 
 with capability of adding more worker's or worker nodes on the fly?

 Of course, that's one of our most often recommended patterns ever :-)
 Like I said before, it still assumes the work dispatcher is able to keep 
 up with incoming work requests which it then delegates to these worker 
 nodes – this usually holds, but is not guaranteed.

 In order to guarantee things like this proper congestion control must be 
 implemented within the system - such as the reactive streams protocol (or 
 TCP or your home grown congestion control – there's plenty of these).

 -- 
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com
  



[akka-user] Deploying Akka Cluster App To Amazon EC2 and Monitoring

2014-08-07 Thread Prakhyat Mallikarjun
Hi,

*Deploy In Amazon:*
I am currently working on a simple application based on the pull mode of 
processing work. 

I want to deploy the same app to Amazon EC2 instances, and I would also like 
it to auto-scale in Amazon. I searched the web but did not find any 
good articles explaining this step by step. I would appreciate help in 
this regard.

*Monitoring AKKA:*
Also, how does one monitor an Akka cluster-based application? There are no good, 
open-source, simple-to-set-up tools. I tried the latest AppDynamics and was lucky 
that it can monitor/profile/trace Akka's asynchronous calls, but the tool falls 
short of providing the information below: 


   - Message processing times
   - Time waiting in mailbox
   - Mailbox sizes
   - Dispatchers health
   
There are tools like Kamon/Datadog, but they depend on other tools 
like StatsD, Graphite, etc., which are complex to set up. 

Please let me know simple ways to monitor Akka cluster-based apps, including 
when they run in Amazon.

-Prakhyat M M 



[akka-user] Re: Actors behaving unexpectedly with Circuit breaker

2014-08-07 Thread Jasper
Alright, I fixed it (it was stupid), but actually it didn't solve anything... 
Here's how I did it:

doJDBCStuff():

val cpManager = conn.unwrap(classOf[PGConnection]).getCopyAPI
val stringBytes: Array[Byte] = batchStrings.toString().map(_.toByte).toArray
val copy = cpManager.copyIn(s"COPY $tableName FROM STDIN WITH CSV")
try {
  copy.writeToCopy(stringBytes, 0, stringBytes.length)
  copy.endCopy()
} finally {
  if (copy.isActive) {
    copy.cancelCopy()
  }
}
conn.commit()

The potential exception should be caught higher up, in the first snippet I 
showed (to resend the message). By the way, I noticed that if I turn on 
autoCommit, absolutely all processors block... 

Le jeudi 7 août 2014 12:02:48 UTC+2, Brett Wooldridge a écrit :

 That error indicates that one of your US-ASCII files has a NUL byte 
 (0x00) in it somewhere; this is never valid in UTF-8.  You have two 
 options: figure out why there is a NUL character in the file, or 
 sanitize the bytes on the fly as you read chunks from the file, either 
 redacting the NUL bytes completely or replacing them with, for example, a 
 space (0x20).
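
 A minimal sketch of the on-the-fly sanitizing option (the helper name is assumed, not from the thread):

```scala
// Replace NUL bytes (never valid in PostgreSQL text) with a space before the
// chunk is written to the COPY stream; filter them out instead if you prefer
// redacting over replacing.
def sanitizeChunk(chunk: Array[Byte], replacement: Byte = ' '.toByte): Array[Byte] =
  chunk.map(b => if (b == 0) replacement else b)
```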


 On Thursday, August 7, 2014 6:10:30 PM UTC+9, Jasper wrote:

 I tried it, and this looks very promising since all processors now go 
 into the Open state. However, without the reader I'm deep in encoding hell 
 because my files are in US-ASCII and my DB is in UTF-8: 

 invalid byte sequence for encoding UTF8: 0x00

 And I can't just sanitize the files beforehand... Anyway, I'm aware this is 
 not really the place for this, so unless anyone has the solution: thanks 
 for your help!


 Le jeudi 7 août 2014 07:27:04 UTC+2, Brett Wooldridge a écrit :

 It appears that you are using the PostgreSQL CopyManager, correct? 
  Looking at QueryExecutorImpl it appears that rollback() is trying to 
 obtain a lock that was not released by the CopyManager.  I recommend using 
 the CopyManager.copyIn() method that returns a CopyIn object, rather 
 than using the convenience method that takes a reader.  Use the 
 writeToCopy() to pump the data in, and be sure to catch SQLException. 
  If you get an SQLException, call cancelCopy() and retry or whatever 
 your recovery scenario is, otherwise call endCopy().  I would have 
 expected PostgreSQL to handle the severing of a Connection in the middle of 
 a bulk copy better, but that is probably a question for the PostgreSQL 
 group.

 Just my armchair diagnosis.

 On Wednesday, August 6, 2014 11:04:13 PM UTC+9, Jasper wrote:


 Sys-akka.actor.pinned-dispatcher-6 [WAITING]
 java.lang.Object.wait() Object.java:503
 org.postgresql.core.v3.QueryExecutorImpl.waitOnLock() QueryExecutorImpl.java:91
 org.postgresql.core.v3.QueryExecutorImpl.execute(Query, ParameterList, ResultHandler, int, int, int) QueryExecutorImpl.java:228
 org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(Query) AbstractJdbc2Connection.java:808
 org.postgresql.jdbc2.AbstractJdbc2Connection.rollback() AbstractJdbc2Connection.java:861
 com.zaxxer.hikari.proxy.ConnectionProxy.resetConnectionState() ConnectionProxy.java:192
 com.zaxxer.hikari.proxy.ConnectionProxy.close() ConnectionProxy.java:305
 java.lang.reflect.Method.invoke(Object, Object[]) Method.java:606
 xx..util.Cleaning$.using(Object, Function1) Cleaning.scala:14
 xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp() FileProcessor.scala:75
 xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply() FileProcessor.scala:56
 xx..actors.FileProcessor$$anonfun$receive$1$$anonfun$applyOrElse$1.apply() FileProcessor.scala:56
 akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply() CircuitBreaker.scala:135
 akka.pattern.CircuitBreaker$$anonfun$withSyncCircuitBreaker$1.apply() CircuitBreaker.scala:135
 akka.pattern.CircuitBreaker$State$class.callThrough(CircuitBreaker$State, Function0) CircuitBreaker.scala:296
 akka.pattern.CircuitBreaker$Closed$.callThrough(Function0) CircuitBreaker.scala:345
 akka.pattern.CircuitBreaker$Closed$.invoke(Function0) CircuitBreaker.scala:354
 akka.pattern.CircuitBreaker.withCircuitBreaker(Function0) CircuitBreaker.scala:113
 akka.pattern.CircuitBreaker.withSyncCircuitBreaker(Function0) CircuitBreaker.scala:135
 xx..actors.FileProcessor$$anonfun$receive$1.applyOrElse(Object, Function1) FileProcessor.scala:55
 akka.actor.Actor$class.aroundReceive(Actor, PartialFunction, Object) Actor.scala
 ...




Re: [akka-user] Re: Multiple Futures inside Actor's receive

2014-08-07 Thread Konrad Malawski
Hello Soumya,
how about mapping over the Futures in a for comprehension, like this, and
then sending the result back to the actor so it can set the value (*don't*
modify a var from a Future – that is not safe, as it executes on a different
thread, so the Actor guarantees no longer hold).

case SetState(v) =>
  mutableState = v

case UpdateState => {
  log.info("Start of UpdateState ...")

  val tran = rClient.transaction()
  for {
    z <- tran.zcard(key)
    x <- tran.zrange(key, z - 1, z)
    v = x.map(_.utf8String)
  } self ! SetState(v)

  tran.exec()
}

This always brings the question what about logging, so I'd wrap each
operation in such a thing:

def card(tran: Transaction, a: Any): Future[Long] = {
  val f = tran.zcard(a)
  f.onFailure {
    case e => log.error("Whoops!", e)
  }
  f
}


Let me know how that looks to you, cheers!

Also, this thread may help you out in similar situations: Waiting on
multiple Akka messages
https://groups.google.com/forum/#!searchin/akka-user/futures$20multiple$20FSM/akka-user/KqePSVbaQEM/7L_cM9TNKAsJ
.


On Thu, Aug 7, 2014 at 2:19 PM, Soumya Simanta soumya.sima...@gmail.com
wrote:

 Michael,

 Thank you for your response.
 Here is what I'm struggling with.

  In order to use the pipeTo pattern I'll need access to the transaction (tran) and
 the FIRST Future (zf) in the actor I'm piping the Future to, because
 the SECOND Future depends on the value (z) of the FIRST. How can I do that?

 //SECOND Future, depends on result of FIRST Future
   val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)



 On Thursday, August 7, 2014 3:51:17 AM UTC-4, Michael Pisula wrote:

 Sorry, still early. Missed the part where you said that you don't want to
 use PipeTo because of the transaction. Not sure if that is a problem at all
 though. From what I see you use the transaction to make sure nothing
 happens with the values between your zcard and zrange calls, afterwards its
 only modification of the internal state. If you just pipe that to a
 separate actor containing the state I would expect things to work fine. Or
 do you want the transaction to ensure that update to the internal state and
 synced with the reads from redis. Then I am not sure that it will work like
 you implemented it.

 Cheers

 Am Donnerstag, 7. August 2014 09:04:13 UTC+2 schrieb Michael Pisula:

 Instead of mutating state from within the future I would use the pipeTo
 pattern. Using pipeTo you can send the result of a future to an actor (e.g.
 to self). There you can safely change state, as you are in
 single-threaded-illusion-land again...
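
 A rough sketch of that pattern applied to the code in question (assuming the rediscala-style API from the snippets in this thread; SetState is a hypothetical message, and an implicit ExecutionContext such as context.dispatcher is in scope):

```scala
import akka.pattern.pipe

// Inside receive: run both Redis calls in the transaction, but deliver the
// final value back to this actor as a message instead of mutating state
// from the future's callback thread.
case UpdateState =>
  val tran = rClient.transaction()
  val result = for {
    z <- tran.zcard(key)
    x <- tran.zrange(key, z - 1, z)
  } yield SetState(x.map(_.utf8String).toSet)
  tran.exec()
  result.pipeTo(self)

case SetState(v) =>
  mutableState = v  // safe: we are back inside the actor
```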

 HTH

 Cheers,
 Michael

 Am Donnerstag, 7. August 2014 07:25:05 UTC+2 schrieb Soumya Simanta:

 I'm cross posting this here for better coverage.

 http://stackoverflow.com/questions/25174504/multiple-
 future-calls-in-an-actors-receive-method


 I'm trying to make two external calls (to a Redis database) inside an
 Actor's receive method. Both calls return a Future and I need the
 result of the first Future inside the second. I'm wrapping both calls
 inside a Redis transaction to avoid anyone else from modifying the value in
 the database while I'm reading it.

 The internal state of the actor is updated based on the value of the
 second Future.

 Here is what my current code looks like, which I know is incorrect because
 I'm updating the internal state of the actor inside a Future.onComplete
  callback.

 I cannot use the pipeTo pattern because both Futures have
 to be inside the transaction. If I use Await for the first Future then my
 receive method will *block*. Any idea how to fix this?

 My *second question* is about how I'm using Futures. Is the
 usage of Futures below correct? Is there a better way of dealing with
 multiple Futures in general? Imagine if there were 3 or 4 Futures, each
 depending on the previous one.

 import akka.actor.{Props, ActorLogging, Actor}
 import akka.util.ByteString
 import redis.RedisClient
 import scala.concurrent.Future
 import scala.util.{Failure, Success}

 object GetSubscriptionsDemo extends App {
   val akkaSystem = akka.actor.ActorSystem("redis-demo")
   val actor = akkaSystem.actorOf(Props(new SimpleRedisActor("localhost", "dummyzset")), name = "simpleactor")
   actor ! UpdateState
 }

 case object UpdateState

 class SimpleRedisActor(ip: String, key: String) extends Actor with ActorLogging {

   //mutable state that is updated on a periodic basis
   var mutableState: Set[String] = Set.empty

   //required by Future
   implicit val ctx = context.dispatcher

   var rClient = RedisClient(ip)(context.system)

   def receive = {
     case UpdateState => {
       log.info("Start of UpdateState ...")

       val tran = rClient.transaction()

       val zf: Future[Long] = tran.zcard(key)  //FIRST Future
       zf.onComplete {
         case Success(z) => {
           //SECOND Future, depends on result of FIRST Future
           val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
           rf.onComplete {
             case 

Re: [akka-user] Synchronising responses sent from parents and children

2014-08-07 Thread Lawrence Wagerfield
Are you suggesting the default decider combined with a one-for-one strategy 
with a max retry attempt of 1, combined with the following code?:

override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
  client ! reason
  context stop self
}

On Thursday, August 7, 2014 12:29:05 PM UTC+1, Konrad Malawski wrote:

 Hi Lawrence,
 In general, exactly one entity in a distributed system should be 
 responsible for deciding about success / failure,
 otherwise there always will be a race of some kind.

 In your case though, the problem arises because the service actor does 
 not know if the transaction actor has completed the work,
 so how about sending the response back through the transaction actor?

 Also, in your case, can the transaction actor fail after sending its 
 response to the client actor? How would that happen (with a NonFatal 
 exception)?
 I'd expect it to do `client ! stuff; context stop self`, is that not the 
 case?



 On Thu, Aug 7, 2014 at 8:59 AM, Lawrence Wagerfield 
 lawr...@dmz.wagerfield.com javascript: wrote:

 I have problem that involves synchronising outbound messages from a 
 parent actor and its child actor. This particular problem is with regards 
 to forwarding failure messages to clients. 

 Here is the example: 

 I have a service actor that receives a request from a client actor.

 The service actor creates a new child transaction actor to deal with 
 said request, which then responds directly to the client actor after 
 performing the work.

 If the transaction actor fails, it is stopped by the service actor which 
 then sends a failure report to the client actor.

 The problem is the client actor must now support receiving failures 
 after receiving the response it is actually interested in - otherwise the 
 potential 'post-workload' failures from the transaction actor may be 
 dead-lettered, or worse, misinterpreted by the client actor (e.g. as a 
 failure for a subsequent transaction).

 I have considered an approach whereby the client actor must wait for the 
 transaction 
 actor to terminate before safely continuing, since after that point, it 
 can be guaranteed that no more messages will be received.

 Is there a common solution to this problem?
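
 One concrete form of the wait-for-termination idea is DeathWatch: the client watches the transaction actor and treats Terminated as the safe point. A sketch (the protocol message types here are hypothetical, not from the thread):

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Hypothetical protocol messages
final case class TransactionResponse(result: String)
final case class TransactionFailure(reason: Throwable)

class Client(transaction: ActorRef) extends Actor {
  context.watch(transaction) // subscribe to the transaction actor's lifecycle

  def receive: Receive = {
    case TransactionResponse(r)     => // record the result, keep waiting
    case TransactionFailure(reason) => // record the failure, keep waiting
    case Terminated(`transaction`)  =>
      // Safe point: no further messages from the transaction actor can
      // arrive, so the client can continue without risk of stray failures.
  }
}
```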





 -- 
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com
  



Re: [akka-user] combining Akka Persistence with Akka Scheduling

2014-08-07 Thread Odd Möller
Hi Greg and Akka!

Nice to hear that you find the prototype interesting. I'll try to have a
look into your scheduler code as soon as I can. I will also look into
moving my existing code to a new github repo (which is not a fork of Akka
repo) for any further development.

Greetings
Odd


On Tue, Aug 5, 2014 at 8:20 PM, Greg Flanagan ventis...@gmail.com wrote:

 Odd,

 I have something similar to what you have, but I haven't integrated
 persistence yet. It basically uses JodaTime to let the user schedule a
 task based on a datetime, can repeat tasks at a cadence (hourly, daily,
 weekly, etc.), and can repeat a finite or infinite number of times. It's
 used like this:

 val delayTo = DateTime(2014, 10, 2, 12, 30, 0, UTC)
 DateTimeScheduler.schedule(delayTo, actorRef, message, Cadence.Daily, Repeat.Inf)

 This would schedule a message to be sent to the actorRef starting on Oct 2,
 2014 at 12:30 UTC; it will run again on Oct 3, 2014 at 12:30 UTC and
 will repeat forever.

 This is just a prototype I was playing around with. Your persistence work
 looks nice, we should join forces.

 There is some code.
 https://github.com/kaiserpelagic/akka-datetime-scheduler

 Greg


 On Tuesday, August 5, 2014 8:59:51 AM UTC-7, Akka Team wrote:

 Hi Odd!
 I had a brief look at your prototype (didn't read the whole thing –
 that's a lot of work you've put in there! :-)).
 In general I think it would benefit from living outside of Akka as a
 library which users could pick up if they wanted to,
 then you could also pull in JodaTime (and not have regexes for dates and
 custom `At` types) but could re-use existing `Instant`s etc.

 Keep at it, good luck! :-)


 On Mon, Aug 4, 2014 at 5:59 PM, Greg Flanagan vent...@gmail.com wrote:

 Hey Odd,

 This looks promising. Thanks for passing along.

 Greg


 On Sunday, August 3, 2014 3:04:27 AM UTC-7, Odd Möller wrote:

 Hi Greg

 I have a prototype implementation of an Akka extension that uses
 persistence to save scheduler state: https://github.com/odd/akka/tree/wip-persistence-odd/akka-contrib/src/main/scala/akka/contrib/persistence/scheduling. It is by no means finished and still very
 much a work-in-progress, but please have a look anyway; perhaps you'll find
 something of value.

 Greetings
 Odd


 On Thu, Jul 31, 2014 at 10:03 AM, Martynas Mickevičius martynas.m...@
 typesafe.com wrote:

 Hi Greg,

 just dropping this in case you have not seen that it is possible to
 use quartz
 https://github.com/akka/akka/blob/9dae0ad525a92ebf8b0f5f42878a10628ac24aae/akka-samples/akka-sample-camel-scala/src/main/scala/sample/camel/QuartzExample.scala
  through
 akka-camel.


 On Fri, Jul 25, 2014 at 5:46 PM, Greg Flanagan vent...@gmail.com
 wrote:

 Endre,

 Seems reasonable to keep the scheduler lightweight and performant.
 My plan was to build something around it. Thanks.

 Greg



 On Friday, July 25, 2014 7:40:39 AM UTC-7, Akka Team wrote:

 Hi Greg,


 On Fri, Jul 25, 2014 at 4:20 PM, Greg Flanagan vent...@gmail.com
 wrote:

 Hey Konrad, I wasn't planning on making it as powerful as Quartz,
 but simply making what's available in the scheduler persistent, so that
 work that is deferred (i.e. scheduled) will not be lost on JVM shutdown. My
 use case doesn't require Quartz (i.e. "run at 4:50 am on Tuesday"), but I do
 need the deferred jobs to be persisted through JVM restarts.


 The Akka scheduler has been designed for high-volume low-resolution
 timing tasks, typically timeouts -- all with the requirement of high
 performance. While you can combine persistence and the Akka scheduler to
 achieve what you want, we will keep the scheduler simple -- since this 
 is a
 highly performance sensitive module we do not want to add features 
 there.
 You can definitely build something around it though.

 -Endre
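Endre's suggestion — persist around the scheduler rather than inside it — can be sketched with a toy journal. This is plain Java, not Akka code, and every name here (`journal`, `schedule`, `fired`, `recover`) is made up for illustration: persist the job before handing the delay to the in-memory scheduler, delete it once it fires, and on restart replay whatever is still pending.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PersistentSchedulingSketch {
    // Durable-store stand-in; real code would use akka-persistence or a DB.
    static final Map<String, Long> journal = new HashMap<>();

    // 1. Persist the job BEFORE handing the delay to the in-memory scheduler,
    //    so a crash between the two steps loses nothing.
    static void schedule(String jobId, long dueAtMillis) {
        journal.put(jobId, dueAtMillis);
        // ... then scheduler.scheduleOnce(delay, ...) in real code
    }

    // 2. Once the job has actually run, forget it.
    static void fired(String jobId) {
        journal.remove(jobId);
    }

    // 3. After a JVM restart, replay the journal and re-schedule what's left.
    static List<String> recover(long nowMillis) {
        List<String> pending = new ArrayList<>();
        for (Map.Entry<String, Long> e : journal.entrySet())
            if (e.getValue() > nowMillis) pending.add(e.getKey());
        return pending;
    }

    public static void main(String[] args) {
        schedule("send-report", 200L);
        schedule("expire-session", 100L);
        fired("expire-session");           // ran before the simulated crash
        System.out.println(recover(50L));  // [send-report]
    }
}
```

The point is only the ordering of the persistence step relative to the in-memory scheduling call; everything else stays in the fast scheduler, as Endre suggests.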



 cheers,
 Greg


 On Friday, July 25, 2014 1:18:47 AM UTC-7, Konrad Malawski wrote:

 Hello Greg,
 short question - do you aim to provide something as powerful as
 Quartz using this?
 I mean "on Monday at 13:02" etc., because that's not really what
 our scheduler is designed to do - we only work with "in X amount of
 time".

 Instead maybe you'd simply like to integrate Quartz with Akka, as
 this extension does: https://github.com/typesafehub/akka-quartz-scheduler
 and let it do its job :-)


 On Fri, Jul 25, 2014 at 1:05 AM, Greg Flanagan vent...@gmail.com
 wrote:

  I'm interested in combining Akka persistence with Akka
  scheduling and wanted to know if there was an interest out there for
  something like this? I basically need a scheduler that doesn't lose
  state after a VM crash. I've used Quartz before and it's too much
  framework for what I want. Any requests for features people would like
  to see in this type of module? Has it already been done and I just
  haven't searched Google enough?

 Cheers,
 Greg


  --
  Read the docs: http://akka.io/docs/
  Check the FAQ: http://doc.akka.io/docs/akka/c
 urrent/additional/faq.html
  Search the archives: 

Re: [akka-user] Synchronising responses sent from parents and children

2014-08-07 Thread Konrad Malawski
What I'm playing at is:

Assumptions:
I'm assuming we're talking about all these actors in the same JVM, nothing
you wrote is hinting a clustered env.

Execution:
If your actor reaches the point in the code where it `client ! result` and
does *nothing* (bold italic nothing, as in stopping :-)) afterwards – it
just *stops* itself (so no new messages will be processed, even if there
are some left in the mailbox).

Then, there can be *no* supervisionStrategy-triggering failure, as the send
is the *last* thing this actor has performed.
Then, there will be no next message processed, because the actor has
stopped, thus no such next message can trigger a supervisionStrategy-triggering
failure.

Which means that, there is no user-land exception that can happen after
that successful message send.
Exceptions that may trigger the parent's supervision strategy are from
there on only fatal errors, and from these you are not able to recover a
system anyway (out of memory etc).

Which means that, there will either be a successful message send and no
failure, or there will be a failure – so the code will not reach the
message send.

So, in a local setting, you do not need to do anything more than you
currently do – just make sure to follow the "the last thing my actor does
is this send" rule.


If we're talking about a distributed setting, it's more difficult, and I
suggested a solution for this: replying via the master.
client -> master -> worker   // create work
worker -- done-1 --> master -- done-1 --> client

Which creates more message sends, but then the master *knows *that the job
was successful.
There are optimisations around this scheme one could apply, but as I
understand this thread, we're talking *local *system here.


Hope this helps!



On Thu, Aug 7, 2014 at 4:30 PM, Lawrence Wagerfield 
lawre...@dmz.wagerfield.com wrote:

 Are you suggesting the default decider combined with a one-for-one
 strategy with a max retry attempt of 1, combined with the following code?:

 override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
   client ! reason
   context stop self
 }

 On Thursday, August 7, 2014 12:29:05 PM UTC+1, Konrad Malawski wrote:

 Hi Lawrence,
 In general, exactly one entity in a distributed system should be
 responsible for deciding about success / failure,
 otherwise there always will be a race of some kind.

  In your case though, the problem arises because the service actor does
  not know if the transaction actor has completed the work,
  so how about sending the response back through the transaction actor?

  Also, in your case, can the transaction actor fail after sending its
  response to the client actor, and how would that happen (with a NonFatal
  exception)?
  I'd expect it to do `client ! stuff; context stop self`, is that not the
  case?



 On Thu, Aug 7, 2014 at 8:59 AM, Lawrence Wagerfield 
 lawr...@dmz.wagerfield.com wrote:

  I have a problem that involves synchronising outbound messages from a
  parent actor and its child actor. This particular problem is with regard
  to forwarding failure messages to clients.

 Here is the example:

 I have a service actor that receives a request from a client actor*.*

  The service actor creates a new child transaction actor to deal with
  said request, which then responds directly to the client actor after
  performing the work.

 If the transaction actor fails, it is stopped by the service actor which
 then sends a failure report to the client actor.

 The problem is the client actor must now support receiving failures
 after receiving the response it is actually interested in - otherwise the
 potential 'post-workload' failures from the transaction actor may
 deadletter, or worse, be misinterpreted by the client actor (i.e. a
 failure for a subsequent transaction).

 I have considered an approach whereby the client actor must wait for
 the transaction actor to terminate before safely continuing, since
 after that point, it can be guaranteed that no more messages will
 be received.

 Is there a common solution to this problem?





 --
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com


Re: [akka-user] Synchronising responses sent from parents and children

2014-08-07 Thread Lawrence Wagerfield
It certainly makes sense. I wouldn't expect the send/stop operation to fail 
any more than I would expect the whole supervision framework to fail.

What I'm trying to defend against ultimately comes down to programmer 
error. It's quite likely that I'm being irrational in my perception of how 
errors might be introduced, e.g. a programmer might add some 'exceptional' 
code after the send - that in itself would be a bug, but I'd like for the 
error to be contained and not corrupt the rest of the system with 
race-conditioned 'failure after success' messages.

I believe the approach I posted just before your answer might work, using 
the restart to transmit failure within the transaction itself. It could 
ensure it doesn't send the message if the success message had already been 
sent.

What are your thoughts? 

(p.s. I know that running with the 'incompetent developer' assumption means 
they could quite-equally cock-up the fault handling code - but providing 
they didn't, it would mean all other exceptions would be handled 
gracefully.)

On Thursday, August 7, 2014 4:26:33 PM UTC+1, Konrad Malawski wrote:

 What I'm playing at is:

 Assumptions:
 I'm assuming we're talking about all these actors in the same JVM, nothing 
 you wrote is hinting a clustered env.

 Execution:
 If your actor reaches the point in the code where it `client ! result` and 
 does *nothing *(bold italic nothing, as in stopping :-)) afterwards – 
 it just *stops* itself (so no new messages will be processed, even it 
 there are some in the mailbox left).

 Then, there can be *no* supervisionStrategy triggering failure as the 
 send is the *last* thing this actor has performed.
 Then, there will be no next message processed, because it has stopped, 
 thus no such next message can trigger an supervisionStrategy triggering 
 failure.

 Which means that, there is no user-land exception that can happen after 
 that successful message send.
 Exceptions that may trigger the parent's supervision strategy are from 
 there on only fatal errors, and from these you are not able to recover a 
 system anyway (out of memory etc).

 Which means that, there will either be a successful message send and no 
 failure, or there will be a failure – so the code will not reach the 
 message send.

 So, in a local setting, you do not need to do anything more than you 
 currently do – just make sure about this last thing my actor does is this 
 send rule.


 If we're talking about a distributed setting, it's more difficult, and I 
 suggested a solution of this via replying via the master.
 client - master - worker // create work
 worker -- done-1 -- master -- done-1 -- client

 Which creates more message sends, but then the master *knows *that the 
 job was successful.
 There are optimisations around this scheme one could apply, but as I 
 understand this thread, we're talking *local *system here.


 Hope this helps!

  

 On Thu, Aug 7, 2014 at 4:30 PM, Lawrence Wagerfield 
 lawr...@dmz.wagerfield.com javascript: wrote:

 Are you suggesting the default decider combined with a one-for-one 
 strategy with a max retry attempt of 1, combined with the following code?:

 override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
   client ! reason
   context stop self
 }

 On Thursday, August 7, 2014 12:29:05 PM UTC+1, Konrad Malawski wrote:

 Hi Lawrence,
 In general, exactly one entity in a distributed system should be 
 responsible for deciding about success / failure,
 otherwise there always will be a race of some kind.

 In your case though, the problem arrises because the service actor does 
 not know if the transaction actor has completed the work,
 so how about sending the response back through the transaction actor?

 Also, in your case, can the transaction actor fail after sending it's 
 response to the client actor, how would that happen (with a NonFatal 
 exception)?
 I'd expect it to do `client ! stuff; context stop self`, is that not the 
 case?



 On Thu, Aug 7, 2014 at 8:59 AM, Lawrence Wagerfield 
 lawr...@dmz.wagerfield.com wrote:

 I have problem that involves synchronising outbound messages from a 
 parent actor and its child actor. This particular problem is with regards 
 to forwarding failure messages to clients. 

 Here is the example: 

 I have a service actor that receives a request from a client actor*.*

 The service actor creates a new child transaction actor to deal with 
 said request, which then response directly to the client actor after 
 performing the work.

 If the transaction actor fails, it is stopped by the service actor which 
 then sends a failure report to the client actor.

 The problem is the client actor must now support receiving failures 
 after receiving the response it is actually interested in - otherwise the 
 potential 'post-workload' failures from the transaction actor may 
 deadletter, or worse, be misinterpreted by the client actor (i.e. a 
 failure for a subsequent transaction).

 I have considered an approach whereby the client 

[akka-user] Java with Lambda currying factory props method

2014-08-07 Thread Hannes Stockner
Hi,

I am playing around with Java 8 and Akka.

In Scala I liked to use currying in some scenarios in combination with Akka 
Props factory methods.

I tried to use a similar approach with classes from the new 
java.util.function package and would be interested to know if there are 
better approaches to achieve currying with Java.

Example:

Scala:

object ActorB {
  def props(externalDependency: ExternalDependency)(): Props =
    Props(new ActorB(externalDependency))
}

class ActorA(actorBProps: () => Props) extends Actor {
  val actorB = context.actorOf(actorBProps(), "actorB")
  ...
}

object Bootstrap {
  val actorBProps = ActorB.props(externalDependency) _
  system.actorOf(ActorA.props(actorBProps))
}


Java 8:

class ActorB {
  public static Function<ExternalDependency, Supplier<Props>> props() {
    return x -> () -> Props.create(ActorB.class, () -> new ActorB(x));
  }
  ...
}

class ActorA {
  public static Function<Supplier<Props>, Supplier<Props>> props() {
    return p -> () -> Props.create(ActorA.class, () -> new ActorA(p));
  }

  ActorA(Supplier<Props> props) {
    // create actorB
    ActorRef actorB = getContext().actorOf(props.get(), "actorB");
  }
  ...
}

class Bootstrap {
  Supplier<Props> actorBProps = ActorB.props().apply(externalDependency);
  system.actorOf(ActorA.props().apply(actorBProps), "ActorA");
}

Looking forward to getting some feedback.

Thanks
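The shape of the question can be shown without Akka at all. In this sketch a plain `String` stands in for `Props` so it runs standalone (all names here are illustrative): the nested `Function`/`Supplier` version mirrors the post, while the overloaded static method shows the usually simpler Java idiom — partial application via an ordinary method parameter instead of curried function types.

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class CurryingSketch {
    // Curried style from the post: fix the dependency now, defer creation.
    static Function<String, Supplier<String>> actorBProps() {
        return dep -> () -> "Props(ActorB, " + dep + ")";
    }

    // Often simpler in Java: an ordinary method does the partial application
    // directly, avoiding the nested Function<..., Supplier<...>> types.
    static Supplier<String> actorBProps(String dep) {
        return () -> "Props(ActorB, " + dep + ")";
    }

    public static void main(String[] args) {
        Supplier<String> curried = actorBProps().apply("db");
        Supplier<String> direct  = actorBProps("db");
        System.out.println(curried.get());                   // Props(ActorB, db)
        System.out.println(curried.get().equals(direct.get())); // true
    }
}
```

Both forms give the caller the same `Supplier`; the second just trades the point-free `apply` chain for a plainer call site.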




Re: [akka-user] Re: Akka Remoting fails for large messages

2014-08-07 Thread Syed Ahmed
Hi Endre,
I was using 2.3.3 -- let me run with 2.3.4 and see if it makes a difference.
thx
-Syed

On Thursday, August 7, 2014 1:19:29 AM UTC-7, drewhk wrote:

 Hi Syed,

 As the very first step, can you tell us what is the Akka version you are 
 using? If it is not Akka 2.3.4, please try to upgrade to 2.3.4 and see if 
 the issue still remains.

 -Endre


 On Thu, Aug 7, 2014 at 12:12 AM, Ryan Tanner ryan@gmail.com 
 javascript: wrote:

 When those large messages are received, what's supposed to happen?  It's 
 possible that whatever processing is triggered by that message is so CPU 
 intensive that Akka's internal remoting can't get any time for its own 
 messages.


 On Wednesday, August 6, 2014 12:08:20 PM UTC-6, Syed Ahmed wrote:

 Hello,
 I'm new to Akka / Akka remoting.

 We need to send large messages on the order of 500-600 MB. After reading
 some mailing-list posts I found that I have to change the Akka remote
 Netty TCP settings, which I did, and my application.conf has the following:

 remote {
   transport = akka.remote.netty.NettyRemoteTransport
   netty.tcp {
     hostname = ip address of host
     port = 2553
     maximum-frame-size = 524288000
     send-buffer-size = 5
     receive-buffer-size = 5
   }
 }

 I created a client and a remote server app using akka-remoting, with
 protobuf message serialization -- the client sends a large message to the
 server, and the server acknowledges with the timestamp at which it
 received and deserialized the message.

 With the above settings things work fine for messages up to 200 MB. When
 I try a slightly larger message (250 MB), the server shows the following:

 ---
 [INFO] [08/05/2014 23:41:39.302] 
 [RemoteNodeApp-akka.remote.default-remote-dispatcher-4] 
 [akka.tcp://RemoteNodeApp@remote-ip:2552/system/transports/
 akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%
 2FRemoteNodeApp%40remote_ip%3A36922-1] No response from remote. 
 Handshake timed out or transport failure detector triggered.

 ---

 and the client shows the following 

 [WARN] [08/05/2014 23:41:39.326] 
 [LocalNodeApp-akka.remote.default-remote-dispatcher-6] 
 [akka.tcp://LocalNodeApp@client_ip:2553/system/endpointManager/
 reliableEndpointWriter-akka.tcp%3A%2F%2FRemoteNodeApp%4010.194.188.97%3A2552-0]
  
 Association with remote system [akka.tcp://RemoteNodeApp@server_ip:2552] 
 has failed, address is now gated for [5000] ms. Reason is: [Disassociated].

 and the message is not sent at all. Any idea what else is missing?

 BTW -- in the above I don't see any OOM errors, and the client/remote
 apps are still up and running.

 thx
 -Syed 
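Raising maximum-frame-size works only up to a point. A common alternative (not discussed in this thread, and independent of Akka) is to split a very large payload into chunks that each fit comfortably under the frame limit and reassemble them on the receiving side. A minimal, Akka-free sketch of just the split/join bookkeeping (`Chunker`, `split`, `join` are illustrative names):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a payload into pieces no larger than maxFrame bytes, so each
    // piece stays under a frame-size limit such as maximum-frame-size.
    static List<byte[]> split(byte[] payload, int maxFrame) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxFrame) {
            int len = Math.min(maxFrame, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble on the receiving side once all chunks have arrived.
    static byte[] join(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```

In a real system each chunk would be one remoting message (with sequence numbers and acknowledgements around it); the sketch only shows the arithmetic that keeps every message under the frame limit.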









[akka-user] Re: Akka Remoting fails for large messages

2014-08-07 Thread Syed Ahmed
Hi Ryan,
In my test it's a very large string, and I just get the string back from
the message (i.e. it gets deserialized) -- I'm not doing anything further.
This test just checks how large a message can be sent across. I will try
again with 2.3.4 to see if it makes a difference.
thx
-syed

On Wednesday, August 6, 2014 3:12:27 PM UTC-7, Ryan Tanner wrote:

 When those large messages are received, what's supposed to happen?  It's 
 possible that whatever processing is triggered by that message is so CPU 
 intensive that Akka's internal remoting can't get any time for its own 
 messages.

 On Wednesday, August 6, 2014 12:08:20 PM UTC-6, Syed Ahmed wrote:

 Hello,
 Im new to Akka/Akka Remoting.. 

 We need to send large messages in the order 500-600MB. After viewing and 
 checking some mailer post I found that I have to change Akka Remote netty 
 tcp settings which I did and my application.conf has the following...

  remote {
 transport = akka.remote.netty.NettyRemoteTransport
 netty.tcp {
   hostname = ip address of host
   port = 2553
   maximum-frame-size = 524288000
send-buffer-size= 5
receive-buffer-size=5
 }
   }

 I create a client and remote server app using akka-remoting. Using 
 protobuff message serialization -- a client sends a large message to server 
 and the server acknowledges with the timestamp when it received after 
 deserialized the same. 

 With above settings things work fine for messages upto 200MB. When I try 
 slightly large message 250MB.. The server shows the following ..

 ---
 [INFO] [08/05/2014 23:41:39.302] 
 [RemoteNodeApp-akka.remote.default-remote-dispatcher-4] 
 [akka.tcp://RemoteNodeApp@remote-ip:2552/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FRemoteNodeApp%40remote_ip%3A36922-1]
  
 No response from remote. Handshake timed out or transport failure detector 
 triggered.

 ---

 and the client shows the following 

 [WARN] [08/05/2014 23:41:39.326] 
 [LocalNodeApp-akka.remote.default-remote-dispatcher-6] 
 [akka.tcp://LocalNodeApp@client_ip:2553/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FRemoteNodeApp%4010.194.188.97%3A2552-0]
  
 Association with remote system [akka.tcp://RemoteNodeApp@server_ip:2552] 
 has failed, address is now gated for [5000] ms. Reason is: [Disassociated].

 and the message is not sent at all..  Any idea what else is missing? 

 BTW -- in the above I don't see any OOM errors or anything and the 
 client/remote app are still up and running. 

 thx
 -Syed 







Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-07 Thread ahjohannessen
On Wednesday, August 6, 2014 9:08:14 AM UTC+1, Martin Krasser wrote:

Kafka maintains an offset for each partition separately and a partition 
 is bound to a single node (disregarding replication). For example, if a 
 Kafka topic is configured to have 2 partitions, each partition starts 
 with offset=0, and, if you consume from that topic you only obtain a 
 partially ordered stream because Kafka doesn't define any ordering 
 across partitions (see Kafka docs for details). This situation is 
 comparable to other distributed datastores. For example, Cassandra only 
 maintains an ordering for entries with the same partition key (i.e. for 
 entries that reside on the same node). 

 In general, if you want to maintain an ordering of entries, you either 
 have to use 

 - a single writer in the whole cluster (which is the case for persistent 
 actors) or 
 - keep entries (that are generated by multiple producers) on a single 
 node so that the server is able to maintain a local counter (which is 
 what Kafka does with offsets for each partition separately) 

 Both limits scalability (as already mentioned by Patrik) for both write 
 throughput and data volume. It may well be that some applications are 
 fine with these limitations and benefit from a total ordering of entries 
 per tag but this should not be the default in akka-persistence. IMO, 
 it would make sense if akka-persistence allows applications to configure 
 an optional ordering per tag so that users can decide to sacrifice 
 scalability if total ordering is needed for a given tag (and it is up to 
 journal implementations how to implement that ordering). 


Interesting idea, especially since not all of us think in terms of 
scalability. I agree with your opinion wrt optional ordering per tag. It 
would be nice to find a middle ground that makes sense wrt scalability 
*and* the need to be able to use an offset per group of persistent actors 
of the same type.
 

 As already mentioned in a previous post, causal ordering could be a 
 later extension to akka-persistence that goes beyond the limits of a 
 single writer or co-located storage *and* allows for better scalability. 
 I wish I had more time for hacking on a prototype that tracks causalities 
 :) 


That seems to be a great solution, but something tells me that this is 
not going to happen soon.

So, my conclusion is that I will go with using a single persistent actor as 
a journal, thus getting a tag + seqNr, and use persistent views for what I 
would otherwise use a normal persistent actor for, e.g. as in this sketch: 
 - https://gist.github.com/ahjohannessen/70381de6da3bde1c743e
and use snapshotting to reduce the recovery time of persistent views.

Patrik, on second thought, perhaps tags per event would not be that silly; 
at least it gives more flexibility. However, I suppose it all depends on 
what you guys want to use a tag for.



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-07 Thread Vaughn Vernon
Can someone (Martin?) please post some rough performance and scalability 
numbers per backing storage type? I see these DDD/ES/CQRS discussions lead 
to consumer-developer limitations based on performance and scalability, but 
I have not seen any actual numbers. So please post numbers in events per 
second, as I would prefer not to hunt down such numbers in old posts.

I keep saying this, seemingly without much success: akka-persistence, even 
at its slowest, probably still performs 5x-10x better than most relational 
stores, and at its best perhaps 500x better. I often poll my IDDD Workshop 
students for realistic transactions-per-second numbers, and most of the 
time it is at or below 100 tps. Some of the higher ones are around 1,000 
tps. A few students identify with 10,000 tps. As far as I know, 
akka-persistence can regularly perform in the 30,000 tps range, and with 
some stores tops out at, what, 500,000 tps? The point I am trying to make 
is: even if you put a singleton sequence in front of your slowest possible 
store, let's assume that it could cost 30%. That would still leave 
performance at 20,000 tps on your slowest store, which is 20x faster than 
many, many enterprise applications. (There are faster ways of producing 
incremental sequences than using a singleton atomic long.)
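Vaughn's parenthetical can be made concrete. This is a sketch, not akka-persistence code: one process-wide `AtomicLong` handing out a gap-free total order across all events, which stays unique and monotone even under concurrent writers (faster schemes exist, as he notes, e.g. handing each writer a strided block of numbers).

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SequenceSketch {
    // The "singleton atomic long": one counter defining a single sequence
    // across ALL events, regardless of which persistent actor wrote them.
    static final AtomicLong sequence = new AtomicLong();

    static long nextSeqNr() {
        return sequence.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        // Many concurrent writers still observe unique numbers and no gaps.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Set<Long> seen = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < 10_000; i++) pool.submit(() -> seen.add(nextSeqNr()));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(seen.size());     // 10000: every number is unique
        System.out.println(sequence.get());  // 10000: no gaps in the sequence
    }
}
```

The contention on this one counter is exactly the scalability cost debated in this thread; the 30% figure above is Vaughn's assumed upper bound for it.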

I vote that you need to have a single sequence across all events in an 
event store. This is going to cover probably 99% of all actor persistence 
needs and it is going to make using akka-persistence way easier.

A suggestion: rather than looking so carefully at akka-persistence for 
performance and scalability increases, I think a lot could be gained by 
looking at false sharing analysis and padding solutions.

Vaughn



On Thursday, August 7, 2014 12:01:26 PM UTC-6, ahjohannessen wrote:

 On Wednesday, August 6, 2014 9:08:14 AM UTC+1, Martin Krasser wrote:

 Kafka maintains an offset for each partition separately and a partition 
 is bound to a single node (disregarding replication). For example, if a 
 Kafka topic is configured to have 2 partitions, each partition starts 
 with offset=0, and, if you consume from that topic you only obtain a 
 partially ordered stream because Kafka doesn't define any ordering 
 across partitions (see Kafka docs for details). This situation is 
 comparable to other distributed datastores. For example, Cassandra only 
 maintains an ordering for entries with the same partition key (i.e. for 
 entries that reside on the same node). 

 In general, if you want to maintain an ordering of entries, you either 
 have to use 

 - a single writer in the whole cluster (which is the case for persistent 
 actors) or 
 - keep entries (that are generated by multiple producers) on a single 
 node so that the server is able to maintain a local counter (which is 
 what Kafka does with offsets for each partition separately) 

 Both limits scalability (as already mentioned by Patrik) for both write 
 throughput and data volume. It may well be that some applications are 
 fine with these limitations and benefit from a total ordering of entries 
 per tag but this should not be the default in akka-persistence. IMO, 
 it would make sense if akka-persistence allows applications to configure 
 an optional ordering per tag so that users can decide to sacrifice 
 scalability if total ordering is needed for a given tag (and it is up to 
 journal implementations how to implement that ordering). 


 Interesting idea, especially since not all of us think in terms of 
 scalability.
 I agree with your opinion wrt optional ordering per tag. It would be nice 
 to
 find a middle-ground that made sense wrt scalability *and* the need of
 being able to use offset per group of persistent actors of same type.
  

 As already mentioned in a previous post, causal ordering could be a 
 later extension to akka-persistence that goes beyond the limits of a 
 single writer or co-located storage *and* allows for better scalability. 
 I wish I had more time for hacking on a prototype that tracks causalities 
 :) 


 That seems to be a great solution, but something tells me that this is 
 not going to happen soon.

 So, my conclusion is that I will go with using a single persistent actor 
 as a journal, 
 thus getting a tag + seqNr, and use persistent views to what I otherwise 
 would 
 use a normal persistent actor for, e.g. as in this sketch: 
  - https://gist.github.com/ahjohannessen/70381de6da3bde1c743e
 and use snapshotting to reduce recovery time of persistent views.

 Patrik, on a second thought, perhaps tags per event would not be that 
 silly, at
 least it gives more flexibility. However, I suppose it all depends on what 
 you guys
 want to use a tag for.



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-07 Thread ahjohannessen
On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:

I vote that you need to have a single sequence across all events in an 
 event store. This is going to cover probably 99% of all actor persistence 
 needs and it is going to make using akka-persistence way easier.


If that were made optional, plus a tag facility, then those who think it hurts 
scalability would opt out, and others would opt in and pay the extra penalty.



Re: [akka-user] Deploying Akka Cluster App To Amazon EC2 and Monitoring

2014-08-07 Thread Akka Team

 I want to deploy the same app on Amazon EC2 instances, and also get the
 app auto-scaling on Amazon. I searched the web but did not find any good
 articles explaining this step by step. Waiting for help in this regard.

That's more of an EC2 question than an Akka one.
AFAIR EC2 provides so-called auto-scaling groups, which start new instances
of an image based on load in the group.
Simply make such instances start the Akka app and join the Akka cluster, and
you're ready to spread out your load.



 There are no good open source simple to setup tools.

I do find Kamon pretty good for the time being.
We'll soon introduce a new Tracing SPI; then such tools will become even
better, with less overhead.


 I tried latest app dynamics, was lucky it can monitor/profile/trace akka's
 asynchronous calls. But tool is short of providing below information,

Feel free to ping them with this feedback :-)
The app dynamics team is actively working on it as far as we know :-)


 There are tools like Kamon/Datadog but they have dependency on other tools
 like StatsD,Graphite etc, which are complex to setup.

Well, all data collection tools will have such dependencies, as in the case
of App Dynamics / New Relic.
And monitoring distributed systems is itself a full-time job sometimes...
(all the infra work etc).


 Let me know about simple ways to monitor Akka-cluster-based apps, and also how
 to monitor them on Amazon.

If you want simple monitoring tools then go for SaaS offerings. They need
the least ops work from your side.

I personally have monitored apps with: Nagios / Zabbix + Ganglia / Graphite
/ StatsD and a recent dashboard discovery: Grafana (UI only),
and a team I knew was super happy with Kibana (that's mostly for logs,
AFAIK), as well as home-grown solutions...
And well, app monitoring is not an easy task by itself, IMO. We hope to
enable better tools soon via the upcoming Tracing SPI, but again – it will
be libraries and aggregation tools that you have to run somewhere.


-- konrad

-- 
Akka Team
Typesafe - The software stack for applications that scale
Blog: letitcrash.com
Twitter: @akkateam



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-07 Thread Patrik Nordwall


 On 7 Aug 2014, at 20:57, ahjohannessen ahjohannes...@gmail.com wrote:
 
 On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:
 
 I vote that you need to have a single sequence across all events in an event 
 store. This is going to cover probably 99% of all actor persistence needs 
 and it is going to make using akka-persistence way easier.
 
 If that were made optional, plus a tag facility, then those who see it hurting
 scalability would opt out, and others would opt in and pay the extra penalty.

Ok, I think it's a good idea to leave it to the journal plugins to implement
the full ordering as well as possible with the specific data store. We will
only require exact order of events per persistenceId.

Any other feedback on the requirements or proposed solution of the improved 
PersistentView?

/Patrik





Re: [akka-user] How to isolate Actor data model

2014-08-07 Thread Maatary Okouya
Hey Konrad, 

thanks for your input. I'll look into it. 

On Tuesday, August 5, 2014 6:22:30 PM UTC+2, Konrad Malawski wrote:

 Hi Maatary,
 This is more of an architectural question, not really limited to Actors.

 If I get that right, you're worried about evolution in system B forcing 
 system A to evolve as well, but you'd rather not change both systems, as 
 essentially A can stay as-is, right?
 So instead of directly storing data (Commands) from system A (this I 
 understand as A's datatypes), receive the command and emit (persist) 
 new datatypes which represent *your* representation of that data (so 
 Events).
 Basically, have something convert stuff from world A to world B, and then 
 work on it in world B.

 Not sure if this helps; try reading up on Domain-Driven Design, which may 
 help you find your way around designing such systems.

 PS: This has been cross-posted to Stack Overflow: 
 http://stackoverflow.com/questions/25139100/how-to-isolate-actor-data-model 
 Please try using one medium to ask your question at first; otherwise we, 
 the people of the internets, duplicate efforts and discussions. :-)


 On Tue, Aug 5, 2014 at 2:42 PM, Maatary Okouya maatar...@gmail.com wrote:

 I'm in the process of transforming the architecture of my system, which, at
 the level I am interested in, is not multi-threaded for now.

 Let's say today I have a component A that communicates with a component B
 via a command pattern, isolating them from each other. As we all know, only
 the command actually bridges them. The good thing is that only the command
 implementation has to change if one of the component implementations
 changes. In my situation component B will evolve over time.

 I was wondering how to replicate that with Akka actors. I believe that with
 actors, commands do not have the right to manipulate the content of an
 actor or whatever the actor encapsulates.

 So I would like to know how this pattern should be changed to meet actor
 requirements.

 In my particular case, my command contains data from A that, based on the
 nature of the command, triggers the creation of data in B following its data
 model.

 Just to give a clue: B is a component that wraps a semantic database. In A
 we have some Java objects. It is not their representation as-is that has to
 be entered in B. Think of it like you have a language (we represent with
 objects the sentences that you can form with it) and its semantics, which we
 represent with triples. A is a component that observes messages (in that
 language) and triggers the command that creates the appropriate semantics in
 B. B does some reasoning with these semantics and monitors them over time.

 (This is a gross simplification, but it is just to give a context)

 PS:

 I'm new to the actor field.





 -- 
 Cheers,
 Konrad 'ktoso' Malawski
 hAkker @ Typesafe

 http://typesafe.com
  



[akka-user] Re: Akka Remoting fails for large messages

2014-08-07 Thread Syed Ahmed
FYI - just tried with 2.3.4 and it doesn't change the behavior.



On Thursday, August 7, 2014 10:38:38 AM UTC-7, Syed Ahmed wrote:

 Hi Ryan,
 In my test it's a very large string, and I just get the string back from the 
 message (i.e. it gets deserialized); I'm not doing anything further. This 
 test is just to check how large a message can be sent across. I will 
 attempt again with 2.3.4 to see if it makes a difference.
 thx
 -syed

 On Wednesday, August 6, 2014 3:12:27 PM UTC-7, Ryan Tanner wrote:

 When those large messages are received, what's supposed to happen?  It's 
 possible that whatever processing is triggered by that message is so CPU 
 intensive that Akka's internal remoting can't get any time for its own 
 messages.

 On Wednesday, August 6, 2014 12:08:20 PM UTC-6, Syed Ahmed wrote:

 Hello,
 I'm new to Akka/Akka Remoting.

 We need to send large messages, on the order of 500-600MB. After viewing and 
 checking some mailing list posts, I found that I have to change the Akka 
 remote Netty TCP settings, which I did; my application.conf has the following:

  remote {
    transport = akka.remote.netty.NettyRemoteTransport
    netty.tcp {
      hostname = ip address of host
      port = 2553
      maximum-frame-size = 524288000
      send-buffer-size = 5
      receive-buffer-size = 5
    }
  }

 I created a client and remote server app using akka-remoting. Using 
 protobuf message serialization, the client sends a large message to the 
 server, and the server acknowledges with a timestamp for when it received 
 and deserialized it.

 With the above settings things work fine for messages up to 200MB. When I 
 try a slightly larger message (250MB), the server shows the following:

 ---
 [INFO] [08/05/2014 23:41:39.302] [RemoteNodeApp-akka.remote.default-remote-dispatcher-4] [akka.tcp://RemoteNodeApp@remote-ip:2552/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FRemoteNodeApp%40remote_ip%3A36922-1] No response from remote. Handshake timed out or transport failure detector triggered.
 ---

 and the client shows the following:

 [WARN] [08/05/2014 23:41:39.326] [LocalNodeApp-akka.remote.default-remote-dispatcher-6] [akka.tcp://LocalNodeApp@client_ip:2553/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FRemoteNodeApp%4010.194.188.97%3A2552-0] Association with remote system [akka.tcp://RemoteNodeApp@server_ip:2552] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].

 and the message is not sent at all. Any idea what else is missing?

 BTW, in the above I don't see any OOM errors or anything, and the 
 client/remote apps are still up and running.

 thx
 -Syed 
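For comparison, here is a hedged variant of that configuration with explicit byte units and buffers sized at least as large as the frame. The setting paths are real Akka 2.3 remoting settings, but the values are illustrative guesses, not a tested recommendation:

```hocon
akka.remote {
  netty.tcp {
    port = 2553
    # all three sized to fit the largest expected message;
    # HOCON accepts byte-unit suffixes such as "500 MiB"
    maximum-frame-size = 500 MiB
    send-buffer-size = 500 MiB
    receive-buffer-size = 500 MiB
  }
  # a frame this large can stall heartbeats long enough to trip the failure
  # detector ("Handshake timed out or transport failure detector triggered"),
  # so relaxing the pause may also be needed
  transport-failure-detector.acceptable-heartbeat-pause = 60 s
}
```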







Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-07 Thread Vaughn Vernon
I am sure you have already thought of this, Patrik, but if you leave full 
ordering to the store implementation, it could still have unnecessary 
limitations if the implementor chooses to support a sequence only per 
persistenceId. One very big limitation: if the store doesn't support a 
single sequence, you still can't play catch-up over the entire store if you 
depend on interleaved events across types. You can only re-play all 
events properly using a global sequence. Well, you could also do so 
using causal consistency, but (a) that's kind of difficult, and (b) it's not 
supported at this time.

Vaughn


On Thursday, August 7, 2014 1:29:33 PM UTC-6, Patrik Nordwall wrote:



 On 7 Aug 2014, at 20:57, ahjohannessen ahjoha...@gmail.com wrote:

 On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:

 I vote that you need to have a single sequence across all events in an 
 event store. This is going to cover probably 99% of all actor persistence 
 needs and it is going to make using akka-persistence way easier.


 If that were made optional, plus a tag facility, then those who see it 
 hurting scalability would opt out, and others would opt in and pay the extra 
 penalty.


 Ok, I think it's a good idea to leave it to the journal plugins to 
 implement the full ordering as well as possible with the specific data 
 store. We will only require exact order of events per persistenceId.

 Any other feedback on the requirements or proposed solution of the 
 improved PersistentView?

 /Patrik







[akka-user] Database actor design

2014-08-07 Thread Jabbar Azam
Hello,

I'm trying to design a database actor to handle all the database 
activity. The domain entities are GPS locations and users. I was thinking 
of having two database actors, one with CRUD operations for GPS locations 
and the second with CRUD operations for users. I'm not sure if this 
partition by domain objects is correct!

Or do I have one master DB actor that delegates to actors for each 
combination of CRUD operation and domain entity, with the result sent back 
to the requesting actor?



Re: [akka-user] What is akka message network overhead?

2014-08-07 Thread Sean Zhong


 Unfortunately there is no way to reduce this overhead without changing the 
 wire layer format, which we cannot do now. As you correctly see, 
 practically all the overhead comes from the path of the destination and 
 sender actor. In the future we have plans to implement a scheme which 
 allows the sender to abbreviate the most common paths used to a single 
 number, but this needs a new protocol.


Hi Endre and Viktor, 

Suppose I want to hack the wire format transport; which source file should I 
change? 

On Thursday, August 7, 2014 5:11:17 PM UTC+8, √ wrote:

 That would be completely outside of Akka.


 On Thu, Aug 7, 2014 at 11:01 AM, Sean Zhong cloc...@gmail.com wrote:

 compressed link/interface


 Is this configuration inside the Akka conf? I cannot find the documentation; 
 do you have a pointer to it?
  

 On Thursday, August 7, 2014 4:58:05 PM UTC+8, √ wrote:

 Hi Sean,


 On Thu, Aug 7, 2014 at 10:49 AM, Sean Zhong cloc...@gmail.com wrote:

 Hi Viktor,

 About wire-compression, do you mean this?

 akka {
   remote {
     compression-scheme = zlib       # Options: zlib (lzf to come); leave out for no compression
     zlib-compression-level = 6      # Options: 0-9 (1 fastest, 9 most compressed); default is 6
   }
 }



 No, that's from a legacy version of Akka. I mean using a compressed 
 link/interface.
  

 Will it do compression at the message level, or at a batch level (messages 
 sharing the same source machine and target machine)? 

  

 Hi Endre,

 This is the Akka wire level envelope, cannot be directly controlled by 
 users (unless someone writes a new transport of course).


 Which part of the source code can I look at to write a new transport?  






 On Thursday, August 7, 2014 4:22:16 PM UTC+8, √ wrote:

 You can do wire-level compression.


 On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga endre...@typesafe.com 
 wrote:




 On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang viktor...@gmail.com wrote:

 Or add compression.


 This is the Akka wire level envelope, cannot be directly controlled by 
 users (unless someone writes a new transport of course).
  
 -Endre
  

 On Aug 7, 2014 9:52 AM, Endre Varga endre...@typesafe.com wrote:

  Hi Sean,

 Unfortunately there is no way to reduce this overhead without changing the 
 wire layer format, which we cannot do now. As you correctly see, 
 practically all the overhead comes from the path of the destination and 
 sender actor. In the future we have plans to implement a scheme which 
 allows the sender to abbreviate the most common paths used to a single 
 number, but this needs a new protocol.

 So the answer currently is that you cannot reduce this overhead without 
 introducing some batching scheme yourself: instead of sending MyMessage you 
 can send Array[MyMessage], so the cost of the recipient path is only 
 suffered once for the batch, but not for the individual messages -- i.e. 
 you can amortize the overhead.

 -Endre
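Endre's amortization point can be made concrete with quick arithmetic, using the 221-byte path and 100-byte payload figures from this thread (the batch sizes are arbitrary examples):

```python
# Envelope overhead observed per remote message: the recipient ActorPath.
PATH_OVERHEAD = 221  # bytes, from the Wireshark capture in this thread
PAYLOAD = 100        # bytes of application payload per message

def overhead_fraction(batch_size: int) -> float:
    """Fraction of wire bytes spent on the path when `batch_size`
    messages are sent together under a single envelope."""
    total = PATH_OVERHEAD + batch_size * PAYLOAD
    return PATH_OVERHEAD / total

for n in (1, 10, 100):
    print(f"batch of {n:3d}: {overhead_fraction(n):.1%} of bytes are path overhead")
```

Sending messages one at a time, roughly two thirds of each message's wire cost is the path; batching 100 messages drops that to about 2%.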


 On Thu, Aug 7, 2014 at 8:11 AM, Sean Zhong cloc...@gmail.com wrote:

 Is it possible to reduce the average message overhead?

 200 bytes of extra cost per remote message doesn't look good...


 On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:

 Hi Michael,

 I used Wireshark to capture the traffic. I found that for each message sent 
 (the message is sent with the option noSender), there is an extra cost for 
 the ActorPath.

 For example, in the following message, the payload length is 100 bytes 
 (bold), but there is also a target actor path of 221 bytes (red), which is 
 much bigger than the message itself.
 Can the ActorPath overhead be reduced?

 akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.tcp/2120193a-e10b-474e-bccb-8ebc4b3a0247@192.168.1.53:48948/remote/akka.tcp/cluster@192.168.1.54:43676/user/master/Worker1/app_0_executor_0/group_1_task_0#-768886794
 *1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120*



 On Thursday, August 7, 2014 1:34:42 AM UTC+8, Michael Frank wrote:

  sean- how are you measuring your network throughput?  does 140MB/s 
 include TCP, IP, and Ethernet header data?  are you communicating across a 
 local network or across the internet?  the greater the distance your packets 
 have to travel (specifically the number of hops), the higher the chance that 
 they will get dropped and retransmitted, or fragmented.  a tool like 
 Wireshark, tcpdump, or Scapy could help you differentiate utilization at 
 different protocol layers and also identify network instability.

 -Michael

 On 08/06/14 08:46, Sean Zhong wrote:
  
  Konrad, thanks. 

  After enabling the debug flag, I saw that system messages like Terminate 
 are using the JavaSerializer; is this expected?

  [DEBUG] [08/06/2014 22:19:11.836] [0e6fb647-7893-4328-a335-5e26e2ab080c-akka.actor.default-dispatcher-4] [akka.serialization.Serialization(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] Using serializer[akka.serialization.JavaSerializer] for message