[akka-user] Re: include external conf
Cool, thanks a lot!

On Friday, April 3, 2015 at 5:28:13 AM UTC-7, lutzh wrote: Hi, this is covered in http://doc.akka.io/docs/akka/2.3.9/general/configuration.html#Including_files and in much greater detail in https://github.com/typesafehub/config/blob/master/HOCON.md#includes Hope this helps!

On Friday, April 3, 2015 at 4:00:53 AM UTC+2, Leon Ma wrote: Hi, in my application.conf I'd like to include another conf from a dependent jar. Shall I just do include "abc.conf" or include classpath("abc.conf")? Thanks, Leon

-- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout.
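To make the two styles concrete, here is a minimal sketch of what the linked docs describe (the file name abc.conf is taken from the question; the override at the bottom is illustrative):

```hocon
# application.conf

# Plain include: HOCON resolves it relative to the including file,
# falling back to a classpath resource of the same name.
include "abc.conf"

# Explicit classpath include: only looks on the classpath, which is
# the natural choice for a conf shipped inside a dependency jar.
include classpath("abc.conf")

# Settings written after an include override anything it pulled in.
akka.loglevel = "DEBUG"
```

Either form works for a conf inside a dependency jar, since the jar's resources are on the classpath; classpath() just makes the intent explicit.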
Re: [akka-user] Router Resizing
Hi,

On Tue, Mar 31, 2015 at 3:39 PM, Zen simrenkaur1...@gmail.com wrote: Hi, I am new to Akka and am exploring its router resizing feature. I have a doubt: once the router increases the number of routees, can't it also decrease the number? I read here http://stackoverflow.com/questions/16649555/dynamically-add-remove-routees-to-a-router-actor that once the size is increased it stays increased, i.e. you have to reduce the number of routees yourself by sending a PoisonPill. Please correct me if I am wrong.

Maybe this is what you are looking for: http://doc.akka.io/docs/akka/2.3.6/scala/routing.html#Dynamically_Resizable_Pool

-Endre

Further, and related to the previous question: once you have specified the initial number of instances, why do you specify a lower bound, which at times is lower than the initial number of instances? So does the number of routees decrease on its own when there is not sufficient work, or do you have to explicitly send a PoisonPill? Thanks!
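The resizable pool Endre points to can both grow and shrink the pool on its own, which answers the PoisonPill question; a hedged configuration sketch along the lines of the linked docs (the deployment path and the bound values are illustrative):

```hocon
akka.actor.deployment {
  /service/workers {
    router = round-robin-pool
    resizer {
      enabled = on
      lower-bound = 2          # the resizer may shrink the pool down to this size
      upper-bound = 15         # ... and grow it up to this size
      pressure-threshold = 1   # a routee counts as busy if its mailbox is non-empty
      messages-per-resize = 10 # how often (in messages) a resize may happen
    }
  }
}
```

With a resizer configured, the router stops surplus routees itself when it shrinks the pool; explicit PoisonPills are only needed when you manage routees manually.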
Re: [akka-user] Akka FSM stateTimeout seems broken.
If I understand correctly, the expectation is that *no* StateTimeout should be fired, due to the: stay() forMax(Duration.Inf)

`forMax` is defined as:

def forMax(timeout: Duration): State[S, D] = timeout match {
  case f: FiniteDuration ⇒ copy(timeout = Some(f))
  case _ ⇒ copy(timeout = None)
}

When calling `when(..., stateTimeout = 5.seconds)`, I observed that a `register` method gets called, which updates a mutable map with the timeout:

final def when(stateName: S, stateTimeout: FiniteDuration = null)(stateFunction: StateFunction): Unit =
  register(stateName, stateFunction, Option(stateTimeout))

Lastly, I saw that, per the recent PR's changes (noted above by Konrad), the following if-condition will evaluate to *true*, I believe, since `notifies` is set to *true* by default: https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L126 and https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L644-649

I'll keep looking at it tomorrow, but I wanted to post here to make sure I'm on the right path. Thanks, Kevin

On Monday, April 6, 2015 at 7:25:14 AM UTC-4, Konrad Malawski wrote: Thanks for reporting! -- Cheers, Konrad 'ktoso' Malawski Akka http://akka.io @ Typesafe http://typesafe.com

On 6 April 2015 at 12:53:30, folex (0xd...@gmail.com) wrote: https://github.com/akka/akka/issues/17140

On Friday, April 3, 2015 at 6:11:02 PM UTC+3, Akka Team wrote: Hi folex, thanks for reporting. Around that time this commit was merged into FSM: https://github.com/akka/akka/commit/2c88bb116903b42decb9d8063dc410325a9b9d29 It indeed changes the semantics of `stay()` slightly (see documentation): it changed when state transitions (the events) are triggered. I think there is a different inconsistency uncovered by your example, though... When using `goto(currentState) forMax Duration.Inf` it was ignored (it seems; I didn't debug yet, just observed), and the same happens with `stay() forMax Duration.Inf`.
However, `stay() forMax 10.minutes` did override the 5 seconds timeout... I think this is a bug and we should investigate in depth. Would you mind opening an issue on http://github.com/akka/akka/issues? If you'd like to follow up with a PR fixing it, that would be even more awesome! :-) Thanks a lot in advance! -- Konrad

On Fri, Apr 3, 2015 at 10:38 AM, folex 0xd...@gmail.com wrote:

import scala.concurrent.duration._
import akka.actor._

sealed trait State
case object Initial extends State
case object Waiting extends State

sealed trait Data
case object Empty extends Data

// Message objects (lost in the list rendering, restored here so the example compiles)
case object Wait
case object Message

class FSMActor extends LoggingFSM[State, Data] {
  startWith(Initial, Empty)
  when(Initial) {
    case Event(Wait, _) => goto(Waiting)
  }
  when(Waiting, stateTimeout = 5.seconds) {
    case Event(Message, _) =>
      println(self.path.name + " got message")
      stay() forMax (Duration.Inf)
    case Event(StateTimeout, _) =>
      println(self.path.name + " got StateTimeout :(")
      stay()
  }
}

val system = ActorSystem("system")
val fsm = system.actorOf(Props(new FSMActor), "fsm")
fsm ! Wait
fsm ! Message

(code on lpaste: http://lpaste.net/130078)

But the actor keeps receiving StateTimeout messages. I'm using 2.4-SNAPSHOT, and before the Apr 02 update, StateTimeout wasn't fired even without `forMax` in the stay() clause. Maybe the problem lies in my understanding of the stateTimeout argument? Should I use setStateTimeout or something? Thanks in advance.
-- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam
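The `forMax` pattern match quoted in this thread can be exercised on its own to confirm the expected semantics: only finite durations survive as state timeouts, while `Duration.Inf` clears the timeout. This is a standalone restatement of the quoted snippet, not Akka's actual code path:

```scala
import scala.concurrent.duration._

// Mirrors the forMax match quoted above: a finite duration becomes a
// state timeout, anything else (e.g. Duration.Inf) clears it.
def effectiveTimeout(timeout: Duration): Option[FiniteDuration] = timeout match {
  case f: FiniteDuration => Some(f)
  case _                 => None
}

println(effectiveTimeout(10.minutes))   // Some(10 minutes)
println(effectiveTimeout(Duration.Inf)) // None
```

So under these semantics `stay() forMax Duration.Inf` should end up with `timeout = None`, which is why the reporter expected no StateTimeout at all; the bug is that the cleared timeout was not honored.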
Re: [akka-user] 2.2.3 Noisy shutdown w/exception
Hi Robert,

On Mon, Mar 30, 2015 at 7:13 PM, Robert Metzger metrob...@gmail.com wrote: Thanks for the quick reply. This is the setting:

watch-failure-detector {
  heartbeat-interval = 10 s
  acceptable-heartbeat-pause = 100 s

The above setting means that you accept 100 s of silence from a remote host, therefore no DeathWatch events will be generated earlier than 100 seconds. This explains why you get late notifications. The pre-2.3.9 scenario worked accidentally; now it really works and respects this setting properly. -Endre

  threshold = 12
}

On Mon, Mar 30, 2015 at 5:03 PM, Endre Varga endre.va...@typesafe.com wrote: Hi Robert, what is your watch failure detector setting? Detection speed depends on it. There was a bug in earlier remoting that published internal AddressTerminated messages when it was not supposed to (remoting does not consider unreachable machines as dead; that decision is taken by remote DeathWatch or clustering). -Endre

On Mon, Mar 30, 2015 at 4:51 PM, Robert Metzger metrob...@gmail.com wrote: A quick follow-up question: I've upgraded Akka from 2.3.7 to 2.3.9 and noticed that failed remote machines are detected much later in 2.3.9 than in 2.3.7. Akka detected failed machines in less than 5 seconds with 2.3.7; with 2.3.9 it took much more time, in the example below almost 2 minutes. I haven't investigated the issue closer, so maybe this is also caused by our system. Did anything with respect to failure detection change between the two releases? When using Flink on YARN, there are actually two systems monitoring the JVMs: Akka and YARN. From the timestamps in the log one can easily see that the time until a failed JVM is detected is much longer with 2.3.9.

With Akka 2.3.9:

*15:07:56,922* WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
*yarn detects failure -- 15:07:58,130* INFO org.apache.flink.yarn.ApplicationMaster$$anonfun$2$$anon$1 - Container container_1426807853451_0035_01_02 is completed with diagnostics: Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143. Killed by external signal.

*15:08:04,163* WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
*15:08:14,143* WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:08:24,149 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:08:34,138 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:08:44,154 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:08:54,158 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:09:04,146 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:09:14,150 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
*15:09:44,165* WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@130.149.21.2:39280] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://flink@130.149.21.2:39280]].
15:09:54,144 WARN akka.remote.RemoteWatcher - Detected unreachable: [akka.tcp://flink@130.149.21.2:39280]
*akka sends Terminated -- 15:09:54,154* INFO org.apache.flink.runtime.instance.InstanceManager - Unregistered task
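Endre's explanation matches the log: with acceptable-heartbeat-pause = 100 s, DeathWatch cannot conclude anything before roughly 100 seconds of silence (failure at ~15:07:58, Detected unreachable at 15:09:54). If faster detection is the goal, the detector can be tightened; a hedged sketch (values illustrative, and aggressive settings risk false positives on GC pauses or transient network hiccups):

```hocon
akka.remote.watch-failure-detector {
  heartbeat-interval = 1 s            # probe more often
  acceptable-heartbeat-pause = 10 s   # tolerate at most ~10 s of silence
  threshold = 10                      # phi threshold; lower = faster but jumpier
}
```

Detection latency is then bounded roughly by the acceptable pause plus a few heartbeat intervals, rather than the ~2 minutes observed above.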
Re: [akka-user] Need help getting custom flow graph to work
Hi Jason, what exactly is it that does not work? This is a bit too much code to digest; can you reduce it to a reproducer of your actual problem? -Endre

On Mon, Mar 30, 2015 at 6:34 PM, Jason Martens m...@jasonmartens.com wrote: Hello Akka users, I've been trying out akka-streams and I have a sample project partially working, but I can't get this particular flow graph to work correctly. I'm writing a wrapper around a slow backend service that will cache objects in another cache backend. When a request comes in, I want the flow to attempt to pull from the cache backend, and if that fails, pull from the slow backend and also load the object into the cache. One requirement is that the slow backend can only return entire objects, while requests are normally made for only a part of the object. To implement this, I have been trying to build a graph that looks something like this:

slowBackendSource[FullObject] ~> FlexiRoute[FullObject] ~> LoadIntoCacheSink
                                 FlexiRoute[PartialObject] ~> FlexiMerge[PartialObject]
CacheBackendSource[PartialObject] ~> FlexiMerge[PartialObject] ~> Output[PartialObject]

The idea is that I could attach both Sources and the Output sink at runtime before the flow is materialized, since I need to parameterize the Source for the correct PartialObject. Is this the best way of doing this? Should I use a custom ActorPublisher instead that has a side effect of loading objects into the cache? The current code I'm trying to get working is below, with some type errors I can't figure out how to resolve marked in the comments. Thanks in advance for your help. I'm very excited about 1.0, whenever it's ready!
Jason

import akka.stream.FanInShape
import akka.stream.FanOutShape
import akka.stream.scaladsl._
import akka.stream._
import scala.collection.immutable

class ObjectSplitShape[O, P](_init: FanOutShape.Init[(O, P)] = FanOutShape.Name[(O, P)]("ObjectSplit"))
  extends FanOutShape[(O, P)](_init) {
  val outObject = newOutlet[O]("outObject")
  val outPart = newOutlet[P]("outPart")
  protected override def construct(i: FanOutShape.Init[(O, P)]) = new ObjectSplitShape(i)
}

class ObjectSplit[O, P] extends FlexiRoute[(O, P), ObjectSplitShape[O, P]](
  new ObjectSplitShape, OperationAttributes.name("ObjectSplit")) {
  import FlexiRoute._
  override def createRouteLogic(p: PortT) = new RouteLogic[(O, P)] {
    override def initialState = State[Any](DemandFrom(p.outPart)) { (ctx, _, element) =>
      val (obj, objectPart) = element
      ctx.emit(p.outObject)(obj)
      ctx.emit(p.outPart)(objectPart)
      SameState
    }
  }
}

class ObjectPartMergeShape[P](_init: FanInShape.Init[P] = FanInShape.Name("ObjectPartMerge"))
  extends FanInShape[P](_init) {
  val slowSource = newInlet[P]("slowSource")
  val cacheSource = newInlet[P]("cacheSource")
  protected override def construct(i: FanInShape.Init[P]) = new ObjectPartMergeShape(i)
}

class ObjectPartMerge[P] extends FlexiMerge[P, ObjectPartMergeShape[P]](
  new ObjectPartMergeShape, OperationAttributes.name("ReadCacheFirst")) {
  import akka.stream.scaladsl.FlexiMerge._
  override def createMergeLogic(p: PortT) = new MergeLogic[P] {
    override def initialState = State[P](ReadPreferred(p.cacheSource, p.slowSource)) { (ctx, input, element) =>
      ctx.emit(element)
      SameState
    }
    override def initialCompletionHandling = eagerClose
  }
}

case class ObjectShape[O, P](slowIn: Inlet[(O, P)], cacheIn: Inlet[P], responseOut: Outlet[P]) extends Shape {
  override val inlets: immutable.Seq[Inlet[_]] = slowIn :: cacheIn :: Nil
  override val outlets: immutable.Seq[Outlet[_]] = responseOut :: Nil
  override def deepCopy() = ObjectShape(
    new Inlet[(O, P)](slowIn.toString),
    new Inlet[P](cacheIn.toString),
    new Outlet[P](responseOut.toString))
  override def copyFromPorts(inlets: immutable.Seq[Inlet[_]], outlets: immutable.Seq[Outlet[_]]) = {
    assert(inlets.size == this.inlets.size)
    assert(outlets.size == this.outlets.size)
    // Type mismatch, expected: Inlet[(NotInferedO, NotInferedP)], actual: Inlet[_] for inlets(0)
    ObjectShape(inlets(0), inlets(1), outlets(0))
  }
}

object ObjectGraph {
  def apply[O, P](slowSource: Inlet[(O, P)], cacheSource: Inlet[P], cacheBackend: CacheBackend): Graph[ObjectShape[O, P], Nothing] = {
    FlowGraph.partial() { implicit b =>
      import FlowGraph.Implicits._
      val objectSplitRoute = b.add(new ObjectSplit())
      val objectPartMerge = b.add(new ObjectPartMerge())
      val cacheSink = b.add(Sink.foreach[Object](fullObject => cacheBackend.loadIntoCache(fullObject)))
      slowSource ~> objectSplitRoute.in // Compile error "Cannot resolve symbol ~>". Should this be a Source type instead of an Inlet?
      objectSplitRoute.outPart ~> objectPartMerge.slowSource
Re: [akka-user] Re: how to tune akka remoting performance
Hi Ivan, you can try to set the tcp-nodelay option to false, so you get some batching from the TCP driver. You might want to try to tweak the Netty pool sizes, too:
- akka.remote.netty.tcp.server-socket-worker-pool
- akka.remote.netty.tcp.client-socket-worker-pool
(http://doc.akka.io/docs/akka/2.3.9/general/configuration.html#akka-remote)

On Thu, Apr 2, 2015 at 6:30 PM, Ivan Balashov ibalas...@gmail.com wrote: Patrik, 2015-04-01 20:12 GMT+03:00 Patrik Nordwall patrik.nordw...@gmail.com: Are you sending all messages in one go? Unfortunately, yes, which is rather unrealistic and creates needless GC activity; I fully admit my oversight on this. You can compare with this benchmark: https://github.com/akka/akka/tree/release-2.3/akka-samples/akka-sample-remote-scala/src/main/scala/sample/remote/benchmark Finally I managed to run this example in IntelliJ IDEA. The key to success was to take the master branch rather than 2.x, which is broken IDEA-wise (http://goo.gl/uUJdGD). Here are some results as run on a 4-core laptop under OS X:

== It took 38830 ms to deliver 1000000 messages, throughput 25753 msg/s, max round-trip 62 ms, burst size 100, payload size 100
== It took 35742 ms to deliver 1000000 messages, throughput 27978 msg/s, max round-trip 1000 ms, burst size 1000, payload size 100
== It took 34356 ms to deliver 1000000 messages, throughput 29106 msg/s, max round-trip 804 ms, burst size 1, payload size 100
== It took 21636 ms to deliver 1000000 messages, throughput 46219 msg/s, max round-trip 2033 ms, burst size 5, payload size 100

The results are somewhat comparable to what was observed in my tests, with high CPU usage caused partly by context switching and partly by Akka Remoting.
Running Receiver took 1m 20s CPU time:

Method, Time (ms), Own Time (ms)
*sun.misc.Unsafe.unpark(Object) Unsafe.java (native), 9716, 9716*
sun.nio.ch.KQueueArrayWrapper.kevent0(int, long, int, long) KQueueArrayWrapper.java (native), 3841, 3838
java.lang.ClassLoader.defineClass1(String, byte[], int, int, ProtectionDomain, String) ClassLoader.java (native), 2673, 2050
java.net.URI$Parser.scan(int, int, long, long) URI.java, 1420, 1420
com.yourkit.probes.Table.createRow() Table.java, 1123, 1123
java.lang.Integer.parseInt(String) Integer.java, 1091, 1085
java.net.URLClassLoader$1.run() URLClassLoader.java, 4844, 906
scala.collection.AbstractIterable.init() Iterable.scala, 508, 469
java.lang.Thread.sleep(long) Thread.java (native), 416, 416
akka.remote.WireFormats$ActorRefData.init(CodedInputStream, ExtensionRegistryLite) WireFormats.java, 419, 387
java.lang.AbstractStringBuilder.expandCapacity(int) AbstractStringBuilder.java, 381, 371

If we divide 1M messages / 80 s CPU time = 12K rps per core max, which corresponds to what is observed in the test. In case my calculations are correct, doesn't this look a bit unsettling, for Akka to be a CPU-bound system even when the only thing we do is pass around chunks of bytes? ;-)

To be honest, akka-remote is not the fastest kid on the block (it is ripe for a redesign), but Patrik and I have seen much better results than you have here. The unpark method that you highlight up there is not really relevant, though. -Endre
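Endre's two suggestions, collected into one hedged configuration sketch (the pool sizes are illustrative; see the configuration reference linked above for the full key list and defaults):

```hocon
akka.remote.netty.tcp {
  # Disabling nodelay re-enables Nagle batching in the TCP stack,
  # trading per-message latency for throughput.
  tcp-nodelay = off

  # Netty worker pools for inbound and outbound connections.
  server-socket-worker-pool {
    pool-size-min = 2
    pool-size-max = 8
  }
  client-socket-worker-pool {
    pool-size-min = 2
    pool-size-max = 8
  }
}
```

Smaller worker pools can reduce the context switching Ivan's profile shows, at the cost of less I/O parallelism; the right sizes depend on core count and connection count, so they are worth benchmarking rather than copying.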
Re: [akka-user] Problem with unique name when reusing actor
Hi,

On Wed, Apr 1, 2015 at 8:12 AM, Krishna Kadam shrikrishna.kad...@gmail.com wrote: Hi all, I ran into the same problem while reusing Akka actors by their names. You mentioned accessing the Future object returned by using the getSender().tell() method, and I understood the cause of the problem, but you have not mentioned the solution.

The solution is that you should not close over mutable state in Future callbacks. The sender() call accesses a mutable field. The solution is simply to not do that.

1. What configuration should be used to avoid this problem?

This is not related to any configuration; this is a user bug.

2. Is there any other way to avoid this problem?

Don't close over mutable state in asynchronous callbacks. -Endre

Thanks and regards, Krishna Kadam
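The pitfall Endre describes is not specific to sender(): any mutable field read inside a Future callback sees the value at execution time, not at registration time. A minimal stdlib-only sketch, with no Akka involved (the variable names and the Promise used to sequence events deterministically are illustrative):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def demo(): (String, String) = {
  // 'current' stands in for the actor's mutable sender() field,
  // overwritten every time a new message is processed.
  var current = "alice"
  val done = Promise[Unit]()

  val buggy = done.future.map(_ => current) // BUG: closes over the mutable field
  val captured = current                    // FIX: copy into an immutable local first
  val fixed = done.future.map(_ => captured)

  current = "bob" // the next message arrives and overwrites the field
  done.success(()) // only now do the callbacks run

  (Await.result(buggy, 5.seconds), Await.result(fixed, 5.seconds))
}

println(demo()) // (bob,alice): the buggy callback saw the wrong sender
```

Inside an actor the same fix reads `val replyTo = sender()` before registering the callback, then `replyTo ! result` inside it.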
Re: [akka-user] Re: How to trace Disassociacion cause? [2.3.9]
Hi Marek, what is your configuration? Also, is there a small reproducer that you can share with us? We cannot fully debug your application (we do that as a paid service), but if you have a small piece of code that reliably reproduces the issue, you can file a bug and we will investigate it in detail. -Endre

On Fri, Apr 3, 2015 at 2:36 PM, Marek Żebrowski marek.zebrow...@gmail.com wrote: I found that the other side is breaking the connection:

2015-04-03 12:07:53,806 INFO akka.actor.LocalActorRef akka://sgActors/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsgActors%4010.90.23.151%3A39036-1 - Message [akka.remote.transport.AssociationHandle$Disassociated] from Actor[akka://sgActors/deadLetters] to Actor[akka://sgActors/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsgActors%4010.90.23.151%3A39036-1#1013015423] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2015-04-03 12:07:53,809 DEBUG akka.remote.EndpointWriter akka.tcp://sgActors@10.79.1.14:2555/system/endpointManager/endpointWriter-akka.tcp%3A%2F%2FsgActors%40app1.sgrouples.com%3A2552-2 - Disassociated [akka.tcp://sgActors@10.79.1.14:2555] - [akka.tcp://sgact...@app1.sgrouples.com:2552]
2015-04-03 12:07:53,812 DEBUG akka.remote.EndpointWriter akka.tcp://sgActors@10.79.1.14:2555/system/endpointManager/endpointWriter-akka.tcp%3A%2F%2FsgActors%40app1.sgrouples.com%3A2552-2 - Disassociated [akka.tcp://sgActors@10.79.1.14:2555] - [akka.tcp://sgact...@app1.sgrouples.com:2552]

But still no idea why; there is no error message or exception.
Re: [akka-user] Most effective way to integrate streams with traditional actors?
Hi Rich,

On Mon, Apr 6, 2015 at 10:15 PM, Rich Henry rhenr...@gmail.com wrote: Hi, I would like to integrate an ActorPublisher with other actors created elsewhere, but I can't find a way to fully specify the path of the materialized ActorRef at configuration time, as the top-level path component always seems to be generated at runtime (i.e. $a).

That is not possible to specify. But you can always use that actor as a proxy to your real actor. Also, if you don't care about backpressure you can just use Sink.foreach(ref ! _), or (depending on your use case) src.mapAsync(ref ? _).to(Sink.ignore) if a request-ack style of communication is enough for you. -Endre

Is there a way to configure my Flow so that I can predict the full path for use in an actorSelection() elsewhere? The only solutions I can see right now are to wrap the Flow in another explicitly created actor, which would just be boilerplate, or to find a way to pass the materialized ActorRef out into the wild via my own registration function. Any ideas? (And yes, I did post a similarly composed question on StackOverflow here: http://stackoverflow.com/questions/29476654/is-there-a-way-to-get-predictable-actor-naming-with-akka-stream and will update that question with any solutions you may have.) Thanks in advance, Rich Henry
Re: [akka-user] Most effective way to integrate streams with traditional actors?
On 6 Apr 2015, at 22:15, Rich Henry rhenr...@gmail.com wrote: Hi, I would like to integrate an ActorPublisher with other actors created elsewhere, but I can't find a way to fully specify the path of the materialized ActorRef at configuration time, as the top-level path component always seems to be generated at runtime (i.e. $a). Is there a way to configure my Flow in a way where I can predict the full path for use in an .actorSelection() elsewhere?

This sounds like a dangerous motivation to me: introducing actors to each other by passing ActorRefs around is much preferred and works almost always, the only exception being establishing first contact between different systems. In this case it would be better to pass the ActorRef of that other non-stream actor into the stream and have it introduce itself to the outside. Regards, Roland

The only solutions I can see right now are to wrap the Flow in another explicitly created actor, which would just be boilerplate, or to find a way to pass the materialized ActorRef out into the wild via my own registration function. Any ideas? (And yes, I did post a similarly composed question on StackOverflow here: http://stackoverflow.com/questions/29476654/is-there-a-way-to-get-predictable-actor-naming-with-akka-stream and will update that question with any solutions you may have.) Thanks in advance, Rich Henry

Dr. Roland Kuhn Akka Tech Lead Typesafe http://typesafe.com/ - Reactive apps on the JVM. twitter: @rolandkuhn
Re: [akka-user] StreamTcp exceptions handling
Hi, The connection flow will publish the errors, but you feed them into a BlackholeSink. Also, your OnCompleteSink is probably in the wrong place -- it will not tell you anything about whether the TCP connection has sent everything or not. As such, you will close the binding before you have sent everything through TCP. -Endre On Mon, Apr 6, 2015 at 3:23 PM, zergood zergoodso...@gmail.com wrote: Hello! My server side:

def update: Iterator[ByteString]

lazy val binding = StreamTcp(system).bind(address, idleTimeout = 20.seconds)

def start(): Future[Unit] = {
  val firstCompleted = Promise[Unit]()
  val foreachConnection = ForeachSink[IncomingConnection] { connection =>
    val handleConnection = FlowGraph { implicit b =>
      import akka.stream.scaladsl.FlowGraphImplicits._
      val updateSource = Source[ByteString](() => update)
      val broadcast = Broadcast[ByteString]
      updateSource ~> broadcast
      broadcast ~> connection.flow ~> BlackholeSink
      broadcast ~> OnCompleteSink[ByteString] { res => firstCompleted.complete(res) }
    }
    handleConnection.run()
  }
  val connectionsMap = binding.connections.to(foreachConnection).run()
  firstCompleted.future
    .flatMap { _ => binding.unbind(connectionsMap) }
    .recoverWith { case ex => binding.unbind(connectionsMap).flatMap(_ => Future.failed(ex)) }
}

If I get an exception on the client side, this future will be completed with Success anyway. Is there a way to handle TCP exceptions (client disconnect, for example)?
-- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam
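Endre's point about the OnCompleteSink hanging off the broadcast can be illustrated without Akka. This is an editor's sketch, not code from the thread: the completion callback observes the source running out of elements and says nothing about whether the TCP-like branch actually delivered anything.

```scala
import scala.util.{Success, Try}

// Model of a broadcast feeding two consumers: a "tcp" branch that may fail
// to deliver, and a completion callback that fires as soon as the source's
// elements run out -- regardless of what happened on the other branch.
def runBroadcast[A](
    elems: List[A],
    tcpSend: A => Boolean,          // simulated TCP branch; may drop elements
    onComplete: Try[Unit] => Unit): Int = {
  var delivered = 0
  elems.foreach { a => if (tcpSend(a)) delivered += 1 }
  onComplete(Success(()))           // completes even if no element was delivered
  delivered
}
```

This mirrors why the server's future completes with Success even when the client disconnects: the sink completes on source exhaustion, not on TCP delivery.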
Re: [akka-user] Per-request vs Work Pulling vs Dispatcher tuning
Hi Richard, Are those DB calls async? If so, are they backpressured properly? If not (i.e. they are blocking), are they running on a dedicated dispatcher, one that is not shared with the Futures (i.e. not simply import context.dispatcher on actors that do blocking and Futures, but explicitly give a different dispatcher to the Futures)? -Endre On Fri, Apr 3, 2015 at 9:36 PM, Richard Rodseth rrods...@gmail.com wrote: At startup our application reads a bunch of rows d of type D from a table, and for each sets in motion a bunch of Spray client calls and other database calls, then spins up a bunch of actors of type C which generate more HTTP and DB calls. A fair amount of Future composition and pipeTo happens in the course of all this. For larger numbers of Ds (and hence Cs), we are seeing timeouts in the futures (there's not an ask in sight). We haven't done any dispatcher tuning other than giving the DbActor its own dispatcher because of the blocking. We find ourselves wondering if the foreach d { self ! ProcessD(d) } should put the Ds on a worker-pull dispatcher, or instead create a per-request ProcessD actor. We've used per-request actors on the API side of the app to eliminate asks, and have used the work pulling pattern elsewhere as well. Thoughts?
-- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam
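For reference, the setup Endre describes (a dedicated dispatcher for blocking work, assigned explicitly rather than via import context.dispatcher) is usually configured along these lines. The dispatcher name and pool size below are illustrative, not taken from the thread:

```hocon
# Dedicated dispatcher for blocking DB calls; keeping it separate from the
# default dispatcher means blocked threads cannot starve the Futures.
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  # Return the thread after each message so one busy actor cannot hog it.
  throughput = 1
}
```

It would then be looked up with system.dispatchers.lookup("blocking-io-dispatcher") and passed explicitly as the ExecutionContext for the blocking Futures.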
Re: [akka-user] Multiple proxies for different sharding regions
Hi, On Sun, Apr 5, 2015 at 6:05 AM, Dragisa Krsmanovic dragis...@gmail.com wrote: Hi all, We are trying to implement a microservices architecture based on Akka clustering. Services can be Akka workers or front-end apps that call the workers. Some of the services will be using Akka Persistence for event sourcing. For those, we need to use sharding to ensure that persistent actors don't appear on more than one node. Clients (our front-end apps) can use shard regions in proxy-only mode to route messages to the right worker nodes. The problem comes up when a client needs to talk to more than one of these sharded services. Each of these services can form its own shard region by configuring different roles for Akka sharding. In that case, each service will have its own shard coordinator. But then I don't see a way to create multiple shard proxies on the client to talk to different shards. How can I tell my shard proxy to use different shard coordinators? AFAIK this is the purpose of the typeName parameter on the start() method. For different services you can use a different typeName String, and you can just start multiple proxies. Of course the matching services will need a matching typeName parameter. -Endre The workaround has been to use a round-robin router instead of a shard proxy, which isn't ideal because it causes messages to make additional hops until they reach the destination. Thanks, Dragisa
-- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam
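To make Endre's answer concrete, here is a small plain-Scala model (an editor's sketch, not ClusterSharding code) of the keying idea: proxies are identified by typeName, so a client talking to several sharded services simply starts one proxy per distinct typeName. The service names below are made up for the example.

```scala
// One proxy per typeName, mirroring the contract of
// ClusterSharding.start(typeName, ...): starting twice with the same
// typeName yields the same proxy rather than a duplicate.
final case class ShardProxy(typeName: String, role: String)

class ProxyRegistry {
  private var proxies = Map.empty[String, ShardProxy]

  def startProxy(typeName: String, role: String): ShardProxy =
    proxies.getOrElse(typeName, {
      val p = ShardProxy(typeName, role)
      proxies += typeName -> p
      p
    })
}
```

In the real API the same shape holds: each service and its client-side proxy agree on a typeName, and distinct typeNames give you independent coordinators.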
Re: [akka-user] Refer to the same cluster-wide router from several actors in the cluster on the same node
actorOf creates a new actor, and in this case you're doing it at the system level from within another actor:

val httpWorkers = context.system.actorOf(FromConfig.props(Props.empty), "cluster_router_httpworker")

That's the problem. On Sun, Apr 5, 2015 at 6:54 PM, Eugene Dzhurinsky jdeve...@gmail.com wrote: Hello! I have a cluster configuration like this:

/cluster_router_httpworker {
  router = consistent-hashing-group
  nr-of-instances = 10
  routees.paths = ["/user/router_httpworker"]
  cluster {
    enabled = on
    max-nr-of-instances-per-node = 5
    allow-local-routees = off
    use-role = http
  }
},
/cluster_router_chunkworker {
  router = consistent-hashing-group
  nr-of-instances = 5
  routees.paths = ["/user/router_chunkworker"]
  cluster {
    enabled = on
    max-nr-of-instances-per-node = 1
    allow-local-routees = off
    use-role = chunk
  }
}

Now I start a *TaskChunkActor* as below:

val sys = ActorSystem("HttpCluster", config)
val ref = sys.actorOf(Props[TaskChunkActor].withRouter(RoundRobinPool(10)), "router_chunkworker")
val clusterRef = sys.actorOf(FromConfig.props(Props.empty), "cluster_router_chunkworker")

The *TaskChunkActor* initializes its internal references to the *HttpWorker*:

val httpWorkers = context.system.actorOf(FromConfig.props(Props.empty), "cluster_router_httpworker")

At this point I'm getting the exception:

akka.actor.InvalidActorNameException: actor name [cluster_router_httpworker] is not unique!
  at akka.actor.dungeon.ChildrenContainer$NormalChildrenContainer.reserve(ChildrenContainer.scala:130)
  at akka.actor.dungeon.Children$class.reserveChild(Children.scala:76)
  at akka.actor.ActorCell.reserveChild(ActorCell.scala:369)
  at akka.actor.dungeon.Children$class.makeChild(Children.scala:201)
  at akka.actor.dungeon.Children$class.attachChild(Children.scala:41)
  at akka.actor.ActorCell.attachChild(ActorCell.scala:369)
  at akka.actor.ActorSystemImpl.actorOf(ActorSystem.scala:553)

It seems that every *TaskChunkActor* tries to create its own reference to the cluster-related actor, and fails.
I see that I could pass the router to every instance of *TaskChunkActor* via a constructor, but perhaps there's a way to either create an instance of the actor from config or return the existing one? Thanks! -- Cheers, √
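The failure mode is easy to model without Akka (an editor's sketch): top-level actor names are reserved once per system, so the second TaskChunkActor that calls context.system.actorOf with the same name hits the existing reservation. The fix Viktor points at is to create the router ref once and hand it to the workers, for example via Props constructor arguments.

```scala
// Minimal model of the name reservation that actorOf performs: the first
// reservation succeeds, any later one with the same name throws -- which is
// exactly what each TaskChunkActor instance after the first runs into.
class InvalidActorNameException(msg: String) extends RuntimeException(msg)

class NameRegistry {
  private var reserved = Set.empty[String]

  def reserve(name: String): String = {
    if (reserved(name))
      throw new InvalidActorNameException(s"actor name [$name] is not unique!")
    reserved += name
    name
  }
}
```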
Re: [akka-user] StreamTcp exceptions handling
Thank you for your answer. But if I use OnCompleteSink after the connection flow, the future will not be completed successfully even though the data is downloaded by the client. If the client fails while downloading, there are still no exceptions on the server side. Could you provide me with some links from the documentation about exception handling in StreamTcp? Do I need OnCompleteSink in my case at all?

updateSource ~> connection.flow ~> OnCompleteSink[ByteString]
Re: [akka-user] StreamTcp exceptions handling
On Tue, Apr 7, 2015 at 1:47 PM, zergood zergoodso...@gmail.com wrote: Thank you for your answer. But if I use OnCompleteSink after the connection flow, the future will not be completed successfully even though the data is downloaded by the client. The only reason for that could be that the client does not close the connection after it has downloaded the data you have sent. However, connection close might have nothing to do with whether the data has been successfully received or not. If the client fails while downloading, there are still no exceptions on the server side. Could you provide me with some links from the documentation about exception handling in StreamTcp? There is nothing special about StreamTcp; it works like any other Flow. The onError signal only travels downwards. Do I need OnCompleteSink in my case at all?

updateSource ~> connection.flow ~> OnCompleteSink[ByteString]

This OnCompleteSink will provide a future that will finish with success if the connection has been closed normally, and fail otherwise. It can say nothing about whether the user successfully downloaded anything, unless the client can be assumed to close the connection only once everything has been processed. What does your client look like? -Endre
Re: [akka-user] Re: Best practices for selecting or creating actor
val ref = context.child(name).getOrElse(context.actorOf(props, name))

On Sun, Apr 5, 2015 at 11:53 PM, Adam adamho...@gmail.com wrote: First of all, you shouldn't ever block like this, as you do with Await. As for your question - this sounds like something the parent actor should be responsible for. I'm not even sure the code above works (it at least never occurred to me to try to create an actor using a full path, as I always understood the docs to say it shouldn't be possible). I think allowing the parent actor to determine if one of its children exists or not is cleaner. It also has direct access to this through the actor context, so the answer is immediate. On Sunday, April 5, 2015 at 10:50:28 PM UTC+3, Fatih Dönmez wrote: Hi, I want to create an actor if it hasn't been created already. To check its existence I use actorSelection. If there is no actor defined by the path, I create a new one with actorOf. Is this approach correct? How can I improve the design? Here is the code I've come up with currently:

val fullPath: String = path.format(appId)
var starter: ActorRef = null
implicit val timeout = Timeout(5 seconds)
try {
  starter = Await.result(
    context.actorSelection(fullPath).resolveOne,
    FiniteDuration(5, TimeUnit.SECONDS))
} catch {
  case e: ActorNotFound =>
    starter = context.actorOf(Props(new StarterNode(customer, appId)), name = fullPath)
    logger.info("Actor [" + fullPath + "] creating for first time")
  case e: Exception =>
    logger.error("Actor [" + fullPath + "] selection failed", e)
}
// TODO
if (starter != null) {
  starter ! event
} else {
  logger.warn("Starter node failed with timeout")
}
-- Cheers, √
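Viktor's one-liner works because the parent's child registry is an idempotent lookup point. Here is a plain-Scala model of that get-or-create shape (an editor's sketch; the real code is context.child(name).getOrElse(context.actorOf(props, name)), and it must run inside the parent actor, since the actor context is not thread-safe):

```scala
// Stand-in for an ActorRef; only identity matters for the pattern.
final class Child(val name: String)

class Parent {
  private var children = Map.empty[String, Child]

  def child(name: String): Option[Child] = children.get(name)

  def actorOf(name: String): Child = {
    val c = new Child(name)
    children += name -> c
    c
  }

  // The one-liner: return the existing child, or create it exactly once.
  // getOrElse's default is by-name, so actorOf only runs when absent.
  def getOrCreate(name: String): Child =
    child(name).getOrElse(actorOf(name))
}
```

Compared with the actorSelection-plus-Await approach in the question, this is synchronous from the parent's point of view and never blocks a thread.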
Re: [akka-user] Most effective way to integrate streams with traditional actors?
Yes, this was my backup plan -- to create another actor explicitly to act as a gatekeeper to the stream. Just wanted to make sure I wasn't taking the long way around the fence. Thank you. On Tuesday, April 7, 2015 at 7:19:34 AM UTC-4, Akka Team wrote: Hi Rich, On Mon, Apr 6, 2015 at 10:15 PM, Rich Henry rhen...@gmail.com wrote: Hi, I would like to integrate an ActorPublisher with other actors created elsewhere, but I can't find a way to fully specify the path of the materialized ActorRef at configuration time, as the top-level path component always seems to be generated at runtime (i.e. $a). That is not possible to specify. But you can always use that actor as a proxy to your real actor. Also, if you don't care about backpressure you can just use Sink.foreach(ref ! _), or (depending on your use case) src.mapAsync(ref ? _).to(Sink.ignore) if a request-ack style communication is enough for you. -Endre Is there a way to configure my Flow in a way where I can predict the full path for use in an .actorSelection() elsewhere? The only solutions I can see right now are to wrap the Flow in another explicitly created Actor, which would just be boilerplate, or to find a way to pass the materialized ActorRef out into the wild via my own registration function. Any ideas? (and yes I did post a similarly composed question on StackOverflow here (http://stackoverflow.com/questions/29476654/is-there-a-way-to-get-predictable-actor-naming-with-akka-stream) and will update that question with any solutions you guys may have) Thanks in advance, Rich Henry
-- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam
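The request-ack style Endre suggests (src.mapAsync(ref ? _)) gets its backpressure from not issuing the next request until the previous Future completes. A plain-Future sketch of that sequencing, for the parallelism-1 case (illustrative, no Akka involved):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Each element's "request" is only issued after the previous ack arrives:
// the foldLeft chains the Futures so f(a) runs inside the flatMap of the
// accumulated Future. This is what keeps a fast source from flooding the
// actor behind the ask.
def mapAsyncSeq[A, B](xs: List[A])(f: A => Future[B]): Future[List[B]] =
  xs.foldLeft(Future.successful(List.empty[B])) { (accF, a) =>
    accF.flatMap(acc => f(a).map(acc :+ _)) // next request waits for the ack
  }
```

In the real stream, mapAsync additionally allows a bounded number of requests in flight, but the principle is the same: demand is coupled to acks, not to the source's speed.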
Re: [akka-user] Most effective way to integrate streams with traditional actors?
On Tuesday, April 7, 2015 at 7:31:02 AM UTC-4, rkuhn wrote: On 6 Apr 2015, at 22:15, Rich Henry rhen...@gmail.com wrote: Hi, I would like to integrate an ActorPublisher with other actors created elsewhere, but I can't find a way to fully specify the path of the materialized ActorRef at configuration time, as the top-level path component always seems to be generated at runtime (i.e. $a). Is there a way to configure my Flow in a way where I can predict the full path for use in an .actorSelection() elsewhere? This sounds like a dangerous motivation to me: introducing Actors to each other by passing ActorRefs around is much preferred and works almost always—with the only exception of establishing first contact between different systems. In this case it would be better to pass the ActorRef of that other non-stream actor into the stream and have it introduce itself to the outside. I suppose any time you use _.actorSelection() your process becomes a bit more dynamic, and there are more things that can go wrong, but I also think there are many situations in which it's perfectly desirable to do so. I'm not sure I would classify it as a particularly dangerous motivation (excuse me if I misunderstand what you mean by dangerous; I would be interested in any elaboration). I think the end result is that I just need to adjust my thinking when it comes to streams -- to associate them more strongly with explicit, supervised, data-processing roles. An easy adjustment to make. That being said, I think the benefit of using them over, say, scalaz-stream, becomes less compelling when you restrict their access patterns in this way. I appreciate the response, Roland. - Rich Regards, Roland The only solutions I can see right now are to wrap the Flow in another explicitly created Actor, which would just be boilerplate, or to find a way to pass the materialized ActorRef out into the wild via my own registration function. Any ideas?
(and yes I did post a similarly composed question on StackOverflow here (http://stackoverflow.com/questions/29476654/is-there-a-way-to-get-predictable-actor-naming-with-akka-stream) and will update that question with any solutions you guys may have) Thanks in advance, Rich Henry *Dr. Roland Kuhn* *Akka Tech Lead* Typesafe http://typesafe.com/ – Reactive apps on the JVM. twitter: @rolandkuhn
Re: [akka-user] StreamTcp exceptions handling
Here is my client code:

def downloadUpdate(address: InetSocketAddress, outputFilePath: String)(
    implicit system: ActorSystem): Unit = {
  import scala.concurrent.duration._
  implicit val actorStreamMaterializer = ActorFlowMaterializer()
  val connection = StreamTcp(system).outgoingConnection(
    address, connectTimeout = 20.seconds, idleTimeout = 20.seconds)
  val updateSink = OnCompleteSink[Unit] {
    case Success(_) =>
    case Failure(ex) =>
  }
  val download = FlowGraph { implicit b =>
    import akka.stream.scaladsl.FlowGraphImplicits._
    val in = Source(Promise().future)
    val process = Flow[ByteString].map(chunk => receiveDataStreamChunk(chunk, outputFilePath))
    in ~> connection.flow ~> process ~> updateSink
  }
  download.run()
}

def receiveDataStreamChunk(chunk: ByteString, outputFilePath: String): Unit

The receiveDataStreamChunk method simply writes chunks to a file. If I understand you correctly, the client needs to close the connection when all data is received. In my scenario only the server knows whether the data has been sent completely or not. Maybe I don't understand you, but why is the TCP connection not closed by the server automatically when everything is sent? If there is an exception in the receiveDataStreamChunk method, does it lead to closing the client TCP connection? Or do I need to close it in the updateSink Failure case?
Re: [akka-user] Akka FSM stateTimeout seems broken.
I added a few print statements to https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L639-660. Then I re-ran *folex*'s example to reproduce the bug.

scala> fsm ! Message
currentState.stateName: Initial
nextState.notifies: true
currentState.timeout: None
stateTimeouts(currentState.stateName): Some(5 seconds)
fsm got StateTimeout :(
currentState.stateName: Waiting
nextState.notifies: false
currentState.timeout: None
stateTimeouts(currentState.s)
fsm got StateTimeout :(

It looks to me like this is the offending line:

val timeout = if (currentState.timeout.isDefined) currentState.timeout else stateTimeouts(currentState.stateName)

https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L651

And the stateTimeouts map includes that value (5 seconds) due to the call to `when` (https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L313-L314), which calls https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/FSM.scala#L533-L541
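To make the interplay concrete, here is a toy model (plain Scala, not the actual akka FSM internals) of the timeout selection in the offending line quoted above: the one-shot override set by forMax lives in currentState.timeout and is gone after the next transition, while the default passed to `when` stays in the stateTimeouts map, so the 5-second timer keeps re-arming.

```scala
// Toy model of FSM's timeout selection (illustrative, not the real akka code):
// the per-transition override (forMax) is consumed after one transition,
// while the default given to `when` stays in stateTimeouts forever.
import scala.concurrent.duration._

val stateTimeouts = Map[String, Option[FiniteDuration]]("Waiting" -> Some(5.seconds))

def effectiveTimeout(stateName: String,
                     overrideTimeout: Option[FiniteDuration]): Option[FiniteDuration] =
  if (overrideTimeout.isDefined) overrideTimeout
  else stateTimeouts.getOrElse(stateName, None)

// First transition carries the forMax(1.minute) override...
assert(effectiveTimeout("Waiting", Some(1.minute)) == Some(1.minute))
// ...but once it has fired, the override is gone and the `when` default wins again.
assert(effectiveTimeout("Waiting", None) == Some(5.seconds))
```

This mirrors why the log shows currentState.timeout: None together with stateTimeouts(currentState.stateName): Some(5 seconds) on every later transition.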
Re: [akka-user] StreamTcp exceptions handling
Is there an API for closing the client connection? On Tuesday, April 7, 2015 at 16:40:46 UTC+3, Akka Team wrote: On Tue, Apr 7, 2015 at 3:38 PM, zergood zergoo...@gmail.com wrote: Here is my client code:

def downloadUpdate(address: InetSocketAddress, outputFilePath: String)(implicit system: ActorSystem): Unit = {
  import scala.concurrent.duration._
  implicit val actorStreamMaterializer = ActorFlowMaterializer()
  val connection = StreamTcp(system).outgoingConnection(address, connectTimeout = 20.seconds, idleTimeout = 20.seconds)
  val updateSink = OnCompleteSink[Unit] {
    case Success(_) =>
    case Failure(ex) =>
  }
  val download = FlowGraph { implicit b =>
    import akka.stream.scaladsl.FlowGraphImplicits._
    val in = Source(Promise().future)
    val process = Flow[ByteString].map { chunk =>
      receiveDataStreamChunk(chunk, outputFilePath)
    }
    in ~> connection.flow ~> process ~> updateSink
  }
  download.run()
}

def receiveDataStreamChunk(chunk: ByteString, outputFilePath: String): Unit

The receiveDataStreamChunk method simply writes chunks to a file. If I understand you correctly, the client needs to close the connection when all data is received. In my scenario only the server knows whether the data has been sent completely or not. Maybe I don't understand you, but why is the TCP connection not closed by the server automatically when everything has been sent? It does, but TCP has half-close, so it does not terminate the server-side flow until the client closes its half of the TCP connection. -Endre If there is an exception in the receiveDataStreamChunk method, does it lead to closing the client TCP connection? Or do I need to close it in the updateSink Failure case? On Tuesday, April 7, 2015 at 15:00:39 UTC+3, drewhk wrote: On Tue, Apr 7, 2015 at 1:47 PM, zergood zergoo...@gmail.com wrote: Thank you for your answer. But if I use OnCompleteSink after the connection flow, the future will not be completed successfully even though the data will be downloaded by the client.
The only reason for that could be that the client does not close the connection after it has downloaded the data you sent. However, a connection close might have nothing to do with whether the data has been successfully received or not. If the client fails while downloading, there are still no exceptions on the server side.
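Endre's half-close point can be seen with plain java.net sockets, independent of akka-stream (a minimal sketch with made-up names): the side that is done sending calls shutdownOutput, but the connection, and hence the server-side flow, only fully terminates once the peer closes its half as well.

```scala
// TCP half-close demo with plain sockets (no akka): the server finishes
// sending and half-closes, yet keeps reading until the client closes
// its own write half.
import java.net.{ServerSocket, Socket}

val server = new ServerSocket(0) // ephemeral port
val serverThread = new Thread(() => {
  val s = server.accept()
  s.getOutputStream.write("data".getBytes)
  s.shutdownOutput() // server half-closes: done sending
  // server can still read; EOF arrives only after the client's half-close
  while (s.getInputStream.read() != -1) {}
  s.close()
})
serverThread.start()

val client = new Socket("127.0.0.1", server.getLocalPort)
// client sees EOF as soon as the server half-closes its write side
val received = new String(client.getInputStream.readAllBytes())
assert(received == "data")
client.shutdownOutput() // now the server's read loop sees EOF and the flow can end
serverThread.join(2000)
client.close()
server.close()
```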
Re: [akka-user] Per-request vs Work Pulling vs Dispatcher tuning
Hi Endre, The db calls are Slick 2.x (i.e. blocking) but are within a DbActor with its own dispatcher. The futures don't have a dedicated dispatcher, but there shouldn't be anything blocking in the actor that is spawning them. On Tue, Apr 7, 2015 at 4:21 AM, Akka Team akka.offic...@gmail.com wrote: Hi Richard, Are those DB calls async? If so, are they backpressured properly? If not (i.e. they are blocking), are they running on a dedicated dispatcher which is not shared with the Futures (i.e. not simply import context.dispatcher on actors that do blocking and Futures, but explicitly give a different dispatcher to the Futures)? -Endre On Fri, Apr 3, 2015 at 9:36 PM, Richard Rodseth rrods...@gmail.com wrote: At startup our application reads a bunch of rows d of type D from a table, and for each sets in motion a bunch of Spray client calls and other database calls, then spins up a bunch of actors of type C which generate more HTTP and DB calls. A fair amount of Future composition and pipeTo happens in the course of all this. For larger numbers of Ds (and hence Cs), we are seeing timeouts in the futures (there's not an ask in sight). We haven't done any dispatcher tuning other than giving the DbActor its own dispatcher because of the blocking. We find ourselves wondering if the for each d { self ! ProcessD(d) } should put the Ds on a worker-pull dispatcher, or instead create a per-request ProcessD actor. We've used per-request actors on the API side of the app to eliminate asks and have used the work pulling pattern elsewhere as well. Thoughts?
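For reference, isolating blocking Slick calls on their own dispatcher usually looks something like this in application.conf (the name blocking-io-dispatcher and the pool sizes are illustrative, not from this thread; the keys are standard Akka 2.3 thread-pool-executor settings):

```hocon
# Illustrative dedicated dispatcher for blocking DB calls
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 4
    core-pool-size-max = 16
  }
  throughput = 1
}
```

The DbActor would then be started with Props[DbActor].withDispatcher("blocking-io-dispatcher"), and Futures doing blocking work would be given this dispatcher as their ExecutionContext instead of context.dispatcher, which is what Endre's question is driving at.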
Re: [akka-user] Most effective way to integrate streams with traditional actors?
Hi Rich, On 7 Apr 2015, at 14:50, Rich Henry rhenr...@gmail.com wrote: On Tuesday, April 7, 2015 at 7:31:02 AM UTC-4, rkuhn wrote: On 6 Apr 2015, at 22:15, Rich Henry rhen...@gmail.com wrote: Hi, I would like to integrate an ActorPublisher with other actors created elsewhere, but I can't find a way to fully specify the path of the materialized ActorRef at configuration time, as the top-level path component always seems to be generated at runtime (i.e. $a). Is there a way to configure my Flow so that I can predict the full path for use in an .actorSelection() elsewhere? This sounds like a dangerous motivation to me: introducing Actors to each other by passing ActorRefs around is much preferred and works almost always—with the only exception of establishing first contact between different systems. In this case it would be better to pass the ActorRef of that other non-stream actor into the stream and have it introduce itself to the outside. I suppose any time you use _.actorSelection() your process becomes a bit more dynamic, and there are more things that can go wrong, but I also think there are many situations in which it's perfectly desirable to do so. I'm not sure I would classify it as a particularly dangerous motivation (excuse me if I misunderstand what you mean by dangerous; I would be interested in any elaboration). The danger I was referring to is that depending on the precise layout of the supervision hierarchy is more brittle than passing ActorRefs around explicitly: the former will just break without warning, while the latter can be noticed and fixed locally when changing a specific part of the hierarchy. My thinking for Akka Typed is to not include actorSelection at all—it is not currently present, and its role may just be replaced by the Receptionist pattern, possibly coupled with CRDT-based knowledge dissemination of which service is offered where in a cluster.
I think the end result is that I just need to adjust my thinking when it comes to streams -- to associate them more strongly with explicit, supervised, data-processing roles. An easy adjustment to make. That being said, I think the benefit of using them over, say, scalaz-stream becomes less compelling when you restrict their access patterns in this way. My recommendation would be to switch that viewpoint around: the supervised Actor base of the stream implementation is not essential; other flow materializers might do things radically differently. Accessing Actors from within the stream looks the same whether you use Akka Streams or scalaz-stream AFAICT. What is the benefit that you feel is being lost? Regards, Roland I appreciate the response, Roland. - Rich Regards, Roland The only solutions I can see right now are to wrap the Flow in another explicitly created Actor, which would just be boilerplate, or to find a way to pass the materialized ActorRef out into the wild via my own registration function. Any ideas? (and yes, I did post a similarly composed question on StackOverflow here (http://stackoverflow.com/questions/29476654/is-there-a-way-to-get-predictable-actor-naming-with-akka-stream) and will update that question with any solutions you guys may have) Thanks in advance, Rich Henry
Dr. Roland Kuhn, Akka Tech Lead, Typesafe (http://typesafe.com/) – Reactive apps on the JVM. twitter: @rolandkuhn
[akka-user] Akka issues with AsyncHttpClient
Hey, The app has an akka router that polls a redis queue. Ten child routees of this router, in round-robin fashion, make an async crawl to an API using Apache HttpComponents AsyncHttpClient. On completion/failure of these requests, a callback sends a message to the corresponding routee. *Problems:* *MESSAGE DROP FROM ROUTER TO ROUTEE* 1. The router polls about ~10K messages from the redis queue but sends only around ~4K requests to routees; around ~6K messages are lost. It just drops these messages somehow. The routee code shows no log message for these ~6K messages, and there are no logs of dropped messages either. *Routee code* snippet looks like this:

public void onReceive(Object message) {
    if (message instanceof APIRequest) {
        log.info("message received " + message.getUniqueId); // (1)
        final ActorRef sender = getSelf();
        crawl(sender, message); // crawl messages the routee back using AsyncHttpClient
    }
}

*Some observations:* If the routees do not make an async crawl and just send themselves a completion message, then there is no drop of messages at (1); all log info messages are logged. Please help me figure out how to debug the lost messages. *Router Config:* RoundRobinRoutingLogic, 10 routees. *Akka version*: 2.3.9 *org.apache.httpcomponents.httpasyncclient version:* 4.0.2 *JDK*: 1.8.0u31
Re: [akka-user] Akka FSM stateTimeout seems broken.
Hi Kevin! You are right in that overriding a timeout with an infinite one is not possible (I’d call it a design restriction of the current API), but just passing 100.days should be a reasonable workaround (Int.MaxValue.nanos will not work since the system’s scheduler is limited to 2^31 * akka.scheduler.tick-duration). Concerning the behavior of forMax: it is a one-time override for this particular state change; modifying the stateTimeouts map would violate the integrity of the declarative DSL. One mistake we have not yet corrected, though, is that StateTimeout is a bit of a misnomer—more accurate would be StateBasedReceiveTimeout. Unfortunately, changing this is basically impossible without way too much gratuitous breakage. Regards, Roland

On 7 Apr 2015, at 17:08, Kevin Meredith kevin.m.mered...@gmail.com wrote: I replaced `Duration.Inf` with `1.minute`:

# begins in the Initial state
scala> fsm ! Message
currentState.stateName: Initial
nextState.notifies: true
currentState.timeout: None
stateTimeouts(currentState.stateName): Some(5 seconds)

# Now it's in the Waiting state
fsm got message
currentState.stateName: Waiting
nextState.notifies: false
currentState.timeout: Some(1 minute)
stateTimeouts(currentState.stateName): Some(5 seconds)

# but then the timeout is set to None somewhere
fsm got StateTimeout :(
currentState.stateName: Waiting
nextState.notifies: false
currentState.timeout: None
stateTimeouts(currentState.stateName): Some(5 seconds)
fsm got StateTimeout :(

# and it resumes with the 5 seconds
currentState.stateName: Waiting
nextState.notifies: false
currentState.timeout: None
stateTimeouts(currentState.stateName): Some(5 seconds)

Would it be worthwhile for `forMax` to update the mutable stateTimeouts map?
So, the new `forMax` would look like:

def forMax(timeout: Duration): State[S, D] = timeout match {
  case f: FiniteDuration ⇒ copy(timeout = Some(f)); stateTimeouts(currentState.stateName) = f
  case _                 ⇒ copy(timeout = None); stateTimeouts.remove(currentState.stateName)
}

Does that make sense?
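Roland's 2^31 limit is easy to sanity-check (assuming the default akka.scheduler.tick-duration of 10 ms; that default is an assumption here, not stated in the thread):

```scala
// 2^31 ticks at an assumed 10 ms tick-duration gives the scheduler's
// maximum representable delay: roughly 248 days, which is why 100.days
// works as a practical "forever" while much larger values exceed the range.
import scala.concurrent.duration._

val tick = 10.millis
val maxSchedulable = tick * (1L << 31)
assert(maxSchedulable > 100.days)
assert(maxSchedulable.toDays == 248)
```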
Re: [akka-user] Most effective way to integrate streams with traditional actors?
On Tuesday, April 7, 2015 at 11:17:18 AM UTC-4, rkuhn wrote: [...] The danger I was referring to is that depending on the precise layout of the supervision hierarchy is more brittle than passing ActorRefs around explicitly: the former will just break without warning while the latter can be noticed and fixed locally when changing a specific part of the hierarchy. My thinking for Akka Typed is to not include actorSelection at all—it is not currently present and its role may just be replaced by the Receptionist pattern, possibly coupled with CRDT-based knowledge dissemination of which service is offered where in a cluster.
This all sounds good -- I'm not fond of the actorSelection process itself, but rather of what it allows you to do. I'm definitely open to alternatives. I think explicit actor specification is a mechanism by which end-user intent can be incorporated into the running actor system. If actorSelection didn't exist, for my current application at least, I would have to roll my own. And in the distributed case (which isn't currently part of my application, but I need that avenue to be open going forward) -- without something like the Receptionist/CRDT solution you talk about above, there will likely be some querying (actorSelection) or concrete(-ish) pathing going on somewhere in your application. I think the end result is that I just need to adjust my thinking when it comes to streams -- to associate them more strongly with explicit, supervised, data-processing roles. An easy adjustment to make. That being said, I think the benefit of using them over, say, scalaz-stream becomes less compelling when you restrict their access patterns in this way. My recommendation would be to switch that viewpoint around: the supervised Actor base of the stream implementation is not essential; other flow materializers might do things radically differently. Accessing Actors from within the stream looks the same whether you use Akka Streams or scalaz-stream AFAICT. What is the benefit that you feel is being lost? Perhaps you're right here -- the back-pressure alone probably warrants selecting akka-stream at this point. I guess the idea that I had to make two actors (a publisher basically just for buffering/flow-control, and a named facade to integrate with my existing actor hierarchy) to get a stream going in my app threw me a bit. Regards, Roland I appreciate the response, Roland.
- Rich
[akka-user] NewRelic Integration for Akka Application
Hi, I have an application written using the Akka library, and I want to use New Relic to monitor it. I tried to instrument the application using New Relic's custom Java transaction traces, but data about those traces is not shown in the New Relic dashboard. I've found this related question on SO, which is more than a year old: http://stackoverflow.com/questions/21327181/new-relic-async-tracing-in-akka-without-play . Has anyone faced the same issue recently, is there any workaround for it, or is it more of a New Relic issue of not supporting non-Play Akka applications? I am using Akka 2.3.4. Thanks, Azim
Re: [akka-user] Http client and chunked encoding
Thanks a lot. Seems quite obvious now that you explained it. On Tue, Apr 7, 2015 at 12:25 PM, Akka Team akka.offic...@gmail.com wrote: Hi Rüdiger, You are waiting on the finishFuture of the foreach, but not waiting on the internal stream you materialize inside the foreach. The finishFuture will complete as soon as .run() has been called on the internal stream, which is probably earlier than the completion of that internal stream itself. You can change that internal stream to use a Sink.onComplete, which will return a Future, and then change the enclosing foreach to a mapAsync, so that the final Future completes only after the internal stream completes. -Endre On Wed, Apr 1, 2015 at 10:18 AM, rklaehn rkla...@gmail.com wrote: Hi all, I am trying to consume a chunked http stream from the client side. The code is basically identical to the gist https://gist.github.com/rklaehn/3f26c3f80e5870831f52#file-client-example

```scala
val printChunksConsumer = Sink.foreach[HttpResponse] { res =>
  if (res.status == StatusCodes.OK) {
    println("Got 200!")
    if (res.entity.isChunked) println("Chunky!")
    res.entity.dataBytes.map { chunk =>
      System.out.write(chunk.toArray)
      System.out.flush()
    }.to(Sink.ignore).run()
  } else println(res.status)
}
```

However, it seems that this still does not work. When I do a request that produces a very long chunked response, the map never gets executed. I tried various ways of accessing the chunks: matching on the entity and mapping the chunks, dataBytes, getDataBytes. Nothing seems to make a difference. The server side is definitely working; at least it works like a charm when using curl. This is using akka-http 1.0-M5. Any ideas? Rüdiger
To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout. -- Akka Team Typesafe - Reactive apps on the JVM Blog: letitcrash.com Twitter: @akkateam -- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout. -- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout.
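Endre's point generalizes beyond Akka Streams: a future that completes when inner work is merely *started* says nothing about when that work *finishes*; the inner completions have to be composed into the outer future. Here is a plain-Java sketch of the same pitfall and fix using CompletableFuture (the names and data here are illustrative, not from the thread):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // Stands in for "materializing an inner stream": returns a future that
    // completes only when the inner work itself is done.
    static CompletableFuture<String> innerWork(String chunk) {
        return CompletableFuture.supplyAsync(() -> chunk.toUpperCase());
    }

    public static void main(String[] args) {
        List<String> chunks = List.of("a", "b", "c");

        // Pitfall (analogous to Sink.foreach + inner .run()): fire and forget.
        // `done` completes as soon as the loop has *launched* the inner work,
        // which may be long before any innerWork future finishes.
        CompletableFuture<Void> done = CompletableFuture.runAsync(() ->
                chunks.forEach(ComposeDemo::innerWork));
        done.join(); // inner futures may still be running here

        // Fix (analogous to mapAsync + Sink.onComplete): compose the inner
        // futures so the outer future completes only after every inner one.
        CompletableFuture<Void> allDone = CompletableFuture.allOf(
                chunks.stream()
                      .map(ComposeDemo::innerWork)
                      .toArray(CompletableFuture[]::new));
        allDone.join(); // safe: all inner work has finished at this point
        System.out.println("all inner futures completed");
    }
}
```

The same shape applies in the thread: mapAsync takes the Future materialized by the inner stream and folds its completion into the enclosing stream's completion.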
[akka-user] [Urgent]: Send attachment to email gateway using Akka actors
Hello, I am new to Akka and Scala. I have to build a service which sends emails with an attachment to the given emailIds. I am using Sendgrid as a gateway. For the attachment I have a 28KB file uploaded in S3. I have a REST service to which I can pass a document Id, through which I can fetch the document as an InputStream. This InputStream then has to be sent to many email Ids. All the downloading of the file is handled by an actor called attachmentActor, which I am creating below. Now let's say I have two emailIds which I need to send that attachment to. The problem I am facing is that it's not sending the complete file to both; in fact the 28KB file gets divided into 16KB and 12KB, which are what finally get sent:

emailId 1 would receive 16KB // it should actually have 28KB
emailId 2 would receive 12KB // it should actually have 28KB

Following is the code:

```scala
class SendgridConsumer {
  def receive(request: EmailRequest) = {
    val service = Sendgrid(username, password)
    val logData = request.logData
    var errorMessage = new String
    val attachmentRef = system.actorOf(Props[AttachmentRequestConsumer], "attachmentActor")
    val future = attachmentRef ? AttachmentRequest(request.documentId.get)
    var targetStream = Await.result(future, timeout.duration).asInstanceOf[InputStream]
    val results = request.emailContacts.par.map( emailContact => {
      val email = postData(new Email(), request, emailContact, targetStream, request.documentName.get)
      val sendGridResponse = service.send(email)
    })
  }
}
```

postData() creates an Email object.

This is my attachment actor:

```scala
class AttachmentRequestConsumer extends Actor with ActorLogging {
  def receive = {
    case request: AttachmentRequest => {
      log.info("inside AttachmentRequestConsumer with document Id: " + request.documentId)
      val req: HttpRequest = Http(url)
      val response = req.asBytes
      var targetStream = ByteSource.wrap(response.body).openStream()
      log.info("response body: " + response.body)
      sender ! targetStream
      targetStream.close()
    }
  }
}
```

Thanks, Arpit.
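A plausible explanation (my reading; no answer appears in this excerpt): the two parallel email tasks read from the single InputStream the actor sent back, and an InputStream can only be consumed once, so each consumer sees a disjoint slice of it (16KB + 12KB = 28KB). The actor also closes the stream immediately after sending it. A self-contained sketch of the symptom and the usual fix, which is to buffer the bytes once and hand each consumer its own stream; the 28KB payload and the `readUpTo` helper are hypothetical stand-ins:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SharedStreamDemo {
    // Hypothetical 28 KB payload standing in for the S3 attachment.
    static final byte[] PAYLOAD = new byte[28 * 1024];

    // Reads up to `limit` bytes from the stream (simulates one consumer).
    static int readUpTo(InputStream in, int limit) throws IOException {
        byte[] buf = new byte[limit];
        int total = 0;
        while (total < limit) {
            int n = in.read(buf, total, limit - total);
            if (n < 0) break; // end of stream
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Symptom: two consumers share ONE InputStream, so the payload is
        // split between them instead of each seeing all of it.
        InputStream shared = new ByteArrayInputStream(PAYLOAD);
        int first = readUpTo(shared, 16 * 1024);   // drains the first 16 KB
        int second = readUpTo(shared, 28 * 1024);  // only the 12 KB remainder is left
        System.out.println("shared stream: " + first + " + " + second + " bytes");

        // Fix: keep the immutable byte[] around (e.g. have the actor reply
        // with the byte[] itself) and give each consumer a fresh stream.
        int a = readUpTo(new ByteArrayInputStream(PAYLOAD), 28 * 1024);
        int b = readUpTo(new ByteArrayInputStream(PAYLOAD), 28 * 1024);
        System.out.println("own streams:   " + a + " + " + b + " bytes");
    }
}
```

Replying from the actor with the byte array rather than an open stream also sidesteps the premature `targetStream.close()` in the actor above.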
[akka-user] Java Testkit - create Actor problem
Hi, I'm an Akka noob, so apologies if this is a dumb question. I'm trying to make the Testkit work with Java, starting with JUnit testing. The example is taken from the documentation, tweaked to make it compile; source included below. When I run the test using JUnit 4 in Eclipse Luna, I get the stack trace below. I'm using the 2.11-2.3.9 jars. There hasn't been any discussion of the testkit for a couple of years; has everyone given up on Java and moved to the Scala version? Or am I doing something stupid? (I have non-testkit Akka systems running happily in the same build / run environment.) TIA, Tom.

akka.actor.ActorInitializationException: You cannot create an instance of [test.MyActor] explicitly using the constructor (new). You have to use one of the 'actorOf' factory methods to create a new actor. See the documentation.
	at akka.actor.ActorInitializationException$.apply(Actor.scala:165)
	at akka.actor.Actor$class.$init$(Actor.scala:421)
	at akka.actor.UntypedActor.<init>(UntypedActor.scala:97)
	at test.MyActor.<init>(MyActor.java:12)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:195)
	at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:244)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:241)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)

```java
package test;

import static org.junit.Assert.*;

import org.junit.Test;

import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.testkit.TestActorRef;

public class MyActor extends UntypedActor {

  public void onReceive(Object o) throws Exception {
    if (o.equals("say42")) {
      getSender().tell(42, getSelf());
    } else if (o instanceof Exception) {
      throw (Exception) o;
    }
  }

  public boolean testMe() {
    return true;
  }

  @Test
  public void demonstrateTestActorRef() {
    final ActorSystem system = ActorSystem.apply();
    final Props props = Props.create(MyActor.class);
    final TestActorRef<MyActor> ref = TestActorRef.create(system, props, "testA");
    final MyActor actor = ref.underlyingActor();
    assertTrue(actor.testMe());
  }
}
```
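No reply appears in this excerpt, but the stack trace points at a likely cause: the JUnit 4 runner instantiates the test class with `new` (see the `BlockJUnit4ClassRunner.createTest` frame), so a class that both extends UntypedActor and carries @Test methods will always hit ActorInitializationException before any test runs. A minimal sketch of one usual fix, assuming akka-actor and akka-testkit 2.3.x plus JUnit 4 on the classpath, is to keep the actor and the test in separate classes:

```java
package test;

import static org.junit.Assert.assertTrue;

import org.junit.AfterClass;
import org.junit.Test;

import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.testkit.JavaTestKit;
import akka.testkit.TestActorRef;

// The actor lives in its own class, so only TestActorRef/actorOf ever
// constructs it -- the JUnit runner never calls 'new MyActor()'.
class MyActor extends UntypedActor {
  @Override
  public void onReceive(Object o) throws Exception {
    if (o.equals("say42")) {
      getSender().tell(42, getSelf());
    } else if (o instanceof Exception) {
      throw (Exception) o;
    }
  }

  public boolean testMe() {
    return true;
  }
}

// The test class does NOT extend UntypedActor, so JUnit may freely
// instantiate it via its constructor.
public class MyActorTest {
  static final ActorSystem system = ActorSystem.create("test");

  @AfterClass
  public static void teardown() {
    JavaTestKit.shutdownActorSystem(system);
  }

  @Test
  public void demonstrateTestActorRef() {
    final Props props = Props.create(MyActor.class);
    final TestActorRef<MyActor> ref = TestActorRef.create(system, props, "testA");
    assertTrue(ref.underlyingActor().testMe());
  }
}
```

This sketch compiles only with the Akka testkit jars present; the class and actor names mirror the question but the split into two classes is the suggested change.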