[akka-user] Akka http load balancer
Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3. -- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout.
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
Hi Roland, On Wednesday, 27 May 2015 08:18:09 UTC+1, rkuhn wrote: For this venture you need no HTTP documentation Actually I'd like to go further than akka-http and see documentation that allows me to use WebSockets with akka-io directly, bypassing the Streaming API. The historic limitation has always been that the HTTP ConnectionManager of akka-io was unable to handle the upgrade request. Wandoulabs use their own actor to handle HTTP and UHTTP, but it has annoying side effects. I'm interested to know how akka-http has managed to implement WebSockets with that constraint in place (or if that constraint has been lifted). The statement “there are no promises around back-pressure” indicates that you did not, in fact, understand the full extent of what Akka Streams are. Just because something is a Flow does not make any promises about how back pressure is actually implemented in that flow: you have even pointed out how to create open hoses, or just blow up internal buffers when downstream doesn't consume fast enough. I'd like to know if the underlying Source of incoming client messages on a websocket endpoint will respect the backpressure from the `handleWebsocketMessages: Flow` that is dealing with it (i.e. not read from the network unless `handleWebsocketMessages` is pulling), and conversely if the underlying Sink back to the client is going to backpressure using akka-io's Ack mechanism so that `handleWebsocketMessages` will only be pulled when akka-io gives the green light. We’d love to improve the documentation in this regard, but we’d need to first figure out where the deficiency is, hence I’m talking with you. I think clarity on the above two points would be a good start. In addition, and more generally, integration between hoses and hammers is extremely important --- unless you intentionally want to limit akka-streams uptake to green field projects only. The project I work on has 1.2 million lines of Scala code with legacy components dating from Scala 2.6.
There isn't a snowball's chance in hell of rewriting it to use akka-streams. What I'm missing is the ability to hook an existing actor system into something that expects a Flow, with back pressure preserved. As I hopefully explained in my other mail about hoses: your Actors would need to implement the full spec, they’d need to be watertight. This is a start. Is there a test framework that can be used to stress test the implementation of a Stream / Actor bridge? How we implement Flows internally should be of no consequence On the contrary, I feel the implementation is of huge significance. Firstly, it helps to understand the expected performance, and secondly it is critical when writing integration code. It is extremely bizarre that both projects should be released under the akka banner, yet be so siloed. In order to solve this particular problem you’ll need to carefully describe the back-pressure protocol spoken by your Actor. The Actor is expecting to send messages directly to akka-io and speaks the akka-io Ack protocol using a simple object as the Ack message. It expects an Ack before it will send a message upstream (and it greedily consumes data, but that could easily be changed). Best regards, Sam
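For concreteness, the kind of websocket endpoint under discussion could be sketched like this (written against my reading of the 1.0-RC3-era scaladsl API; directive and type names are from that snapshot and may have changed since, so treat this as a sketch rather than a reference):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{ Message, TextMessage }
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Flow

object WsEcho extends App {
  implicit val system = ActorSystem("ws")
  implicit val mat = ActorMaterializer()

  // The user-supplied Flow under discussion: an element should only be
  // read from the socket when this stage signals demand, so a slow
  // stage here ought to translate into TCP back-pressure on the client.
  val echo: Flow[Message, Message, Unit] =
    Flow[Message].collect {
      case TextMessage.Strict(text) => TextMessage(s"echo: $text"): Message
    }

  val route = path("ws") { handleWebsocketMessages(echo) }

  Http().bindAndHandle(route, "localhost", 8080)
}
```

The question in the thread is precisely whether the framework-provided Source and Sink wrapped around `echo` honour this demand chain all the way down to akka.io.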
Re: [akka-user] Akka http vs Spray performance
Hi, At the current point the aim of streams 1.0 was to reach a desired functionality level. Basically there has been zero performance work done at this point. We will improve performance in later versions, though. We basically refactored the streams API in roughly every second release, so we are currently happy that the current API looks usable and most of the bugs are ironed out. -Endre On Wed, May 27, 2015 at 1:44 PM, zergood zergoodso...@gmail.com wrote: I've done little benchmarks to compare spray and akka-http performance. I use default JVM and Akka settings. So you can see that there is a significant performance difference between them. code: spray https://gist.github.com/zergood/18bae0adc2e774c31233. akka-http https://gist.github.com/zergood/53977efd500985a34ea1. versions: spray 1.3.3, akka-http 1.0-RC3, scala 2.11.6, java 1.8

wrk output for spray:

  Running 1m test @ http://127.0.0.1:8080/dictionaries/hello/suggestions?ngr=hond
    30 threads and 64 connections
    Thread Stats   Avg      Stdev     Max     +/- Stdev
      Latency      2.14ms   9.82ms   78.22ms  98.22%
      Req/Sec      2.55k    609.68   4.22k    78.12%
    4322357 requests in 1.00m, 614.20MB read
  Requests/sec:  72044.97
  Transfer/sec:  10.24MB

wrk output for akka-http:

  Running 1m test @ http://127.0.0.1:3535/dictionaries/hello/suggestions?ngr=hond
    30 threads and 64 connections
    Thread Stats   Avg      Stdev     Max     +/- Stdev
      Latency      5.39ms   6.82ms   108.07ms 92.80%
      Req/Sec      454.43   126.73   679.00   77.77%
    811836 requests in 1.00m, 115.36MB read
  Requests/sec:  13531.62
  Transfer/sec:  1.92MB

Is there any akka-http config option to increase performance to the same level as spray?
[akka-user] Re: [akka-http] how does the formFields directive work?
Yes, that works, thanks! I think this should probably be mentioned in the docs somewhere. On Wednesday, 27 May 2015 08:21:37 UTC+1, Giovanni Alberto Caporaletti wrote: You need both an implicit FlowMaterializer and an implicit ExecutionContext in scope. I haven't followed the chain to check why those are needed, but basically I keep adding them (one or the other or both) wherever I have compilation errors and it works. It should be something related to marshalling; I'll leave the answer to the more expert guys around here.

  val route = extractFlowMaterializer { implicit mat =>
    implicit val ec = mat.executionContext
    formFields('color, 'age.as[Int]) { (color, age) =>
      complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
    }
  }

On Tuesday, 26 May 2015 14:35:12 UTC+2, Ian Phillips wrote: If I try to follow the example from the documentation it doesn't compile.

  val route = formFields('color, 'age.as[Int]) { (color, age) =>
    complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
  }

I get the following error:

  too many arguments for method formFields: (pdm: akka.http.scaladsl.server.directives.FormFieldDirectives.FieldMagnet)pdm.Out
  formFields('color, 'age.as[Int]) { (color, age) =>
  ^
  one error found

Is the documentation wrong, or (more likely) what stupid mistake am I making with this? In case it makes a difference here, I'm importing:

  import akka.http.scaladsl.server._
  import akka.http.scaladsl.server.Directives._

Cheers, Ian.
[akka-user] adding AtLeastOnceDelivery trait broke my test
Hi to all, I have something strange that I cannot understand. I have this test that worked very well:

  "give online status NeverLogged for never logged user" in new WithUsersData() {
    val auth = createAndAuthenticateUser
    val user1 = auth._1
    val sid1 = auth._2
    val tp1 = TestProbe()
    val dispatcher1 = TestActorRef(DispatcherActor.props(tp1.ref))
    dispatcher1 ! Presence(rid_test2, sid1)
    tp1.expectMsgType[Notification.Presence]
  }

Then I added the AtLeastOnceDelivery trait to the DispatcherActor for messages it needs to send to other actors. Keep in mind that in my test, dispatcher1 is receiving a message, not sending one. It broke my tests. It seems to me that dispatcher1 is not ready when I fire the Presence message, since I see no logging from that actor. However, if I change the code this way:

  for (i <- 0 to 0) {
    dispatcher1 ! Presence(rid_test2, sid1)
  }

or in this way:

  Thread.sleep(50)
  dispatcher1 ! Presence(rid_test2, sid1)

the test is successful. Could you give me some advice on investigating this problem? Thank you in advance for your precious time. Giampaolo
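One way to confirm (and then remove) a startup race like this, without resorting to Thread.sleep, is an explicit readiness handshake before the real message is fired. A rough sketch with akka-testkit, where Ping/Pong are hypothetical test-only messages (not part of Akka or AtLeastOnceDelivery) and DispatcherStub stands in for the real DispatcherActor:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.testkit.TestProbe

// Hypothetical test-only handshake messages.
case object Ping
case object Pong

// Stand-in for DispatcherActor: once this receive loop answers Ping,
// any initialization that precedes it has completed.
class DispatcherStub extends Actor {
  def receive = {
    case Ping => sender() ! Pong
    case msg  => // ... real message handling ...
  }
}

object ReadinessHandshake extends App {
  implicit val system = ActorSystem("test")
  val probe = TestProbe()
  val dispatcher = system.actorOf(Props[DispatcherStub])

  // Handshake instead of Thread.sleep(50): the Pong reply proves the
  // actor is processing its mailbox before the real message is sent.
  probe.send(dispatcher, Ping)
  probe.expectMsg(Pong)
  // dispatcher ! Presence(rid_test2, sid1) would go here
  system.shutdown()
}
```

If I recall correctly, persistent actors stash incoming commands while they replay their journal, so a handshake like this would also wait out any recovery delay introduced by the AtLeastOnceDelivery mixin; if the test still fails with the handshake in place, the problem is likely elsewhere.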
[akka-user] Akka http vs Spray performance
I've done little benchmarks to compare spray and akka-http performance. I use default JVM and Akka settings. So you can see that there is a significant performance difference between them. code: spray https://gist.github.com/zergood/18bae0adc2e774c31233. akka-http https://gist.github.com/zergood/53977efd500985a34ea1. versions: spray 1.3.3, akka-http 1.0-RC3, scala 2.11.6, java 1.8

wrk output for spray:

  Running 1m test @ http://127.0.0.1:8080/dictionaries/hello/suggestions?ngr=hond
    30 threads and 64 connections
    Thread Stats   Avg      Stdev     Max     +/- Stdev
      Latency      2.14ms   9.82ms   78.22ms  98.22%
      Req/Sec      2.55k    609.68   4.22k    78.12%
    4322357 requests in 1.00m, 614.20MB read
  Requests/sec:  72044.97
  Transfer/sec:  10.24MB

wrk output for akka-http:

  Running 1m test @ http://127.0.0.1:3535/dictionaries/hello/suggestions?ngr=hond
    30 threads and 64 connections
    Thread Stats   Avg      Stdev     Max     +/- Stdev
      Latency      5.39ms   6.82ms   108.07ms 92.80%
      Req/Sec      454.43   126.73   679.00   77.77%
    811836 requests in 1.00m, 115.36MB read
  Requests/sec:  13531.62
  Transfer/sec:  1.92MB

Is there any akka-http config option to increase performance to the same level as spray?
[akka-user] Instrumenting Akka
Hello, I have an Actor system that implements a polling functionality (every x minutes an Actor is woken up and it creates a bunch of other actors that do something and then get killed). However, over a few days of running I get GC Out of Memory errors. What tools are out there to instrument what is going on and why I am running out of memory? I suppose one possibility is that some actors are queueing up messages and eventually the queue gets too large? How do I find out what is going on? Thanks!
Re: [akka-user] Akka http load balancer
Hi, Instead of Http.request, you should use the Flow returned by Http.superPool() (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/request-level.html). That flattens out the Futures and you get responses instead. That also makes Balance actually aware of response times. OTOH I don't think that, performance-wise, akka-http is currently up to the task of being a balancer. -Endre On Wed, May 27, 2015 at 1:21 PM, zergood zergoodso...@gmail.com wrote: Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3.
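A minimal sketch of the superPool approach suggested above (the host names and the Int correlation token are made up for illustration; superPool requires absolute request URIs, and the API shown is my reading of the 1.0-RC3-era scaladsl):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object SuperPoolSketch extends App {
  implicit val system = ActorSystem("lb")
  implicit val mat = ActorMaterializer()

  // A Flow[(HttpRequest, T), (Try[HttpResponse], T), _] backed by cached
  // host connection pools: responses come back as stream elements, so
  // downstream demand (not a flood of Futures) throttles the requests.
  val pool = Http().superPool[Int]()

  Source(List(
      HttpRequest(uri = "http://server-a.example/ping") -> 1,
      HttpRequest(uri = "http://server-b.example/ping") -> 2))
    .via(pool)
    .runWith(Sink.foreach { case (tryResponse, id) =>
      println(s"request $id completed with $tryResponse")
    })
}
```

Because the pool is an ordinary Flow, it can sit behind a Balance stage, which is what makes the balancer aware of per-backend response times.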
Re: [akka-user] Akka http load balancer
On Wed, May 27, 2015 at 2:01 PM, Endre Varga endre.va...@typesafe.com wrote: Hi, Instead of Http.request, you should use the Flow returned by Http.superPool() (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/request-level.html). That flattens out the Futures and you get responses instead. That also makes Balance actually aware of response times. OTOH I don't think that, performance-wise, akka-http is currently up to the task of being a balancer. …at the moment. -Endre On Wed, May 27, 2015 at 1:21 PM, zergood zergoodso...@gmail.com wrote: Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3.
-- Cheers, √
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
Hi Sam, On 27 May 2015, at 12:01, Sam Halliday sam.halli...@gmail.com wrote: Hi Roland, On Wednesday, 27 May 2015 08:18:09 UTC+1, rkuhn wrote: For this venture you need no HTTP documentation Actually I'd like to go further than akka-http and see documentation that allows me to use WebSockets with akka-io directly, bypassing the Streaming API. The historic limitation has always been that the HTTP ConnectionManager of akka-io was unable to handle the upgrade request. Wandoulabs use their own actor to handle HTTP and UHTTP, but it has annoying side effects. I'm interested to know how akka-http has managed to implement WebSockets with that constraint in place (or if that constraint has been lifted). Maybe I misunderstand you still, but ConnectionManager is not something that we have in Akka HTTP, and with akka-io you seem to mean the akka.io package in akka-actor, which is not related to HTTP as far as user API is concerned—Streams have TCP support that abstracts over the low-level mechanism and provides higher-level semantics including back-pressure. Bypassing these mechanisms is neither desirable nor possible. The whole architecture of the Wandoulabs solution is completely unrelated to how Akka HTTP works, so the constraint you mention (which I don’t know more about, never having used that library) is unlikely to exist; but we also did not “lift” it since we completely replaced the mechanism wholesale. When thinking about Streams it is best to first forget Actors, learn how to work with Streams alone, and then as an advanced topic interface with plain Actors. The statement “there are no promises around back-pressure” indicates that you did not, in fact, understand the full extent of what Akka Streams are. Just because something is a Flow does not make any promises about how back pressure is actually implemented in that flow: you have even pointed out how to create open hoses, or just blow up internal buffers when downstream doesn't consume fast enough.
This is not true: a Flow has one open input and one open output port and all data elements that flow through these ports will do so governed by the Reactive Streams back-pressure semantics. This means that the Flow has the ability to slow down the Source that is connected to it and it also reacts to slow-down requests from the Sink that it will be connected to. It is the Sink.actorRef implementation that presents the proper “hose” interface but then just lets the water fall into the bucket. There is also a Sink.ignore which replaces the bucket with a black hole … I'd like to know if the underlying Source of incoming client messages on a websocket endpoint will respect the backpressure from the `handleWebsocketMessages: Flow` that is dealing with it (i.e. not read from the network unless `handleWebsocketMessages` is pulling), and conversely if the underlying Sink back to the client is going to backpressure using akka-io's Ack mechanism so that `handleWebsocketMessages` will only be pulled when akka-io gives the green light. Yes, of course, this is what the Flow interface designates as explained above. And you are also right that a few layers down akka.io’s Ack and ResumeReading mechanisms are used to propagate the back-pressure to and from the TCP socket. We’d love to improve the documentation in this regard, but we’d need to first figure out where the deficiency is, hence I’m talking with you. I think clarity on the above two points would be a good start. In addition, and more generally, integration between hoses and hammers is extremely important --- unless you intentionally want to limit akka-streams uptake to green field projects only. The project I work on has 1.2 million lines of Scala code with legacy components dating from Scala 2.6. There isn't a snowball's chance in hell of rewriting it to use akka-streams. Right.
My responses to you have been deliberately cautious in order to make it clear that for a successful integration between hammers and hoses you will need a solid understanding of each of those in isolation. I think that point got across :-) Now, looking at Integrating With Actors http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/stream-integrations.html you’ll see ActorPublisher and ActorSubscriber. These two helpers handle some of the Reactive Streams protocol primitives but leave the exact use of back-pressure and data generation up to you; I think this is what you are looking for. Be aware, though, that combining both sides into one Actor—a so-called Processor—is more difficult than it seems due to failure and termination handling: if you forget to handle a certain ordering of events then the stream will not properly be torn down when “finished” and your websocket connections turn into Actor leaks. I admit to being a bit dramatic here, but I don’t want to leave the dangers unmentioned. There is another way of
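To make the ActorPublisher side concrete, a minimal demand-driven publisher might look like the following toy (it just emits the integers 0..9; the essential part is that it only calls onNext while totalDemand is non-zero and honours Cancel — the hand-written back-pressure contract described above; API names are from the 1.0-RC3 era):

```scala
import akka.actor.{ ActorSystem, Props }
import akka.stream.ActorMaterializer
import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage.{ Cancel, Request }
import akka.stream.scaladsl.{ Sink, Source }

// Toy publisher: emits 0..9, but only while downstream demand
// (totalDemand) is non-zero.
class Numbers extends ActorPublisher[Int] {
  private var n = 0
  def receive = {
    case Request(_) =>
      while (totalDemand > 0 && n < 10) { onNext(n); n += 1 }
      if (n == 10) onComplete()
    case Cancel =>
      context.stop(self)
  }
}

object ActorBridgeSketch extends App {
  implicit val system = ActorSystem("bridge")
  implicit val mat = ActorMaterializer()
  Source.actorPublisher[Int](Props[Numbers]).runWith(Sink.foreach(println))
}
```

A real Stream/Actor bridge additionally has to forward failures via onError and handle the termination orderings mentioned above, which is where the leak danger lives.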
Re: [akka-user] Akka http vs Spray performance
Ok, I understand. Thank you for your work, good luck. On Wednesday, 27 May 2015 at 14:49:10 UTC+3, drewhk wrote: Hi, At the current point the aim of streams 1.0 was to reach a desired functionality level. Basically there is zero performance work done at this point. We will improve performance in later versions though. We basically refactored streams API in roughly every second release, so we are currently happy that the current API looks usable and most of the bugs are ironed out. -Endre On Wed, May 27, 2015 at 1:44 PM, zergood zergoo...@gmail.com wrote: [benchmark details quoted in full earlier in the thread]
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
On Wednesday, 27 May 2015 14:01:46 UTC+1, rkuhn wrote: Maybe I misunderstand you still, but ConnectionManager is not something that we have in Akka HTTP I am referring to the `manager` field in an implementation of `akka.io.IO.Extension`. The limitation that I'm referring to lies with spray-can, in that it cannot handle the WebSockets Upgrade request... presumably akka-http bypasses this because your HTTP manager is more advanced and you are not using spray-can in the first instance. I should very much like to have access to the new HTTP manager so that I can bypass the Streams API for legacy integration with Actors. However, it would appear that you're saying there is no way to use akka-io for HTTP communication in this way, and that is a great shame. The spray-can/http integration with the actor frameworks is fantastic, and this integration will be sorely missed when it is deprecated. Just because something is a Flow does not make any promises ... This is not true: a Flow has one open input and one open output port and all data elements that flow through these ports will do so governed by the Reactive Streams back-pressure semantics. This means that the Flow has the ability to slow down the Source that is connected to it and it also reacts to slow-down requests from the Sink that it will be connected to. This means nothing if the Source doesn't backpressure properly or the Sink just acks everything. I don't really care about what happens in one component, I care about the entire system. In your new websockets API, the user provides the implementation of the Flow... what I care about is that the framework behaves correctly in their Source and Sink. In particular, I'd like to have confirmation that network packets are only read from the network when the Source is told to progress and that the backpressure callback to the user-provided Flow is only invoked when akka-io has confirmed that it has written the last message to the network (i.e.
the akka-io Ack). Details of these points are *absolutely essential* to the management of the heap and I do not want to simply assume that they are true. With regards to the rest of your comments and suggestions, thanks for that. I shall study it further if I have time to undertake writing a wrapper layer. In the short term, it looks like the barrier to entry is far too high without a convenient Stream/Actor bridge in place, so I will be sticking with wandoulabs' and smootoo's convenience classes. Best regards, Sam
[akka-user] Re: Announcement: Opinionated RabbitMQ library using Akka / Reactive Streams
Hi Tim, This looks great - I was just thinking of implementing something like this myself, so the timing couldn't have been better. ^_^ I do have a couple of questions, though. From what I see in AsyncAckingConsumer, the default error handling strategy is to acknowledge the message once the retry limit has been reached. This means that some messages could, theoretically, be lost (no at-least-once delivery). Have you considered rejecting the message instead as a default, or providing another built-in strategy that does that? It's not a problem to implement it independently, but it could be a bit surprising for the user. Regarding configuration, is there any way of configuring the connection dynamically? I couldn't find anywhere in the code that overrides the settings read from the default config file. For example, in my use case I have to be able to open connections to several different RabbitMQ clusters, and it doesn't seem to be possible with the current implementation. As a side note, is it possible to change the configuration element to something a little less general (maybe rabbit-op.rabbitmq)? Finally, a couple of things regarding the stream module: From my understanding of streams, creating a Source and creating a Flow should not depend on the actual RabbitMQ connection/subscription. Instead, the subscription (and possibly the connection/channel as well?) should only be created once the flow gets materialized. Have you considered using Source.actorPublisher(Props) to create the actor and subscribing in the actor's preStart or something? The other thing about streams ties in with Roland's comments (I think) about the use of Futures with streams. It means that the entire flow must now be aware of the fact that it's a RabbitMQ flow (or at the very least, that its messages contain the Future), so it is not as composable as it might have been otherwise. Also, I don't see how it plays with streams' error handling mechanism / strategies.
At the same time, the use of Futures to track messages is very elegant, and I don't see any easy way of achieving something similar with streams (maybe something using BidiFlow?). In any case, like I already said, this looks like a very nice library. If you need any help with it, please let me know - I would love to contribute to it. Tal

On Monday, May 11, 2015 at 7:23:50 AM UTC+3, Tim Harper wrote: I have developed a high-level library for efficiently setting up resilient, fault-tolerant RabbitMQ consumers using Akka and Akka Reactive Streams. Some of the features:

- Recovery:
  - Consumers automatically reconnect and subscribe if the connection is lost
  - Messages published can optionally
- Integration:
  - Connection settings pulled from the Typesafe config library
  - Asynchronous, concurrent consumption using Scala native Futures or the new Akka Streams project
  - Common pattern for serialization allows easy integration with serialization libraries such as play-json or json4s
  - Common pattern for exception handling to publish errors to Airbrake, Syslog, or all of the above
- Modular:
  - Composition favored over inheritance, enabling flexible and high code reuse
- Modeled:
  - Queue binding and exchange binding modeled with case classes
  - Publishing mechanisms also modeled
- Reliability:
  - Builds on the excellent [Akka RabbitMQ client](https://github.com/thenewmotion/akka-rabbitmq) library for easy recovery
  - Built-in consumer error recovery strategy in which messages are re-delivered to the message queue and retried (not implemented for the akka-streams integration, as the retry mechanism affects message order)
  - With a single message, pause all consumers if a service health check fails (i.e. database unavailable); easily resume the same
- Graceful shutdown:
  - Consumers and streams can immediately unsubscribe, but stay alive long enough to wait for any messages to finish being processed
- Tested:
  - Extensive integration tests

The source is available here: https://github.com/SpinGo/op-rabbit We have been using the library internally at SpinGo for a year and I am working towards a 1.0.0 release candidate. We're using the streaming integration as the foundation for a billing system which is heavily based on reliable message order and at-least-once-delivery guarantees. I'm rather excited to share it with the world, and would be grateful for feedback. I plan on creating an Activator project to help people learn the library quickly. Some examples are on the github page. More examples can be found in the tests. Feedback is, of course, appreciated. Tim -- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List
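Tal's point about deferring the subscription until materialization can be sketched without any RabbitMQ or akka-stream dependency. The `DeferredSource` below is a hypothetical, illustrative type (not op-rabbit's or akka-stream's API): describing and composing the source has no side effects, and the resource is only acquired when the stream is "run".

```scala
// Illustrative sketch only: defer acquiring the subscription until the
// stream is materialized, so that merely *describing* a Source is pure.
final class DeferredSource[A](acquire: () => Iterator[A]) {
  // Composing transformations stays side-effect free...
  def map[B](f: A => B): DeferredSource[B] =
    new DeferredSource(() => acquire().map(f))
  // ...and the resource is only acquired here, at "materialization".
  def runToList(): List[A] = acquire().toList
}

var subscriptions = 0 // counts how often the (fake) broker is contacted

val source  = new DeferredSource(() => { subscriptions += 1; Iterator(1, 2, 3) })
val doubled = source.map(_ * 2)

// Nothing has subscribed yet -- building the blueprint was free:
assert(subscriptions == 0)

val result = doubled.runToList() // acquires exactly once
```

This is the same design choice `Source.actorPublisher` enables: the actor (and hence the channel/subscription) would only come into existence per materialization, e.g. in `preStart`.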
Re: [akka-user] Akka http load balancer
On Wed, May 27, 2015 at 2:33 PM, Viktor Klang viktor.kl...@gmail.com wrote: On Wed, May 27, 2015 at 2:01 PM, Endre Varga endre.va...@typesafe.com wrote: Hi, Instead of Http.request, you should use the Flow returned by Http.superPool() (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/request-level.html). That flattens out the Futures and you get responses instead. That also makes Balance actually aware of response times. OTOH I don't think akka-http is currently up to the task, performance-wise, of being a balancer. …at the moment. Doh, that "currently" eluded me, see my response as emphasis :) -Endre On Wed, May 27, 2015 at 1:21 PM, zergood zergoodso...@gmail.com wrote: Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3. 
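Endre's point about routing requests through one shared pool Flow (rather than firing a detached Future per request) is that the balancing stage can then see in-flight load and backpressure. The core idea can be sketched in plain Scala, with no akka-http involved; `Backend` and `pickLeastLoaded` are made-up names for illustration:

```scala
// Plain-Scala sketch of the balancing idea: pick the backend with the fewest
// in-flight requests. Routing through a shared pool Flow is roughly what lets
// a Balance stage observe this, which fire-and-forget Futures hide from it.
final case class Backend(name: String, var inFlight: Int = 0)

def pickLeastLoaded(backends: Seq[Backend]): Backend =
  backends.minBy(_.inFlight)

val backends = Seq(Backend("a"), Backend("b"))

val first = pickLeastLoaded(backends)   // both idle -> first in the list
first.inFlight += 1

val second = pickLeastLoaded(backends)  // "a" is busy -> "b" is chosen
second.inFlight += 1
```

In the real system the `inFlight` bookkeeping is implicit: an unconsumed response on a pool connection keeps the slot busy, which is exactly the signal the stream-based approach exposes.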
Re: [akka-user] Long-running HTTP requests not marking the TCP slot as busy
I will file an issue. The problem would exist and even be amplified by long-polling GET requests, as they would be seen as idempotent. Moreover, in Akka it looks like disabling pipelining (setting it to 1) might not be enough if the connection is seen as idle, at least it didn't work for me. Maybe we should have a way of marking a request as potentially long running, which would make the slot busy as long as the connection has not terminated, and non-idempotent methods could set this flag by default. 2015-05-27 19:58 GMT+02:00 Roland Kuhn goo...@rkuhn.info: Hi Samuel, what you describe sounds like a bug, and I think I know how it arises as well. Would you please file an issue with this log and explanation? Thanks! Independently, it seems that you are pointing out an overall problematic relation between pipelining and long-running HTTP/1.1 requests: would this very same problem not exist for long-polling GET requests? If a user wants to schedule a long-running request of this kind, it seems to me that manual care would need to be taken to disable pipelining on the connection that is being used. How is this usually handled? Regards, Roland 27 maj 2015 kl. 18:59 skrev Samuel Tardieu s...@rfc1149.net: Hi. Using Akka HTTP Streams 1.0-RC3 in Scala, I have the impression that a connection can be used to pipeline another request even if a non-idempotent request is being executed (vs. a non-idempotent request being completed). I had the same problem in 1.0-RC2 but took no time to investigate. The context: I’m using an in-house library to access CouchDB databases over HTTP named “canape”. I create a database and wait for the result, I then open a long-running connection to the “_changes” stream which returns live database changes and send them to a “fold” sink in the background, and I immediately create 5 documents named “docid“ and wait for 100 milliseconds after every document creation. In the example below, we have a single TCP stream dumped with wireshark. 
Note how the request to “_changes” (which represents a live stream of changes in CouchDB) returns a chunked response, and the chunked response is obviously not over when the PUT to create the second document (“docid2”) is sent over the same connection. This happens even though I took care of using POST to ensure that Akka sees the “_changes” request as non-idempotent. Note that the first document (“docid1”) has been created on another TCP connection (and thus not shown here), probably because it has been sent right after the request to the “_changes” stream, which means that at this time Akka probably considered the non-idempotent request to be running. However, the second document is being created on the busy original TCP connection, as if the connection were considered idle again as soon as the response headers were sent back, although its entity is still being transmitted and may be for a long time. Since the PUT for “docid2” is still blocked on this connection, we can also see that “docid3”, “docid4” and “docid5” have been properly created using another TCP connection. Right now, I work around this by creating a new host connection pool for every long-running connection, but it is a waste of resources and causes some code duplication. The problem shown below happens when I try to use a common pool for all requests (which should work fine). Any idea of what might be wrong here? 
PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/ HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Length: 0

HTTP/1.1 201 Created
Server: CouchDB/1.6.1 (Erlang OTP/17)
Location: http://localhost:5984/canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312
Date: Wed, 27 May 2015 16:43:24 GMT
Content-Type: application/json
Content-Length: 12
Cache-Control: must-revalidate

{"ok":true}

POST /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/_changes?feed=continuous HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Server: CouchDB/1.6.1 (Erlang OTP/17)
Date: Wed, 27 May 2015 16:43:25 GMT
Content-Type: application/json
Cache-Control: must-revalidate

51
{"seq":1,"id":"docid1","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/docid2 HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

51
{"seq":2,"id":"docid3","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":3,"id":"docid4","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":4,"id":"docid5","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

(hangs there, as the connection is still busy sending the long-running _changes stream; the creation of "docid2" will time out)
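The hang is classic HTTP/1.1 head-of-line blocking: responses on one connection must come back in request order, so nothing pipelined behind an unterminated chunked response can ever be delivered. A toy model in plain Scala (all names hypothetical, nothing akka-specific):

```scala
// Toy model of HTTP/1.1 pipelining: responses come back strictly in request
// order, so a response that never ends blocks everything queued behind it.
final case class Request(id: String, responseChunks: Int)

// Returns the requests whose responses complete within `budget` chunks of
// reading, in pipeline order.
def completedWithin(pipeline: List[Request], budget: Int): List[Request] = {
  var remaining = budget
  pipeline.takeWhile { req =>
    val done = req.responseChunks <= remaining
    if (done) remaining -= req.responseChunks
    done
  }
}

// The situation in the dump: an unterminated _changes response, then docid2.
val conn = List(
  Request("_changes", responseChunks = Int.MaxValue), // never ends
  Request("PUT docid2", responseChunks = 1))

// docid2's tiny response can never be delivered, however long we read:
val delivered = completedWithin(conn, budget = 1000)

// Had _changes been a short response, docid2 would have completed normally:
val ok = completedWithin(
  List(Request("short", 2), Request("PUT docid2", 1)), budget = 1000)
```

This is why a slot serving a long-running response should be treated as busy until the entity is fully consumed, not merely until its headers arrive.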
Re: [akka-user] Akka http load balancer
I added Http().superPool() to my implementation, but performance is still low, and it does not increase after adding a third server, which seems strange to me. Adam, yeah, we have a reason not to use nginx or something similar: there will be additional business logic, not only load balancing. But if we can't fix the performance issue, nginx etc. would be plan B. 2015-05-27 17:14 GMT+03:00 Adam Shannon adam.shan...@banno.com: Is there a specific reason to not use another piece of software for this? I'm thinking of something like nginx or haproxy. Both of which are much more hardened and performant in regards to serving as proxies for HTTP traffic. On Wed, May 27, 2015 at 7:34 AM, Viktor Klang viktor.kl...@gmail.com wrote: On Wed, May 27, 2015 at 2:33 PM, Viktor Klang viktor.kl...@gmail.com wrote: On Wed, May 27, 2015 at 2:01 PM, Endre Varga endre.va...@typesafe.com wrote: Hi, Instead of Http.request, you should use the Flow returned by Http.superPool() (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/request-level.html). That flattens out the Futures and you get responses instead. That also makes Balance actually aware of response times. OTOH I don't think akka-http is currently up to the task, performance-wise, of being a balancer. …at the moment. Doh, that "currently" eluded me, see my response as emphasis :) -Endre On Wed, May 27, 2015 at 1:21 PM, zergood zergoodso...@gmail.com wrote: Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3. 
-- Adam Shannon | Software Engineer | Banno | Jack Henry 206 6th Ave Suite 1020 | Des Moines, IA 50309 | Cell: 515.867.8337
[akka-user] Re: Akka cluster with 1 seed node
I probably confused the readers with my long explanation, so I'll ask the question straightforwardly: if I have a cluster with one seed node, how do I make the cluster reconnect to that seed node after I restart it? Is there any resilience configuration I'm not aware of that can make the other nodes try to re-associate themselves with such a restarted seed node?
[akka-user] Convert Source[ByteString] to Java InputStream
I'm working with an akka-http application that takes in a POST request and needs to convert the body to a java.io.InputStream. Specifically, I need to go from a Source[akka.util.ByteString, Any] (which is what request.entity.dataBytes returns) to a java.io.InputStream. I can't seem to find the magic to make the conversion work. Can someone point me in the right direction? Seems like I'm missing an implicit conversion or a wrapping class. Thanks, Mike
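One general approach, independent of akka's own APIs (later akka-stream versions ship a ready-made `StreamConverters.asInputStream` sink, which may not exist in 1.0-RC3), is to pump each byte chunk into a `PipedOutputStream` from one thread while handing the matching `PipedInputStream` to the InputStream-based consumer. A self-contained sketch using a plain `Iterator[Array[Byte]]` to stand in for the materialized byte-chunk stream:

```scala
import java.io.{InputStream, PipedInputStream, PipedOutputStream}

// Sketch: bridge a sequence of byte chunks (standing in for the elements of a
// Source[ByteString, Any]) to a blocking java.io.InputStream via piped streams.
def chunksToInputStream(chunks: Iterator[Array[Byte]]): InputStream = {
  val out = new PipedOutputStream()
  val in  = new PipedInputStream(out)
  val writer = new Thread(() => {
    try chunks.foreach(c => out.write(c)) // one write per stream element
    finally out.close()                   // close signals EOF to the reader
  })
  writer.start()
  in
}

val in   = chunksToInputStream(Iterator("hello ".getBytes, "world".getBytes))
val body = scala.io.Source.fromInputStream(in).mkString
```

In the akka case the writer side would be a `Sink.foreach`-style stage running the same `write`/`close` calls; note that the piped reader blocks, so the consumer must not run on a stream dispatcher thread.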
Re: [akka-user] Long-running HTTP requests not marking the TCP slot as busy
Hi Samuel, what you describe sounds like a bug, and I think I know how it arises as well. Would you please file an issue with this log and explanation? Thanks! Independently, it seems that you are pointing out an overall problematic relation between pipelining and long-running HTTP/1.1 requests: would this very same problem not exist for long-polling GET requests? If a user wants to schedule a long-running request of this kind, it seems to me that manual care would need to be taken to disable pipelining on the connection that is being used. How is this usually handled? Regards, Roland 27 maj 2015 kl. 18:59 skrev Samuel Tardieu s...@rfc1149.net: Hi. Using Akka HTTP Streams 1.0-RC3 in Scala, I have the impression that a connection can be used to pipeline another request even if a non-idempotent request is being executed (vs. a non-idempotent request being completed). I had the same problem in 1.0-RC2 but took no time to investigate. The context: I’m using an in-house library to access CouchDB databases over HTTP named “canape”. I create a database and wait for the result, I then open a long-running connection to the “_changes” stream which returns live database changes and send them to a “fold” sink in the background, and I immediately create 5 documents named “docid“ and wait for 100 milliseconds after every document creation. In the example below, we have a single TCP stream dumped with wireshark. Note how the request to “_changes” (which represents a live stream of changes in CouchDB) returns a chunked response, and the chunked response is obviously not over when the PUT to create the second document (“docid2”) is sent over the same connection. Even though I took care of using POST to ensure that Akka does see the “_changes” request as non-idempotent. 
Note that the first document (“docid1”) has been created on another TCP connection (and thus not shown here), probably because it has been sent right after the request to the “_changes” stream, which means that at this time Akka probably considered the non-idempotent request to be running. However, the second document is being created on the busy original TCP connection, as if the connection were considered idle again as soon as the response headers were sent back, although its entity is still being transmitted and may be for a long time. Since the PUT for “docid2” is still blocked on this connection, we can also see that “docid3”, “docid4” and “docid5” have been properly created using another TCP connection. Right now, I work around this by creating a new host connection pool for every long-running connection, but it is a waste of resources and causes some code duplication. The problem shown below happens when I try to use a common pool for all requests (which should work fine). Any idea of what might be wrong here? 
PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/ HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Length: 0

HTTP/1.1 201 Created
Server: CouchDB/1.6.1 (Erlang OTP/17)
Location: http://localhost:5984/canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312
Date: Wed, 27 May 2015 16:43:24 GMT
Content-Type: application/json
Content-Length: 12
Cache-Control: must-revalidate

{"ok":true}

POST /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/_changes?feed=continuous HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Server: CouchDB/1.6.1 (Erlang OTP/17)
Date: Wed, 27 May 2015 16:43:25 GMT
Content-Type: application/json
Cache-Control: must-revalidate

51
{"seq":1,"id":"docid1","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/docid2 HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

51
{"seq":2,"id":"docid3","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":3,"id":"docid4","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":4,"id":"docid5","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

(hangs there, as the connection is still busy sending the long-running _changes stream; the creation of "docid2" will time out)
Re: [akka-user] question on getting mailbox confirmation
Hi Shawn, On Mon, May 25, 2015 at 4:29 PM, Shawn Garner shawndgar...@gmail.com wrote: The arbiter sounds like what we would need. Are there any activator templates or examples of someone using an arbiter in Akka you can point me to? This way I could demo it to my coworker without having to write something from scratch? You could most probably illustrate it on a whiteboard in a simple step-by-step way. But AFAIK there's no pre-cooked version. A lot of the problems at my company are cultural rather than matching problems to solutions like you said. We have a lot of shared projects which are basically canned solutions to standard problems and are basically required to use. If a solution doesn't look exactly the same and fit into the same paradigm then it is not viewed as viable. It's viewed as re-solving a problem that has already been solved. I understand, and have had similar situations in the past. The question that usually helps here is: why is change being considered if the existing solutions work so well? So we're talking about an aging set of canned solutions which are getting to be about 10 years old without anyone considering anything else along the way. The people who created those are still at the company and very stuck on them. Part of it is rampant sunk cost fallacy. Oh, I hear you. Territorialism can be extremely destructive and a source of much inertia. Something that always resonated with me is "You have to pick people up where they are and not where you want them to be." Is there a common ground/understanding/position that you can build from? We have a lot of batch import/export jobs which, instead of running daily, could be modeled as reacting to events at runtime with Akka. (I've been thinking/searching about how to get events into Akka when certain tables change or get data inserted into them without writing code to detect such events, aka event sourcing from a SQL Server database.) We are having some growing pains where our batch jobs are taking too long. 
We have admin pages which don't update themselves because they need to wait on large batch imports in a single transaction and dirty reads are not allowed in our databases at the db server level. Check. That's where the illusion of a strongly consistent reality starts to break down. Seems like a good time to take a step back and focus on the real problem rather than the symptom (the mess in the DB). So in my mind Akka is a viable solution (and a much preferable one) to a problem over something like Spring Batch and Spring Integration. Those Spring things are also mostly canned solutions which look like what our current shared solutions expect so they would always be preferred over Akka. Absolutely. Is this a use-case for Akka Streams perhaps? Thanks, Shawn On Monday, May 25, 2015 at 4:53:58 AM UTC-5, √ wrote: Hi Shawn, The use case you're referring to (transitive ordering) I think traditionally is accomplished in the actor model with the use of arbiters: https://en.wikipedia.org/wiki/Indeterminacy_in_concurrent_computation I.e. you need a single source of truth w.r.t. the ordering, and since Actors are islands of order in a sea of chaos you just need A and B to agree on an intermediary when communicating with C, as you say, remoting would otherwise be very interesting (and different mailbox implementations may give you different semantics). Burdening all communication with acks (which could get lost as well, and the original message too) just because it is needed in some cases, would not be responsible :) In general, with your coworker, I suspect it is a matter of mapping problems to solutions rather than solutions to solutions (i.e. it may not be meaningful to map all solution in his/her world to the world of actors, but instead focus on how certain problems are solved with actors and then he/she can do the mapping from other-technology-prefered solution to the actor-solution themselves) Does that make sense? 
On Sat, May 23, 2015 at 4:41 PM, Shawn Garner shawnd...@gmail.com wrote: I was talking with a coworker and he has some custom behavior he can't understand how to do anything useful without. He wants a tell message (a') sent from A -> C to not allow actor A to continue until after there is confirmation that A's message is in C's mailbox. This way, if A sends (a'') from A -> B and B sends a message (b') from B -> C, then C should always process messages in the defined order a', b'. I was looking at the mailboxes, and it seems to me that even with an UnboundedPriorityMailbox, a tell message is probably not blocking the sending actor but some kind of internal component of Akka. So I'm thinking UnboundedPriorityMailbox breaks down when you start talking about remote actors and http actors, in that the first one deserialized would block the other one but not block the sender until the message is in the mailbox. Also I'm
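The arbiter idea Viktor describes can be sketched outside Akka entirely; the `Arbiter` class below is illustrative, not an Akka API. The point is structural: A and B never talk to C directly, so the single arbiter is the one place where the a'-before-b' ordering is decided, and no ack protocol is needed.

```scala
import scala.collection.mutable

// Minimal arbiter sketch: a single intermediary that both A and B send
// through, making it the sole source of truth for message ordering toward C.
final class Arbiter {
  private val toC = mutable.Queue.empty[String]
  // Everything destined for C funnels through here, in arrival order.
  def submit(msg: String): Unit = toC.enqueue(msg)
  // What C would dequeue from its mailbox, in the order the arbiter fixed.
  def drainToC(): List[String] = { val out = toC.toList; toC.clear(); out }
}

val arbiter = new Arbiter
arbiter.submit("a'")  // A routes its message to C via the arbiter
arbiter.submit("b'")  // B does the same, after hearing from A

val seenByC = arbiter.drainToC()
```

In actor terms the arbiter would itself be an actor, exploiting the fact that each actor processes its mailbox sequentially ("islands of order in a sea of chaos").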
Re: [akka-user] Akka http load balancer
Is there a specific reason to not use another piece of software for this? I'm thinking of something like nginx or haproxy, both of which are much more hardened and performant in regards to serving as proxies for HTTP traffic. On Wed, May 27, 2015 at 7:34 AM, Viktor Klang viktor.kl...@gmail.com wrote: On Wed, May 27, 2015 at 2:33 PM, Viktor Klang viktor.kl...@gmail.com wrote: On Wed, May 27, 2015 at 2:01 PM, Endre Varga endre.va...@typesafe.com wrote: Hi, Instead of Http.request, you should use the Flow returned by Http.superPool() (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/request-level.html). That flattens out the Futures and you get responses instead. That also makes Balance actually aware of response times. OTOH I don't think akka-http is currently up to the task, performance-wise, of being a balancer. …at the moment. Doh, that "currently" eluded me, see my response as emphasis :) -Endre On Wed, May 27, 2015 at 1:21 PM, zergood zergoodso...@gmail.com wrote: Hello! I have a task to develop an HTTP load balancer for my 2 servers. Here is my quick and very dirty implementation: https://gist.github.com/zergood/e705cd6ce4cfec47c0a5. The main problem with it is performance: this solution is slower than my single server. What is the reason for the performance degradation? Could you give me any advice on how to make an HTTP load balancer with akka-http? I am using scala-2.11 and akka-http 1.0-RC3. 
-- Adam Shannon | Software Engineer | Banno | Jack Henry 206 6th Ave Suite 1020 | Des Moines, IA 50309 | Cell: 515.867.8337
Re: [akka-user] Configuring Akka to run as a job server using Play framework
On 05/23/15 11:40, Mark Joslin wrote: Hi Michael, thanks for the tips. I believe that would be a bit overkill for this project, though it would seem fun. I've decided to take a pretty minimalistic approach:

* Start two seed nodes on AWS
* Have all the API servers connect to the cluster via these seed nodes
* Have all API servers create 1 master actor on the cluster (and maintain a reference to it) which has a router actor underneath it of worker actors

Simply said, every API server has one master that lives on the cluster, and each master has a router of max 5 worker actors underneath it. I'm under the assumption jobs get distributed among the cluster in some sort of intelligent fashion. Please let me know if I'm under the wrong assumption. this isn't very clear to me, perhaps some ascii box art would help? regarding job distribution, i think you have the wrong assumption. if you are taking a minimalistic approach, then it is *you* who is building the intelligence :) However, I'm still a bit confused when it comes to splitting the play project up. I have a play project with 4 sub-modules underneath it, but I only want one of the modules. I imagine I can go about this in a few different ways:

* Turn the one sub-module into a git repo itself and then reference it via git submodules
  o One - Regular backend that contains all the submodules but now contains a git reference
  o Two - Seed server that contains git submodule
* Package the entire play backend (with submodules) into a jar and copy it over to the seed server git repo configuration. Treat it more as an artifact. This seems faster, but I had trouble referencing my class files doing this in a test project

Anyway, I'm sure I'll figure things out but appreciate any tips. not knowing anything about your project layout, i would suggest creating artifacts and shipping those to servers. in that way you know that each server is bit-for-bit the same. 
not a new idea but it has a new popular term: 'immutable infrastructure': http://chadfowler.com/blog/2013/06/23/immutable-deployments/

-Michael
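For what it's worth, the topology Mark describes can be sketched in configuration. This is a hedged example assuming the classic akka-cluster setup of that era, with placeholder system name and host addresses, and a deployment-configured pool router capping each master at 5 workers:

```hocon
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"

  remote.netty.tcp {
    hostname = "10.0.0.1"   # this node's own address (placeholder)
    port     = 2551
  }

  cluster {
    # the two seed nodes started on AWS (placeholder addresses)
    seed-nodes = [
      "akka.tcp://ApiSystem@10.0.0.1:2551",
      "akka.tcp://ApiSystem@10.0.0.2:2551"
    ]
  }

  # each master actor gets a pool router of at most 5 workers underneath it
  actor.deployment {
    /master/workerRouter {
      router = round-robin-pool
      nr-of-instances = 5
    }
  }
}
```

Note that, as Michael points out, nothing here distributes jobs intelligently across the cluster by itself; a plain pool router only spreads messages over its own local workers unless cluster-aware routing is configured explicitly.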
[akka-user] Long-running HTTP requests not marking the TCP slot as busy
Hi. Using Akka HTTP/Streams 1.0-RC3 in Scala, I have the impression that a connection can be used to pipeline another request even while a non-idempotent request is being executed (as opposed to after it has completed). I had the same problem in 1.0-RC2 but took no time to investigate.

The context: I'm using an in-house library named "canape" to access CouchDB databases over HTTP. I create a database and wait for the result; I then open a long-running connection to the "_changes" stream, which returns live database changes, and send them to a "fold" sink in the background; and I immediately create 5 documents named "docid1" through "docid5", waiting 100 milliseconds after every document creation.

In the example below, we have a single TCP stream dumped with Wireshark. Note how the request to "_changes" (which represents a live stream of changes in CouchDB) returns a chunked response, and the chunked response is obviously not over when the PUT to create the second document ("docid2") is sent over the same connection, even though I took care of using POST so that Akka sees the "_changes" request as non-idempotent.

Note that the first document ("docid1") was created on another TCP connection (and is thus not shown here), probably because it was sent right after the request to the "_changes" stream, at which time Akka presumably still considered the non-idempotent request to be running. However, the second document is created on the busy original TCP connection, as if the connection were considered idle again as soon as the response headers were sent back, even though its entity is still being transmitted and may be for a long time. Since the PUT for "docid2" is still blocked on this connection, we can also see that "docid3", "docid4" and "docid5" were properly created using another TCP connection.
Right now, I work around this by creating a new host connection pool for every long-running connection, but that is a waste of resources and causes some code duplication. The problem shown below happens when I try to use a common pool for all requests (which should work fine). Any idea of what might be wrong here?

PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/ HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Length: 0

HTTP/1.1 201 Created
Server: CouchDB/1.6.1 (Erlang OTP/17)
Location: http://localhost:5984/canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312
Date: Wed, 27 May 2015 16:43:24 GMT
Content-Type: application/json
Content-Length: 12
Cache-Control: must-revalidate

{"ok":true}

POST /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/_changes?feed=continuous HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Server: CouchDB/1.6.1 (Erlang OTP/17)
Date: Wed, 27 May 2015 16:43:25 GMT
Content-Type: application/json
Cache-Control: must-revalidate

51
{"seq":1,"id":"docid1","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

PUT /canape-test-db-d45ad182-e2ec-4387-b5a1-a51d016c0312/docid2 HTTP/1.1
Host: localhost:5984
User-Agent: canape for Scala
Accept: application/json
Content-Type: application/json
Content-Length: 2

{}

51
{"seq":2,"id":"docid3","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":3,"id":"docid4","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
51
{"seq":4,"id":"docid5","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}

(hangs there: as the connection is still busy sending the long-running _changes stream, the creation of "docid2" will time out)
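As a possible mitigation while the slot accounting is investigated: the host-connection-pool settings let you forbid request pipelining on a connection entirely. Setting names follow akka-http's reference.conf; this is a guess at a workaround, not a confirmed fix for the behaviour described above, since the issue may be in how the pool marks a slot idle rather than in pipelining as such:

```hocon
# excerpt from application.conf
akka.http.host-connection-pool {
  # more parallel connections, so long-running requests hurt less
  max-connections  = 8

  # never dispatch a second request onto a connection that is still
  # processing one (1 = no pipelining at all)
  pipelining-limit = 1
}
```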
Re: [akka-user] Re: Akka cluster with 1 seed node
One way is to subscribe to cluster membership changes in your seed node and write the current set of nodes to a file. When you restart the seed node you construct the list of seed nodes from the file, and include your own address as the first element in the seed-nodes list. Then it will first try to join the other nodes before joining itself. If you want something that handles this out of the box you should try Typesafe ConductR: http://www.typesafe.com/products/conductr.

Cheers,
Patrik

On Wed, May 27, 2015 at 8:56 PM, Guido Medina oxyg...@gmail.com wrote:

I probably confused the readers with my long explanation, so I'll ask the question straightforwardly: if I have a cluster with one seed node, how do I make the cluster reconnect to that seed node if I restart it? Is there any resilient configuration I'm not aware of that can make the other nodes try to re-associate themselves with such a restarted seed node?

--
Patrik Nordwall
Typesafe http://typesafe.com/ - Reactive apps on the JVM
Twitter: @patriknw
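The bookkeeping Patrik describes can be sketched without any Akka dependency. The file format, object name, and address strings below are illustrative assumptions, not an Akka API; the real version would call `save` from a listener subscribed to cluster membership events, and feed `loadSeedNodes` into the `akka.cluster.seed-nodes` configuration on restart:

```scala
import java.nio.file.{Files, Path}

// Persist the currently known member addresses on every membership change,
// and on restart rebuild the seed-nodes list with this node's own address
// first, so it tries to join the surviving nodes before (re)joining itself.
object SeedNodeFile {
  def save(path: Path, members: Set[String]): Unit =
    Files.write(path, members.toSeq.sorted.mkString("\n").getBytes("UTF-8"))

  def loadSeedNodes(path: Path, selfAddress: String): List[String] = {
    val known =
      if (Files.exists(path))
        new String(Files.readAllBytes(path), "UTF-8")
          .split("\n").filter(_.nonEmpty).toList
      else Nil
    // own address first, then the previously known peers
    selfAddress :: known.filterNot(_ == selfAddress)
  }
}
```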
Re: [akka-user] Convert Source[ByteStream] to Java InputStream
Yes, you’re missing InputStreamSink: https://github.com/akka/akka/issues/17338. It is not terribly complicated to write, I think, but we have not yet gotten around to it.

Regards, Roland

27 maj 2015 kl. 21:20 skrev Michael Hamrah mham...@gmail.com:

I'm working with an akka-http application that takes in a POST request and needs to convert the body to a java.io.InputStream. Specifically, I need to go from a Source[akka.util.ByteString, Any] (which is what request.entity.dataBytes returns) to a java.io.InputStream. I can't seem to find the magic to make the conversion work. Can someone point me in the right direction? Seems like I'm missing an implicit conversion or a wrapping class.

Thanks, Mike

Dr. Roland Kuhn
Akka Tech Lead
Typesafe http://typesafe.com/ – Reactive apps on the JVM.
twitter: @rolandkuhn
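Until InputStreamSink exists, one interim approach is to collect the entity's chunks and wrap them in a stdlib stream. Big caveat: this buffers the whole body in memory, so it only suits small entities, unlike a real InputStreamSink. Only the chunk-to-InputStream step is sketched here; the object name is illustrative, and with akka-streams the chunk arrays would come from something like folding over `request.entity.dataBytes`:

```scala
import java.io.{ByteArrayInputStream, InputStream, SequenceInputStream}

// Concatenate byte-array chunks into a single InputStream without copying
// them into one big array: each chunk becomes a ByteArrayInputStream and
// SequenceInputStream reads them back-to-back.
object ChunksToInputStream {
  def apply(chunks: Seq[Array[Byte]]): InputStream = {
    val streams = new java.util.Vector[InputStream]()
    chunks.foreach(c => streams.add(new ByteArrayInputStream(c)))
    new SequenceInputStream(streams.elements())
  }
}
```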
Re: [akka-user] Akka Streams Remote Materialization
Hi Oliver, we do not (currently) support distributed materialization of streams. The reason is that it would require implementing redelivery for stream messages, plus a number of related issues that need to be fixed, and that work has not happened yet. Currently we are focusing on getting 1.0 out the door, which means API stability. We also need to work on in-memory performance, as it has not yet been a focus and is a critical point for making Akka HTTP as performant as Spray; at that point we'll be happy to recommend using streams in production systems. Please remember that 1.0 still means that streams are experimental.

The distributed scenario is a very interesting one, but we do not have enough people/time to throw at that problem currently, as other tasks are more urgent. Hope this explains things a bit!

-- konrad

On Sat, May 23, 2015 at 3:18 AM, Oliver Winks oliverwinks.develo...@gmail.com wrote:

Hi, the way I understand materialisation in Akka Streams is that the ActorFlowMaterializer will create a number of actors which are used to process the flows within a stream. Is it possible to control the number and location of actors that get materialised when running a Flow? I'd like to be able to create remote actors on several machines for processing my FlowGraph.

Thanks, Oli.
--
Akka Team
Typesafe - Reactive apps on the JVM
Blog: letitcrash.com
Twitter: @akkateam
Re: [akka-user] Akka http vs Spray performance
I'm already using http/streams in a minor component, but since you're talking about this: do you have a rough idea of when the performance improvement work will start? Of course I'm not asking about the exact day, but it would be very useful to know if we can expect something by the end of the year, more or less. I mean, if I had a 75% chance of getting 50% of the performance of Spray by the end of the year, I'd start using akka-http right away for my project... it's an investment ;)

thanks
G

On Wednesday, 27 May 2015 13:49:10 UTC+2, drewhk wrote:

Hi, at this point the aim of streams 1.0 was to reach a desired functionality level; basically zero performance work has been done so far. We will improve performance in later versions, though. We basically refactored the streams API in roughly every second release, so we are currently happy that the current API looks usable and most of the bugs are ironed out.

-Endre

On Wed, May 27, 2015 at 1:44 PM, zergood zergoo...@gmail.com wrote:

I've done little benchmarks to compare Spray and akka-http performance. I use default JVM and Akka settings. You can see that there is a significant performance difference between them.

code: spray https://gist.github.com/zergood/18bae0adc2e774c31233, akka-http https://gist.github.com/zergood/53977efd500985a34ea1.
versions: spray 1.3.3, akka-http 1.0-RC3, scala 2.11.6, java 1.8

wrk output for spray:

Running 1m test @ http://127.0.0.1:8080/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.14ms    9.82ms  78.22ms   98.22%
    Req/Sec     2.55k   609.68     4.22k    78.12%
  4322357 requests in 1.00m, 614.20MB read
Requests/sec:  72044.97
Transfer/sec:     10.24MB

wrk output for akka-http:

Running 1m test @ http://127.0.0.1:3535/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.39ms    6.82ms 108.07ms   92.80%
    Req/Sec   454.43    126.73   679.00     77.77%
  811836 requests in 1.00m, 115.36MB read
Requests/sec:  13531.62
Transfer/sec:      1.92MB

Is there any akka-http config option to increase performance to the same level as Spray?
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
Hi Sam, it might be better to take a step back before potentially running in the wrong direction.

First off, Akka HTTP offers a complete solution for everything HTTP (including websockets) within an ActorSystem. Before deciding to combine this with another tool I recommend that you first explore how Akka HTTP works, because it introduces several fundamentally new concepts. In particular, when talking about it as "Spray 2.0" it is important to note that everything ActorRef-related in Spray has been replaced by Streams, a completely different abstraction that is not an Actor. The whole underpinnings have been completely rewritten in a radically different fashion, so don't expect any Spray modules that live "beneath the surface" to seamlessly fit onto Akka HTTP. We could go into the details of Wandoulabs' websocket add-on, but I don't see much value in discussing that before the basics are clear. The other piece of information that I'm lacking is why you would want to "retrofit" something in this context; it might be better to explain the ends rather than the means in order to get help.

Regards, Roland

23 maj 2015 kl. 12:38 skrev Sam Halliday sam.halli...@gmail.com:

Hi all, I'm very excited that akka-io now has WebSocket support. In ENSIME, we're planning on using this wrapper over wandoulabs' websockets, https://github.com/smootoo/simple-spray-websockets, to easily create a REST/WebSockets endpoint with JSON marshalling for a sealed family, with backpressure. Smootoo's wrapper works really well, and I have had the pleasure of using it in a corporate environment, so I trust it to be stable. For future proofing, it would seem sensible to move to stock akka-io for WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have a couple of questions about that:

1. does akka-io's HTTP singleton actor support WebSockets now? That was the big caveat about using wandoulabs. It means all kinds of workarounds if you want to just use HTTP in the same actor system.
2. is there a migration guide for wandoulabs to akka-io? Or would it be best just to rewrite the wrapper from scratch on top of akka-io?
3. where is the documentation? These just have a big TODO on them: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html and http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/routing-dsl/websocket-support.html. I can't even find any examples.

I guess the key thing is the handshaking, which would mean rewriting this bit (and the corresponding client-side handshake): https://github.com/smootoo/simple-spray-websockets/blob/master/src/main/scala/org/suecarter/websocket/WebSocket.scala#L167

Best regards, Sam

Dr. Roland Kuhn
Akka Tech Lead
Typesafe http://typesafe.com/ – Reactive apps on the JVM.
twitter: @rolandkuhn
[akka-user] Re: Testing Akka Actors using mocks
Mocks are frowned upon in actor testing, though they do have their place. This use case is perfect for mocks because the LogReaderDisruptor.main(Array()) call is a long-running process (it will continue to run forever). To give you some context, LogReaderDisruptor is the entry point to our current project as a single process. We are converting this to an Akka-based project where the first step is crash recovery, as in: if the actor (running LogReaderDisruptor.main(Array())) gets killed because of some exception, we restart it. I can't comment on whether this is the best design strategy, but it for sure adds value to our project and gives us a way to add fault tolerance to our application. Does that make sense?

Thanks

On Tuesday, May 26, 2015 at 3:56:16 PM UTC-7, Harit Himanshu wrote:

I am new to the entire ecosystem, including `Scala`, `Akka` and `ScalaTest`. I am working on a problem where my `Actor` gives a call to an external system.

case object LogProcessRequest

class LProcessor extends Actor {
  val log = Logging(context.system, this)

  def receive = {
    case LogProcessRequest =>
      log.debug("starting log processing")
      LogReaderDisruptor.main(Array())
  }
}

The `LogReaderDisruptor.main(Array())` is a `Java` class that does many other things. The test I have currently looks like

class LProcessorSpec extends UnitTestSpec("testSystem") {
  "A mocked log processor" should {
    "be called" in {
      val logProcessorActor = system.actorOf(Props[LProcessor])
      logProcessorActor ! LogProcessRequest
    }
  }
}

where `UnitTestSpec` looks like (and was inspired from [here][1])

import akka.actor.ActorSystem
import akka.testkit.{ImplicitSender, TestKit}
import org.scalatest.matchers.MustMatchers
import org.scalatest.{BeforeAndAfterAll, WordSpecLike}

abstract class UnitTestSpec(name: String)
  extends TestKit(ActorSystem(name))
  with WordSpecLike
  with MustMatchers
  with BeforeAndAfterAll
  with ImplicitSender {

  override def afterAll() {
    system.shutdown()
  }
}

**Question** - How can I mock the call to `LogReaderDisruptor.main(Array())` and verify that it was called? I am coming from the `Java`, `JUnit`, `Mockito` land, and something that I would have done there is

doNothing().when(logReaderDisruptor).main(Matchers.<String>anyVararg())
verify(logReaderDisruptor, times(1)).main(Matchers.<String>anyVararg())

I am not sure how to translate that to ScalaTest here. Also, this code may not be idiomatic, since I am very new and learning.

[1]: http://www.superloopy.io/articles/2013/scalatest-with-akka.html
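One Mockito-free option, offered here as a sketch rather than the thread's verdict: inject the side-effecting call as a constructor argument, so a test can substitute a recording stub for `LogReaderDisruptor.main`. The receive logic is shown as a plain class to keep the sketch free of akka-testkit; all names are illustrative:

```scala
// A hand-rolled stub that records how often it was invoked, standing in
// for Mockito's verify(..., times(1)).
class RecordingStub {
  var calls = 0
  def run(args: Array[String]): Unit = calls += 1
}

// The actor's message-handling logic, extracted so it can be unit-tested
// without an ActorSystem. Production wiring would pass a function that
// delegates to LogReaderDisruptor.main; the test passes the stub instead.
class LProcessorLogic(process: Array[String] => Unit) {
  def handle(msg: Any): Unit = msg match {
    case "LogProcessRequest" => process(Array())
    case _                   => () // ignore everything else
  }
}
```

Inside a real `Actor`, `receive` would simply forward each message to `handle`, so the injected function is the only seam the test needs.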
Re: [akka-user] Flow from an Actor
26 maj 2015 kl. 14:54 skrev Sam Halliday sam.halli...@gmail.com:

Hi all, I need to interface an Actor with an API that requires a Flow. The actor can receive a sealed trait family of inputs and will only send (a different) sealed family of outputs to upstream, so I suspect that will help matters. Looking in FlowOps, it looks like I can create a Flow from a partial function, but there isn't anything that would just simply take an ActorRef.

Instead of trying to make sense of method signatures I highly recommend reading the documentation first; we spent considerable effort on describing the entirely new abstractions that we have built, and you will not understand the point behind the signatures without knowing what "Flow" entails.

Am I missing something trivial to just upgrade an ActorRef to a Flow? (Obviously there is a bunch of extra messages the actor will have to handle, such as backpressure messages etc... but assume that's all taken care of)

Yes, when just using Flows and our DSL, we construct Actors for you that take care of all these things. But when you write those Actors yourself, then obviously you need to take care of these things yourself.

Regards, Roland

Best regards, Sam

Dr. Roland Kuhn
Akka Tech Lead
Typesafe http://typesafe.com/ – Reactive apps on the JVM.
twitter: @rolandkuhn
Re: [akka-user] Flow from an Actor
26 maj 2015 kl. 16:02 skrev Sam Halliday sam.halli...@gmail.com:

To maybe try and formalise this a little bit more, and abstract away from WebSockets (that will only muddy the water): let's say we already have an Actor that looks like this

sealed trait Incoming
sealed trait Outgoing

class SimpleActor(upstream: ActorRef) extends Actor {
  def receive = {
    case in: Incoming =>
      // work, including some upstream ! outgoing
    case Ack =>
      // ack for the last message to upstream
    case other =>
      // work, including some upstream ! outgoing
  }
}

How do I wrap that as a Flow[Incoming, Outgoing, Unit]?

Short answer: by converting it to a PushPullStage (see the docs, http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/stream-customize.html, and here are the implementations of the common combinators: https://github.com/akka/akka/blob/release-2.3-dev/akka-stream/src/main/scala/akka/stream/impl/fusing/Ops.scala)

Long answer: the flow control protocol described by Reactive Streams (http://www.reactive-streams.org/) is a lot more involved than your simplistic Ack protocol; there are failure conditions and corner cases to be considered to make it work reliably. Implementing that protocol yourself is a non-trivial task, and we do not consider Publisher/Subscriber to be end-user API for that reason. Please trust me when I report that we spent 1.5 years together with some really bright engineers hammering out the details and trying to find the minimal specification that works, so the reason for the complexity is not that we complicated things; it lies within the subject matter itself.

Regards, Roland

On Tuesday, 26 May 2015 14:09:45 UTC+1, Sam Halliday wrote:

Re: mapAsync. I don't think that is going to work; there is no implied single response to each query (which I gather is what you're suggesting), and I need some way of receiving new messages from upstream. The existing Actor is both a sink (i.e.
it consumes messages from upstream, not necessarily responding to each one) and a source (i.e. it can send an effectively infinite number of messages). It is using backpressure, but only via its own `Ack` message. For some context, I'm retrofitting some code that uses this WebSockets layer around wandoulabs, e.g. https://github.com/smootoo/simple-spray-websockets/blob/master/src/test/scala/org/suecarter/websocket/WebSocketSpec.scala#L150

But the newly released akka-io layer expects a Flow. The `Ack` was being received when messages were sent directly to the I/O layer. Presumably the backpressure is implemented differently now... although I am not sure how yet. That's the second problem, once I can actually get everything hooked up.

On Tuesday, 26 May 2015 13:57:43 UTC+1, √ wrote:

Not knowing what your actor is trying to do, what about Flow.mapAsync + ask?

-- Cheers, √

On 26 May 2015 14:54, Sam Halliday sam.ha...@gmail.com wrote:

Hi all, I need to interface an Actor with an API that requires a Flow. The actor can receive a sealed trait family of inputs and will only send (a different) sealed family of outputs to upstream, so I suspect that will help matters. Looking in FlowOps, it looks like I can create a Flow from a partial function, but there isn't anything that would just simply take an ActorRef. Am I missing something trivial to just upgrade an ActorRef to a Flow? (Obviously there is a bunch of extra messages the actor will have to handle, such as backpressure messages etc...
but assume that's all taken care of)

Best regards, Sam
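To make the short answer above a bit more concrete, here is a toy, Akka-free model of the one-for-one contract a PushPullStage has to obey: each callback produces exactly one directive, either pushing an element downstream or pulling more from upstream, never both. The real trait lives in `akka.stream.stage` and its callbacks take a `Context` argument; everything below is an illustrative simplification, not the actual API:

```scala
// What a stage may tell the stream engine after a callback fires.
sealed trait Directive
case class Push(elem: String) extends Directive
case object Pull extends Directive

// A transforming stage modelled on PushPullStage semantics:
// - onPull: downstream demanded an element; emit a buffered one if we
//   have it, otherwise propagate the demand upstream (this is how
//   backpressure travels "backwards" without any Ack messages).
// - onPush: upstream delivered an element; transform it and emit exactly
//   one element downstream (one-for-one).
class UppercaseStage {
  def onPull(buffered: Option[String]): Directive =
    buffered match {
      case Some(e) => Push(e)
      case None    => Pull
    }

  def onPush(elem: String): Directive =
    Push(elem.toUpperCase)
}
```

The point of the model: the actor's hand-rolled `Ack` protocol collapses into these two callbacks, and the stream engine, not the stage, is responsible for the failure conditions and corner cases Roland mentions.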
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
Hi Roland, I've read the documentation, several times; I even gave you feedback on the documentation in an earlier milestone phase. Also, the documentation for WebSockets in Akka is TODO and TODO, and the documentation on the routing directives is extremely sparse. In particular, there are no promises around the implementation of back pressure in the new websockets.

What I'm missing is the ability to hook an existing actor system into something that expects a Flow, with back pressure preserved. I understand Flow, but I don't understand the implementation in terms of Actors (which, incidentally, is exactly my primary feedback on the earlier documentation). You're now confusing me further by saying that Streams are not Actors, because I was told at the time that streams are implemented in terms of actors.

In case you didn't pick up on it, I'm planning on moving away from wandoulabs, not integrating it. This is the key piece, distilled into a standalone problem.

Best regards, Sam

On 27 May 2015 7:35 am, Roland Kuhn goo...@rkuhn.info wrote:

Hi Sam, it might be better to take a step back before potentially running in the wrong direction. First off, Akka HTTP offers a complete solution for everything HTTP (including websockets) within an ActorSystem. Before deciding to combine this with another tool I recommend that you first explore how Akka HTTP works, because it introduces several fundamentally new concepts. In particular, when talking about it as "Spray 2.0" it is important to note that everything ActorRef-related in Spray has been replaced by Streams, a completely different abstraction that is *not* an Actor. The whole underpinnings have been completely rewritten in a radically different fashion, so don't expect any Spray modules that live "beneath the surface" to seamlessly fit onto Akka HTTP. We could go into the details of Wandoulabs' websocket add-on, but I don't see much value in discussing that before the basics are clear.
The other piece of information that I’m lacking is why you would want to “retrofit” something in this context, it might be better to explain the ends and not the means in order to get help. Regards, Roland 23 maj 2015 kl. 12:38 skrev Sam Halliday sam.halli...@gmail.com: Hi all, I'm very excited that akka-io now has WebSocket support. In ENSIME, we're planning on using this wrapper over wandoulab's websockets https://github.com/smootoo/simple-spray-websockets to easily create a REST/WebSockets endpoint with JSON marshalling for a sealed family, with backpressure. Smootoo's wrapper works really well, and I have had the pleasure of using it in a corporate environment so I trust it to be stable. For future proofing, it would seem sensible to move to stock akka-io for WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have a couple of questions about that: 1. does akka-io's HTTP singleton actor support WebSockets now? That was the big caveat about using wandoulabs. It means all kinds of workarounds if you want to just use HTTP in the same actor system. 2. is there a migration guide for wandoulabs to akka-io? Or would it be best just to rewrite the wrapper from scratch on top of akka-io? 3. where is the documentation? This just has a big TODO on it http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/routing-dsl/websocket-support.html I can't even find any examples. 
I guess the key thing is the handshaking, which would mean rewriting this bit (and the corresponding client side handshake) https://github.com/smootoo/simple-spray-websockets/blob/master/src/main/scala/org/suecarter/websocket/WebSocket.scala#L167 Best regards, Sam -- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout. *Dr. Roland Kuhn* *Akka Tech Lead* Typesafe http://typesafe.com/ – Reactive apps on the JVM. twitter: @rolandkuhn http://twitter.com/#!/rolandkuhn
Re: [akka-user] state of backpressure in websockets?
Simple question: you have a faucet and a drain and you want to get the water from one to the other without spilling any. How do you do that? Of course you connect the two with a hose, and you do so while making sure that everything is tight. That way the pressure in the faucet will be relieved only through the drain, but the drain can limit how much water flows through the whole system, through back-pressure. Akka HTTP is built on Streams for exactly this reason: a Flow is a watertight connection between a Sink and a Source. If you build your solution with these elements then no water is spilt (and no kittens die). Sink.actorRef corresponds to leaving the end of the hose open and unconnected, placing a bucket below it that will hopefully catch most of the water. So, to conclude: if you want to use our nice and watertight websockets, then you’ll have to use hoses (Streams) and not hammers (Actors). Of course you can use a hammer to build a “hose” but that will be a lot of work since you’ll effectively be smithing a manifold. Regards, Roland

26 maj 2015 kl. 18:41 skrev Sam Halliday sam.halli...@gmail.com: Hi all, I have another thread about retrofitting wandoulabs websockets to use akka-io, which is proving painful, but I wanted to separate out this aspect of the questioning. Before I invest any more time into it, I'd like to know if the new websockets implementation actually implements backpressure on the server and client side, for both reading and writing from the socket (there are four channels requiring backpressure in a single client/server connection). Even if the implementation has backpressure at the IO level, it looks like the only way to create a Flow from an Actor is via Sink.actorRef (plus some other magic with Sources and the DSL that I haven't figured out yet) ... and that explicitly says in the documentation there is no back-pressure signal from the destination actor, i.e.
if the actor is not consuming the messages fast enough the mailbox of the actor will grow, which means that passing off to an actor backend to implement the websockets server is ultimately not going to have any backpressure when reading off the socket. I don't know what the situation is for writing to the socket, but certainly this is something that my current backend library is able to handle. So is this reactive, or what? Best regards, Sam

Dr. Roland Kuhn, Akka Tech Lead, Typesafe http://typesafe.com/ – Reactive apps on the JVM. twitter: @rolandkuhn http://twitter.com/#!/rolandkuhn
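The hose-versus-bucket picture above can be made concrete with a stdlib-only Scala toy (a model of the idea, not the Akka Streams API): a bounded queue tells a fast producer to stop, while an unbounded queue, the Sink.actorRef "bucket", just keeps filling up with no upstream signal.

```scala
import java.util.concurrent.{ArrayBlockingQueue, LinkedBlockingQueue}

object BackpressureToy extends App {
  // "Hose": a bounded queue. offer() returns false once capacity is reached,
  // i.e. the producer is told to slow down instead of data being dropped silently.
  val hose = new ArrayBlockingQueue[Int](2)
  val accepted = (1 to 5).count(hose.offer(_))
  assert(accepted == 2) // producer was stopped after 2 in-flight elements

  // "Bucket" (the Sink.actorRef analogue): an unbounded queue accepts everything;
  // if the consumer never drains it, the backlog grows without any upstream signal.
  val bucket = new LinkedBlockingQueue[Int]()
  (1 to 5).foreach(bucket.put)
  assert(bucket.size == 5) // nothing pushed back, everything buffered

  println(s"hose accepted: $accepted, bucket backlog: ${bucket.size}")
}
```

The same distinction is what separates a backpressured Flow from an unbounded actor mailbox in the discussion above.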
Re: [akka-user] [akka streams] question on some time related use cases
Hi, btw can a Stage be stateful? Is reading/writing this in a PushPullStage thread safe?

var state: Map[A, Cancellable] = Map.empty

Thanks, Jakub

On Friday, January 23, 2015 at 2:42:11 AM UTC+1, Frank Sauer wrote: Thanks for the pointers Endre, I’ll explore those ideas. Frank On Jan 22, 2015, at 4:02 AM, Endre Varga endre...@typesafe.com wrote: On Thu, Jan 22, 2015 at 5:07 AM, Frank Sauer fsau...@gmail.com wrote: Update, in a simple test scenario like so:

val ticks = Source(1 second, 1 second, () => "Hello")
val flow = ticks.transform(() => new FilterFor[String](10 seconds)(x => true)).to(Sink.foreach(println(_)))
flow.run()

I'm seeing the following error, so this doesn't work at all and I'm not sure it is because of threading:

java.lang.ArrayIndexOutOfBoundsException: -1
at akka.stream.impl.fusing.OneBoundedInterpreter.akka$stream$impl$fusing$OneBoundedInterpreter$$currentOp(Interpreter.scala:175)
at akka.stream.impl.fusing.OneBoundedInterpreter$State$class.push(Interpreter.scala:209)
at akka.stream.impl.fusing.OneBoundedInterpreter$$anon$1.push(Interpreter.scala:278)
at experiments.streams.time$FilterFor$$anonfun$1.apply$mcV$sp(time.scala:46)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I think I'm violating the one very important rule mentioned in the docs - when the timer fires it calls a push on the context but there is also a pull going on concurrently(?) - and this is indeed breaking in spectacular ways as expected :) I have no idea how to implement this correctly. It looked pretty simple at first, but alas...
See my previous mail. The main problem here is mixing backpressured streams (your data) and non-backpressured events (timer triggers) in a safe fashion. Well, the main problem is not how to implement it, but how to expose an API to users which is as safe as possible. We have groupedWithin, takeWithin and dropWithin as timer based ops, but no customization for now. -Endre

On Wednesday, January 21, 2015 at 8:51:21 PM UTC-5, Frank Sauer wrote: Thanks, I came up with the following, but I have some questions:

/**
 * Holds elements of type A for a given finite duration after a predicate p first yields true and as long as subsequent
 * elements matching that first element (e.g. are equal) still satisfy the predicate. If a matching element arrives during
 * the given FiniteDuration for which the predicate p does not hold, the original element will NOT be pushed downstream.
 * Only when the timer expires and no matching elements have been seen for which p does not hold, will elem be pushed
 * downstream.
 *
 * @param duration The polling interval during which p has to hold true
 * @param p The predicate that has to remain true during the duration
 * @param system implicit required to schedule timers
 * @tparam A type of the elements
 */
class FilterFor[A](duration: FiniteDuration)(p: A => Boolean)(implicit system: ActorSystem) extends PushStage[A, A] {

  var state: Map[A, Cancellable] = Map.empty

  override def onPush(elem: A, ctx: Context[A]): Directive = state.get(elem) match {
    case Some(timer) if !p(elem) =>
      // pending timer but condition no longer holds => cancel timer
      timer.cancel()
      state = state - elem
      ctx.pull()
    case None if p(elem) =>
      // no pending timer and predicate true - start and cache new timer
      val timer = system.scheduler.scheduleOnce(duration) {
        // when timer fires, remove from state and push elem downstream
        state = state - elem
        ctx.push(elem) // is this safe?
      }
      state = state + (elem -> timer)
      ctx.pull()
    case _ => ctx.pull() // otherwise simply wait for the next upstream element
  }
}

My main concerns are these: 1) Is it safe to invoke ctx.push from the thread on which the timer fires? 2) How do I react to upstream or downstream finish or cancel events - do I have to? 3) Can I integrate this into the DSL without using transform, e.g. can I somehow add a filterFor method on something via a pimp-my-library? Any and all pointers would be very much appreciated, Thanks, Frank On Friday, January 16, 2015 at 11:52:03 AM UTC-5, Akka Team wrote: Hi Frank! We do not have such operations off-the-shelf, however they are easily implementable by using custom stream processing stages:
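As an aside for readers, the intended semantics of the filterFor stage discussed above can be pinned down with a small, stream-free Scala model. This is purely illustrative: it works offline over a finished event list, and the names and the (timestamp, key, value) encoding are hypothetical, not the Akka stage API.

```scala
object FilterForModel extends App {
  // Events are (timestampMillis, key, observation). A key is emitted only if the
  // predicate held at some observation of it and for every later observation of
  // the same key within the window.
  def filterFor[K, V](events: List[(Long, K, V)], windowMillis: Long)(p: V => Boolean): List[K] =
    events.collect {
      case (t0, k, v) if p(v) && events.forall { case (t, k2, v2) =>
        k2 != k || t <= t0 || t > t0 + windowMillis || p(v2)
      } => k
    }.distinct

  val events = List(
    (0L, "cpu-high", true), (5L, "cpu-high", true),   // stays true for the window
    (0L, "disk-full", true), (5L, "disk-full", false) // contradicted within the window
  )
  val alarms = filterFor(events, windowMillis = 10L)(identity)
  assert(alarms == List("cpu-high")) // only the uncontradicted alarm fires
  println(alarms)
}
```

A streaming version of these semantics is exactly what the thread is about: the hard part is doing the same thing incrementally without calling into the stage from a foreign timer thread.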
[akka-user] [2.3.11] Is akka.persistence production ready?
Hi, I'm going to use akka.persistence in production. Is it ready? What are more stable alternatives? Thanks Amir
[akka-user] is it a good way to initialize Actor?
I need to run a static method on a Java class (provided as a jar dependency). I also want to be able to write tests for it. I created a wrapper class in Scala that calls this method. It looks like:

object CurrentLogProcessor { }

class CurrentLogProcessor {
  def run(): Unit = LogReaderDisruptor.main(Array())
}

case object LogProcessRequest

object LProcessor {
  def props(currentLogProcessor: CurrentLogProcessor) = Props(new LProcessor(currentLogProcessor))
}

class LProcessor(currentLogProcessor: CurrentLogProcessor) extends Actor {
  val log = Logging(context.system, this)
  def receive = {
    case LogProcessRequest =>
      log.debug("starting log processing")
      currentLogProcessor.run()
  }
}

and in my App I call this actor as

val logProcessor = system.actorOf(LProcessor.props(new CurrentLogProcessor), "logProcessor")
logProcessor ! LogProcessRequest

This compiles and works fine. Also, I am planning to mock(CurrentLogProcessor) so that I may be able to test it. Does this approach look good? I am new to akka so wanted to make sure I am not making mistakes here. Thank you
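A stream-free sketch of why passing the collaborator through the constructor helps testing: in a test you substitute a stub and assert it was invoked, without ever touching the real LogReaderDisruptor.main. The trait and stub names here are hypothetical; a real actor test would use Akka's TestKit, but the mechanism is the same.

```scala
object LProcessorTestSketch extends App {
  // Hypothetical trait boundary so a test can swap in a stub:
  trait LogProcessor { def run(): Unit }

  class StubLogProcessor extends LogProcessor {
    var invocations = 0
    def run(): Unit = invocations += 1
  }

  // The actor's receive block, reduced to a plain function for illustration:
  def receive(processor: LogProcessor): PartialFunction[Any, Unit] = {
    case "LogProcessRequest" => processor.run()
  }

  val stub = new StubLogProcessor
  receive(stub)("LogProcessRequest")
  assert(stub.invocations == 1) // the request triggered exactly one run
  println(s"invocations = ${stub.invocations}")
}
```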
[akka-user] Cluster sharded actors never receive messages
I have a single Akka node running local to my machine. It seems to join itself fine. Messages never make it to the sharded actors and I can't tell if they are created. Code and logs below, thanks in advance.

application.conf:

akka {
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2551
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://default@127.0.0.1:2551"]
    auto-down-unreachable-after = 10s
  }
}

* actor sending to the sharded actor:

class ClusterTestWrapperActor(val clusterSharding: ClusterSharding) extends Actor with ActorLogging {
  implicit val timeout = Timeout(1.second)
  override def receive: Receive = {
    case msg: NewMessage =>
      val clusterTestActor: ActorRef = clusterSharding.shardRegion("ClusterTest")
      clusterTestActor ! msg
    case msg: GetMessages =>
      val clusterTestActor: ActorRef = clusterSharding.shardRegion("ClusterTest")
      val future = clusterTestActor ? msg
      val result = Await.result(future, timeout.duration)
      sender() ! result
  }
}

* sharded actor (this is the one who never gets the messages):

class ClusterTestActor() extends PersistentActor with ActorLogging {
  var messages: List[String] = Nil
  var name: String = _ // I'd rather have this be a val set in the constructor, but I don't know how since the shard constructs it

  override def persistenceId: String = s"ClusterTestActor-$name" // not sure the relationship between this and akka cluster id

  override def receiveCommand: Receive = {
    case msg: NewMessage =>
      println(s"*** ClusterTestActor receive $msg")
      name = msg.name
      persist(msg) { event =>
        messages = s"${event.message}" :: messages
      }
    case msg: GetMessages =>
      println(s"*** ClusterTestActor receive $msg")
      sender() ! Messages(msg.name, messages)
  }

  override def receiveRecover: Receive = {
    case msg: NewMessage =>
      messages = s"**recovered** ${msg.message}" :: messages
      println(s"** recover $msg")
  }
}

sealed trait Inputs
case class GetMessages(name: String) extends Inputs
case class NewMessage(name: String, message: String) extends Inputs

sealed trait Outputs
case class Messages(name: String, messages: List[String]) extends Outputs

* cluster sharding start (I can verify that this is starting, and that the IdExtractor is only used to see if it is defined for the partial function, never actually evaluated/extracted):

val idExtractor: ShardRegion.IdExtractor = {
  case msg @ GetMessages(name) =>
    println(s" ***${name}**")
    (name, msg)
  case msg @ NewMessage(name, message) =>
    println(s"${name}**")
    (name, msg)
}

val shardResolver: ShardRegion.ShardResolver = {
  case GetMessages(name) => name.head.toString
  case NewMessage(name, _) =>
    println(s"**${name.head.toString}**")
    name.head.toString
}

clusterSharding.start(
  typeName = "ClusterTest",
  entryProps = Some(Props[ClusterTestActor]),
  idExtractor = idExtractor,
  shardResolver = shardResolver)

I've also got a BroadcastActor for monitoring and it strangely receives the Messages that I send, but only the first one.
class BroadcastActor extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit = {
    cluster.subscribe(
      self,
      initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])
  }

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case msg => log.info(s"$msg")
  }
}

* relevant logs make up the rest of this post: Akka config: Config(SimpleConfigObject({PID:35270,akka:{actor:{creation-timeout:20s,debug:{autoreceive:off,event-stream:off,fsm:off,lifecycle:off,receive:off,router-misconfiguration:off,unhandled:off},default-dispatcher:{attempt-teamwork:on,default-executor:{fallback:fork-join-executor},executor:default-executor,fork-join-executor:{parallelism-factor:3,parallelism-max:64,parallelism-min:8},mailbox-requirement:,shutdown-timeout:1s,thread-pool-executor:{allow-core-timeout:on,core-pool-size-factor:3,core-pool-size-max:64,core-pool-size-min:8,keep-alive-time:60s,max-pool-size-factor:3,max-pool-size-max:64,max-pool-size-min:8,task-queue-size:-1,task-queue-type:linked},throughput:5,throughput-deadline-time:0ms,type:Dispatcher},default-mailbox:{mailbox-capacity:1000,mailbox-push-timeout-time:10s,mailbox-type:akka.dispatch.UnboundedMailbox,stash-capacity:-1},deployment:{default:{cluster:{allow-local-routees:on,enabled:off,max-nr-of-instances-per-node:1,routees-path:,use-role:},dispatcher:,mailbox:,metrics-selector:mix,nr-of-instances:1,remote:,resizer:{backoff-rate:0.1,backoff-threshold:0.3,enabled:off,lower-bound:1,messages-per-resize:10,pressure-threshold:1,rampup-rate:0.2,upper-bound:10},routees:{paths:[]},router:from-code,tail-chopping-router:{interval:10
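One observation on the code above: the shardResolver buckets entities by the first character of the name, which can concentrate load on a few shards. A common alternative, sketched here in plain Scala under the assumption of a fixed shard count, is hashing the entity id. Akka's ShardResolver is just a function from message to shard-id String, so a bucketing function like this plugs in where name.head.toString is used.

```scala
object ShardResolverSketch extends App {
  val numberOfShards = 10 // assumed fixed up front; changing it later re-maps entities

  // Hash-based shard resolution: deterministic and roughly uniform across shards.
  // The extra modulo step keeps the result non-negative even for negative hashCodes.
  def shardFor(entityName: String): String = {
    val h = entityName.hashCode % numberOfShards
    (if (h < 0) h + numberOfShards else h).toString
  }

  assert(shardFor("alice") == shardFor("alice")) // same entity always lands on the same shard
  val shard = shardFor("bob").toInt
  assert(shard >= 0 && shard < numberOfShards)   // always within the shard range
  println(s"bob -> shard $shard")
}
```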
Re: [akka-user] Announcement: Opinionated RabbitMQ library using Akka / Reactive Streams
On May 27, 2015, at 07:47, Tal Pressman kir...@gmail.com wrote: Hi Tim, This looks great - I was just thinking of implementing something like this myself, so the timing couldn't have been better. ^_^ Glad to hear it! I do have a couple of questions, though. From what I see in AsyncAckingConsumer, the default error handling strategy is to acknowledge the message once the retry limit has been reached. This means that some messages could, theoretically, be lost (no at-least-once-delivery). Have you considered rejecting the message instead as a default, or providing another built-in strategy that does that? It's not a problem to implement it independently, but it could be a bit surprising for the user. You're right; after the retry limit, the messages are lost. Worse, message order is not maintained, so the recovery strategy is good only when message order does not matter. I had to do this because I use message headers to count retries. If you look at impl/AsyncAckingConsumer.scala https://github.com/SpinGo/op-rabbit/blob/master/core/src/main/scala/com/spingo/op_rabbit/AsyncAckingConsumer.scala you'll find the implementation for the RecoveryStrategy withRetry. I'd be open to an alternative recovery strategy. If the future returned by the RecoveryStrategy is a failure, then the original delivery gets Nacked. Regarding configuration, is there any way of configuring the connection dynamically? I couldn't find anywhere in the code that overrides the settings read from the default config file. For example, in my use case I have to be able to open connections to several different RabbitMQ clusters, and it doesn't seem to be possible with the current implementation. This is doable; we could model the configuration values available with a case class, then create a method fromConfig to pull it from typesafe configuration, and make that the default.
As a side note, is it possible to change the configuration element to something a little less general (maybe rabbit-op.rabbitmq)? That may be a good idea, especially if op-rabbit begins adding more configuration options other than just how to connect to rabbitmq; however, I'd hope to avoid adding more configuration if possible. Finally, a couple of things regarding the stream module: From my understanding of streams, creating a Source and creating a flow should not depend on the actual RabbitMQ connection/subscription. Instead, the subscription (and possibly the connection/channel as well?) should only be created once the flow gets materialized. Have you considered using Source.actorPublisher(Props) to create the actor and subscribe in the actor's preStart or something? I just about went down this route originally, but due to code constraints I didn't. I've since refactored the code however to make that feasible. This would be a simpler approach. The other thing about streams ties in with Roland's comments (I think) about the use of Futures with streams. It means that the entire flow must now be aware of the fact that it's a RabbitMQ flow (or at the very least, that its messages contain the Future), so it is not as composable as it might have been otherwise. Also, I don't see how it plays with streams' error handling mechanism / strategies. At the same time, the use of Futures to track messages is very elegant, and I don't see any easy way of achieving something similar with streams (maybe something using BidiFlow?). I've considered Roland's comments deeply. I started down the route of implementing his suggestion and abandoned it as complexity grew. We have cases where the messages will be routed to different message sinks, or perhaps delivered to two different sinks.
Passing along a promise allows us to fork it, such that the upstream promise is fulfilled only after the two forked downstream promises are (that way enabling us to declare the work done once the byproduct messages have all been confirmed to be persisted). In any case, like I already said, this looks like a very nice library. If you need any help with it, please let me know - I would love to contribute to it. If you'd like to toy with alternative recovery strategies, that would be most helpful. I can work on switching the streams over to your suggestion of having them handle allocating and subscribing to the subscription. Tal Thanks! Tim
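The promise-forking described above can be sketched with plain Scala Futures (hypothetical names; op-rabbit's actual plumbing differs): the upstream acknowledgement completes only once every downstream sink has confirmed its copy of the message.

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ForkedAck extends App {
  // Complete the upstream ack only when all downstream confirmations are in.
  def forkAck(upstream: Promise[Unit], downstream: Seq[Future[Unit]]): Unit =
    upstream.completeWith(Future.sequence(downstream).map(_ => ()))

  val up = Promise[Unit]()
  val sinkA, sinkB = Promise[Unit]() // two independent downstream confirmations
  forkAck(up, Seq(sinkA.future, sinkB.future))

  sinkA.success(())
  assert(!up.isCompleted) // one confirmation is not enough
  sinkB.success(())
  Await.result(up.future, 2.seconds) // both confirmed => upstream acked
  println("upstream acked after both sinks confirmed")
}
```

This is the same shape as acking a RabbitMQ delivery only after every byproduct message has been confirmed persisted.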
Re: [akka-user] Migrating from wandoulabs websockets to stock akka-io
27 maj 2015 kl. 08:51 skrev Sam Halliday sam.halli...@gmail.com: Hi Roland, I've read the documentation, several times, I've even given you feedback on the documentation in an earlier milestone phase. Also, the documentation for WebSockets in Akka is TODO and TODO. The documentation on the routing directives are extremely sparse. In particular, there are no promises around the implementation of back pressure from the new websockets. For this venture you need no HTTP documentation, and the documentation for Streams is complete (only lacking a translation to Java in one place). The statement “there are no promises around back-pressure” indicates that you did not, in fact, understand the full extent of what Akka Streams are. We’d love to improve the documentation in this regard, but we’d need to first figure out where their deficiency is, hence I’m talking with you. What I'm missing is the ability to hook an existing actor system into something that expects a Flow, with back pressure preserved. As I hopefully explained in my other mail about hoses: your Actors would need to implement the full spec, they’d need to be watertight. This is why “simple” Sink.actorRef integration will not work if you don’t intend on spilling messages. I understand Flow, but I don't understand the implementation in terms of Actors (which incidentally, is exactly my primary feedback on the earlier documentation). You're now confusing me further by saying that Streams are not actors, because I was told at the time that streams are implemented in terms of actors. How we implement Flows internally should be of no consequence—suffice it to say that the project took 1.5 years because that task is genuinely hard. In case you didn't pick up on it, I'm planning on moving away from wandoulabs, not integrate it. This is the key piece, distilled into a standalone problem. In order to solve this particular problem you’ll need to carefully describe the back-pressure protocol spoken by your Actor. 
As Viktor suggested, an easy way to integrate is via the ask pattern, and that works because of the 1:1 correspondence between ins and outs that allows automatic back-pressure propagation. If that is not applicable then there is no generic solution. Regards, Roland

Best regards, Sam On 27 May 2015 7:35 am, Roland Kuhn goo...@rkuhn.info wrote: Hi Sam, it might be better to take a step back before potentially running in the wrong direction. First off, Akka HTTP offers a complete solution for everything HTTP (including websockets) within an ActorSystem. Before deciding to combine this with another tool I recommend that you explore first how Akka HTTP works, because it introduces several fundamentally new concepts. In particular, when talking about it as “Spray 2.0” it is important to note that everything ActorRef-related in Spray has been replaced by Streams—a completely different abstraction that is not an Actor. The whole underpinnings are completely rewritten in a radically different fashion, so don’t expect any Spray modules that live “beneath the surface” to seamlessly fit onto Akka HTTP. We could go into the details of Wandoulabs’ websocket add-on, but I don’t see much value in discussing that before the basics are clear. The other piece of information that I’m lacking is why you would want to “retrofit” something in this context; it might be better to explain the ends and not the means in order to get help. Regards, Roland 23 maj 2015 kl. 12:38 skrev Sam Halliday sam.halli...@gmail.com: Hi all, I'm very excited that akka-io now has WebSocket support. In ENSIME, we're planning on using this wrapper over wandoulab's websockets https://github.com/smootoo/simple-spray-websockets to easily create a REST/WebSockets endpoint with JSON marshalling for a sealed family, with backpressure.
Smootoo's wrapper works really well, and I have had the pleasure of using it in a corporate environment so I trust it to be stable. For future proofing, it would seem sensible to move to stock akka-io for WebSockets, so I'm considering sending a PR to retrofit the wrapper. I have a couple of questions about that: 1. does akka-io's HTTP singleton actor support WebSockets now? That was the big caveat about using wandoulabs. It means all kinds of workarounds if you want to just use HTTP in the same actor system. 2. is there a migration guide for wandoulabs to akka-io? Or would it be best just to rewrite the wrapper from scratch on top of akka-io? 3. where is the documentation? This just has a big TODO on it http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/client-side/websocket-support.html
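The 1:1 in/out correspondence behind the ask-pattern suggestion above can be modelled with plain Scala Futures (a toy of the idea, not the Akka Streams API): each input yields exactly one reply Future, and the next request is issued only after the previous reply arrives, so the actor is never flooded.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object AskModel extends App {
  // One reply per input; the next "ask" starts only once the previous reply
  // has arrived, so at most one request is ever outstanding (parallelism = 1).
  def askSequentially[A, B](xs: List[A])(ask: A => Future[B]): Future[List[B]] =
    xs.foldLeft(Future.successful(List.empty[B])) { (acc, a) =>
      acc.flatMap(bs => ask(a).map(bs :+ _)) // waits for acc before asking again
    }

  val replies = Await.result(askSequentially(List(1, 2, 3))(n => Future(n * 2)), 5.seconds)
  assert(replies == List(2, 4, 6)) // order preserved, exactly one out per in
  println(replies)
}
```

Because every element must be answered before the next is sent, back-pressure propagates automatically from the responder to the producer, which is exactly the property Roland describes.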
[akka-user] Re: [akka-http] how does the formFields directive work?
You need both an implicit FlowMaterializer and an implicit ExecutionContext in scope. I haven't followed the chain to check why those are needed, but basically I keep adding them (one or the other, or both) wherever I have compilation errors and it works. It should be something related to marshalling; I'll leave the answer to the more expert guys around here.

val route = extractFlowMaterializer { implicit mat =>
  implicit val ec = mat.executionContext
  formFields('color, 'age.as[Int]) { (color, age) =>
    complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
  }
}

On Tuesday, 26 May 2015 14:35:12 UTC+2, Ian Phillips wrote: If I try to follow the example from the documentation it doesn't compile.

val route = formFields('color, 'age.as[Int]) { (color, age) =>
  complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
}

I get the following error:

too many arguments for method formFields: (pdm: akka.http.scaladsl.server.directives.FormFieldDirectives.FieldMagnet)pdm.Out
formFields('color, 'age.as[Int]) { (color, age) =>
^
one error found

Is the documentation wrong, or (more likely) what stupid mistake am I making with this? In case it makes a difference here, I'm importing:

import akka.http.scaladsl.server._
import akka.http.scaladsl.server.Directives._

Cheers, Ian.
[akka-user] Re: [akka-http] how does the formFields directive work?
Well, since I'm here, let me ask: all the marshalling code requires an execution context (I can see that), but why a materializer? I can't find where the implicit mat is needed anywhere.

Thanks
G

On Wednesday, 27 May 2015 09:21:37 UTC+2, Giovanni Alberto Caporaletti wrote:

You need both an implicit FlowMaterializer and an implicit ExecutionContext in scope. I haven't followed the chain to check why they are needed, but basically I keep adding them (one or the other, or both) wherever I get compilation errors, and it works. It should be something related to marshalling; I'll leave the answer to the more expert guys around here.

    val route = extractFlowMaterializer { implicit mat =>
      implicit val ec = mat.executionContext
      formFields('color, 'age.as[Int]) { (color, age) =>
        complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
      }
    }

On Tuesday, 26 May 2015 14:35:12 UTC+2, Ian Phillips wrote:

If I try to follow the example from the documentation, it doesn't compile.

    val route = formFields('color, 'age.as[Int]) { (color, age) =>
      complete(s"The color is '$color' and the age ten years ago was ${age - 10}")
    }

I get the following error:

    too many arguments for method formFields: (pdm: akka.http.scaladsl.server.directives.FormFieldDirectives.FieldMagnet)pdm.Out
      formFields('color, 'age.as[Int]) { (color, age) =>
                                                      ^
    one error found

Is the documentation wrong, or (more likely) what stupid mistake am I making with this? In case it makes a difference here, I'm importing:

    import akka.http.scaladsl.server._
    import akka.http.scaladsl.server.Directives._

Cheers,
Ian.

--
Read the docs: http://akka.io/docs/
Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
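For anyone puzzled by the "too many arguments" error above: formFields is built on the magnet pattern, i.e. it declares a single FieldMagnet parameter and relies on an implicit conversion (which in turn needs unmarshaller implicits, some of them backed by a FlowMaterializer) to turn the argument list into a magnet. When any of those implicits is missing, the conversion does not apply, and the compiler falls back to reporting an argument-count mismatch. Here is a minimal, self-contained sketch of that mechanism; the names FieldMagnet, FromString, and fromPair are illustrative and are not akka-http's actual internals:

```scala
import scala.language.implicitConversions

object MagnetDemo {
  // Stand-in type class; akka-http's real counterpart is an Unmarshaller
  // whose instances may require a FlowMaterializer.
  trait FromString[T] { def convert(s: String): T }
  implicit val intFromString: FromString[Int] =
    new FromString[Int] { def convert(s: String): Int = s.toInt }

  // The magnet: a single parameter type that absorbs different call shapes.
  trait FieldMagnet { type Out; def apply(): Out }

  // The implicit conversion that builds the magnet. It only applies when a
  // FromString[T] is in scope; if that implicit is missing, the conversion
  // is not considered, and the compiler reports the original call as
  // "too many arguments for method formFields" instead.
  implicit def fromPair[T](pair: (String, String))(
      implicit fs: FromString[T]): FieldMagnet { type Out = (String, T) } =
    new FieldMagnet {
      type Out = (String, T)
      def apply(): Out = (pair._1, fs.convert(pair._2))
    }

  // Like the real formFields: exactly one magnet parameter, and the result
  // type is determined by whichever magnet was built at the call site.
  def formFields(magnet: FieldMagnet): magnet.Out = magnet()
}
```

For example, MagnetDemo.formFields(("age", "42")) yields ("age", 42). The real directive additionally relies on Scala's argument adaptation, so that a two-argument call like formFields('color, 'age.as[Int]) is tupled before the conversion is tried.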
[akka-user] Re: [akka-http] how does the formFields directive work?
Got it! Sorry for the spam:

    implicit def defaultUrlEncodedFormDataUnmarshaller(implicit fm: FlowMaterializer): FromEntityUnmarshaller[FormData]

FYI: to find this I had to create fake methods taking the implicit parameters required by the code, e.g.

    def x()(implicit a: FromEntityUnmarshaller[StrictForm], b: FromStrictFormFieldUnmarshaller[String]) = ???

and follow the implicit calls using the IDE. Is there a better way to do this?

Thanks again
G
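On "is there a better way": two common alternatives to fake-method probes (both standard Scala tooling, not specific to akka-http) are summoning the candidate implicit directly with implicitly, which makes the compiler name exactly the missing type, and compiling with scalac -Xlog-implicits, which prints why each implicit candidate was rejected. A small sketch of the implicitly probe, using an illustrative type class rather than akka-http's real unmarshallers:

```scala
object ImplicitProbe {
  // Illustrative type class standing in for e.g. FromEntityUnmarshaller.
  trait FromString[T] { def convert(s: String): T }
  implicit val intFromString: FromString[Int] =
    new FromString[Int] { def convert(s: String): Int = s.toInt }

  // implicitly[X] summons the X instance the compiler would pick. If none is
  // in scope, the error is "could not find implicit value for parameter ...",
  // naming the missing type directly instead of a misleading downstream
  // error such as "too many arguments".
  def probe(): Int = implicitly[FromString[Int]].convert("7")

  // implicitly[FromString[Boolean]]  // would fail to compile: no instance in scope
}
```

The same probe works inside a route definition: dropping an implicitly[...] line next to the failing directive call tells you whether the required implicit is actually visible at that point.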