Re: [akka-user] Akka actors not processing messages after a while
Hi Yogesh, what do you see in your server logs? Try to set the log level to eleven, I mean DEBUG (http://doc.akka.io/docs/akka/2.3.7/scala/logging.html), and check whether you see something out of the ordinary. How fast are messages processed in the backend? Do you do blocking there? Do you use Await? No errors plus unresponsive actors may indicate thread starvation in the dispatcher that is driving your backend actor system, which can be caused by blocking operations in your actor code.

On Tue, Nov 25, 2014 at 10:08 PM, Yogesh yogesh30...@gmail.com wrote:

Hi, I am developing an application using Scala and Akka actors where I have a client and a server. The client is on a remote machine. I have set up a scheduler on the client which sends messages to the server at regular intervals. On the server end, I have created a smallest-mailbox pool of server actors which process the messages sent by the client. The server actors process messages for a few seconds, say 60 seconds, and then stop processing. I am not able to figure out what exactly is happening at the server end. Are the server actors getting killed? I have verified that the client is continuously sending the messages. Are the messages getting lost between client and server? Thanks for your help. Regards, Yogesh

-- Read the docs: http://akka.io/docs/ Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html Search the archives: https://groups.google.com/group/akka-user --- You received this message because you are subscribed to the Google Groups Akka User List group. To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com. To post to this group, send email to akka-user@googlegroups.com. Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/d/optout.
-- Martynas Mickevičius Typesafe http://typesafe.com/ – Reactive http://www.reactivemanifesto.org/ Apps on the JVM
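Martynas's point about blocking and Await starving the dispatcher can be sketched in plain Scala, without any Akka APIs (all names here are invented for illustration): hand blocking work to a dedicated pool so the threads that run the actors stay free.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future, blocking}
import scala.concurrent.duration._

// A dedicated pool for blocking work, so the dispatcher that runs the
// actors keeps its threads free (daemon threads so the JVM can exit).
val blockingPool: ExecutionContext = ExecutionContext.fromExecutorService(
  Executors.newFixedThreadPool(4, (r: Runnable) => {
    val t = new Thread(r); t.setDaemon(true); t
  }))

// Stand-in for a slow, blocking backend call (hypothetical).
def slowLookup(id: Int): Int = blocking { Thread.sleep(10); id * 2 }

// Inside an actor you would pipe this Future back to self
// instead of awaiting it on the dispatcher thread.
val result: Future[Int] = Future(slowLookup(21))(blockingPool)
```

Awaiting inside `receive` instead would tie up a dispatcher thread for the full duration of the call, which is exactly the starvation pattern described above.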
Re: [akka-user] Re: Trying to figure out why all threads for a dispatcher will block at the same time consistently
Great investigative work! Thanks for letting us know what the problem was.

On Wed, Nov 26, 2014 at 5:22 AM, Soumya Simanta soumya.sima...@gmail.com wrote:

I stand corrected. The blocking is not happening at Unsafe.park but at the following:

ch.qos.logback.core.AppenderBase.doAppend(Object) AppenderBase.java:64
ch.qos.logback.classic.Logger.info(String) Logger.java:607

I removed the calls to the logger and it's working now. Reporting here just in case someone else is facing the same or a similar issue.

On Tuesday, November 25, 2014 6:13:09 PM UTC-5, Soumya Simanta wrote:

I'm using the following Scala and Akka versions:

scalaVersion := "2.10.3"
val akkaVersion = "2.3.6"

On Tuesday, November 25, 2014 5:39:52 PM UTC-5, Soumya Simanta wrote:

Looks like the following is the call that blocks:

ForkJoinPool.java:2075 sun.misc.Unsafe.park(boolean, long)

On Tuesday, November 25, 2014 12:12:14 PM UTC-5, Soumya Simanta wrote:

Not all threads in my dispatcher are running at 100%. The following is my dispatcher:

my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    # Min number of threads to cap factor-based parallelism number to
    parallelism-min = 8
    # Parallelism (threads) ... ceil(available processors * factor)
    parallelism-factor = 2.0
    # Max number of threads to cap factor-based parallelism number to
    parallelism-max = 120
  }
  throughput = 200
}

I'm calling my actors using a router:

val actor = context.actorOf(
  Props(new MyActor(pubRedisClient, pubChanName))
    .withRouter(SmallestMailboxRouter(nrOfInstances = 20))
    .withDispatcher("akka.my-dispatcher"),
  name = "my-analyzer-router")

What I cannot understand is why all threads in my dispatcher go into the blocking state at the same time (please see the screenshots). This appears to happen consistently. My intuition is that if there is a blocking piece of code, it is very strange that it is executed at exactly the same time again and again. Any idea why this would be happening?
Thanks -Soumya

-- Martynas Mickevičius Typesafe http://typesafe.com/ – Reactive http://www.reactivemanifesto.org/ Apps on the JVM
Re: [akka-user] akka http+streams applying TCP backpressure
Thanks, it sounds like I must be doing something wrong. This isn't a huge priority right now, so I'll probably worry about it later.

The situation is that I have an app streaming a (potentially large) request in from a client and forwarding it on to another downstream application. I made a test (which runs in a separate process) that sends an infinitely large chunked request (its source is an ActorPublisher that creates a new chunk whenever one is requested). I also created a mock version of the downstream application (using akka-http) which requests a single chunk and then waits indefinitely. I'd expect that after a few seconds the app would stop streaming data through (once the buffers fill up), and at the very least I'd expect to notice some difference between:

a) the mock downstream cancels the entity stream immediately
b) the mock downstream requests a single chunk and then waits
c) the mock downstream requests all chunks

But they all stream about 300-400 MB before hitting my request timeout. (Interestingly, the size of the chunks doesn't seem to have much effect on the amount streamed.)
Re: [akka-user] akka http+streams applying TCP backpressure
Hi Joe,

There is a configuration section for the underlying IO machinery:

# The maximum number of bytes delivered by a `Received` message. Before
# more data is read from the network the connection actor will try to
# do other work.
akka.io.tcp.max-received-message-size = unlimited

(see http://doc.akka.io/docs/akka/2.3.7/general/configuration.html#akka-actor)

You can set that to a fixed value (recommended for streams, so we might want to make it the default for stream IO), because otherwise a lot of data gets slurped in one go. Since backpressure in streams is counted in elements, a few ByteStrings of huge size can be pulled in completely, since one Received message can contain as many bytes as the network buffers can hold.

Btw, a reproducer would help.

-Endre

On Wed, Nov 26, 2014 at 1:08 PM, Joe Edwards joeyedward...@googlemail.com wrote:
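A concrete setting along the lines Endre suggests might look like this; the 64 KiB figure is an illustrative choice on my part, not a value recommended anywhere in this thread:

```hocon
# Cap how much a single Received message may carry, so stream
# backpressure (which counts elements, not bytes) cannot be
# defeated by a handful of huge ByteStrings.
akka.io.tcp.max-received-message-size = 65536b
```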
Re: [akka-user] Design question on with akka persistence
Thanks a lot. My question was more about the pattern of creating and poison-pilling a new PersistentView for each request, since my query-optimized view changes a lot based on request params. Is that the right way to do it?

On Monday, November 24, 2014 4:10:27 AM UTC+5:30, Andrew Easter wrote: Thanks, Patrik - makes a whole lot of sense!
Re: [akka-user] testing akka cluster on Blue Waters @ NCSA
On Tue, Nov 25, 2014 at 9:35 PM, Nidhi nidhi.ashwani.aggar...@gmail.com wrote:

Thank you Patrik, we are restricting the sender to send fewer queries now.

Sounds good.

Could you tell us what exactly is to be used from the fix? We have Akka 2.3.6.

There is nothing you need to enable to use the improvements, apart from using Akka 2.3.3 or later. Latest is 2.3.7 (it is always recommended to use the latest stable release). Details can be found in the pull request, if you are interested: https://github.com/akka/akka/pull/2116 I think I published some benchmark results here: https://groups.google.com/forum/#!msg/akka-dev/mFvz_d737t4/pZSmbFRLAV8J

Note that serialization is often a bottleneck. In case you use (the default) Java serialization you should use something faster. Let me know if I misunderstood your question. Regards, Patrik

On Tuesday, November 25, 2014 2:46:44 AM UTC-5, Patrik Nordwall wrote:

On Mon, Nov 24, 2014 at 1:32 AM, Nidhi nidhi.ashwa...@gmail.com wrote:

Hello all, we are working on a course project to simulate a Twitter server and Twitter users and to test the server for the load it can handle. We have users on one system (with one client master and around 10,000 user actors) and the server on another machine (one master and 1,000 worker actors) to resolve the queries it gets from the user actors remotely. We are sending an average of 6,000 queries/sec to the server. We have one TCP connection between the two systems and we get the following warning:

[TwitterServerSystem-akka.remote.default-remote-dispatcher-6] [akka.tcp://TwitterServerSystem@122.122.122.122:2552/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FTwitterClientSystem%40127.0.0.1%3A56833-0/endpointWriter] [65138] buffered messages in EndpointWriter for [akka.tcp://TwitterClientSystem@127.0.0.1:56833]. You should probably implement flow control to avoid flooding the remote connection.

This warning indicates that many remote messages have been queued on the sender side. It is configured by:

# Log warning if the number of messages in the backoff buffer in the endpoint
# writer exceeds this limit. It can be disabled by setting the value to off.
akka.remote.log-buffer-size-exceeding = 5

The recommendation about flow control means that you should add some application-level protocol (messages) between sender and receiver that controls how much the sender is allowed to produce before it stops sending more. Without that you will run out of memory if the sender continues to produce faster than the receiver can consume.

Just came across this thread and thought it is relevant to our problem. We know that since we have one TCP connection (can we increase the number of connections?), it might be a bottleneck. Both the server and client machines buffer messages for each other. How do we go about using this new fix?

The improvements for sending bursts of remote messages have been included in Akka 2.3.x since 2.3.3. Regards, Patrik

Thank you. Regards, Nidhi

On Friday, April 25, 2014 9:48:55 AM UTC-4, Patrik Nordwall wrote:

Boris, you should try the timestamped snapshot 2.3-20140425-151510 that is published to http://repo.akka.io/snapshots/ It is supposed to handle bursts of many messages without (much) degraded throughput or false failure detection. More details here: https://groups.google.com/d/msg/akka-dev/mFvz_d737t4/pZSmbFRLAV8J Regards, Patrik

On Mon, Mar 24, 2014 at 5:00 PM, √iktor Ҡlang viktor...@gmail.com wrote: Nice!

On Mon, Mar 24, 2014 at 4:55 PM, Boris Capitanu bor...@gmail.com wrote:

Anyway, one thing to try is to set akka.remote.backoff-interval to a larger value while setting the send-buffer-size to 1024000b. I would try with backoffs 0.5s and 1s. While 1s is not a very good setting, it is a good way to test our hypothesis.

I've used backoff-interval = 0.5s and send-buffer-size = 1024000b and I do see the timings becoming more consistent (albeit worse). The standard deviation of the observed timings is much lower. Well - I think we narrowed down the issue. I'll wait for a fix... I'll be glad to test any nightly builds that include a fix if it would be helpful. -Boris
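The "application-level protocol" flow control Patrik recommends is usually credit-based: the receiver grants the sender a number of credits, and the sender only transmits while it holds credit, buffering (or refusing) the rest. A plain-Scala sketch of the bookkeeping, with no Akka APIs and all names invented for illustration:

```scala
import scala.collection.mutable

// Credit-based flow control sketch: `send` consumes a credit or buffers,
// `grant` is the receiver's ack that frees up more sending capacity.
final class CreditWindow(initialCredits: Int) {
  private var credits = initialCredits
  private val pending = mutable.Queue.empty[String]
  private val delivered = mutable.ArrayBuffer.empty[String]

  def send(msg: String): Unit =
    if (credits > 0) { credits -= 1; delivered += msg }
    else pending.enqueue(msg) // held back instead of flooding the wire

  def grant(n: Int): Unit = { // receiver reports n messages processed
    credits += n
    while (credits > 0 && pending.nonEmpty) {
      credits -= 1
      delivered += pending.dequeue()
    }
  }

  def sent: Seq[String] = delivered.toSeq
  def buffered: Int = pending.size
}
```

In an actor system the `grant` would arrive as an ordinary ack message from the receiving side, so the sender's queue is bounded by the credit window rather than by available heap.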
[akka-user] Akka streams - stability
What is the status of Akka Streams? Are there major changes on the roadmap, or is the API stable? Where can I find a bug tracker for Akka Streams? Best, Maciej
Re: [akka-user] Akka streams - stability
We’ll announce when it’s “stable stable” ;-) No huge changes planned, but there may still be a few here and there. The issue tracker is GitHub, as for the rest of Akka: https://github.com/akka/akka/issues?q=is%3Aopen+is%3Aissue+label%3At%3Astream Happy hakking!

-- Konrad 'ktoso' Malawski hAkker @ typesafe http://akka.io
Re: [akka-user] Re: how to control number of threads started by a RoundRobin worker.
Glad to hear that. /Patrik

On Tue, Nov 25, 2014 at 8:52 PM, Hector123 haidar.h...@gmail.com wrote:

I got this resolved; it was an error with the configuration object. Here is the correct setting:

akka {
  actor {
    default-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      throughput = 1000
      fork-join-executor {
        parallelism-min = 32
        parallelism-factor = 0.5
        parallelism-max = 64
      }
    }
  }
}

On Tuesday, November 25, 2014 8:04:27 AM UTC-8, Hector123 wrote:

Hello, I created an application that uses Akka with RoundRobin routers. The application takes a list of files and processes them in parallel. My issue is that regardless of the number of workers that I specify, the application processes only 12 files at a time. Is there a certain setting that I need to change?

My code:

val conf1 = ConfigFactory.load(ConfigFactory.parseString("""
  akka {
    default-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      throughput = 1000
      fork-join-executor {
        parallelism-min = 32
        parallelism-factor = 0.5
        parallelism-max = 64
      }
    }
  }
"""))
val system = ActorSystem("MySystem", conf1)
val master = system.actorOf(Props[MasterClassName], name = "myactor")
master ! CaseClass1

and then in the master class, here is how I initialize the workers:

val nworkers = 20 // or 30
val workers = context.actorOf(Props[ClassWorker].withRouter(RoundRobinRouter(nworkers)))

Any idea why?
-- Patrik Nordwall Typesafe http://typesafe.com/ - Reactive apps on the JVM Twitter: @patriknw
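As background on why the cap was 12: in Akka 2.3 the default dispatcher sizes its fork-join pool as ceil(available processors × parallelism-factor), clamped between the min and max below. Assuming the machine had 4 cores, that gives ceil(4 × 3.0) = 12 threads, which would match the "only 12 files at a time" observation (the core count is my assumption; the defaults are from the Akka 2.3 reference configuration).

```hocon
# Akka 2.3 default-dispatcher defaults, for reference:
# thread count = ceil(cores * parallelism-factor),
# clamped to [parallelism-min, parallelism-max].
akka.actor.default-dispatcher.fork-join-executor {
  parallelism-min    = 8
  parallelism-factor = 3.0
  parallelism-max    = 64
}
```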
Re: [akka-user] Re: Trying to figure out why all threads for a dispatcher will block at the same time consistently
Glad to hear that you found the problem. You might want to use the Akka async logging API and/or try the Logback AsyncAppender (http://logback.qos.ch/manual/appenders.html#AsyncAppender). Regards, Patrik

On Wed, Nov 26, 2014 at 11:17 AM, Martynas Mickevičius martynas.mickevic...@typesafe.com wrote:

Great investigative work! Thanks for letting us know what the problem was.

-- Patrik Nordwall Typesafe http://typesafe.com/ - Reactive apps on the JVM Twitter: @patriknw
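A minimal logback.xml sketch of the AsyncAppender Patrik mentions (appender names, file path, and queue size here are illustrative): log events are handed off to a background thread, so the actor that calls the logger never blocks on disk I/O.

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder><pattern>%d %-5level %logger - %msg%n</pattern></encoder>
  </appender>

  <!-- Wraps FILE so the calling thread only enqueues the event -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
    <queueSize>8192</queueSize>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC" />
  </root>
</configuration>
```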
Re: [akka-user] Design question on with akka persistence
On Wed, Nov 26, 2014 at 1:20 PM, vin...@indix.com wrote:

Thanks a lot. My question was more about the pattern of creating and poison-pilling a new PersistentView for each request, since my query-optimized view changes a lot based on request params. Is that the right way to do it?

It depends on how many events you have to replay to be able to answer the query, but it sounds rather inefficient. /Patrik

-- Patrik Nordwall Typesafe http://typesafe.com/ - Reactive apps on the JVM Twitter: @patriknw
Re: [akka-user] remote akka problem - not able to communicate
Hard to say what can be wrong. Firewall? What do you see in the logs? You can also try to run with akka.loglevel=DEBUG. Regards, Patrik

On Tue, Nov 25, 2014 at 7:52 AM, Suman Adak sumana...@gmail.com wrote:

Dear All, I have a problem with remote Akka. I'm trying to create one Akka remote actor node and connect to it from a different machine. I opened ports 2551, 2552 and 2553 on both machines, but after creating the remote actor on one machine I'm not able to access it from the other machine. I'm not even able to reach the process via TELNET, although NETSTAT shows the process running on port 2551. Please help. I am pasting my code snippet along with this mail.

My code snippet for machine A (remote):

1. Conf file:

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    netty.tcp {
      hostname = "192.168.161.131"
      port = 2551
    }
  }
}

2. Code:

import akka.actor._
import com.typesafe.config.ConfigFactory
import java.io.File
import com.typesafe.config.ConfigParseOptions

object HelloRemote {
  def main(args: Array[String]): Unit = {
    val rootCfg = ConfigFactory.parseFile(new File("./test/remote.conf"), ConfigParseOptions.defaults())
    val system = ActorSystem("HelloRemoteSystem", rootCfg)
    val remoteActor = system.actorOf(Props[RemoteActor], name = "RemoteActor")
    remoteActor ! "The RemoteActor is alive"
  }
}

class RemoteActor extends Actor {
  def receive = {
    case msg: String =>
      println(s"RemoteActor received message '$msg'")
      sender ! "Hello from the RemoteActor"
  }
}

My code snippet for machine B (local):

1. Conf file:

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    netty.tcp {
      hostname = "192.168.161.222"
      port = 2551
    }
  }
}

2. Code:

import akka.actor._
import com.typesafe.config.ConfigFactory
import java.io.File
import com.typesafe.config.ConfigParseOptions

object Local {
  def main(args: Array[String]): Unit = {
    val rootCfg = ConfigFactory.parseFile(new File("./test/local.conf"), ConfigParseOptions.defaults())
    implicit val system = ActorSystem("LocalSystem", rootCfg)
    val localActor = system.actorOf(Props[LocalActor], name = "LocalActor") // the local actor
    localActor ! "START" // start the action
  }
}

class LocalActor extends Actor {
  // look up the remote actor
  val remote = context.actorSelection("akka.tcp://HelloRemoteSystem@192.168.161.131:2551/user/RemoteActor")
  var counter = 0
  def receive = {
    case "START" =>
      remote ! "hi"
      remote ! "Hello from the LocalActor"
    case msg: String =>
      println(s"LocalActor received message: '$msg'")
      if (counter < 5) {
        sender ! "Hello back to you"
        counter += 1
      }
  }
}

-- Patrik Nordwall Typesafe http://typesafe.com/ - Reactive apps on the JVM Twitter: @patriknw
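One thing worth double-checking in the configuration above (an assumption on my part, since the Akka version in use isn't stated): `transport = "akka.remote.netty.NettyRemoteTransport"` is the pre-2.2 setting and is ignored by later versions. From Akka 2.2 onwards, remoting is enabled like this:

```hocon
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "192.168.161.131"  # machine A's address, as in the post
      port = 2551
    }
  }
}
```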
Re: [akka-user] Akka stream throttle the transmission bandwidth
Looks like you got some advice over at akka-dev: https://groups.google.com/d/msg/akka-dev/piuPD8xSnpM/E6E8nBUCrecJ /Patrik

On Tue, Nov 25, 2014 at 1:40 PM, Nicolas Jozwiak n.jozw...@gmail.com wrote:

Hello, I’m currently using akka-stream 0.11 to process some big files and I want to throttle the transmission bandwidth. For example, I would like to limit the throughput to XXX Kb/s. I saw the groupedWithin function but it does not fill my needs. Is there a way to do that? Or is it planned? Thanks for your help.

-- Patrik Nordwall Typesafe http://typesafe.com/ - Reactive apps on the JVM Twitter: @patriknw
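Bandwidth throttling is usually implemented as a token bucket driven by a timer tick: each tick refills the bucket up to its capacity, and a chunk may only pass downstream if enough tokens remain. A minimal plain-Scala sketch of that bookkeeping (names invented; this is not an akka-stream API):

```scala
// Token bucket: `capacity` bounds the burst size, `refillPerTick` times
// the tick rate gives the sustained throughput in bytes per second.
final class TokenBucket(capacity: Long, refillPerTick: Long) {
  private var tokens = capacity

  def tick(): Unit =
    tokens = math.min(capacity, tokens + refillPerTick)

  // Returns true (and consumes tokens) if a chunk of n bytes may pass now.
  def tryConsume(n: Long): Boolean =
    if (n <= tokens) { tokens -= n; true } else false
}
```

In a stream, a chunk that fails `tryConsume` would simply not be requested from upstream until a later tick, so backpressure does the buffering.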
[akka-user] Using a BalancingPool with PriorityGenerator
Hi, I am trying to get a balancing pool working in conjunction with a priority mailbox (e.g. using the PriorityGenerator). I expected the following to work:

  context.actorOf(BalancingPool(5).withDispatcher("prio-dispatcher").props(Props(new Actor)))

but it seems that the PriorityGenerator never gets called. Does anyone have any idea on how to do this?
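[Editor's note: no answer appears in this digest. For reference, and hedged against the Akka routing docs: a BalancingPool's routees share a single mailbox owned by the pool's dispatcher, so a mailbox attached to a routee-level dispatcher is bypassed; the docs configure the mailbox on the pool-dispatcher in deployment config instead. A sketch under that assumption; the names `myapp.PrioMailbox` and `/balancer` are placeholders:]

```scala
import akka.actor.ActorSystem
import akka.dispatch.{PriorityGenerator, UnboundedPriorityMailbox}
import com.typesafe.config.Config

// Hypothetical priority mailbox: lower score is dequeued first.
class PrioMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedPriorityMailbox(PriorityGenerator {
    case "urgent" => 0
    case _        => 1
  })
```

with the pool and its shared mailbox declared in configuration:

```
prio-mailbox {
  mailbox-type = "myapp.PrioMailbox"
}

akka.actor.deployment {
  /balancer {
    router = balancing-pool
    nr-of-instances = 5
    pool-dispatcher {
      mailbox = prio-mailbox
    }
  }
}
```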
Re: [akka-user] Akka Streams - Chunked Encoding
Using curl, I do seem to get some data coming down; it does the concatenation:

  HTTP/1.1 200 OK
  * Server akka-http/2.3.7 is not blacklisted
  Server: akka-http/2.3.7
  Date: Wed, 26 Nov 2014 22:50:32 GMT
  Transfer-Encoding: chunked
  Content-Type: application/x-gzip
  * Connection #0 to host localhost left intact
  Hello thereMy name is JohnI like potatoes

(I am doing multiple files and concatenating them together simply to simulate chunking.) I do see the following output, however, which has me concerned:

  [INFO] [11/26/2014 17:50:32.128] [default-akka.actor.default-dispatcher-9] [akka://default/user/IO-HTTP/$a/flow-30-0-iterable] Message [akka.actor.Terminated] from Actor[akka://default/user/IO-HTTP/$a/flow-30-0-iterable/$a#-169553250] to Actor[akka://default/user/IO-HTTP/$a/flow-30-0-iterable#-1655378203] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
  [INFO] [11/26/2014 17:50:32.129] [default-akka.actor.default-dispatcher-6] [akka://default/user/IO-HTTP/$a/flow-31-0-iterator] Message [akka.stream.impl.RequestMore] from Actor[akka://default/deadLetters] to Actor[akka://default/user/IO-HTTP/$a/flow-31-0-iterator#-959699535] was not delivered. [8] dead letters encountered.
  [INFO] [11/26/2014 17:50:32.130] [default-akka.actor.default-dispatcher-11] [akka://default/user/IO-HTTP/$a/flow-31-1-map] Message [akka.stream.impl.RequestMore] from Actor[akka://default/deadLetters] to Actor[akka://default/user/IO-HTTP/$a/flow-31-1-map#-52495379] was not delivered. [9] dead letters encountered.
  [INFO] [11/26/2014 17:50:32.131] [default-akka.actor.default-dispatcher-14] [akka://default/user/IO-HTTP/$a/flow-27-1-identity] Message [akka.stream.impl.RequestMore] from Actor[akka://default/deadLetters] to Actor[akka://default/user/IO-HTTP/$a/flow-27-1-identity#-2121255318] was not delivered. [10] dead letters encountered, no more dead letters will be logged.

On Wednesday, November 26, 2014 5:34:34 PM UTC-5, rklaehn wrote:

It seems to me that you are gzip-compressing each file individually. So the result of your request will be the concatenation of multiple gzipped files, which does not make any sense. You should get some data, though. Have you tried testing with wget or curl?

On Wed, Nov 26, 2014 at 11:27 PM, Paul Cleary pcle...@gmail.com wrote:

I am trying to create a chunked response from a series of files on the file system. I am using the Test Server code as the basis. Here is my sample code:

  val requestHandler: HttpRequest ⇒ HttpResponse = {
    case HttpRequest(GET, Uri.Path("/"), _, _, _) ⇒
      val dir = "/test-data/"
      val files = new File(dir)
      println(s"\r\n handling request")
      val s = Source(files.listFiles().filter(_.getName.endsWith(".txt")).toIterator)
        .map { file ⇒
          println(s"\r\n reading file ${file.getName}")
          java.nio.file.Files.readAllBytes(Paths.get(file.getAbsolutePath))
        }
        .map { byteArray ⇒
          println(s"\r\n getting bytes ${byteArray.length}")
          new GzipCompressor().compress(ByteString(byteArray))
        }
      HttpResponse(entity = HttpEntity.Chunked.fromData(MediaTypes.`application/x-gzip`, s))
    case _: HttpRequest ⇒ HttpResponse(404, entity = "Unknown resource!")
  }

  streamingServer foreach {
    case Http.ServerBinding(localAddress, connectionStream) ⇒
      Source(connectionStream) foreach {
        case Http.IncomingConnection(remoteAddress, requestProducer, responseConsumer) ⇒
          println(s"\r\n Accepted new connection from $remoteAddress")
          Source(requestProducer).map(requestHandler).to(Sink(responseConsumer)).run()
      }
  }

When I run this, I get a gz file in my browser, but I am not getting any of the data. Do I need to specify that this is a gzip stream, or do something else so all the bytes get down? Here is the response header I am seeing:

  Content-Type: application/x-gzip
  Date: Wed, 26 Nov 2014 22:26:04 GMT
  Server: akka-http/2.3.7
  Transfer-Encoding: chunked
Re: [akka-user] Akka Streams - Chunked Encoding
I am not completely sure, but the dead letters are probably nothing to worry about. I had a similar experience when I played around with it a few weeks ago. See this thread: https://groups.google.com/d/topic/akka-user/4ybCq7mEp4g/discussion

On Wed, Nov 26, 2014 at 11:52 PM, Paul Cleary pclear...@gmail.com wrote: Using curl, I do seem to get some data coming down; it does the concatenation.
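[Editor's note: rklaehn's diagnosis — each file is compressed as an independent gzip member — suggests sharing one compressor across the whole stream. A hedged sketch, assuming a GzipCompressor with ByteString-based compress/finish methods (as in later akka-http internals) and a Source API with concat; `files` is the directory handle from the code above:]

```scala
// One compressor for the whole response, so the client receives a single
// well-formed gzip stream rather than several gzipped files glued together.
val gzip = new GzipCompressor()
val s = Source(files.listFiles().filter(_.getName.endsWith(".txt")).toIterator)
  .map(file => ByteString(java.nio.file.Files.readAllBytes(file.toPath)))
  .map(bytes => gzip.compress(bytes))                 // compress incrementally
  .concat(Source.single(gzip.finish()))               // emit the gzip trailer last
```

Note the compressor is mutable state captured by the stream, so this materialization can run only once; the trailer chunk from finish() is what lets tools like gunzip verify the stream.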
[akka-user] Re: remote akka problem - not able to communicate
Thanks to all, the problem got solved. The mistake was

  transport = "akka.remote.netty.NettyRemoteTransport"

it should be akka.remote.netty.NettyTransport. Thanks a lot!!

On Tuesday, November 25, 2014 12:22:36 PM UTC+5:30, Suman Adak wrote:

Dear All, I have some problem with remote akka. I'm trying to create one akka remote actor node and connect to it from a different machine. I opened ports 2551, 2552, and 2553 on both machines, but after creating the remote actor on one machine I'm not able to access it from the other. I'm not even able to reach the process via telnet, although netstat shows it listening on port 2551. Please help. I am pasting my code snippets along with this mail.

My code snippet for machine A (remote):

1. Conf file:

  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      transport = "akka.remote.netty.NettyRemoteTransport"
      netty.tcp {
        hostname = "192.168.161.131"
        port = 2551
      }
    }
  }

2. Code:

  import akka.actor._
  import com.typesafe.config.ConfigFactory
  import java.io.File
  import com.typesafe.config.ConfigParseOptions

  object HelloRemote {
    def main(args: Array[String]): Unit = {
      val rootCfg = ConfigFactory.parseFile(new File("./test/remote.conf"), ConfigParseOptions.defaults())
      val system = ActorSystem("HelloRemoteSystem", rootCfg)
      val remoteActor = system.actorOf(Props[RemoteActor], name = "RemoteActor")
      remoteActor ! "The RemoteActor is alive"
    }
  }

  class RemoteActor extends Actor {
    def receive = {
      case msg: String =>
        println(s"RemoteActor received message '$msg'")
        sender ! "Hello from the RemoteActor"
    }
  }

My code snippet for machine B (local):

1. Conf file:

  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      transport = "akka.remote.netty.NettyRemoteTransport"
      netty.tcp {
        hostname = "192.168.161.222"
        port = 2551
      }
    }
  }

2. Code:

  import akka.actor._
  import com.typesafe.config.ConfigFactory
  import java.io.File
  import com.typesafe.config.ConfigParseOptions

  object Local {
    def main(args: Array[String]): Unit = {
      val rootCfg = ConfigFactory.parseFile(new File("./test/local.conf"), ConfigParseOptions.defaults())
      implicit val system = ActorSystem("LocalSystem", rootCfg)
      val localActor = system.actorOf(Props[LocalActor], name = "LocalActor") // the local actor
      localActor ! "START" // start the action
    }
  }

  class LocalActor extends Actor {
    // look up the remote actor
    val remote = context.actorSelection("akka.tcp://HelloRemoteSystem@192.168.161.131:2551/user/RemoteActor")
    var counter = 0

    def receive = {
      case "START" =>
        remote ! "hi"
        remote ! "Hello from the LocalActor"
      case msg: String =>
        println(s"LocalActor received message: '$msg'")
        if (counter < 5) {
          sender ! "Hello back to you"
          counter += 1
        }
    }
  }
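[Editor's note: the reported fix swaps one transport class name for another. For cross-checking, the Akka 2.3 remoting reference configuration does not set a transport class at all; it enables the TCP transport by key, which sidesteps class-name typos entirely:]

```
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    # Akka 2.3 style: enable the transport by config key
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "192.168.161.131"   # this machine's bind address
      port = 2551
    }
  }
}
```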