Hi Endre,

My preliminary tests with the following settings show no improvement 
(and in some cases even worse performance):

      send-buffer-size = X
      server-socket-worker-pool = {
        pool-size-min = Y
        pool-size-max = Y
      }
      client-socket-worker-pool = {
        pool-size-min = Z
        pool-size-max = Z
      }

I tried different values for X, Y, and Z, such as X = 256000b, X = 512000b, 
X = 1024000b... and Y, Z = 4 and 8.
I tried the settings separately and together... none of the combinations I 
tried improved on the performance I had been seeing before.
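
For example, one of the concrete combinations I tried looked like this in my 
application.conf (these settings live under akka.remote.netty.tcp):

      akka.remote.netty.tcp {
        send-buffer-size = 256000b
        server-socket-worker-pool = {
          pool-size-min = 4
          pool-size-max = 4
        }
        client-socket-worker-pool = {
          pool-size-min = 4
          pool-size-max = 4
        }
      }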

It looked like the performance actually got worse whenever I increased the 
value of X, while changes to Y and Z in isolation made no observable 
difference in the recorded timings.

> There is another setting that controls the thread-pool (dispatcher) for the 
> remoting subsystem (this is independent of the netty pools). You should 
> probably increase the size of that as well. The setting is under 
> "akka.remote.default-remote-dispatcher", see 
> http://doc.akka.io/docs/akka/2.3.0/scala/remoting.html#Remote_Configuration 
> for the complete configuration.

I will try that and report back.
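
Something like this is what I have in mind (the fork-join-executor block 
mirrors what the reference configuration in the docs shows, if I'm reading it 
right; the value of 8 is just a guess on my part):

      akka.remote.default-remote-dispatcher {
        fork-join-executor {
          # the reference config ships with a small pool here; bumping it up
          parallelism-min = 8
          parallelism-max = 8
        }
      }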
 

> Since you run everything on localhost fairness issues might dominate the 
> case: some subsystems getting less CPU share since they have less threads 
> to run on, and fairness is enforced by the OS per thread. 
>

Well... again, the sender and receiver were run in two separate JVM 
processes, so the subsystems in the two processes shouldn't be competing with 
each other within the thread-pool limits imposed by the Akka configuration. 
The other possibility is competition for CPU time on the machine itself... 
but if you look at the CPU utilization graphs in the profiler, the CPU is 
nowhere near its maximum capacity either. 

I'll try the dispatcher setting and report back...

Oh, one thing I forgot to mention: none of the hypotheses we've explored so 
far seems to explain (in my mind at least) why the later sets of 20K 
messages are received progressively faster (more sample run outputs showing 
that are available in the README on GitHub).  In a standard 
producer-consumer scenario where the producer is the sender actor (which 
produced 1.6M messages) and the consumer is the Akka remoting subsystem 
(which drains that queue and sends the data out over the network), I would 
expect the consumer to behave the same way the whole time it is draining the 
queue.
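
For context, the per-20K timings come from something roughly shaped like this 
on the receiver side (simplified sketch, not the exact code from the repo):

    import akka.actor.Actor

    // Simplified receiver: logs the elapsed time for every batch of 20,000
    // messages, which is where the per-20K timings in the README come from.
    class TimingReceiver extends Actor {
      private var count = 0L
      private var batchStart = System.nanoTime()

      def receive = {
        case _ =>
          count += 1
          if (count % 20000 == 0) {
            val elapsedMs = (System.nanoTime() - batchStart) / 1000000
            println(s"received $count messages, last 20K took $elapsedMs ms")
            batchStart = System.nanoTime()
          }
      }
    }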

Regarding flow control: I don't want to get into that yet... I'm not even 
sure how I would detect or trigger it. In (oversimplified) traditional flow 
control, the receiver detects that it's being overwhelmed and tells the 
sender to rate-limit its output until the receiver can process more 
messages, etc. In my case the receiver actor (the application layer) would 
gladly consume as much data as it gets... it's not being overwhelmed to the 
point where it should/could be sending a "slow down" message back to the 
sender. 
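
Just to pin down what I mean by flow control, here's one oversimplified way 
to make it explicit: a windowed-ack scheme where the receiver's 
acknowledgements naturally throttle the sender. The message names and the 
scheme itself are purely illustrative and not part of my current code:

    import akka.actor.{Actor, ActorRef}

    // Illustrative only: the sender keeps at most `window` unacknowledged
    // messages in flight, so a slow receiver automatically rate-limits it.
    case class Msg(seqNr: Long, payload: String)
    case class Ack(seqNr: Long)

    class WindowedSender(receiver: ActorRef, total: Long, window: Long) extends Actor {
      private var nextSeqNr = 0L
      private var acked = 0L

      override def preStart(): Unit = fill()

      private def fill(): Unit =
        while (nextSeqNr < total && nextSeqNr - acked < window) {
          nextSeqNr += 1
          receiver ! Msg(nextSeqNr, "payload")
        }

      def receive = {
        case Ack(seqNr) =>
          acked = math.max(acked, seqNr)
          fill() // only send more once the receiver has confirmed progress
      }
    }

    class AckingReceiver extends Actor {
      def receive = {
        case Msg(seqNr, _) =>
          // ...process the payload here...
          sender() ! Ack(seqNr)
      }
    }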

Here's the fun part... I noticed that if I add a "Thread.sleep(5000)" to the 
for loop on the sender side after every 100K messages sent (commented out 
now in the GitHub code), then sending even 1M messages runs at the same 
per-20K timings as I see when I send just 100K messages (without the 
Thread.sleep).  
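
In other words, the sender loop with the pause re-enabled looks roughly like 
this (simplified, names are placeholders; the real loop is in the GitHub repo 
with the sleep commented out):

    // Rough shape of the sender-side loop with the periodic pause re-enabled.
    def sendAll(receiver: akka.actor.ActorRef, total: Int): Unit =
      for (i <- 1 to total) {
        receiver ! s"message-$i"
        if (i % 100000 == 0) Thread.sleep(5000) // let remoting drain after every 100K
      }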

-Boris
