Re: [akka-user] Akka Http Logging with traceId

2016-09-12 Thread algermissen1971
Hi Arun

Take a look at kamon.io
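
If Kamon is more than you need, a traceId can also be threaded through by hand: generate an id per request, tag log lines with it, and, since SLF4J's MDC is thread-local, carry the id inside the messages you send to actors (akka's DiagnosticActorLogging can then restore it per message). A hedged sketch, with illustrative names only:

```scala
import java.util.UUID

// Illustrative sketch only: tag every log line produced while handling one
// request with a freshly generated traceId. In a real akka-http app the same
// idea becomes a custom directive that puts the id into SLF4J's MDC and
// copies it into every message sent to actors.
object TraceIdSketch {
  // `handle` receives a tagging function to use for all of its log output.
  def withTraceId[T](handle: (String => String) => T): T = {
    val traceId = UUID.randomUUID().toString
    val tag: String => String = msg => s"[traceId=$traceId] $msg"
    handle(tag)
  }
}
```

With this shape, the route would also pass the traceId along in the ask message, so the receiving actor can log under the same id.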

Jan

Sent from my iPhone

> On 13.09.2016, at 07:48, Arun  wrote:
> 
> Hi,
> 
> We have a requirement where we need to log information with a traceId (or 
> unique identifier) for a given HTTP request across routes and actors. This 
> helps us correlate log entries.
> 
> The configuration is as following:
> 
> akka {
>   loggers = ["akka.event.slf4j.Slf4jLogger"]
>   loglevel = "INFO"
>   logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
> }
> 
> and logback.xml is as following:
> 
> <configuration>
>   <appender name="json" class="ch.qos.logback.core.ConsoleAppender">
>     <encoder class="net.logstash.logback.encoder.LogstashEncoder">
>       <includeCallerData>true</includeCallerData>
>       <fieldNames>
>         <timestamp>@timestamp</timestamp>
>         <message>msg</message>
>         <thread>[ignore]</thread>
>         <levelValue>[ignore]</levelValue>
>         <logger>logger</logger>
>         <version>[ignore]</version>
>       </fieldNames>
>       <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
>         <maxDepthPerThrowable>80</maxDepthPerThrowable>
>         <maxLengthPerThrowable>2048</maxLengthPerThrowable>
>         <shortenedClassNameLength>20</shortenedClassNameLength>
>         <rootCauseFirst>true</rootCauseFirst>
>       </throwableConverter>
>     </encoder>
>   </appender>
>   <root level="INFO">
>     <appender-ref ref="json"/>
>   </root>
> </configuration>
> 
> Please let me know how we can propagate a traceId from the route into the actor system.
> 
> Thanks
> Arun
> -- 
> >> Read the docs: http://akka.io/docs/
> >> Check the FAQ: 
> >> http://doc.akka.io/docs/akka/current/additional/faq.html
> >> Search the archives: https://groups.google.com/group/akka-user
> --- 
> You received this message because you are subscribed to the Google Groups 
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to akka-user+unsubscr...@googlegroups.com.
> To post to this group, send email to akka-user@googlegroups.com.
> Visit this group at https://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.



Re: [akka-user] Re: Testing routes and marshalling

2016-09-12 Thread Richard Rodseth
Thank you. I got it working with a custom test actor.
If you get a chance to look at the follow-up topic "Debugging marshalling
implicits", it would be much appreciated.
A shorter question would be:

Is this the right way to debug when a type cannot be marshalled?
val prm = implicitly[Future[PimpedResult[(StatusCode, Result[StatusDTO])]] => ToResponseMarshallable]


On Fri, Sep 9, 2016 at 8:26 AM, Akka Team  wrote:

> Hi Richard,
>
> The HTTP Testkit expects to have gotten the reply already when check
> executes, so that is why it does not work.
>
> You can use the autopilot feature of the TestProbe, or a custom test actor
> responding the way you want in the test; it could also keep incoming
> requests for later assertions.
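
For reference, the autopilot approach looks roughly like this (a sketch assuming akka-testkit on the classpath; `RequestHandler.AskForStatus` and `StatusDTO` are from Richard's code):

```scala
import akka.actor.ActorRef
import akka.testkit.{TestActor, TestProbe}

// Sketch: a probe that replies immediately, so the response has already
// arrived by the time the route test's `check` block executes.
val probe = TestProbe()
probe.setAutoPilot(new TestActor.AutoPilot {
  def run(sender: ActorRef, msg: Any): TestActor.AutoPilot = {
    msg match {
      case RequestHandler.AskForStatus =>
        sender ! Right(StatusDTO("message")) // the ask gets its answer at once
      case _ => // ignore anything else
    }
    TestActor.KeepRunning // keep the autopilot active for later messages
  }
})
// The route under test is then built with probe.ref as the requestHandler.
```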
>
> --
> Johan
> Akka Team
>
> On Wed, Sep 7, 2016 at 1:47 AM, Richard Rodseth 
> wrote:
>
>> Oy. I'm getting tied up in knots. I guess the requestHandlerProbe.reply
>> won't work without a proper sender. I usually put my route definition in an
>> actor, but moved it to a trait so that it could be used by the test kit.
>> But I did it like this
>>
>> def route(requestHandler: ActorRef) = ...
>>
>> Can anyone point me at a sample which uses the ask pattern in routes, and
>> also tests the routes with the ScalatestRouteTest?
>>
>> On Tue, Sep 6, 2016 at 12:56 PM, Richard Rodseth 
>> wrote:
>>
>>> So the example here is not very realistic because the route does not
>>> depend on any actors:
>>>
>>> http://doc.akka.io/docs/akka/2.4.9/scala/http/routing-dsl/testkit.html#Usage
>>>
>>> I have a status api that returns Either[APIError, StatusDTO].
>>>
>>> val statusRoute = path("status") {
>>>   get {
>>>     handleErrors {
>>>       onSuccess(requestHandler ? RequestHandler.AskForStatus) { result =>
>>>         complete(result)
>>>       }
>>>     }
>>>   }
>>> }
>>>  Pondering whether to make a mock actor for requestHandler, or use a
>>> probe.
>>>
>>> This works:
>>>
>>>   Get("/status") ~> myRoute ~> check {
>>>     requestHandlerProbe.expectMsgClass(100 millis, RequestHandler.AskForStatus.getClass)
>>>   }
>>>
>>> This times out:
>>>
>>>   Get("/status") ~> myRoute ~> check {
>>>     requestHandlerProbe.expectMsgClass(100 millis, RequestHandler.AskForStatus.getClass)
>>>     val result: Either[APIError, StatusDTO] = Right(StatusDTO("message"))
>>>     requestHandlerProbe.reply(result)
>>>     println(this.responseEntity)
>>>   }
>>>
>>> Any errors in my use of probe.reply? Would it be better to test
>>> response marshalling on its own and limit routing tests to expectMsg?
>>>
>>>
>>>
>
>
>
> --
> Akka Team
> Lightbend  - Reactive apps on the JVM
> Twitter: @akkateam
>
>



Re: [akka-user] Getting started with akka tutorials

2016-09-12 Thread lksaj4
Ok,

Obviously there had to be something that basic, since it was a tutorial file. 
I should have read the code with a little more thought and noticed that it 
already had a main function.

Thanks for pointing out the obvious!


On Sunday, September 11, 2016 at 6:53:30 PM UTC+3, Ivan Vyshnevskyi wrote:
>
> Hi,
>
> Removing akka.Main from your last command should help:
> /usr/lib/jvm/java-8-openjdk-amd64/bin/java -classpath ".:$JARS" 
> docs.http.javadsl.server.HighLevelServerExample
>
> akka.Main is only required to start your application when all you have is 
> a top-level actor. In this case HighLevelServerExample.java is a main class 
> itself: it has a public static void main() that creates the ActorSystem, 
> sets up a flow and waits for input on stdin before shutting down.
>
> On 11 September 2016 at 13:21,  wrote:
>
>> Hi,
>>
>> I'm trying to build a rest server with akka-http, but I'm having trouble 
>> getting anything to run. I'm still trying to get the tutorials to run, but 
>> I'm stuck. What am I missing? Any help would be appreciated. Here's what I 
>> tried:
>>
>> Clone the akka repository:
>> cd ~/workspace
>> git clone https://github.com/akka/akka.git
>>
>> Find the example servers:
>> cd ~/workspace/akka/akka-docs/rst/java/code/docs/http/javadsl/server
>>
>> Use jar-files that maven has downloaded previously, and make a classpath 
>> string from them:
>> JARS=".:$(find ~/.m2/repository/{com/typesafe/,org/scala-lang/} -name 
>> \*.jar|tr '\n' ':'|sed 's/:$//')"
>> echo $JARS|sed "s#$HOME#~#g"
>>
>> This results in:
>>
>> .:~/.m2/repository/com/typesafe/akka/akka-actor_2.11/2.4.9/akka-actor_2.11-2.4.9.jar:~/.m2/repository/com/typesafe/akka/akka-parsing_2.11/2.4.9/akka-parsing_2.11-2.4.9.jar:~/.m2/repository/com/typesafe/akka/akka-http-core_2.11/2.4.9/akka-http-core_2.11-2.4.9.jar:~/.m2/repository/com/typesafe/akka/akka-stream_2.11/2.4.9/akka-stream_2.11-2.4.9.jar:~/.m2/repository/com/typesafe/akka/akka-http-experimental_2.11/2.4.9/akka-http-experimental_2.11-2.4.9.jar:~/.m2/repository/com/typesafe/ssl-config-core_2.11/0.2.1/ssl-config-core_2.11-0.2.1.jar:~/.m2/repository/com/typesafe/ssl-config-akka_2.11/0.2.1/ssl-config-akka_2.11-0.2.1.jar:~/.m2/repository/com/typesafe/config/1.3.0/config-1.3.0.jar:~/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.11/0.7.0/scala-java8-compat_2.11-0.7.0.jar:~/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.4/scala-parser-combinators_2.11-1.0.4.jar:~/.m2/repository/org/scala-lang/scala-library/2.11.8/scala-library-2.11.8.jar
>>
>> Compile the HighLevelServerExample:
>> /usr/lib/jvm/java-8-openjdk-amd64/bin/javac -classpath ".:$JARS" 
>> HighLevelServerExample.java
>>
>> Execute it from the right directory:
>> cd ~/workspace/akka/akka-docs/rst/java/code/
>> /usr/lib/jvm/java-8-openjdk-amd64/bin/java -classpath ".:$JARS" akka.Main 
>> docs.http.javadsl.server.HighLevelServerExample
>>
>> Which gives me:
>> Exception in thread "main" java.lang.ClassCastException: interface 
>> akka.actor.Actor is not assignable from class 
>> docs.http.javadsl.server.HighLevelServerExample
>> at 
>> akka.actor.ReflectiveDynamicAccess$$anonfun$getClassFor$1.apply(ReflectiveDynamicAccess.scala:23)
>> at 
>> akka.actor.ReflectiveDynamicAccess$$anonfun$getClassFor$1.apply(ReflectiveDynamicAccess.scala:20)
>> at scala.util.Try$.apply(Try.scala:192)
>> at 
>> akka.actor.ReflectiveDynamicAccess.getClassFor(ReflectiveDynamicAccess.scala:20)
>> at akka.Main$.main(Main.scala:32)
>> at akka.Main.main(Main.scala)
>>
>
>


Re: [akka-user] Cluster seed nodes resulting in multiple split brains?

2016-09-12 Thread kraythe
All I do is issue a Cluster.get(system).leave(cluster.selfAddress)

On Monday, September 12, 2016 at 9:49:22 AM UTC-5, √ wrote:
>
> What are you using/doing for downing?
>
> On Mon, Sep 12, 2016 at 4:13 PM, kraythe  
> wrote:
>
>> No, we have disabled that as per suggestion in the docs. Should we?
>>
>>
>
>
>
> -- 
> Cheers,
> √
>



Re: [akka-user] Cluster seed nodes resulting in multiple split brains?

2016-09-12 Thread kraythe
No. According to the suggestions in the docs, I am not running auto-downing. 
Should I change that policy? 

On Sunday, September 11, 2016 at 3:22:22 PM UTC-5, √ wrote:
>
> Are you running auto-downing?
>
> On Sat, Sep 10, 2016 at 11:43 PM, kraythe  
> wrote:
>
>> Thanks for the information. I have read most of these. Can I take it from 
>> your responses that you agree that this could be a split-brain problem? I 
>> am just wondering if using all nodes as seed nodes is what is causing the 
>> issue. In production our seed nodes are fixed IPs, but when we run in the 
>> cloud we have to do auto-discovery. That is what makes the problem 
>> complicated. I respect that ConductR has perhaps solved this problem, and I 
>> am all in favor of going commercial, but like I said, the project I am on 
>> will have to be out and making a profit before I can even suggest tying us 
>> to a particular platform purchase, especially since that is a rather large 
>> recurring expenditure. Furthermore, the project has a decent amount of 
>> legacy code that will have to be overcome. It's not a pure actor program 
>> just yet. I have to have ROI to make those changes, and right now I am 
>> trying to fry other fish on the development schedule, such as converting 
>> some of that legacy transactional code into actor models. 
>>
>> In the meantime, I need to make sure that the system is stable in 
>> development. What worries me is the weird behavior: when one node goes 
>> down, other nodes start reporting problems connecting to the 
>> coordinator. It's like they aren't cooperating. It's almost like A is 
>> connected to B, C and D are connected to E, and then when one goes away a 
>> cascading failure takes over. In the case of a true split brain I would 
>> have assumed that if E goes away, C or D will take over the duties and life 
>> will go on. However, that doesn't seem to happen. It seems almost like we 
>> have a chain rather than a cluster. I hope I am making myself clear. 
>>
>> On Saturday, September 10, 2016 at 12:42:06 PM UTC-5, Patrik Nordwall 
>> wrote:
>>>
>>> You asked for links
>>>
>>>
>>> http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#Joining_to_Seed_Nodes
>>>
>>> http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#Downing
>>>
>>> https://conductr.lightbend.com/docs/1.1.x/Home
>>> On Sat, 10 Sep 2016 at 19:27, Viktor Klang wrote:
>>>
 ConductR* is designed to properly seed, and update, Akka Cluster based 
 applications, and the Akka Split Brain Resolver provides deterministic 
 partition handling.

 ConductR runs well on EC2: 
 https://conductr.lightbend.com/docs/1.0.x/Install#EC2-Installation

 * ConductR and SBR are Lightbend products

 -- 
 Cheers,
 √

 On Sep 10, 2016 6:46 PM, "kraythe"  wrote:

> I am not following you on this one. Is there a blog post or article 
> you can reference me to ? 
>
> Thanks to you both. 
>
> On Saturday, September 10, 2016 at 11:34:26 AM UTC-5, √ wrote:
>>
>> There's also ConductR + SBR
>>
>> -- 
>> Cheers,
>> √
>>
>> On Sep 10, 2016 5:09 PM, "Patrik Nordwall"  
>> wrote:
>>
>>> Are you aware of the importance of the first seed node, the one you 
>>> have listed as the first element in the seed-nodes list? See the documentation.
>>>
>>> You can get decent behavior if you wait with joining until the list 
>>> of discovered nodes stabilizes, i.e. is not changing within X seconds. Then 
>>> sort them to make sure the same node is used as the first from all places. Then 
>>> call joinSeedNodes with that sorted list.
>>>
>>> To be completely safe you must manually decide which one to use as 
>>> the first seed node.
>>>
>>> /Patrik
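
Patrik's ordering step can be sketched like this (illustrative only; waiting for the discovered set to stabilize is assumed to have happened before this is called):

```scala
// Sketch of the ordering step: any total order works, as long as every node
// applies the same one, so each computes an identical first seed node.
object SeedOrder {
  def orderedSeedNodes(discovered: Set[String]): List[String] =
    discovered.toList.sorted // lexicographic order on "host:port" strings
}

// With akka-cluster, the sorted list would then be mapped to akka.actor.Address
// values and passed to Cluster(system).joinSeedNodes(...).
```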
>>>
>>> On Fri, 9 Sep 2016 at 20:32, kraythe wrote:
>>>
 Greetings, 

 We are having some problems with our cluster configuration that 
 manifest themselves in the following log lines (redacted for 
 confidentiality reasons): 

 Sep 09 00:58:10 host1.mycompany.com application-9001.log:  2016-09-
 09 05:58:10 + - [WARN] - [OrdersActor] 
 akka://myCompany/user/OrdersActor/291 
 -  (291) #recordTxns, sending 54 txns to UserActor took 0.0044229 
 seconds
 Sep 09 00:58:19 host1.mycompany.com application-9001.log:  2016-09-
 09 05:58:19 + - [WARN] - [ShardRegion] akka.tcp://
 myCompany@10.8.1.169:2551/system/sharding/UserActor -  Trying to 
 register to coordinator at [None], but no acknowledgement. Total [54] 
 buffered messages.

 I have traced this to the configuration of the cluster. We are 
 running this on Amazon AWS and the code includes use of Hazelcast for 
 finding the IPs of the other nodes (mostly 

Re: [akka-user] Cluster seed nodes resulting in multiple split brains?

2016-09-12 Thread Viktor Klang
What are you using/doing for downing?

On Mon, Sep 12, 2016 at 4:13 PM, kraythe  wrote:

> No, we have disabled that as per suggestion in the docs. Should we?
>
>



-- 
Cheers,
√



Re: [akka-user] Cluster seed nodes resulting in multiple split brains?

2016-09-12 Thread kraythe
No, we have disabled that as per suggestion in the docs. Should we?



[akka-user] Re: Terminate ActorSystem on stream failure

2016-09-12 Thread Victor
Ok, I see on the Akka Gitter (https://gitter.im/akka/akka) a message by 
@drewhk which seems to confirm that we have to use the materialized value to 
be notified of stream completion, but it's up to us to return something as a 
materialized value which can notify us.

On September 12, 2016 3:44 PM, @drewhk wrote:
"to be notified once everything is "done" [...] streams itself cannot 
solve. You need support from the Sinks themselves to give you a signal 
(usually in the form of a materialized value) that they are done"

My stream is an AMQP stream which never completes (it enriches messages), 
but it can fail, so at the moment I return a Future as the materialized value 
from my sink and, on future failure, I terminate the actor system (because I 
want my service to fail so that other supervisor services can handle the 
failure).

He also wrote that "this is the topic of one of the upcoming blog posts", 
so I will see what is considered a good solution :)
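
The shape described above, sketched (assuming akka-streams 2.4.x, where Sink.ignore materializes a Future[Done]; `amqpSource` and `enrich` are placeholders for the real AMQP stages):

```scala
import akka.Done
import akka.stream.scaladsl.{Keep, Sink}
import scala.concurrent.Future
import scala.util.{Failure, Success}

// Sketch: run the never-ending stream, keep the sink's materialized
// Future[Done], and terminate the ActorSystem if the stream fails.
val done: Future[Done] =
  amqpSource.via(enrich).toMat(Sink.ignore)(Keep.right).run()

done.onComplete {
  case Failure(e) =>
    system.log.error(e, "Stream failed, terminating")
    system.terminate() // fail fast so an external supervisor can act
  case Success(_) => // not expected: the AMQP stream never completes normally
}(system.dispatcher)
```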

On Friday, 9 September 2016 at 17:50:24 UTC+2, Victor wrote:
>
> Hi,
>
> How can I terminate the ActorSystem running my stream when the stream 
> fails?
>
> If I have the following stream:
>
> A -> B -> C
>
> and B fails with a stoppingStrategy, what happens exactly? Are the A and C 
> actors still running? How can I catch such a stop and then terminate the 
> ActorSystem?
> 
> I think I have to use the materialized value, but it's not clear, because if 
> that's the solution I would have to return a Future as a materialized value 
> from each of my stages and then listen for failures. It seems heavy :)
> 
> Thanks in advance,
> Victor
>



Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Konrad Malawski
That's very cool - thanks for posting these, Christian!
We didn't actually compare with Play, didn't have time somehow; we were
focused on beating Spray :-)

Very cool to see we're on par with Play on Netty.

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 12 September 2016 at 15:08:11, Christian Schmitt (
c.schm...@briefdomain.de) wrote:

Reflog:

schmitch@deployster:~/projects/schmitch/wrk2$ git reflog HEAD

c4250ac HEAD@{0}: clone: from https://github.com/giltene/wrk2.git

On Monday, 12 September 2016 at 15:07:08 UTC+2, Christian Schmitt wrote:
>
> it is actually wrk2:
>
> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk --version
>
> wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer
>
>
> I compiled it on the mac against the homebrew openssl library.
>
> Actually I also think that at something like 60k-70k packets my client
> network gear and the switch start to fall behind (that's why the latency is
> so high).
>
> On Monday, 12 September 2016 at 15:01:40 UTC+2, √ wrote:
>>
>> https://github.com/giltene/wrk2
>>
>> On Mon, Sep 12, 2016 at 2:59 PM, Christian Schmitt <
>> c.sc...@briefdomain.de> wrote:
>>
>>> extracted from my gist:
>>>
>>> akka-http:
>>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
>>> -R120k http://192.168.179.157:3000
>>> Running 5m test @ http://192.168.179.157:3000
>>>   2 threads and 100 connections
>>>   Thread calibration: mean lat.: 787.360ms, rate sampling interval:
>>> 2975ms
>>>   Thread calibration: mean lat.: 585.613ms, rate sampling interval:
>>> 2473ms
>>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>>     Latency     30.11s   22.42s    1.58m    62.48%
>>>     Req/Sec     44.77k    4.77k   54.28k    58.88%
>>>   26888534 requests in 5.00m, 4.50GB read
>>> Requests/sec:  89628.49
>>> Transfer/sec: 15.37MB
>>>
>>> play with netty and native enabled (netty without native is exactly the
>>> same as akka-http):
>>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
>>> -R120k http://192.168.179.157:9000
>>> Running 5m test @ http://192.168.179.157:9000
>>>   2 threads and 100 connections
>>>   Thread calibration: mean lat.: 625.068ms, rate sampling interval:
>>> 2504ms
>>>   Thread calibration: mean lat.: 696.276ms, rate sampling interval:
>>> 2562ms
>>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>>     Latency     28.14s   18.49s    1.32m    61.39%
>>>     Req/Sec     46.78k    3.23k   51.52k    52.63%
>>>   28079997 requests in 5.00m, 4.02GB read
>>> Requests/sec:  93600.05
>>> Transfer/sec: 13.74MB
>>>
>>> On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:

 What does wrk2 say?

 On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt <
 c.sc...@briefdomain.de> wrote:

> I just compared Play Framework on Netty vs akka-http; I guess that's fair
> since Play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just
> his project + @volatile on the var)
>
> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>
>>
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the
>> high-level features Akka HTTP provides (routing, trivial back-pressured
>> entity streaming, fully typesafe http model) etc.
>>
>> It's a fun experiment to see how much faster Netty is, but I don't
>> think it's the goal here – if you really want to write each and every
>> microservice with raw Netty APIs–enjoy, but I don't think that's the 
>> nicest
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder
>> about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>>
>>> Hi Adam,
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup
>>> and using wrk2 :-)
>>> Notice that the multiple-second response times in node basically
>>> mean it's not keeping up and stalling the connections (also known as
>>> coordinated omission).
>>>
>>> It's great to see 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
Thanks for confirming :)

On Mon, Sep 12, 2016 at 3:08 PM, Christian Schmitt  wrote:

> Reflog:
>
> schmitch@deployster:~/projects/schmitch/wrk2$ git reflog HEAD
>
> c4250ac HEAD@{0}: clone: from https://github.com/giltene/wrk2.git
>
> On Monday, 12 September 2016 at 15:07:08 UTC+2, Christian Schmitt wrote:
>>
>> it is actually wrk2:
>>
>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk --version
>>
>> wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer
>>
>>
>> I compiled it on the mac against the homebrew openssl library.
>>
>> Actually I also think that at something like 60k-70k packets my client
>> network gear and the switch start to fall behind (that's why the latency is
>> so high).
>>
>> On Monday, 12 September 2016 at 15:01:40 UTC+2, √ wrote:
>>>
>>> https://github.com/giltene/wrk2
>>>
>>> On Mon, Sep 12, 2016 at 2:59 PM, Christian Schmitt <
>>> c.sc...@briefdomain.de> wrote:
>>>
 extracted from my gist:

 akka-http:
 schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
 -R120k http://192.168.179.157:3000
 Running 5m test @ http://192.168.179.157:3000
   2 threads and 100 connections
   Thread calibration: mean lat.: 787.360ms, rate sampling interval:
 2975ms
   Thread calibration: mean lat.: 585.613ms, rate sampling interval:
 2473ms
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency     30.11s   22.42s    1.58m    62.48%
     Req/Sec     44.77k    4.77k   54.28k    58.88%
   26888534 requests in 5.00m, 4.50GB read
 Requests/sec:  89628.49
 Transfer/sec: 15.37MB

 play with netty and native enabled (netty without native is exactly the
 same as akka-http):
 schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
 -R120k http://192.168.179.157:9000
 Running 5m test @ http://192.168.179.157:9000
   2 threads and 100 connections
   Thread calibration: mean lat.: 625.068ms, rate sampling interval:
 2504ms
   Thread calibration: mean lat.: 696.276ms, rate sampling interval:
 2562ms
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency     28.14s   18.49s    1.32m    61.39%
     Req/Sec     46.78k    3.23k   51.52k    52.63%
   28079997 requests in 5.00m, 4.02GB read
 Requests/sec:  93600.05
 Transfer/sec: 13.74MB

 On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:
>
> What does wrk2 say?
>
> On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt <
> c.sc...@briefdomain.de> wrote:
>
>> I just compared Play Framework on Netty vs akka-http; I guess that's fair
>> since Play is quite high level.
>>
>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>> Projects: https://github.com/schmitch/performance (akka-http is just
>> his project + @volatile on the var)
>>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>>
>>>
>>>
>>> --
>>> Konrad `ktoso` Malawski
>>> Akka  @ Lightbend 
>>>
>>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>>> c.sc...@briefdomain.de) wrote:
>>>
>>> actually wouldn't it be more reasonable to try it against netty?
>>>
>>> Yes and no. Then one should compare raw IO APIs, and none of the
>>> high-level features Akka HTTP provides (routing, trivial back-pressured
>>> entity streaming, fully typesafe http model) etc.
>>>
>>> It's a fun experiment to see how much faster Netty is, but I don't
>>> think it's the goal here – if you really want to write each and every
>>> microservice with raw Netty APIs–enjoy, but I don't think that's the 
>>> nicest
>>> API to just bang out a service in 4 minutes :)
>>>
>>> (Note, much love for Netty here, but I don't think comparing 1:1
>>> with Akka HTTP here is the right way to look at it (yes, of course 
>>> we'll be
>>> slower ;-)).
>>>
>>>
>>> I mean that node is slower than akka-http isn't something I wonder
>>> about.
>>>
>>> You'd be surprised what node people claim about its performance ;-)
>>>
>>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:

 Hi Adam,
 thanks for sharing the runs!
 Your benchmarking method is good - thanks for doing a proper warmup
 and using wrk2 :-)
 Notice that the multiple-second response times in node basically
 mean it's not keeping up and stalling the connections (also known as
 coordinated omission).

 It's great to see such side by side with node, thanks for sharing
 it again.
 Happy hakking!

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
Reflog:

schmitch@deployster:~/projects/schmitch/wrk2$ git reflog HEAD

c4250ac HEAD@{0}: clone: from https://github.com/giltene/wrk2.git

On Monday, 12 September 2016 at 15:07:08 UTC+2, Christian Schmitt wrote:
>
> it is actually wrk2:
>
> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk --version
>
> wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer
>
>
> I compiled it on the mac against the homebrew openssl library.
>
> Actually I also think that at something like 60k-70k packets my client 
> network gear and the switch start to fall behind (that's why the latency is 
> so high).
>
> On Monday, 12 September 2016 at 15:01:40 UTC+2, √ wrote:
>>
>> https://github.com/giltene/wrk2
>>
>> On Mon, Sep 12, 2016 at 2:59 PM, Christian Schmitt <
>> c.sc...@briefdomain.de> wrote:
>>
>>> extracted from my gist:
>>>
>>> akka-http:
>>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s 
>>> -R120k http://192.168.179.157:3000
>>> Running 5m test @ http://192.168.179.157:3000
>>>   2 threads and 100 connections
>>>   Thread calibration: mean lat.: 787.360ms, rate sampling interval: 
>>> 2975ms
>>>   Thread calibration: mean lat.: 585.613ms, rate sampling interval: 
>>> 2473ms
>>>   Thread Stats   Avg  Stdev Max   +/- Stdev
>>> Latency    30.11s   22.42s    1.58m   62.48%
>>> Req/Sec    44.77k    4.77k   54.28k   58.88%
>>>   26888534 requests in 5.00m, 4.50GB read
>>> Requests/sec:  89628.49
>>> Transfer/sec: 15.37MB
>>>
>>> play with netty and native enabled (netty without native is exactly the 
>>> same as akka-http):
>>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s 
>>> -R120k http://192.168.179.157:9000
>>> Running 5m test @ http://192.168.179.157:9000
>>>   2 threads and 100 connections
>>>   Thread calibration: mean lat.: 625.068ms, rate sampling interval: 
>>> 2504ms
>>>   Thread calibration: mean lat.: 696.276ms, rate sampling interval: 
>>> 2562ms
>>>   Thread Stats   Avg  Stdev Max   +/- Stdev
>>> Latency    28.14s   18.49s    1.32m   61.39%
>>> Req/Sec    46.78k    3.23k   51.52k   52.63%
>>>   28079997 requests in 5.00m, 4.02GB read
>>> Requests/sec:  93600.05
>>> Transfer/sec: 13.74MB
>>>
>>> On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:

 What does wrk2 say?

 On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt <
 c.sc...@briefdomain.de> wrote:

> I just compared Play Framework on Netty vs Akka-http; I guess that's fair 
> since Play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM): 
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just 
> his project + @volatile on the var)
>
> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>
>>
>>
>> -- 
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the 
>> high-level features Akka HTTP provides (routing, trivial back-pressured 
>> entity streaming, fully typesafe http model) etc. 
>>
>> It's a fun experiment to see how much faster Netty is, but I don't 
>> think it's the goal here – if you really want to write each and every 
>> microservice with raw Netty APIs–enjoy, but I don't think that's the 
>> nicest 
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with 
>> Akka HTTP here is the right way to look at it (yes, of course we'll be 
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder 
>> about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>>
>>> Hi Adam, 
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup 
>>> and using wrk2 :-)
>>> Notice that the multiple-second response times in node basically 
>>> mean it's not keeping up and stalling the connections (also known as 
>>> coordinated omission).
>>>
>>> It's great to see such a side-by-side with node, thanks for sharing it 
>>> again.
>>> Happy hakking!
>>>
>>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>>
 Hi,

 I'd just like to share my satisfaction with Akka HTTP performance 
 in 2.4.10.
 I'm diagnosing some low level Node.js 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
it is actually wrk2:

schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk --version

wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer


I compiled it on the mac against the homebrew openssl library.

Actually I also think that at something like 60k-70k packets my client 
network gear and the switch start to fall behind (that's why the latency is 
so high).

On Monday, 12 September 2016 at 15:01:40 UTC+2, √ wrote:
>
> https://github.com/giltene/wrk2
>
> On Mon, Sep 12, 2016 at 2:59 PM, Christian Schmitt  > wrote:
>
>> extracted from my gist:
>>
>> akka-http:
>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s 
>> -R120k http://192.168.179.157:3000
>> Running 5m test @ http://192.168.179.157:3000
>>   2 threads and 100 connections
>>   Thread calibration: mean lat.: 787.360ms, rate sampling interval: 2975ms
>>   Thread calibration: mean lat.: 585.613ms, rate sampling interval: 2473ms
>>   Thread Stats   Avg  Stdev Max   +/- Stdev
>> Latency    30.11s   22.42s    1.58m   62.48%
>> Req/Sec    44.77k    4.77k   54.28k   58.88%
>>   26888534 requests in 5.00m, 4.50GB read
>> Requests/sec:  89628.49
>> Transfer/sec: 15.37MB
>>
>> play with netty and native enabled (netty without native is exactly the 
>> same as akka-http):
>> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s 
>> -R120k http://192.168.179.157:9000
>> Running 5m test @ http://192.168.179.157:9000
>>   2 threads and 100 connections
>>   Thread calibration: mean lat.: 625.068ms, rate sampling interval: 2504ms
>>   Thread calibration: mean lat.: 696.276ms, rate sampling interval: 2562ms
>>   Thread Stats   Avg  Stdev Max   +/- Stdev
>> Latency    28.14s   18.49s    1.32m   61.39%
>> Req/Sec    46.78k    3.23k   51.52k   52.63%
>>   28079997 requests in 5.00m, 4.02GB read
>> Requests/sec:  93600.05
>> Transfer/sec: 13.74MB
>>
>> On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:
>>>
>>> What does wrk2 say?
>>>
>>> On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt <
>>> c.sc...@briefdomain.de> wrote:
>>>
 I just compared Play Framework on Netty vs Akka-http; I guess that's fair 
 since Play is quite high level.

 Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM): 
 https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
 Projects: https://github.com/schmitch/performance (akka-http is just 
 his project + @volatile on the var)

 On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>
>
>
> -- 
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 12 September 2016 at 12:56:46, Christian Schmitt (
> c.sc...@briefdomain.de) wrote:
>
> actually wouldn't it be more reasonable to try it against netty?
>
> Yes and no. Then one should compare raw IO APIs, and none of the 
> high-level features Akka HTTP provides (routing, trivial back-pressured 
> entity streaming, fully typesafe http model) etc. 
>
> It's a fun experiment to see how much faster Netty is, but I don't 
> think it's the goal here – if you really want to write each and every 
> microservice with raw Netty APIs–enjoy, but I don't think that's the 
> nicest 
> API to just bang out a service in 4 minutes :)
>
> (Note, much love for Netty here, but I don't think comparing 1:1 with 
> Akka HTTP here is the right way to look at it (yes, of course we'll be 
> slower ;-)).
>
>
> I mean that node is slower than akka-http isn't something I wonder 
> about.
>
> You'd be surprised what node people claim about its performance ;-)
>
> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>
>> Hi Adam, 
>> thanks for sharing the runs!
>> Your benchmarking method is good - thanks for doing a proper warmup 
>> and using wrk2 :-)
>> Notice that the multiple-second response times in node basically mean 
>> it's not keeping up and stalling the connections (also known as 
>> coordinated omission).
>>
>> It's great to see such a side-by-side with node, thanks for sharing it 
>> again.
>> Happy hakking!
>>
>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>
>>> Hi,
>>>
>>> I'd just like to share my satisfaction with Akka HTTP performance in 
>>> 2.4.10.
>>> I'm diagnosing some low-level Node.js performance issues and, while 
>>> running various tests that only require the most basic "Hello World" style 
>>> code, I decided to take a few minutes to check how Akka HTTP would handle 
>>> the same work.
>>> I was quite impressed with the results, so I thought I'd share.
>>>
>>> I'm running two c4.large instances (so two cores on each 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
https://github.com/giltene/wrk2

On Mon, Sep 12, 2016 at 2:59 PM, Christian Schmitt  wrote:

> extracted from my gist:
>
> akka-http:
> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
> -R120k http://192.168.179.157:3000
> Running 5m test @ http://192.168.179.157:3000
>   2 threads and 100 connections
>   Thread calibration: mean lat.: 787.360ms, rate sampling interval: 2975ms
>   Thread calibration: mean lat.: 585.613ms, rate sampling interval: 2473ms
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency    30.11s   22.42s    1.58m   62.48%
> Req/Sec    44.77k    4.77k   54.28k   58.88%
>   26888534 requests in 5.00m, 4.50GB read
> Requests/sec:  89628.49
> Transfer/sec: 15.37MB
>
> play with netty and native enabled (netty without native is exactly the
> same as akka-http):
> schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s
> -R120k http://192.168.179.157:9000
> Running 5m test @ http://192.168.179.157:9000
>   2 threads and 100 connections
>   Thread calibration: mean lat.: 625.068ms, rate sampling interval: 2504ms
>   Thread calibration: mean lat.: 696.276ms, rate sampling interval: 2562ms
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency    28.14s   18.49s    1.32m   61.39%
> Req/Sec    46.78k    3.23k   51.52k   52.63%
>   28079997 requests in 5.00m, 4.02GB read
> Requests/sec:  93600.05
> Transfer/sec: 13.74MB
>
> On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:
>>
>> What does wrk2 say?
>>
>> On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt <
>> c.sc...@briefdomain.de> wrote:
>>
>>> I just compared Play Framework on Netty vs Akka-http; I guess that's fair
>>> since Play is quite high level.
>>>
>>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>>> Projects: https://github.com/schmitch/performance (akka-http is just
>>> his project + @volatile on the var)
>>>
>>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:



 --
 Konrad `ktoso` Malawski
 Akka  @ Lightbend 

 On 12 September 2016 at 12:56:46, Christian Schmitt (
 c.sc...@briefdomain.de) wrote:

 actually wouldn't it be more reasonable to try it against netty?

 Yes and no. Then one should compare raw IO APIs, and none of the
 high-level features Akka HTTP provides (routing, trivial back-pressured
 entity streaming, fully typesafe http model) etc.

 It's a fun experiment to see how much faster Netty is, but I don't
 think it's the goal here – if you really want to write each and every
 microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
 API to just bang out a service in 4 minutes :)

 (Note, much love for Netty here, but I don't think comparing 1:1 with
 Akka HTTP here is the right way to look at it (yes, of course we'll be
 slower ;-)).


 I mean that node is slower than akka-http isn't something I wonder
 about.

 You'd be surprised what node people claim about its performance ;-)

 On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>
> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup
> and using wrk2 :-)
> Notice that the multiple-second response times in node basically mean
> it's not keeping up and stalling the connections (also known as 
> coordinated omission).
>
> It's great to see such a side-by-side with node, thanks for sharing it
> again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>
>> Hi,
>>
>> I'd just like to share my satisfaction with Akka HTTP performance in
>> 2.4.10.
>> I'm diagnosing some low-level Node.js performance issues and, while
>> running various tests that only require the most basic "Hello World" style
>> code, I decided to take a few minutes to check how Akka HTTP would handle
>> the same work.
>> I was quite impressed with the results, so I thought I'd share.
>>
>> I'm running two c4.large instances (so two cores on each instance) -
>> one running the HTTP service and another running wrk2.
>> I've tested only two short sets (seeing as I have other work to do):
>>
>>1. use 2 threads to simulate 100 concurrent users pushing 2k
>>requests/sec for 5 minutes
>>2. use 2 threads to simulate 100 concurrent users pushing 20k
>>requests/sec for 5 minutes
>>
>> In both cases, the tests are actually executed twice without a
>> restart in between and I throw away the results of the first run.
>>
>> The first run is just to get JIT and other adaptive 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
extracted from my gist:

akka-http:
schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s -R120k 
http://192.168.179.157:3000
Running 5m test @ http://192.168.179.157:3000
  2 threads and 100 connections
  Thread calibration: mean lat.: 787.360ms, rate sampling interval: 2975ms
  Thread calibration: mean lat.: 585.613ms, rate sampling interval: 2473ms
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency    30.11s   22.42s    1.58m   62.48%
Req/Sec    44.77k    4.77k   54.28k   58.88%
  26888534 requests in 5.00m, 4.50GB read
Requests/sec:  89628.49
Transfer/sec: 15.37MB

play with netty and native enabled (netty without native is exactly the 
same as akka-http):
schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s -R120k 
http://192.168.179.157:9000
Running 5m test @ http://192.168.179.157:9000
  2 threads and 100 connections
  Thread calibration: mean lat.: 625.068ms, rate sampling interval: 2504ms
  Thread calibration: mean lat.: 696.276ms, rate sampling interval: 2562ms
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency    28.14s   18.49s    1.32m   61.39%
Req/Sec    46.78k    3.23k   51.52k   52.63%
  28079997 requests in 5.00m, 4.02GB read
Requests/sec:  93600.05
Transfer/sec: 13.74MB

On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:
>
> What does wrk2 say?
>
> On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt  > wrote:
>
>> I just compared Play Framework on Netty vs Akka-http; I guess that's fair 
>> since Play is quite high level.
>>
>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM): 
>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>> Projects: https://github.com/schmitch/performance (akka-http is just his 
>> project + @volatile on the var)
>>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>>
>>>
>>>
>>> -- 
>>> Konrad `ktoso` Malawski
>>> Akka  @ Lightbend 
>>>
>>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>>> c.sc...@briefdomain.de) wrote:
>>>
>>> actually wouldn't it be more reasonable to try it against netty?
>>>
>>> Yes and no. Then one should compare raw IO APIs, and none of the 
>>> high-level features Akka HTTP provides (routing, trivial back-pressured 
>>> entity streaming, fully typesafe http model) etc. 
>>>
>>> It's a fun experiment to see how much faster Netty is, but I don't think 
>>> it's the goal here – if you really want to write each and every 
>>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest 
>>> API to just bang out a service in 4 minutes :)
>>>
>>> (Note, much love for Netty here, but I don't think comparing 1:1 with 
>>> Akka HTTP here is the right way to look at it (yes, of course we'll be 
>>> slower ;-)).
>>>
>>>
>>> I mean that node is slower than akka-http isn't something I wonder about.
>>>
>>> You'd be surprised what node people claim about its performance ;-)
>>>
>>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:

 Hi Adam, 
 thanks for sharing the runs!
 Your benchmarking method is good - thanks for doing a proper warmup and 
 using wrk2 :-)
 Notice that the multiple-second response times in node basically mean 
 it's not keeping up and stalling the connections (also known as 
 coordinated omission).

 It's great to see such a side-by-side with node, thanks for sharing it 
 again.
 Happy hakking!

 On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:

> Hi,
>
> I'd just like to share my satisfaction with Akka HTTP performance in 
> 2.4.10.
> I'm diagnosing some low-level Node.js performance issues and, while 
> running various tests that only require the most basic "Hello World" style 
> code, I decided to take a few minutes to check how Akka HTTP would handle 
> the same work.
> I was quite impressed with the results, so I thought I'd share.
>
> I'm running two c4.large instances (so two cores on each instance) - 
> one running the HTTP service and another running wrk2.
> I've tested only two short sets (seeing as I have other work to do):
>
>1. use 2 threads to simulate 100 concurrent users pushing 2k 
>requests/sec for 5 minutes
>2. use 2 threads to simulate 100 concurrent users pushing 20k 
>requests/sec for 5 minutes 
>
> In both cases, the tests are actually executed twice without a restart 
> in between and I throw away the results of the first run.
>
> The first run is just to get JIT and other adaptive mechanisms to do 
> their thing.
>
> 5 minutes seems to be enough based on the CPU behavior I see, but for 
> a more "official" test I'd probably use something longer.
>
>
> As for the code, I was using vanilla Node code - the kind you see as 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
What does wrk2 say?

On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt  wrote:

> I just compared Play Framework on Netty vs Akka-http; I guess that's fair since
> Play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just his
> project + @volatile on the var)
>
> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>
>>
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the
>> high-level features Akka HTTP provides (routing, trivial back-pressured
>> entity streaming, fully typesafe http model) etc.
>>
>> It's a fun experiment to see how much faster Netty is, but I don't think
>> it's the goal here – if you really want to write each and every
>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>>
>>> Hi Adam,
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup and
>>> using wrk2 :-)
>>> Notice that the multiple-second response times in node basically mean
>>> it's not keeping up and stalling the connections (also known as
>>> coordinated omission).
>>>
>>> It's great to see such a side-by-side with node, thanks for sharing it
>>> again.
>>> Happy hakking!
>>>
>>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>>
 Hi,

 I'd just like to share my satisfaction with Akka HTTP performance in
 2.4.10.
 I'm diagnosing some low-level Node.js performance issues and, while
 running various tests that only require the most basic "Hello World" style
 code, I decided to take a few minutes to check how Akka HTTP would handle
 the same work.
 I was quite impressed with the results, so I thought I'd share.

 I'm running two c4.large instances (so two cores on each instance) -
 one running the HTTP service and another running wrk2.
 I've tested only two short sets (seeing as I have other work to do):

1. use 2 threads to simulate 100 concurrent users pushing 2k
requests/sec for 5 minutes
2. use 2 threads to simulate 100 concurrent users pushing 20k
requests/sec for 5 minutes

 In both cases, the tests are actually executed twice without a restart
 in between and I throw away the results of the first run.

 The first run is just to get JIT and other adaptive mechanisms to do
 their thing.

 5 minutes seems to be enough based on the CPU behavior I see, but for a
 more "official" test I'd probably use something longer.


 As for the code, I was using vanilla Node code - the kind you see as
 the most basic example (no web frameworks or anything) but for Akka, I used
 the high level DSL.


 Here's the Code:


 *Akka HTTP*


 package com.example.rest

 import akka.actor.ActorSystem
 import akka.http.scaladsl.Http
 import akka.http.scaladsl.server.Directives._
 import akka.stream.ActorMaterializer


 case class Reply(message: String = "Hello World", userCount: Int)

 object MyJsonProtocol
   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
 with spray.json.DefaultJsonProtocol {

   implicit val replyFormat = jsonFormat2(Reply.apply)
 }

 object FullWebServer {
   var userCount = 0;

   def getReply() = {
 userCount += 1
 Reply(userCount=userCount)
   }

   def main(args: Array[String]) {
 implicit val system = ActorSystem()
 implicit val materializer = ActorMaterializer()
 import MyJsonProtocol._

 val route =
   get {
 complete(getReply())
   }

 // `route` will be implicitly converted to `Flow` using `RouteResult.route2HandlerFlow`
 val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
 println("Server online at http://127.0.0.1:3000/")
   }
 }


 *Node*

 var http = require('http');

 let userCount = 0;
 var server = http.createServer(function (request, 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
That would've been a good comment on that line of code :)

On Mon, Sep 12, 2016 at 2:45 PM, אדם חונן  wrote:

> In my original code I really didn't care about that value or its validity.
> The only thing I wanted to achieve was different JSON messages, like in
> Node where, BTW, this variable exists twice - once per process.
> If you really need to share mutable state Node is already out of the
> conversation...
>
> On Mon, Sep 12, 2016 at 3:40 PM, Viktor Klang 
> wrote:
>
>> @volatile on the var will not really help, += is not an atomic
>> instruction.
>>
>> --
>> Cheers,
>> √
>>
>> On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
>> wrote:
>>
>>> I just compared Play Framework on Netty vs Akka-http; I guess that's fair
>>> since Play is quite high level.
>>>
>>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>>> Projects: https://github.com/schmitch/performance (akka-http is just
>>> his project + @volatile on the var)
>>>
>>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:



 --
 Konrad `ktoso` Malawski
 Akka  @ Lightbend 

 On 12 September 2016 at 12:56:46, Christian Schmitt (
 c.sc...@briefdomain.de) wrote:

 actually wouldn't it be more reasonable to try it against netty?

 Yes and no. Then one should compare raw IO APIs, and none of the
 high-level features Akka HTTP provides (routing, trivial back-pressured
 entity streaming, fully typesafe http model) etc.

 It's a fun experiment to see how much faster Netty is, but I don't
 think it's the goal here – if you really want to write each and every
 microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
 API to just bang out a service in 4 minutes :)

 (Note, much love for Netty here, but I don't think comparing 1:1 with
 Akka HTTP here is the right way to look at it (yes, of course we'll be
 slower ;-)).


 I mean that node is slower than akka-http isn't something I wonder
 about.

 You'd be surprised what node people claim about its performance ;-)

 On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>
> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup
> and using wrk2 :-)
> Notice that the multiple-second response times in node basically mean
> it's not keeping up and stalling the connections (also known as
> coordinated omission).
>
> It's great to see such a side-by-side with node, thanks for sharing it
> again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>
>> Hi,
>>
>> I'd just like to share my satisfaction with Akka HTTP performance in
>> 2.4.10.
>> I'm diagnosing some low-level Node.js performance issues and, while
>> running various tests that only require the most basic "Hello World" style
>> code, I decided to take a few minutes to check how Akka HTTP would handle
>> the same work.
>> I was quite impressed with the results, so I thought I'd share.
>>
>> I'm running two c4.large instances (so two cores on each instance) -
>> one running the HTTP service and another running wrk2.
>> I've tested only two short sets (seeing as I have other work to do):
>>
>>1. use 2 threads to simulate 100 concurrent users pushing 2k
>>requests/sec for 5 minutes
>>2. use 2 threads to simulate 100 concurrent users pushing 20k
>>requests/sec for 5 minutes
>>
>> In both cases, the tests are actually executed twice without a
>> restart in between and I throw away the results of the first run.
>>
>> The first run is just to get JIT and other adaptive mechanisms to do
>> their thing.
>>
>> 5 minutes seems to be enough based on the CPU behavior I see, but for
>> a more "official" test I'd probably use something longer.
>>
>>
>> As for the code, I was using vanilla Node code - the kind you see as
>> the most basic example (no web frameworks or anything) but for Akka, I 
>> used
>> the high level DSL.
>>
>>
>> Here's the Code:
>>
>>
>> *Akka HTTP*
>>
>>
>> package com.example.rest
>>
>> import akka.actor.ActorSystem
>> import akka.http.scaladsl.Http
>> import akka.http.scaladsl.server.Directives._
>> import akka.stream.ActorMaterializer
>>
>>
>> case class Reply(message: String = "Hello World", userCount: Int)
>>
>> object MyJsonProtocol
>>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>> with spray.json.DefaultJsonProtocol {
>>
>>   implicit val replyFormat = 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread אדם חונן
In my original code I really didn't care about that value or its validity.
The only thing I wanted to achieve was different JSON messages, like in Node
where, BTW, this variable exists twice - once per process.
If you really need to share mutable state Node is already out of the
conversation...

On Mon, Sep 12, 2016 at 3:40 PM, Viktor Klang 
wrote:

> @volatile on the var will not really help, += is not an atomic instruction.
>
> --
> Cheers,
> √
>
> On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
> wrote:
>
>> I just compared Play Framework on Netty vs Akka-http; I guess that's fair
>> since Play is quite high level.
>>
>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>> Projects: https://github.com/schmitch/performance (akka-http is just his
>> project + @volatile on the var)
>>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>>
>>>
>>>
>>> --
>>> Konrad `ktoso` Malawski
>>> Akka  @ Lightbend 
>>>
>>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>>> c.sc...@briefdomain.de) wrote:
>>>
>>> actually wouldn't it be more reasonable to try it against netty?
>>>
>>> Yes and no. Then one should compare raw IO APIs, and none of the
>>> high-level features Akka HTTP provides (routing, trivial back-pressured
>>> entity streaming, fully typesafe http model) etc.
>>>
>>> It's a fun experiment to see how much faster Netty is, but I don't think
>>> it's the goal here – if you really want to write each and every
>>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>>> API to just bang out a service in 4 minutes :)
>>>
>>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>>> slower ;-)).
>>>
>>>
>>> I mean that node is slower than akka-http isn't something I wonder about.
>>>
>>> You'd be surprised what node people claim about its performance ;-)
>>>
>>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:

 Hi Adam,
 thanks for sharing the runs!
 Your benchmarking method is good - thanks for doing a proper warmup and
 using wrk2 :-)
 Notice that the multiple-second response times in node basically mean
 it's not keeping up and stalling the connections (also known as
 coordinated omission).

 It's great to see such a side-by-side with node, thanks for sharing it
 again.
 Happy hakking!

 On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:

> Hi,
>
> I'd just like to share my satisfaction with Akka HTTP performance in
> 2.4.10.
> I'm diagnosing some low-level Node.js performance issues and, while
> running various tests that only require the most basic "Hello World" style
> code, I decided to take a few minutes to check how Akka HTTP would handle
> the same work.
> I was quite impressed with the results, so I thought I'd share.
>
> I'm running two c4.large instances (so two cores on each instance) -
> one running the HTTP service and another running wrk2.
> I've tested only two short sets (seeing as I have other work to do):
>
>1. use 2 threads to simulate 100 concurrent users pushing 2k
>requests/sec for 5 minutes
>2. use 2 threads to simulate 100 concurrent users pushing 20k
>requests/sec for 5 minutes
>
> In both cases, the tests are actually executed twice without a restart
> in between and I throw away the results of the first run.
>
> The first run is just to get JIT and other adaptive mechanisms to do
> their thing.
>
> 5 minutes seems to be enough based on the CPU behavior I see, but for
> a more "official" test I'd probably use something longer.
>
>
> As for the code, I was using vanilla Node code - the kind you see as
> the most basic example (no web frameworks or anything) but for Akka, I 
> used
> the high level DSL.
>
>
> Here's the Code:
>
>
> *Akka HTTP*
>
>
> package com.example.rest
>
> import akka.actor.ActorSystem
> import akka.http.scaladsl.Http
> import akka.http.scaladsl.server.Directives._
> import akka.stream.ActorMaterializer
>
>
> case class Reply(message: String = "Hello World", userCount: Int)
>
> object MyJsonProtocol
>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
> with spray.json.DefaultJsonProtocol {
>
>   implicit val replyFormat = jsonFormat2(Reply.apply)
> }
>
> object FullWebServer {
>   var userCount = 0;
>
>   def getReply() = {
> userCount += 1
> Reply(userCount=userCount)
>   }
>
>   def main(args: Array[String]) {
> implicit val system = 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
@volatile on the var will not really help, += is not an atomic instruction.

-- 
Cheers,
√
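A minimal illustrative sketch of that point (object and method names invented, not from the thread): `@volatile` only guarantees visibility, so the read-increment-write behind `+=` can still lose updates under contention, whereas `java.util.concurrent.atomic.AtomicInteger` performs the increment as a single atomic operation:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Illustrative demo: both counters are incremented the same number of
// times, but only the AtomicInteger is guaranteed to count them all.
object CounterDemo {
  @volatile var unsafeCount = 0         // visible, but += is not atomic
  val safeCount = new AtomicInteger(0)  // increment is one atomic CAS

  def run(threads: Int, perThread: Int): Unit = {
    val ts = (1 to threads).map { _ =>
      new Thread(new Runnable {
        def run(): Unit = {
          var i = 0
          while (i < perThread) {
            unsafeCount += 1            // racy read-modify-write
            safeCount.incrementAndGet() // atomic
            i += 1
          }
        }
      })
    }
    ts.foreach(_.start())
    ts.foreach(_.join())
  }

  def main(args: Array[String]): Unit = {
    run(threads = 4, perThread = 100000)
    // safeCount is exactly 400000; unsafeCount is typically smaller.
    println(s"volatile var: $unsafeCount, AtomicInteger: ${safeCount.get}")
  }
}
```

In the benchmarked server, swapping `var userCount` for an `AtomicInteger` and calling `incrementAndGet()` in `getReply` would make the counter correct without taking a lock.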

On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
wrote:

> I just compared Play Framework on Netty vs Akka-http; I guess that's fair since
> Play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just his
> project + @volatile on the var)
>
> Am Montag, 12. September 2016 13:04:29 UTC+2 schrieb Konrad Malawski:
>>
>>
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the
>> high-level features Akka HTTP provides (routing, trivial back-pressured
>> entity streaming, fully typesafe http model) etc.
>>
>> It's a fun experiment to see how much faster Netty is, but I don't think
>> it's the goal here – if you really want to write each and every
>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> Am Montag, 12. September 2016 12:12:29 UTC+2 schrieb Konrad Malawski:
>>>
>>> Hi Adam,
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup and
>>> using wrk2 :-)
>>> Notice that the multiple second response times in node basically mean
>>> it's not keeping up and stalling the connections (also known as coordinated
>>> emission).
>>>
>>> It's great to see such side by side with node, thanks for sharing it
>>> again.
>>> Happy hakking!
>>>
>>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>>
 Hi,

 I'd just like to share my satisfaction from Akka HTTP performance in
 2.4.10.
 I'm diagnosing some low level Node.js performance issues and while
 running various tests that only require the most basic "Hello World" style
 code, I decided to take a few minutes to check how would Akka HTTP handle
 the same work.
 I was quite impressed with the results, so I thought I'd share.

 I'm running two c4.large instances (so two cores on each instance) -
 one running the HTTP service and another running wrk2.
 I've tested only two short sets (seeing as I have other work to do):

1. use 2 threads to simulate 100 concurrent users pushing 2k
requests/sec for 5 minutes
2. use 2 threads to simulate 100 concurrent users pushing 20k
requests/sec for 5 minutes

 In both cases, the tests are actually executed twice without a restart
 in between and I throw away the results of the first run.

 The first run is just to get JIT and other adaptive mechanisms to do
 their thing.

 5 minutes seems to be enough based on the CPU behavior I see, but for a
 more "official" test I'd probably use something longer.


 As for the code, I was using vanilla Node code - the kind you see as
 the most basic example (no web frameworks or anything) but for Akka, I used
 the high level DSL.


 Here's the Code:


 *Akka HTTP*


 package com.example.rest

 import akka.actor.ActorSystem
 import akka.http.scaladsl.Http
 import akka.http.scaladsl.server.Directives._
 import akka.stream.ActorMaterializer


 case class Reply(message: String = "Hello World", userCount: Int)

 object MyJsonProtocol
   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
 with spray.json.DefaultJsonProtocol {

   implicit val replyFormat = jsonFormat2(Reply.apply)
 }

 object FullWebServer {
   var userCount = 0;

   def getReply() = {
 userCount += 1
 Reply(userCount=userCount)
   }

   def main(args: Array[String]) {
 implicit val system = ActorSystem()
 implicit val materializer = ActorMaterializer()
 import MyJsonProtocol._

 val route =
   get {
 complete(getReply())
   }

 // `route` will be implicitly converted to `Flow` using 
 `RouteResult.route2HandlerFlow`
 val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
 println("Server online at http://127.0.0.1:3000/;)
   }
 }


 *Node*

 var http = require('http');

 let 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
I just compared Play Framework on Netty vs akka-http; I guess that's fair since 
Play is quite high level.

Performance for 2k, 20k, 120k Req/s (2k was used to warm up the VM): 
https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
Projects: https://github.com/schmitch/performance (akka-http is just his 
project, plus @volatile on the var)


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
Actually, wouldn't it be more reasonable to test it against Netty?
I mean, that Node is slower than akka-http isn't something I wonder about.


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
Cool! (you may want to use an AtomicInteger to generate unique sequence
numbers)
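
A minimal sketch of that suggestion (the UserCounter name is mine; getReply refers to Adam's code quoted in this thread): AtomicInteger.incrementAndGet performs the increment as a single atomic operation, so concurrent requests handled on different dispatcher threads cannot lose an update.

```scala
import java.util.concurrent.atomic.AtomicInteger

// Atomic replacement for the shared `var userCount`:
object UserCounter {
  private val count = new AtomicInteger(0)
  def next(): Int = count.incrementAndGet()  // atomic read-modify-write
}

// Adam's getReply would then become:
//   def getReply() = Reply(userCount = UserCounter.next())
```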


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Konrad Malawski
Hi Adam,
thanks for sharing the runs!
Your benchmarking method is good - thanks for doing a proper warmup and
using wrk2 :-)
Notice that the multi-second response times in Node basically mean it's
not keeping up and is stalling the connections (also known as coordinated
omission).

It's great to see such a side-by-side with Node, thanks for sharing it again.
Happy hakking!


[akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Adam
Hi,

I'd just like to share my satisfaction from Akka HTTP performance in 2.4.10.
I'm diagnosing some low-level Node.js performance issues, and while running 
various tests that only require the most basic "Hello World" style code, I 
decided to take a few minutes to check how Akka HTTP would handle the same 
work.
I was quite impressed with the results, so I thought I'd share.

I'm running two c4.large instances (so two cores on each instance) - one 
running the HTTP service and another running wrk2.
I've tested only two short sets (seeing as I have other work to do):

   1. use 2 threads to simulate 100 concurrent users pushing 2k 
   requests/sec for 5 minutes
   2. use 2 threads to simulate 100 concurrent users pushing 20k 
   requests/sec for 5 minutes

In both cases, the tests are actually executed twice without a restart in 
between and I throw away the results of the first run.

The first run is just to get JIT and other adaptive mechanisms to do their 
thing.

5 minutes seems to be enough based on the CPU behavior I see, but for a 
more "official" test I'd probably use something longer.


As for the code, I was using vanilla Node code - the kind you see as the 
most basic example (no web frameworks or anything) but for Akka, I used the 
high level DSL.


Here's the Code:


*Akka HTTP*


package com.example.rest

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer


case class Reply(message: String = "Hello World", userCount: Int)

object MyJsonProtocol
  extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
with spray.json.DefaultJsonProtocol {

  implicit val replyFormat = jsonFormat2(Reply.apply)
}

object FullWebServer {
  var userCount = 0;

  def getReply() = {
userCount += 1
Reply(userCount=userCount)
  }

  def main(args: Array[String]) {
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import MyJsonProtocol._

val route =
  get {
complete(getReply())
  }

// `route` will be implicitly converted to `Flow` using
// `RouteResult.route2HandlerFlow`
val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
println("Server online at http://127.0.0.1:3000/")
  }
}


*Node*

var http = require('http');

let userCount = 0;
var server = http.createServer(function (request, response) {
userCount++;
response.writeHead(200, {"Content-Type": "application/json"});
const hello = {msg: "Hello world", userCount: userCount};
response.end(JSON.stringify(hello));
});

server.listen(3000);

console.log("Server running at http://127.0.0.1:3000/");

(To be more exact, there's also some wrapping code, because I'm running this in a 
cluster so all cores can be utilized.)


So for the first test, things are pretty much the same - Akka HTTP uses 
less CPU (4-6% vs. 10% in Node) and has a slightly lower average response 
time, but a higher max response time.

Not very interesting.


The second test was more one sided though.


The Node version maxed out the CPU and got the following results:


Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 215.794ms, rate sampling interval: 1623ms
  Thread calibration: mean lat.: 366.732ms, rate sampling interval: 1959ms
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 5.31s 4.48s   16.66s65.79%
Req/Sec 9.70k 0.87k   10.86k57.85%
  5806492 requests in 5.00m, 1.01GB read
Requests/sec:  19354.95
Transfer/sec:  3.43MB


Whereas for the Akka HTTP version I saw each core using ~40% CPU throughout 
the test and I had the following results:

Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 5.044ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 5.308ms, rate sampling interval: 10ms
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 1.83ms1.27ms  78.91ms   95.96%
Req/Sec10.55k 1.79k   28.22k75.98%
  5997552 requests in 5.00m, 1.00GB read
Requests/sec:  19991.72
Transfer/sec:  3.41MB


Which is not a huge increase over 2K requests/sec:


Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 1.565ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.557ms, rate sampling interval: 10ms
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 1.07ms  479.75us   8.09ms   62.57%
Req/Sec 1.06k   131.65 1.78k79.05%
  599804 requests in 5.00m, 101.77MB read
Requests/sec:   1999.33
Transfer/sec:347.39KB



In summary, I know this is far from a conclusive test, but I was still 
quite excited to see the results.

Keep up the good work!

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html