Re: [akka-user] Stream within Actor Supervision

2016-08-04 Thread Gary Struthers
Thanks Konrad. When I skimmed that page I read it as saying supervision didn't
work with GraphStage, which I use a lot, but reading it slowly I see it's
GraphStage junctions that aren't supported, and I don't use those. This gives
me what I need.



[akka-user] Using TestProbe to automate test by replying automatically just to ensure that the test goes

2016-08-04 Thread Maatary Okouya


I am trying to get a test probe to reply with an acknowledgement whenever
it receives any message.

I wrote the following code in my test but it does not work:

val chgtWriter = new TestProbe(system) {
  def receive: Receive = {
    case m => println("receive message, replying with ACK"); sender() ! ACK
  }
}

Is there a way to do that? The actor that is actually sending the message
to the test probe is definitely running on a different thread than the test
thread. Below you can see the full test as currently crafted.

feature("The changeSetActor periodically fetch new change set following a 
schedule") {


scenario("A ChangeSetActor fetch new changeset from a Fetcher Actor that return 
a full and an empty ChangeSet"){


  Given("a ChangeSetActor with a schedule of fetching a message every 10 
seconds, a ChangeFetcher and a ChangeWriter")

val chgtFetcher = TestProbe()

val chgtWriter = new TestProbe(system)  {

  def receive: Receive = {

case m => println("receive message {} replying with ACK"); sender() ! 
ACK

  }

}
val fromTime = Instant.now().truncatedTo(ChronoUnit.SECONDS)
val chgtActor = system.actorOf(ChangeSetActor.props(chgtWriter.ref, 
chgtFetcher.ref, fromTime))

  When("all are started")


  Then("The Change Fetcher should receive at least 3 messages from the 
ChangeSetActor within 40 seconds")

var changesetSNum = 1

val received = chgtFetcher.receiveWhile( 40 seconds) {

  case FetchNewChangeSet(m) => {

println(s"received: FetchNewChangeSet(${m}")

if (changesetSNum == 1) {
chgtFetcher.reply(NewChangeSet(changeSet1))
changesetSNum += 1
  }
  else
chgtFetcher.reply(NoAvailableChangeSet)
}

  }

received.size should be (3)
}

}

The ChangeSetActor is fully tested and works. The test hangs with the
ChangeWriter: it never receives a message in the receive method.
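
For reference, a TestProbe is not itself an Actor, so a receive defined on a TestProbe subclass is never invoked; the supported way to get automatic replies from a probe in akka-testkit is setAutoPilot. A minimal sketch, reusing the ACK message from the test above:

import akka.actor.ActorRef
import akka.testkit.{TestActor, TestProbe}

val chgtWriter = TestProbe()
chgtWriter.setAutoPilot(new TestActor.AutoPilot {
  def run(sender: ActorRef, msg: Any): TestActor.AutoPilot = {
    println(s"received message $msg, replying with ACK")
    sender ! ACK            // ACK is the application-specific message from the test above
    TestActor.KeepRunning   // keep auto-replying to every subsequent message
  }
})

The probe still records every message it receives, so expectMsg/receiveWhile assertions keep working alongside the auto-pilot.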



Re: [akka-user] Re: Akka Java IO TLS

2016-08-04 Thread Konrad Malawski
I have found handling SSL/TLS from the JVM a PITA. Why don't you put the SSL
termination in front of the Akka HTTP endpoint (e.g. nginx)?


I'd honestly +1 that  (a lot).

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 



Re: [akka-user] Re: Akka Java IO TLS

2016-08-04 Thread Любен
I have found handling SSL/TLS from the JVM a PITA. Why don't you put the SSL
termination in front of the Akka HTTP endpoint (e.g. nginx)?

On Wed, Jul 27, 2016 at 4:36 PM, Akka Team  wrote:

> The easiest path would probably be Akka Streams for TCP (
> http://doc.akka.io/docs/akka/2.4/scala/stream/stream-io.html) and the
> existing TLS graph module
> (http://doc.akka.io/api/akka/2.4/#akka.stream.scaladsl.TLS$). There isn't
> much documentation on the TLS module, but you should be able to look at how
> it is used in Akka HTTP.
>
> I don't think you would benefit hugely from the fact that Akka remoting
> uses Netty if you want to go down that path; most of the classes are Akka
> private and not very generic. You can of course just introduce Netty as a
> separate dependency and use that directly if you want.
>
> --
> Johan
>
> On Thu, Jul 21, 2016 at 9:12 PM, Vinay Gajjala  wrote:
>
>> After rethinking and researching, I was wondering if I can configure/code
>> the Netty API which Akka uses to achieve TLS. I am not sure how
>> efficient this is. Any help would be greatly appreciated.
>>
>> Thanks
>> Vinay
>>
>> On Monday, July 18, 2016 at 11:13:33 AM UTC-5, Vinay Gajjala wrote:
>>>
>>> Hi
>>>
>>> I am a newbie in Akka and I implemented a TCP server which listens to
>>> device traffic. I have searched online and could not find any concrete
>>> examples of how to configure TLS using Akka IO.
>>>
>>> I am not sure if I am missing the obvious.
>>>
>>> Thanks in advance,
>>> Vinay
>>>


Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Justin du coeur
It does do reassignment -- but it has to know to do that.  Keep in mind
that "down" is the master switch here: until the node is downed, the rest
of the system doesn't *know* that NodeA should be avoided.  I haven't dug
into that particular code, but I assume from what you're saying that the
allocation algorithm doesn't take unreachability into account when choosing
where to allocate the shard, just up/down.  I suspect that unreachability
is too local and transient to use as the basis for these allocations.

Keep in mind that you're looking at this from a relatively all-knowing
global perspective, but each node is working from a very localized and
imperfect one.  All it knows is "I can't currently reach NodeA".  It has no
a priori way of knowing whether NodeA has been taken offline (so it should
be avoided), or there's simply been a transient network glitch between here
and there (so things are *mostly* business as usual).  Downing is how you
tell it, "No, really, stop using this node"; until then, most of the code
assumes that the more-common transient situation is the case.  It's
*probably* possible to take unreachability into account in the case you're
describing, but it doesn't surprise me if that's not true.

Also, keep in mind that, IIRC, there are a few cluster singletons involved
here, at least behind the scenes.  If NodeA currently owns one of the key
singletons (such as the ShardCoordinator), and it hasn't been downed, I
imagine the rest of the cluster is going to *quickly* lock up, because the
result is that nobody is authorized to make these sorts of allocation
decisions.

All that said -- keep in mind, I'm just a user of this stuff, and am
talking at the edges of my knowledge.  Konrad's the actual expert...

On Thu, Aug 4, 2016 at 4:59 PM, Eric Swenson  wrote:

> While I'm in the process of implementing your proposed solution, I did
> want to make sure I understood why I'm seeing the failures I'm seeing when
> a node is taken offline, auto-down is disabled, and no one is handling the
> UnreachableNode message.  Let me try to explain what I think is happening
> and perhaps you (or someone else who knows more about this than I) can
> confirm or refute.
>
> In the case of akka-cluster-sharding, a shard might exist on the
> unreachable node.  Since the node is not yet marked as "down", the cluster
> simply cannot handle an incoming message for that shard.  To create another
> sharded actor on an available cluster node might duplicate the unreachable
> node state.  In the case of akka-persistence actors, even though a new
> shard actor could resurrect any journaled state, we cannot be sure that the
> old unreachable node might not at any time, add other events to the
> journal, or come online and try to continue operating on the shard.
>
> Is that the reason why I see the following behavior:  NodeA is online.
> NodeB comes online and joins the cluster.  A request comes in from
> akka-http and is sent to the shard region.  It goes to NodeA which creates
> an actor to handle the sharded object.  NodeA is taken offline (unbeknownst
> to the akka-cluster).  Another message for the above-mentioned shard comes
> in from akka-http and is sent to the shard region. The shard region can't
> reach NodeA.  NodeA isn't marked as down.  So the shard region cannot
> create another actor (on an available Node). It can only wait (until
> timeout) for NodeA to become reachable.  Since, in my scenario, NodeA will
> never become reachable and NodeB is the only one online, all requests for
> old shards timeout.
>
> If the above logic is true, I have one last issue:  In the above scenario,
> if a message comes into the shard region for a shard that WOULD HAVE BEEN
> allocated to NodeA but has never yet been assigned to an actor on NodeA,
> and NodeA is unreachable, why can it simply be assigned to another Node?
>  is it because the shard-to-node algorithm is fixed (by default) and there
> is no dynamic ability to "reassign" to an available Node?
>
> Thanks again.  -- Eric
>
> On Wednesday, August 3, 2016 at 7:00:42 PM UTC-7, Justin du coeur wrote:
>>
>> The keyword here is "auto".  Autodowning is an *incredibly braindead*
>> algorithm for dealing with nodes coming out of service, and if you use it
>> in production you more or less guarantee disaster, because that algorithm
>> can't cope with cluster partition.  You *do* need to deal with downing, but
>> you have to get something smarter than that.
>>
>> Frankly, if you're already hooking into AWS, I *suspect* the best
>> approach is to leverage that -- when a node goes offline, you have some
>> code to detect that through the ECS APIs, react to it, and manually down
>> that node.  (I'm planning on something along those lines for my system, but
>> haven't actually tried yet.)  But whether you do that or something else,
>> you've got to add *something* that does downing.
>>
>> I believe the official party line is "Buy a Lightbend Subscription",
>> 

Re: [akka-user] Stream within Actor Supervision

2016-08-04 Thread Konrad Malawski
Hello there,
please read the docs on Stream error handling:
http://doc.akka.io/docs/akka/2.4.9-RC1/scala/stream/stream-error.html#stream-error-scala
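
For quick reference, the approach described on that page is to attach a Supervision.Decider to the materializer (or to an individual stage). A minimal sketch -- the decider and the failing stream below are illustrative, not taken from this thread:

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ActorMaterializerSettings, Supervision}
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("stream-supervision")

// decide per exception type whether the stream should Resume, Restart or Stop
val decider: Supervision.Decider = {
  case _: ArithmeticException => Supervision.Resume
  case _                      => Supervision.Stop
}

implicit val materializer = ActorMaterializer(
  ActorMaterializerSettings(system).withSupervisionStrategy(decider))

// the element that divides by zero is dropped and the stream carries on
Source(0 to 5).map(n => 100 / n).runWith(Sink.foreach(println))

Note that a stream failure is not by itself escalated to the enclosing actor's supervisor; the actor typically observes it through the materialized value (e.g. pipeTo self) or watchTermination.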

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 4 August 2016 at 23:47:27, Gary Struthers (agilej...@earthlink.net)
wrote:

If an Actor contains a Stream what happens when the stream throws an
exception and there is no stream Decider to handle it? Can the Actor's
supervisor handle it and Resume, Restart, and Stop the Actor with the
stream?

Gary


[akka-user] Stream within Actor Supervision

2016-08-04 Thread Gary Struthers
If an Actor contains a Stream, what happens when the stream throws an
exception and there is no stream Decider to handle it? Can the Actor's
supervisor handle it and Resume, Restart, or Stop the Actor together with the
stream?

Gary 



Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
While I'm in the process of implementing your proposed solution, I did want 
to make sure I understood why I'm seeing the failures I'm seeing when a 
node is taken offline, auto-down is disabled, and no one is handling the 
UnreachableNode message.  Let me try to explain what I think is happening 
and perhaps you (or someone else who knows more about this than I) can 
confirm or refute.

In the case of akka-cluster-sharding, a shard might exist on the
unreachable node.  Since the node is not yet marked as "down", the cluster
simply cannot handle an incoming message for that shard.  Creating another
sharded actor on an available cluster node might duplicate the state held on
the unreachable node.  In the case of akka-persistence actors, even though a
new shard actor could resurrect any journaled state, we cannot be sure that
the old, unreachable node won't at any time add other events to the
journal, or come back online and try to continue operating on the shard.

Is that the reason why I see the following behavior:  NodeA is online. 
 NodeB comes online and joins the cluster.  A request comes in from 
akka-http and is sent to the shard region.  It goes to NodeA which creates 
an actor to handle the sharded object.  NodeA is taken offline (unbeknownst 
to the akka-cluster).  Another message for the above-mentioned shard comes 
in from akka-http and is sent to the shard region. The shard region can't 
reach NodeA.  NodeA isn't marked as down.  So the shard region cannot 
create another actor (on an available Node). It can only wait (until 
timeout) for NodeA to become reachable.  Since, in my scenario, NodeA will
never become reachable and NodeB is the only one online, all requests for
old shards time out.

If the above logic is true, I have one last issue.  In the above scenario,
if a message comes into the shard region for a shard that WOULD HAVE BEEN
allocated to NodeA but has never yet been assigned to an actor on NodeA,
and NodeA is unreachable, why can't it simply be assigned to another Node?
 Is it because the shard-to-node algorithm is fixed (by default) and there
is no dynamic ability to "reassign" to an available Node?

Thanks again.  -- Eric

On Wednesday, August 3, 2016 at 7:00:42 PM UTC-7, Justin du coeur wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that.
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson  > wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements:
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a new version of the application is deployed 
>> (a new ECS task definition is created with a new Docker image).  The ECS 
>> service launches a new task (Docker container running on an available host) 
>> and once that container becomes stable, it kills one of the 

Re: [akka-user] Cassandra persistence with Kryo-serialization - recovery and persistence performance issue

2016-08-04 Thread Justin du coeur
Several thoughts:

First and most important -- as far as I can tell, there's little community
experience using Kryo for Akka Persistence.  I'm currently building that
out for my own project, and when I asked about it, nobody could name a
project that was already doing it.  It's been a bit of an adventure getting
it right -- indeed, I wound up contributing an enhancement to the romix
library a couple of weeks ago, which will be in 0.4.2.  Most of the community
appears to be using protobuf for persistence.  So keep in mind that this may
be a bit bleeding-edge.  (Eventually, after my code gets to production,
I'll probably do a long blog entry on it.  The relevant branch can be found
here.)

As for your error, I'm a bit surprised you got that with "default", which
isn't supposed to be using class IDs but should instead be using FQCNs
across the board.  "default" *is* explicitly slow, which might account for
some of the time issues (although I wouldn't expect it to be *that* slow),
but the error you're showing is kinda weird.  The implication seems to be
that, for some reason, it's synthesizing some class IDs even in "default"
mode.

Mind, that error is *exactly* what I'd expect to see if you use
"incremental".  Really, I don't think "incremental" is ever a great idea,
but it is absolutely a terrible one for persistence.  It's a recipe for
accidentally persisting random class IDs that can't later be deserialized.
 "Automatic" mode should, in principle, work, but "incremental" is very
likely to cause random accidental failures.

Personally, I think that "explicit" is the most sensible way to go.  It's a
bit of a pain in the ass, and will give you errors if you fail to
pre-register any classes (and you will discover that you need to register
many, many standard-library classes to get it to work), but when you're
using this in the context of persistence I think it's the sanest approach,
ensuring that *all* of your persisted classes are using the more-efficient
IDs and *all* of them are pre-registered...
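
To make that concrete, an "explicit" setup corresponding to the configuration quoted below would look roughly like this -- a sketch only, reusing the Evt mapping from the original post; the remaining settings and registrations are assumptions about what you would add:

kryo {
  type = "graph"
  idstrategy = "explicit"        # anything not pre-registered fails fast instead of being guessed
  implicit-registration-logging = true

  mappings {
    # every persisted class gets a stable, explicitly chosen ID
    "com.myexperiments.akkaexps.persistence.events.Evt" = 20
    # ...plus whichever standard-library classes Kryo complains about at runtime
  }
}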

On Thu, Aug 4, 2016 at 1:25 PM, Muthukumaran Kothandaraman <
muthu.kmk2...@gmail.com> wrote:

> Hi,
>
> I am using following combination for a basic persistence actor
>
> Version combination : Cassandra 3.7 + akka-persistence-cassandra-0.7 +
> akka 2.4.8 + akka-kryo-serialization_2.11 version-0.4.1
>
> I am using following conf to serialize using kryo-serialization
>
> kryo  {
>
> type = "graph"
> idstrategy = "default"
> buffer-size = 4096
> max-buffer-size = -1
> use-manifests = false
> post-serialization-transformations = "lz4,aes"
> implicit-registration-logging = false
> kryo-trace = false
> kryo-custom-serializer-init = "CustomKryoSerializerInitFQCN"
> resolve-subclasses = false
> mappings {
> "com.myexperiments.akkaexps.persistence.events.Evt" = 20
> }
> classes = [
> "com.myexperiments.akkaexps.persistence.events.Evt"
> ]
> }
>
>
> Observations :
>
> ==
>
>
> 1. I am able to see that the events do get persisted without any issues - 
> checked via cqlsh of Cassandra to verify the message count in akka.messages 
> table
>
>Performance observation : there is an abnormal reduction in 
> persistence-rate of events
>
>in fact,with Kryo serialization + persistAsync I got around ~580 
> events persisted/sec with Cassandra plugin when compared to plain java 
> serialization which for same test run on same machine yielded upto 800 
> events/sec
>
>which looks weird. Cassandra runs in local node - no clustering 
> (trying to thrash-out all variances before I can go to cluster with larger 
> configuration so that I can isolate issues)
>
>
>
> 2. During recovery phase, however,  I got following exception and recovery 
> failed. Also tried changing idstrategy from 'default' to 'incremental'  and 
> still facing the same Exception
>
>
> [ERROR] [08/02/2016 10:46:16.119] [example-akka.actor.default-dispatcher-8] 
> [akka://example/system/cassandra-journal/$b/flow-1-0-asyncReplayMessages] 
> Encountered unregistered class ID: 
> 1406735620*com.esotericsoftware.kryo.KryoException: Encountered unregistered 
> class ID: 1406735620*
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> com.romix.akka.serialization.kryo.KryoBasedSerializer.fromBinary(KryoSerializer.scala:483)
>   at 
> com.romix.akka.serialization.kryo.KryoSerializer.fromBinary(KryoSerializer.scala:339)
>   at 
> akka.serialization.Serialization$$anonfun$deserialize$2.apply(Serialization.scala:124)
>   at scala.util.Try$.apply(Try.scala:192)
>   at 

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
Thanks, Konrad. I will replace the use of auto-down with a scheme such as 
that proposed by Justin. I have also reached out to Lightbend (and received 
a call back already) regarding subscription services from Lightbend.  -- 
Eric

On Thursday, August 4, 2016 at 1:26:35 AM UTC-7, Konrad Malawski wrote:
>
> Just to re-affirm what Justin wrote there.
>
> Auto downing is "auto". It's dumb. That's why it's not safe.
> The safer automatic downing modes are in 
> doc.akka.io/docs/akka/rp-16s01p05/scala/split-brain-resolver.html
> Yes, that's a commercial thing.
>
> If you don't want to use these, use EC2's APIs - they have APIs from which 
> you can get information about state like that.
>
> -- 
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 4 August 2016 at 04:00:34, Justin du coeur (jduc...@gmail.com 
> ) wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that. 
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson  > wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements: 
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a new version of the application is deployed 
>> (a new ECS task definition is created with a new Docker image).  The ECS 
>> service launches a new task (Docker container running on an available host) 
>> and once that container becomes stable, it kills one of the remaining 
>> instances (cluster members) to bring the number of instances to some 
>> configured value.  
>>
>> When this happens, akka-cluster becomes very unhappy and becomes 
>> unresponsive.  Without the auto-down-unreachable-after setting, it keeps 
>> trying to talk to the old cluster members. which is no longer present.  It 
>> appears to NOT recover from this.  There is a constant barrage of messages 
>> of the form:
>>
>> [DEBUG] [08/04/2016 00:19:27.126] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.140] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.142] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.143] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] 

Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Eric Swenson
Thanks very much, Justin.  I appreciate your suggested approach and will 
implement something along those lines.  In summary, I believe, I should do 
the following:

1) handle notifications of nodes going offline in my application
2) query AWS/ECS to see if the node is *really* supposed to be offline 
(meaning that it has been removed for autoscaling or replacement reasons),
3) if yes, then manually down the node

This makes perfect sense. The philosophy of going to the single source of 
truth about the state of the cluster (in this case, AWS/ECS) seems to be 
apt here.
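
A minimal sketch of steps 1-3 above (the isTaskStillRunning check stands in for the ECS query and is an assumption, not code from this thread):

import akka.actor.{Actor, ActorLogging}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{InitialStateAsEvents, UnreachableMember}

class DowningWatcher extends Actor with ActorLogging {
  val cluster = Cluster(context.system)

  // 1) get notified when a member becomes unreachable
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents, classOf[UnreachableMember])

  override def postStop(): Unit = cluster.unsubscribe(self)

  // 2) hypothetical ECS lookup: true if the instance is still supposed to be running
  def isTaskStillRunning(host: String): Boolean = ???

  def receive: Receive = {
    case UnreachableMember(member) =>
      val host = member.address.host.getOrElse("")
      if (!isTaskStillRunning(host)) {
        log.info("ECS reports {} as gone, downing it", member.address)
        cluster.down(member.address) // 3) manually down the removed node
      }
  }
}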

Thanks again.  -- Eric

On Wednesday, August 3, 2016 at 7:00:42 PM UTC-7, Justin du coeur wrote:
>
> The keyword here is "auto".  Autodowning is an *incredibly braindead* 
> algorithm for dealing with nodes coming out of service, and if you use it 
> in production you more or less guarantee disaster, because that algorithm 
> can't cope with cluster partition.  You *do* need to deal with downing, but 
> you have to get something smarter than that.
>
> Frankly, if you're already hooking into AWS, I *suspect* the best approach 
> is to leverage that -- when a node goes offline, you have some code to 
> detect that through the ECS APIs, react to it, and manually down that node. 
>  (I'm planning on something along those lines for my system, but haven't 
> actually tried yet.)  But whether you do that or something else, you've got 
> to add *something* that does downing.
>
> I believe the official party line is "Buy a Lightbend Subscription", 
> through which you can get their Split Brain Resolver, which is a fairly 
> battle-hardened module for dealing with this problem.  That's not strictly 
> necessary, but you *do* need to have a reliable solution...
>
> On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson  > wrote:
>
>> We have an akka-cluster/sharding application deployed an AWS/ECS, where 
>> each instance of the application is a Docker container.  An ECS service 
>> launches N instances of the application based on configuration data.  It is 
>> not possible to know, for certain, the IP addresses of the cluster 
>> members.  Upon startup, before the AkkaSystem is created, the code 
>> currently polls AWS and determines the IP addresses of all the Docker hosts 
>> (which potentially could run the akka application).  It sets these IP 
>> addresses as the seed nodes before bringing up the akka cluster system. The 
>> configuration for these has, up until yesterday always included the 
>> akka.cluster.auto-down-unreachable-after configuration setting.  And it has 
>> always worked.  Furthermore, it supports two very critical requirements:
>>
>> a) an instance of the application can be removed at any time, due to 
>> scaling or rolling updates
>> b) an instance of the application can be added at any time, due to 
>> scaling or rolling updates
>>
>> On the advice of an Akka expert on the Gitter channel, I removed the 
>> auto-down-unreachable-after setting, which, as documented, is dangerous for 
>> production.  As a result the system no longer supports rolling updates.  A 
>> rolling update occurs thus:  a new version of the application is deployed 
>> (a new ECS task definition is created with a new Docker image).  The ECS 
>> service launches a new task (Docker container running on an available host) 
>> and once that container becomes stable, it kills one of the remaining 
>> instances (cluster members) to bring the number of instances to some 
>> configured value.  
>>
>> When this happens, akka-cluster becomes very unhappy and becomes 
>> unresponsive.  Without the auto-down-unreachable-after setting, it keeps 
>> trying to talk to the old cluster members. which is no longer present.  It 
>> appears to NOT recover from this.  There is a constant barrage of messages 
>> of the form:
>>
>> [DEBUG] [08/04/2016 00:19:27.126] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.140] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.142] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.143] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>> [DEBUG] [08/04/2016 00:19:27.143] 
>> [ClusterSystem-cassandra-plugin-default-dispatcher-27] 
>> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path 
>> sequence 

[akka-user] RE: explanation of pickMaxOfThree

2016-08-04 Thread gitted

   
Hello,

Hoping someone can explain the example given here:
http://doc.akka.io/docs/akka/2.4.9-RC1/scala/stream/stream-graphs.html

val pickMaxOfThree = GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._

  val zip1 = b.add(ZipWith[Int, Int, Int](math.max _))
  val zip2 = b.add(ZipWith[Int, Int, Int](math.max _))
  zip1.out ~> zip2.in0

  UniformFanInShape(zip2.out, zip1.in0, zip1.in1, zip2.in1)
}

This graph returns a UniformFanInShape, which from what I understand
is multiple inputs and a single output, correct?

Can someone detail what this line is doing:

val zip1 = b.add(ZipWith[Int, Int, Int](math.max _))

Why does ZipWith have 3 type parameters? Is it just input, input and output,
i.e. [Int, Int, Int]?

Can someone explain what .out, .in0, .in1 etc. are doing here?

UniformFanInShape(zip2.out, zip1.in0, zip1.in1, zip2.in1)
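
For what it's worth: in ZipWith[Int, Int, Int] the input types come first and the output type last (two Int inputs, one Int output), and .in0/.in1/.out are simply the ports of each ZipWith stage. The linked docs page runs the resulting shape roughly like this (a sketch based on that page; resultSink is just a local name):

import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.stream.scaladsl.{GraphDSL, RunnableGraph, Sink, Source}

implicit val system = ActorSystem("pick-max")
implicit val materializer = ActorMaterializer()

val resultSink = Sink.head[Int]

val g = RunnableGraph.fromGraph(GraphDSL.create(resultSink) { implicit b => sink =>
  import GraphDSL.Implicits._

  // pm3.in(0), in(1), in(2) are the three inputs exposed by UniformFanInShape,
  // pm3.out is its single output
  val pm3 = b.add(pickMaxOfThree)
  Source.single(1) ~> pm3.in(0)
  Source.single(2) ~> pm3.in(1)
  Source.single(3) ~> pm3.in(2)
  pm3.out ~> sink.in
  ClosedShape
})

val max: Future[Int] = g.run() // completes with 3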






[akka-user] Cassandra persistence with Kryo-serialization - recovery and persistence performance issue

2016-08-04 Thread Muthukumaran Kothandaraman
Hi, 

I am using the following combination for a basic persistence actor.

Version combination: Cassandra 3.7 + akka-persistence-cassandra-0.7 + akka
2.4.8 + akka-kryo-serialization_2.11 version 0.4.1

I am using the following conf to serialize with kryo-serialization:

kryo  {

type = "graph"
idstrategy = "default" 
buffer-size = 4096
max-buffer-size = -1
use-manifests = false
post-serialization-transformations = "lz4,aes"
implicit-registration-logging = false
kryo-trace = false
kryo-custom-serializer-init = "CustomKryoSerializerInitFQCN"
resolve-subclasses = false
mappings {
"com.myexperiments.akkaexps.persistence.events.Evt" = 20
}
classes = [
"com.myexperiments.akkaexps.persistence.events.Evt"
]
}


Observations : 

==


1. I am able to see that the events do get persisted without any issues -
checked via cqlsh of Cassandra to verify the message count in the akka.messages
table.

   Performance observation: there is an abnormal reduction in the persistence
rate of events.

   In fact, with Kryo serialization + persistAsync I got around ~580
events persisted/sec with the Cassandra plugin, compared to plain Java
serialization, which for the same test run on the same machine yielded up to
800 events/sec,

   which looks weird. Cassandra runs on a local node - no clustering
(trying to thrash out all variances before I go to a cluster with a larger
configuration, so that I can isolate issues).


2. During the recovery phase, however, I got the following exception and
recovery failed. I also tried changing idstrategy from 'default' to
'incremental' and am still facing the same exception.


[ERROR] [08/02/2016 10:46:16.119] [example-akka.actor.default-dispatcher-8] [akka://example/system/cassandra-journal/$b/flow-1-0-asyncReplayMessages] Encountered unregistered class ID: 1406735620
com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 1406735620
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
at 
com.romix.akka.serialization.kryo.KryoBasedSerializer.fromBinary(KryoSerializer.scala:483)
at 
com.romix.akka.serialization.kryo.KryoSerializer.fromBinary(KryoSerializer.scala:339)
at 
akka.serialization.Serialization$$anonfun$deserialize$2.apply(Serialization.scala:124)
at scala.util.Try$.apply(Try.scala:192)
at akka.serialization.Serialization.deserialize(Serialization.scala:114)
at 
akka.persistence.serialization.MessageSerializer.akka$persistence$serialization$MessageSerializer$$payload(MessageSerializer.scala:216)
at 
akka.persistence.serialization.MessageSerializer.akka$persistence$serialization$MessageSerializer$$persistent(MessageSerializer.scala:198)
at 
akka.persistence.serialization.MessageSerializer.fromBinary(MessageSerializer.scala:69)
at 
akka.persistence.serialization.MessageSerializer.fromBinary(MessageSerializer.scala:28)
at 
akka.serialization.Serialization$$anonfun$deserialize$3.apply(Serialization.scala:142)
at scala.util.Try$.apply(Try.scala:192)
at akka.serialization.Serialization.deserialize(Serialization.scala:142)
at 
akka.persistence.cassandra.query.EventsByPersistenceIdPublisher.persistentFromByteBuffer(EventsByPersistenceIdPublisher.scala:90)
at 
akka.persistence.cassandra.query.EventsByPersistenceIdPublisher.extractEvent(EventsByPersistenceIdPublisher.scala:84)
at 
akka.persistence.cassandra.query.EventsByPersistenceIdPublisher.updateState(EventsByPersistenceIdPublisher.scala:77)
at 
akka.persistence.cassandra.query.EventsByPersistenceIdPublisher.updateState(EventsByPersistenceIdPublisher.scala:48)
at 
akka.persistence.cassandra.query.QueryActorPublisher.exhaustFetch(QueryActorPublisher.scala:213)
at 
akka.persistence.cassandra.query.QueryActorPublisher.akka$persistence$cassandra$query$QueryActorPublisher$$exhaustFetchAndBecome(QueryActorPublisher.scala:117)
at 
akka.persistence.cassandra.query.QueryActorPublisher$$anonfun$idle$1.applyOrElse(QueryActorPublisher.scala:101)
at akka.actor.Actor$class.aroundReceive(Actor.scala:484)
at 
akka.persistence.cassandra.query.QueryActorPublisher.akka$stream$actor$ActorPublisher$$super$aroundReceive(QueryActorPublisher.scala:45)
at 
akka.stream.actor.ActorPublisher$class.aroundReceive(ActorPublisher.scala:270)
at 
akka.persistence.cassandra.query.QueryActorPublisher.aroundReceive(QueryActorPublisher.scala:45)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at 

Re: [akka-user] Akka Stream

2016-08-04 Thread Konrad Malawski
That's correct - streams are fully typed and we're pretty proud of that :)

One difference I'd like to highlight though is that Streams don't directly
"replace" Actors.
In many cases they do, for local processing pipelines etc. However Streams
(any reactive streams implementation, in fact) don't solve the distribution
aspect – that's where Actors excel.

By distribution I mean features like Cluster Sharding for example:
http://doc.akka.io/docs/akka/current/scala.html
So all distributed systems bits are still best served by Actors, however
all their local bits can often be solved by Streams.

Hope this helps.
You may also like this talk from Scala Matsuri, where I talk a bit around
some of these questions:
https://www.youtube.com/watch?v=mlli4LCLmzM
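
To make the type-safety point concrete, a tiny sketch (the names are illustrative, not from this thread):

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system = ActorSystem("typed-pipeline")
implicit val materializer = ActorMaterializer()

// the flow's element type is fixed at Int; feeding it Strings is a compile-time error,
// whereas an actor's receive is effectively Any => Unit
val double: Flow[Int, Int, NotUsed] = Flow[Int].map(_ * 2)

Source(1 to 5).via(double).runWith(Sink.foreach(println))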

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 4 August 2016 at 18:13:48, Mike Khan (mike.k...@hmrc.gov.uk) wrote:

Thanks Konrad.

I'm looking into using Akka Streams because (correct me if I am wrong) I
can achieve a higher level of type safety than using Akka Actors directly.
The receive method of an Actor can pretty much take any type which I find
worrying. Having looked into Akka Streams I realized that the an equivalent
Akka Stream implementation can be designed which is equivalent to the the
Akka Actor design in terms of end result. But the beauty with the Akka
Stream approach is that I can't just input just any type. I can only input
the one that is defined by the source.

On Thursday, 4 August 2016 13:57:28 UTC+1, Konrad Malawski wrote:
>
> Yes, this is the right place.
> Announcements are here, as well as general community discussions.
>
> For news only you may want to subscribe to: akka.io/news
> And for blogs from the core team there's akka.io/blog
>
> --
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 4 August 2016 at 14:08:45, Mike Khan (mike...@hmrc.gov.uk )
> wrote:
>
> Hi
>
> Is this the correct channel to stay up to date with Akka Streams? I would
> like to keep up to date on the direction and latest changes to the library.
>
> Mike


Re: [akka-user] Akka Stream

2016-08-04 Thread Mike Khan
Thanks Konrad.

I'm looking into using Akka Streams because (correct me if I am wrong) I
can achieve a higher level of type safety than by using Akka Actors directly.
The receive method of an Actor can pretty much take any type, which I find
worrying. Having looked into Akka Streams, I realized that an equivalent
Akka Stream implementation can be designed which matches the Akka Actor
design in terms of end result. But the beauty of the Akka Stream approach is
that I can't just input any type. I can only input the one that is defined by
the source.

On Thursday, 4 August 2016 13:57:28 UTC+1, Konrad Malawski wrote:
>
> Yes, this is the right place.
> Announcements are here, as well as general community discussions.
>
> For news only you may want to subscribe to: akka.io/news
> And for blogs from the core team there's akka.io/blog 
>
> -- 
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 4 August 2016 at 14:08:45, Mike Khan (mike...@hmrc.gov.uk ) 
> wrote:
>
> Hi 
>
> Is this the correct channel to stay up to date with Akka Streams? I would 
> like to keep up to date on the direction and latest changes to the library.
>
> Mike


Re: [akka-user] Re: Multipart Fileupload problem with Akka 2.4.8 and 2.4.9-RC1

2016-08-04 Thread Konrad Malawski
Glad that was it, Scott.
I'd recommend keeping an akka.version property in the Maven properties and
bumping it there each time, instead of repeating the version explicitly in
every dependency.

Hope this helps, happy hakking!
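
The same idea sketched in sbt, since the rest of the thread's code is Scala (the version below is just an example; in Maven the equivalent is an akka.version entry under <properties>, referenced as ${akka.version} in each dependency):

// build.sbt -- declare the Akka version once and reuse it, so akka-stream and
// akka-http can never drift apart
val akkaVersion = "2.4.9-RC1"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream"            % akkaVersion,
  "com.typesafe.akka" %% "akka-http-experimental" % akkaVersion
)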

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 4 August 2016 at 17:53:29, Scott Lunel (scott.lu...@gmail.com) wrote:

Hey Johan,

Thanks for the quick reply.

Looks like you were right. My Maven configuration is rather complicated,
and one of the child projects was not pulling in the correct version of
akka-http.

Such a silly mistake.

On Thursday, August 4, 2016 at 11:30:15 AM UTC-4, Johan Andrén wrote:
>
> Hi Scott,
>
> Seems like you are mixing different versions of akka-http and
> akka-streams. Look over your dependencies!
>
> --
> Johan
> Akka Team
>
> On Thursday, August 4, 2016 at 5:28:56 PM UTC+2, Scott Lunel wrote:
>>
>> Hello everyone,
>>
>>
>> I seem to be having a problem with Multipart file upload since Akka
>> 2.4.8. It works perfectly fine in 2.4.7.
>>
>> I've google searched for anything related to this and haven't found much,
>> so I've decided to post here.
>>
>> Code to reproduce:
>>
>> val upload = path("upload") {
>> post {
>>   fileUpload("file") {
>> case (fileInfo, bytes) ⇒
>>   complete("Done")
>>   }
>> }
>> }
>>
>>
>> The exception:
>>
>> Uncaught error from thread [toplevel-akka.actor.default-dispatcher-10]
>> shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for
>> ActorSystem[toplevel]
>> java.lang.NoSuchMethodError: akka.stream.ActorMaterializer$
>> .downcast(Lakka/stream/Materializer;)Lakka/stream/ActorMaterializer;
>> at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$
>> anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$
>> apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
>> at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$
>> anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$
>> apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
>> at scala.Option.getOrElse(Option.scala:121)
>> at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$
>> anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(
>> MultipartUnmarshallers.scala:78)
>> at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$
>> anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(
>> MultipartUnmarshallers.scala:71)
>> at akka.http.scaladsl.unmarshalling.Unmarshaller$$
>> anon$1.apply(Unmarshaller.scala:52)
>> at akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshall
>> ers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$
>> 1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
>> at akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshall
>> ers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$
>> 1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
>> at akka.http.scaladsl.unmarshalling.Unmarshaller$$
>> anon$1.apply(Unmarshaller.scala:52)
>>
>>
>> This appears to be throwing an exception when the code hits:
>>
>> entity(as[Multipart.FormData])
>>
>> I've written my own custom code to do multipart form file uploads and I
>> initially suspected it was a problem on my end. However, after testing the
>> above code (using one of the file upload directives), this error is being
>> thrown in the akka http source.
>>
>> Is anyone else having this issue? I can't seem to get multipart file
>> uploads to work in 2.4.8 and 2.4.9-RC1.
>>
>> Any help / information would be greatly appreciated.
>>
>>
>> Regards,
>>
>> Scott.
>>


[akka-user] Re: Multipart Fileupload problem with Akka 2.4.8 and 2.4.9-RC1

2016-08-04 Thread Scott Lunel
Hey Johan,

Thanks for the quick reply.

Looks like you were right. My Maven configuration is rather complicated, 
and one of the child projects was not pulling in the correct version of 
akka-http.

Such a silly mistake.

On Thursday, August 4, 2016 at 11:30:15 AM UTC-4, Johan Andrén wrote:
>
> Hi Scott, 
>
> Seems like you are mixing different versions of akka-http and 
> akka-streams. Look over your dependencies!
>
> --
> Johan
> Akka Team
>
> On Thursday, August 4, 2016 at 5:28:56 PM UTC+2, Scott Lunel wrote:
>>
>> Hello everyone,
>>
>>
>> I seem to be having a problem with Multipart file upload since Akka 
>> 2.4.8. It works perfectly fine in 2.4.7.
>>
>> I've google searched for anything related to this and haven't found much, 
>> so I've decided to post here.
>>
>> Code to reproduce:
>>
>> val upload = path("upload") {
>> post {
>>   fileUpload("file") {
>> case (fileInfo, bytes) ⇒
>>   complete("Done")
>>   }
>> }
>> }
>>
>>
>> The exception:
>>
>> Uncaught error from thread [toplevel-akka.actor.default-dispatcher-10] 
>> shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for 
>> ActorSystem[toplevel]
>> java.lang.NoSuchMethodError: 
>> akka.stream.ActorMaterializer$.downcast(Lakka/stream/Materializer;)Lakka/stream/ActorMaterializer;
>> at 
>> akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
>> at 
>> akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
>> at scala.Option.getOrElse(Option.scala:121)
>> at 
>> akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(MultipartUnmarshallers.scala:78)
>> at 
>> akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(MultipartUnmarshallers.scala:71)
>> at 
>> akka.http.scaladsl.unmarshalling.Unmarshaller$$anon$1.apply(Unmarshaller.scala:52)
>> at 
>> akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshallers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
>> at 
>> akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshallers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
>> at 
>> akka.http.scaladsl.unmarshalling.Unmarshaller$$anon$1.apply(Unmarshaller.scala:52)
>>
>>
>> This appears to be throwing an exception when the code hits:
>>
>> entity(as[Multipart.FormData])
>>
>> I've written my own custom code to do multipart form file uploads and I 
>> initially suspected it was a problem on my end. However, after testing the 
>> above code (using one of the file upload directives), this error is being 
>> thrown in the akka http source.
>>
>> Is anyone else having this issue? I can't seem to get multipart file 
>> uploads to work in 2.4.8 and 2.4.9-RC1.
>>
>> Any help / information would be greatly appreciated.
>>
>>
>> Regards,
>>
>> Scott.
>>
>



[akka-user] Multipart Fileupload problem with Akka 2.4.8 and 2.4.9-RC1

2016-08-04 Thread Scott Lunel
Hello everyone,


I seem to be having a problem with Multipart file upload since Akka 2.4.8. 
It works perfectly fine in 2.4.7.

I've google searched for anything related to this and haven't found much, 
so I've decided to post here.

Code to reproduce:

val upload = path("upload") {
  post {
    fileUpload("file") {
      case (fileInfo, bytes) ⇒
        complete("Done")
    }
  }
}


The exception:

Uncaught error from thread [toplevel-akka.actor.default-dispatcher-10] 
shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for 
ActorSystem[toplevel]
java.lang.NoSuchMethodError: 
akka.stream.ActorMaterializer$.downcast(Lakka/stream/Materializer;)Lakka/stream/ActorMaterializer;
at 
akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
at 
akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$2.apply(MultipartUnmarshallers.scala:78)
at scala.Option.getOrElse(Option.scala:121)
at 
akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(MultipartUnmarshallers.scala:78)
at 
akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$multipartUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(MultipartUnmarshallers.scala:71)
at 
akka.http.scaladsl.unmarshalling.Unmarshaller$$anon$1.apply(Unmarshaller.scala:52)
at 
akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshallers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
at 
akka.http.scaladsl.unmarshalling.LowerPriorityGenericUnmarshallers$$anonfun$messageUnmarshallerFromEntityUnmarshaller$1$$anonfun$apply$1$$anonfun$apply$2.apply(GenericUnmarshallers.scala:20)
at 
akka.http.scaladsl.unmarshalling.Unmarshaller$$anon$1.apply(Unmarshaller.scala:52)


This appears to be throwing an exception when the code hits:

entity(as[Multipart.FormData])

I've written my own custom code to do multipart form file uploads, and I 
initially suspected it was a problem on my end. However, after testing the 
above code (using one of the file upload directives), I can see the error is 
being thrown inside the akka-http source itself.

Is anyone else having this issue? I can't seem to get multipart file 
uploads to work in 2.4.8 and 2.4.9-RC1.

Any help / information would be greatly appreciated.


Regards,

Scott.

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka Stream

2016-08-04 Thread Konrad Malawski
Yes, this is the right place.
Announcements are here, as well as general community discussions.

For news only you may want to subscribe to: akka.io/news
And for blogs from the core team there's akka.io/blog

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 4 August 2016 at 14:08:45, Mike Khan (mike.k...@hmrc.gov.uk) wrote:

Hi

Is this the correct channel to stay up to date with Akka Streams? I would
like to keep up to date on the direction and latest changes to the library.

Mike
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups
"Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Akka Stream

2016-08-04 Thread Mike Khan
Hi

Is this the correct channel to stay up to date with Akka Streams? I would 
like to keep up to date on the direction and latest changes to the library.

Mike

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Re: No configuration setting found for key 'akka.version' when running jar-with-dependencies

2016-08-04 Thread Johan Andrén
Hi Dean,

You must make sure all the reference.conf files are concatenated into one 
file if you want to put all the akka modules inside one jar like that.

This is described in the docs you linked to:
http://doc.akka.io/docs/akka/current/general/configuration.html#When_using_JarJar__OneJar__Assembly_or_any_jar-bundler
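
For what it's worth, a minimal sketch of the same idea for an sbt build (the 
question uses Maven's jar-with-dependencies; the linked page covers that 
jar-bundler case) -- the point is simply that every module's reference.conf has 
to be concatenated rather than overwritten when bundling:

```
// build.sbt, assuming the sbt-assembly plugin is used
assemblyMergeStrategy in assembly := {
  case "reference.conf" => MergeStrategy.concat // keeps akka.version and all other defaults
  case other =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(other)
}
```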

--
Johan
Akka Team

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Stashing vs Scala Collection

2016-08-04 Thread roberto . garcia . teodoro

Hi all.


I've been using akka in my application, but I've encountered a problem 
managing my resources.

I have an actor which controls access to 3 pools of a limited resource. 
When one of these pools is depleted, I need to stash future requests for 
that pool while still serving requests for the others.

When one of the items in a pool is freed, I call unstashAll to recover the 
pending requests for that resource.

Should I keep using unstashAll (with the inconvenience of re-reading messages 
that I'm only going to stash again), or should I use some Scala collection 
(presumably a queue) where I enqueue the messages until I can serve them?
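
A minimal sketch of the stash-based variant I have in mind (the names and pool 
sizes are made up for illustration):

```
import akka.actor.{Actor, Stash}

case class Request(pool: Int)
case class Freed(pool: Int)

class PoolManager extends Actor with Stash {
  private val available = Array(3, 3, 3) // remaining capacity per pool (illustrative)

  def receive: Receive = {
    case Request(p) if available(p) > 0 =>
      available(p) -= 1
      sender() ! s"granted from pool $p"
    case Request(_) =>
      stash() // that pool is depleted: defer the request
    case Freed(p) =>
      available(p) += 1
      unstashAll() // replays *every* stashed message, including requests for pools that are still empty
  }
}
```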


Thank you in advance :)

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka http vs Spray performance

2016-08-04 Thread Владимир Морозов
I performed some tests with Akka 2.4.8:

```
> wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency19.19ms   86.05ms   1.20s96.02%
Req/Sec 1.53k   716.29 4.49k60.40%
  435786 requests in 10.08s, 61.92MB read
Requests/sec:  43221.21
Transfer/sec:  6.14MB
> wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency20.33ms   95.59ms   1.13s96.66%
Req/Sec 1.95k   723.17 8.15k75.32%
  557516 requests in 10.06s, 79.22MB read
Requests/sec:  55426.85
Transfer/sec:  7.88MB
> wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 8.83ms   37.21ms 614.55ms   96.12%
Req/Sec 1.76k   647.83 5.66k71.19%
  519199 requests in 10.07s, 73.78MB read
Requests/sec:  51580.71
Transfer/sec:  7.33MB
```

And then with Akka 2.4.9-RC1:

```
> wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency10.29ms   42.81ms 685.72ms   95.91%
Req/Sec 1.65k   721.12 5.46k61.92%
  478914 requests in 10.07s, 69.88MB read
Requests/sec:  47574.64
Transfer/sec:  6.94MB

>  wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 8.63ms   35.87ms 556.24ms   96.45%
Req/Sec 1.81k   663.09 6.11k70.47%
  534884 requests in 10.07s, 78.05MB read
Requests/sec:  53123.50
Transfer/sec:  7.75MB

> wrk -t30 -c64 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
Running 10s test @ 
http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
  30 threads and 64 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 4.56ms   11.40ms 205.91ms   90.97%
Req/Sec 1.91k   556.78 5.98k62.47%
  569945 requests in 10.06s, 83.16MB read
Requests/sec:  56663.71
Transfer/sec:  8.27MB
```

On Thursday, 18 February 2016 at 23:11:27 UTC+3, rkuhn wrote:
>
> Wow, this is quite interesting. I take it that your machine is a bit 
> slower than the original poster’s machine, because the Spray results are 
> ca. 30% lower.
>
> Thanks for sharing!
>
> Regards,
>
> Roland
>
> On 18 Feb 2016, at 20:52, Владимир Морозов wrote:
>
> I took the server code from the first message and ran both benchmarks on 
> Akka-http 2.4.2.
>
> My results: akka-http serves more connections than Spray 1.3.3. Each server 
> was started in Run mode from IntelliJ IDEA 15 CE with root logging level 
> INFO. I ran the wrk command three times without restarting the server 
> (because we are on the JVM and the JIT needs to warm up).
>
> *Akka http 2.4.2*
>
> > wrk -t30 -c64 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
> Running 10s test @ 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
>   30 threads and 64 connections
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency20.63ms   99.19ms   1.18s96.17%
> Req/Sec 1.49k   770.37 2.95k61.72%
>   420300 requests in 10.03s, 59.72MB read
> Requests/sec:  41886.83
> Transfer/sec:  5.95MB
>
> > wrk -t30 -c64 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
> Running 10s test @ 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
>   30 threads and 64 connections
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency11.67ms   62.36ms 910.17ms   97.23%
> Req/Sec 2.13k   618.76 6.12k77.79%
>   619516 requests in 10.07s, 88.03MB read
> Requests/sec:  61519.04
> Transfer/sec:  8.74MB
>
> > wrk -t30 -c64 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
> Running 10s test @ 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
>   30 threads and 64 connections
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency 7.20ms   30.98ms 529.91ms   95.50%
> Req/Sec 2.07k   583.44 7.44k71.47%
>   613762 requests in 10.06s, 87.21MB read
> Requests/sec:  61006.76
> Transfer/sec:  8.67MB
>
> *Spray 1.3.3*
>
> > wrk -t30 -c64 
> http://127.0.0.1:9000/dictionaries/hello/suggestions?ngr=hond
> Running 10s test @ 
> 

[akka-user] Why application.conf doesn't affect substitutions in reference.conf

2016-08-04 Thread BlueEyed Hush
Hello,

I think I don't understand the philosophy behind Typesafe Config, and I'd like 
to ask for your help with that.
The setup I thought was natural for Config is:
* My libraries have reference.conf files with default values
* My application includes an application.conf which overrides those values
But everything breaks because overrides from application.conf don't affect 
substitutions in reference.conf.

Example:
I've got a library which connects to a service. The URLs are placed in the 
reference.conf file, and they use ${service.url}:

service {
  url = "http://XXX;
  schema {
url = ${service.url}/schema
get.url = ${service.schema.url}/get 
put.url = ${service.schema.url}/put
  }

}

Now in my application.conf I override service.url, but it doesn't work: the 
substitutions aren't re-resolved, so the derived URLs are wrong.
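
A minimal sketch (the http://YYY override value is made up for illustration) of 
what I observe: the override is visible on service.url itself, but the 
substitutions that were already resolved inside reference.conf keep the old value:

```
// application.conf in my app contains:  service.url = "http://YYY"
import com.typesafe.config.ConfigFactory

object SubstitutionCheck extends App {
  val config = ConfigFactory.load() // application.conf layered over the library's reference.conf
  println(config.getString("service.url"))        // "http://YYY" -- the override is applied
  println(config.getString("service.schema.url")) // still based on "http://XXX" -- substitution unaffected
}
```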


How should I structure my configuration to handle this scenario?


Best wishes,

Chris

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka Cluster (with Sharding) not working without auto-down-unreachable-after

2016-08-04 Thread Konrad Malawski
Just to re-affirm what Justin wrote there.

Auto downing is "auto". It's dumb. That's why it's not safe.
The safer automatic downing modes are described in
doc.akka.io/docs/akka/rp-16s01p05/scala/split-brain-resolver.html
Yes, that's a commercial thing.

If you don't want to use these, use EC2's APIs - they can tell you about
instance state like that.
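
A minimal sketch (the wiring and address details are assumptions, not something 
from this thread) of that manual-downing idea: once your own EC2/ECS polling has 
confirmed an instance is really gone, mark that member as down yourself:

```
import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster

object ManualDowning {
  // Call this after your EC2/ECS check has confirmed the instance was terminated.
  def downTerminatedInstance(system: ActorSystem, host: String, port: Int): Unit = {
    val address = Address("akka.tcp", system.name, host, port) // classic remoting transport
    Cluster(system).down(address)
  }
}
```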

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 4 August 2016 at 04:00:34, Justin du coeur (jduco...@gmail.com) wrote:

The keyword here is "auto".  Autodowning is an *incredibly braindead*
algorithm for dealing with nodes coming out of service, and if you use it
in production you more or less guarantee disaster, because that algorithm
can't cope with cluster partition.  You *do* need to deal with downing, but
you have to get something smarter than that.

Frankly, if you're already hooking into AWS, I *suspect* the best approach
is to leverage that -- when a node goes offline, you have some code to
detect that through the ECS APIs, react to it, and manually down that node.
 (I'm planning on something along those lines for my system, but haven't
actually tried yet.)  But whether you do that or something else, you've got
to add *something* that does downing.

I believe the official party line is "Buy a Lightbend Subscription",
through which you can get their Split Brain Resolver, which is a fairly
battle-hardened module for dealing with this problem.  That's not strictly
necessary, but you *do* need to have a reliable solution...

On Wed, Aug 3, 2016 at 8:42 PM, Eric Swenson  wrote:

> We have an akka-cluster/sharding application deployed an AWS/ECS, where
> each instance of the application is a Docker container.  An ECS service
> launches N instances of the application based on configuration data.  It is
> not possible to know, for certain, the IP addresses of the cluster
> members.  Upon startup, before the AkkaSystem is created, the code
> currently polls AWS and determines the IP addresses of all the Docker hosts
> (which potentially could run the akka application).  It sets these IP
> addresses as the seed nodes before bringing up the akka cluster system. The
> configuration for these has, up until yesterday always included the
> akka.cluster.auto-down-unreachable-after configuration setting.  And it has
> always worked.  Furthermore, it supports two very critical requirements:
>
> a) an instance of the application can be removed at any time, due to
> scaling or rolling updates
> b) an instance of the application can be added at any time, due to scaling
> or rolling updates
>
> On the advice of an Akka expert on the Gitter channel, I removed the
> auto-down-unreachable-after setting, which, as documented, is dangerous for
> production.  As a result the system no longer supports rolling updates.  A
> rolling update occurs thus:  a new version of the application is deployed
> (a new ECS task definition is created with a new Docker image).  The ECS
> service launches a new task (Docker container running on an available host)
> and once that container becomes stable, it kills one of the remaining
> instances (cluster members) to bring the number of instances to some
> configured value.
>
> When this happens, akka-cluster becomes very unhappy and becomes
> unresponsive.  Without the auto-down-unreachable-after setting, it keeps
> trying to talk to the old cluster member, which is no longer present.  It
> appears to NOT recover from this.  There is a constant barrage of messages
> of the form:
>
> [DEBUG] [08/04/2016 00:19:27.126]
> [ClusterSystem-cassandra-plugin-default-dispatcher-27]
> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path
> sequence [/system/sharding/ExperimentInstance#-389574371] failed
> [DEBUG] [08/04/2016 00:19:27.140]
> [ClusterSystem-cassandra-plugin-default-dispatcher-27]
> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path
> sequence [/system/sharding/ExperimentInstance#-389574371] failed
> [DEBUG] [08/04/2016 00:19:27.142]
> [ClusterSystem-cassandra-plugin-default-dispatcher-27]
> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path
> sequence [/system/sharding/ExperimentInstance#-389574371] failed
> [DEBUG] [08/04/2016 00:19:27.143]
> [ClusterSystem-cassandra-plugin-default-dispatcher-27]
> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path
> sequence [/system/sharding/ExperimentInstance#-389574371] failed
> [DEBUG] [08/04/2016 00:19:27.143]
> [ClusterSystem-cassandra-plugin-default-dispatcher-27]
> [akka.actor.LocalActorRefProvider(akka://ClusterSystem)] resolve of path
> sequence [/system/sharding/ExperimentInstance#-389574371] failed
>
> and of the form:
>
> [WARN] [08/04/2016 00:19:16.787]
> [ClusterSystem-akka.actor.default-dispatcher-9] [akka.tcp://
> ClusterSystem@10.0.3.103:2552/system/sharding/ExperimentInstance] Retry
> request for shard [5] homes from