I am almost certain that this is the cause:
13:08:28.723UTC [test] DEBUG akka.contrib.pattern.ShardRegion
akka.tcp://Denis@172.17.0.22:2551/user/sharding/Zone - Request shard [-10]
home
13:08:28.724UTC [test] DEBUG a.s.Serialization(akka://Denis)
akka.serialization.Serialization(akka://Denis)
I upped the logging levels to get a little more info... it seems as though this
is happening when attempting to send messages through cluster sharding.
In the following log snippet, I see this "endpointWriter - received local
message RemoteMessage: [null]"
In this interaction, 172.17.0.27 is the "node
You need to use another config to correctly bind the container IP, because
Docker uses a NAT network. See:
http://doc.akka.io/docs/akka/snapshot/additional/faq.html#Why_are_replies_not_received_from_a_remote_actor_
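For reference, the relevant remoting settings look roughly like this (a sketch, with example addresses; on Akka 2.3 `hostname` itself must be set to the externally reachable address, while the `bind-*` keys are only available from Akka 2.4):

```
akka.remote.netty.tcp {
  # Address other cluster nodes use to reach this node (outside the NAT)
  hostname = "172.17.0.22"
  port     = 2551

  # Akka 2.4+ only: the address actually bound inside the container
  bind-hostname = "0.0.0.0"
  bind-port     = 2551
}
```

With this split, the node advertises the NAT-visible address in its cluster gossip while still binding to the container's own interface.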
On Friday, September 18, 2015 at 10:12:33 PM UTC-3, Paul Cleary wrote:
>
> I am struggling
At the Scala by the Bay meetup this week, Comcast presented how the Akka
platform has been applied successfully at scale to continuous delivery and
data center automation.
This was followed by a presentation of SOA on Steroids, which shows how the
Akka platform can be used to implement the SOA pa
Paul,
Try looking at my repo here: https://github.com/gzoller/docker-exp
Check out the cluster branch. It shows how to use the new dual-binding
features in Akka 2.4 to get it working w/Docker.
In my example you can either pass in host/port info for Akka, or it has some
facilities to infer this i
I am actually binding to a specific ip address.
Here is the config section for remoting:
netty.tcp {
  hostname = ${denis.app.host}
  port = ${denis.app.port}
}
And the actual app host is an environment variable that is pulled when the
app starts inside docker. The ENTRYPOINT in the Docker Cont
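One common way to wire this up (a sketch with hypothetical file and jar names; the actual ENTRYPOINT is truncated above) is to resolve the container's address at startup and hand it to the JVM as system properties, which HOCON substitutions like `${denis.app.host}` can then pick up:

```
#!/bin/sh
# entrypoint.sh (hypothetical): resolve this container's IP
# and pass it to the JVM so the config substitutions resolve.
HOST_IP=$(hostname -i)
exec java -Ddenis.app.host="$HOST_IP" \
          -Ddenis.app.port=2551 \
          -jar /app/denis.jar
```

Note that on Akka 2.3 this IP is both the bind address and the address advertised to the rest of the cluster, so it must be reachable from the other nodes, not just valid inside the container.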
Thanks, will check it out...but...ugh...I am on akka 2.3
On Saturday, September 19, 2015 at 9:06:20 PM UTC-4, tigerfoot wrote:
>
> Paul,
>
> Try looking at my repo here: https://github.com/gzoller/docker-exp
>
> Check out the cluster branch. It shows how to use the new dual-binding
> features in
To summarize, here is my setup:
App ---> Cluster Singleton ---> Cluster sharded actor
In the failure scenario above, this looks like:
App(on seed) ---> Cluster Singleton(on seed) --> Cluster sharded actor (on
node1, separate node)
So, my functional tests work fine when everything is in the same
Is it possible there is some kind of issue using AtLeastOnceDelivery here?
I am using at-least-once delivery between the cluster singleton and the
cluster shard
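For context, a minimal sketch of that pattern (hypothetical message names, using the Akka 2.3 API where `deliver` takes an `ActorPath`, so the singleton typically delivers via its local shard region's path):

```scala
import akka.actor.ActorPath
import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

// Hypothetical protocol: the sharded entity replies with Confirm(deliveryId)
case class Envelope(deliveryId: Long, payload: String)
case class Confirm(deliveryId: Long)
case class MsgSent(payload: String)
case class MsgConfirmed(deliveryId: Long)

class SingletonSender(shardRegionPath: ActorPath)
    extends PersistentActor with AtLeastOnceDelivery {

  override def persistenceId = "singleton-sender"

  def receiveCommand = {
    case payload: String =>
      persist(MsgSent(payload)) { evt =>
        // Redelivered until confirmDelivery(deliveryId) is called
        deliver(shardRegionPath, id => Envelope(id, evt.payload))
      }
    case Confirm(id) =>
      persist(MsgConfirmed(id))(evt => confirmDelivery(evt.deliveryId))
  }

  def receiveRecover = {
    case MsgSent(p)       => deliver(shardRegionPath, id => Envelope(id, p))
    case MsgConfirmed(id) => confirmDelivery(id)
  }
}
```

One thing worth checking with this setup: if the remote shard never receives (or never confirms) the message, AtLeastOnceDelivery keeps redelivering silently up to its warning threshold, which can make an underlying remoting or binding problem look like a sharding issue.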
Here are some more logs that are happening on the seed node that is sending
the message to the shard on the "other" node:
-- so, it