Hi Patrick,

 

Thanks for your answer, but the problem is something else. Before the actor 
starts, it tries to create the shard coordinator. The log of that is:

 

 

[DEBUG] [12/19/2016 14:00:18.735] [main] 
[AkkaSSLConfig(akka://ClusterSystem)] Initializing AkkaSSLConfig 
extension...

[DEBUG] [12/19/2016 14:00:18.738] [main] 
[AkkaSSLConfig(akka://ClusterSystem)] buildHostnameVerifier: created 
hostname verifier: 
com.typesafe.sslconfig.ssl.DefaultHostnameVerifier@36349e29

[DEBUG] [12/19/2016 14:00:19.174] 
[ClusterSystem-akka.actor.default-dispatcher-7] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/IO-TCP/selectors/$a/0] Successfully 
bound to /0:0:0:0:0:0:0:0:9001

[DEBUG] [12/19/2016 14:02:38.837] 
[ClusterSystem-akka.actor.default-dispatcher-7] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/IO-TCP/selectors/$a/0] New connection 
accepted

[DEBUG] [12/19/2016 14:02:39.021] 
[ClusterSystem-akka.actor.default-dispatcher-4] [akka.tcp://
ClusterSystem@172.19.0.3:2551/user/$a] 
[CommandBusSupervisor]==========>>>>Trying message: [<function1>]

[INFO] [12/19/2016 14:02:39.185] 
[ClusterSystem-akka.actor.default-dispatcher-3] [akka.tcp://
ClusterSystem@172.19.0.3:2551/user/$b] CommandBusActorMsg

[DEBUG] [12/19/2016 14:02:39.210] 
[ClusterSystem-akka.actor.default-dispatcher-17] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/sharding/test] Coordinator moved from 
[] to [akka.tcp://ClusterSystem@172.19.0.2:2551]

[DEBUG] [12/19/2016 14:02:39.213] 
[ClusterSystem-akka.remote.default-remote-dispatcher-8] 
[akka.serialization.Serialization(akka://ClusterSystem)] Using 
serializer[akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer] 
for message [akka.cluster.sharding.ShardCoordinator$Internal$Register]

[INFO] [12/19/2016 14:02:39.224] 
[ClusterSystem-akka.actor.default-dispatcher-6] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/sharding/testCoordinator] 
ClusterSingletonManager state change [Start -> Younger]

[DEBUG] [12/19/2016 14:02:39.229] 
[ClusterSystem-akka.actor.default-dispatcher-4] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/sharding/test] Request shard [22] home

[WARN] [12/19/2016 14:02:49.211] 
[ClusterSystem-akka.actor.default-dispatcher-4] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/sharding/test] Trying to register to 
coordinator at [Some(ActorSelection[Anchor(akka.tcp://
ClusterSystem@172.19.0.2:2551/), 
Path(/system/sharding/testCoordinator/singleton/coordinator)])], but no 
acknowledgement. Total [1] buffered messages.

[WARN] [12/19/2016 14:02:51.219] 
[ClusterSystem-akka.actor.default-dispatcher-17] [akka.tcp://
ClusterSystem@172.19.0.3:2551/system/sharding/test] Trying to register to 
coordinator at [Some(ActorSelection[Anchor(akka.tcp://
ClusterSystem@172.19.0.2:2551/), 
Path(/system/sharding/testCoordinator/singleton/coordinator)])], but no 
acknowledgement. Total [1] buffered messages.
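
For context, the shard region behind /system/sharding/test would typically be 
started with something like the following (a sketch of the Akka 2.4 API; 
TestEntity and the extractors are hypothetical placeholders, only the type 
name "test" comes from the logs above):

    import akka.actor.Props
    import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings}

    // Start the "test" shard region on this node; registration with the
    // coordinator is what the warnings above are retrying.
    val testRegion = ClusterSharding(system).start(
      typeName        = "test",
      entityProps     = Props[TestEntity],          // hypothetical entity actor
      settings        = ClusterShardingSettings(system),
      extractEntityId = extractEntityId,            // hypothetical extractors
      extractShardId  = extractShardId)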

 

So, preStart and the “cluster.subscribe(self, classOf[MemberEvent], 
classOf[UnreachableMember])” call are never executed.
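
The preStart I am referring to looks roughly like this (a simplified sketch; 
only the subscription line matters here):

    import akka.actor.Actor
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent.{MemberEvent, UnreachableMember}

    class CommandBusSupervisor extends Actor {
      val cluster = Cluster(context.system)

      // this subscription is the one that is never reached
      override def preStart(): Unit =
        cluster.subscribe(self, classOf[MemberEvent], classOf[UnreachableMember])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive: Receive = {
        case msg => // ...
      }
    }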

 

Do you have another idea?

 

Thank you so much for your help,

 

 

 

Victor

 


On Sunday, December 18, 2016 at 13:53:29 (UTC-5), Patrik Nordwall 
wrote:
>
> Perhaps you haven't joined the cluster?
>
> http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html#Joining_to_Seed_Nodes
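>
> For example, a minimal seed-nodes setting (the addresses here are just 
> illustrative, matching the ones in your logs) would be:
>
>   akka.cluster {
>     seed-nodes = [
>       "akka.tcp://ClusterSystem@172.19.0.2:2551",
>       "akka.tcp://ClusterSystem@172.19.0.3:2551"]
>   }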
>
> On Fri, Dec 16, 2016 at 11:04 PM, Víctor Martínez <vandres...@gmail.com 
> <javascript:>> wrote:
>
>> Hi,
>>
>>  
>>
>> In my test I have two nodes in a cluster. The problem is the sharding 
>> coordinator. When the younger node tries to send a message, it just prints:
>>
>>  
>>
>>  
>>
>> [akka.tcp://ClusterSystem@localhost:2551/system/sharding/latamautos] 
>> Trying to register to coordinator at [None], but no acknowledgement. Total 
>> [2] buffered messages.
>>
>>  
>>
>> And it never answers. But if I send a message to the oldest node, the 
>> sharding coordinator is created and the younger node works fine.
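>>
>> (For reference, in the 2.4 API a node that should host entities starts the 
>> region with ClusterSharding(system).start(...), while a node that only 
>> sends messages can use a proxy instead; the sketch below uses hypothetical 
>> extractors.)
>>
>>   // on a node that only forwards messages, no entities hosted locally
>>   val proxy = ClusterSharding(system).startProxy(
>>     typeName = "latamautos",
>>     role = None,
>>     extractEntityId = extractEntityId,
>>     extractShardId = extractShardId)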
>>
>>  
>>
>> This is the relevant part of my conf file:
>>
>>  
>>
>> # Settings for the ClusterShardingExtension
>>
>> akka.cluster.sharding {
>>
>>  
>>
>>  
>>
>>  
>>
>>   # The extension creates a top level actor with this name in top level 
>> system scope,
>>
>>   # e.g. '/system/sharding'
>>
>>   guardian-name = sharding
>>
>>  
>>
>>   # Specifies that entities run on cluster nodes with a specific role.
>>
>>   # If the role is not specified (or empty) all nodes in the cluster are 
>> used.
>>
>>   role = ""
>>
>>  
>>
>>   # When this is set to 'on' the active entity actors will automatically 
>> be restarted
>>
>>   # upon Shard restart. i.e. if the Shard is started on a different 
>> ShardRegion
>>
>>   # due to rebalance or crash.
>>
>>   remember-entities = on
>>
>>  
>>
>>   # If the coordinator can't store state changes it will be stopped
>>
>>   # and started again after this duration, with an exponential back-off
>>
>>   # of up to 5 times this duration.
>>
>>   coordinator-failure-backoff = 5 s
>>
>>  
>>
>>   # The ShardRegion retries registration and shard location requests to 
>> the
>>
>>   # ShardCoordinator with this interval if it does not reply.
>>
>>   retry-interval = 2 s
>>
>>  
>>
>>   # Maximum number of messages that are buffered by a ShardRegion actor.
>>
>>   buffer-size = 100000
>>
>>  
>>
>>   # Timeout of the shard rebalancing process.
>>
>>   handoff-timeout = 60 s
>>
>>  
>>
>>   # Time given to a region to acknowledge it's hosting a shard.
>>
>>   shard-start-timeout = 10 s
>>
>>  
>>
>>   # If the shard is remembering entities and can't store state changes, it
>>
>>   # will be stopped and then started again after this duration. Any 
>> messages
>>
>>   # sent to an affected entity may be lost in this process.
>>
>>   shard-failure-backoff = 10 s
>>
>>  
>>
>>   # If the shard is remembering entities and an entity stops itself 
>> without
>>
>>   # using passivate, the entity will be restarted after this duration or 
>> when
>>
>>   # the next message for it is received, whichever occurs first.
>>
>>   entity-restart-backoff = 10 s
>>
>>  
>>
>>   # Rebalance check is performed periodically with this interval.
>>
>>   rebalance-interval = 10 s
>>
>>  
>>
>>   # Absolute path to the journal plugin configuration entity that is to be
>>
>>   # used for the internal persistence of ClusterSharding. If not defined
>>
>>   # the default journal plugin is used. Note that this is not related to
>>
>>   # persistence used by the entity actors.
>>
>>   journal-plugin-id = "cassandra-journal"
>>
>>  
>>
>>   # Absolute path to the snapshot plugin configuration entity that is to 
>> be
>>
>>   # used for the internal persistence of ClusterSharding. If not defined
>>
>>   # the default snapshot plugin is used. Note that this is not related to
>>
>>   # persistence used by the entity actors.
>>
>>   snapshot-plugin-id = "cassandra-snapshot-store"
>>
>>  
>>
>>   # Parameter which determines how the coordinator stores its state.
>>
>>   # Valid values are either "persistence" or "ddata".
>>
>>   # The "ddata" mode is experimental, since it depends on the experimental
>>
>>   # module akka-distributed-data-experimental.
>>
>>   # state-store-mode = "persistence"
>>
>>   state-store-mode = "persistence"
>>
>>  
>>
>>   # The shard saves persistent snapshots after this number of persistent
>>
>>   # events. Snapshots are used to reduce recovery times.
>>
>>   snapshot-after = 1000
>>
>>  
>>
>>   # Setting for the default shard allocation strategy
>>
>>   least-shard-allocation-strategy {
>>
>>     # Threshold of how large the difference between most and least number 
>> of
>>
>>     # allocated shards must be to begin the rebalancing.
>>
>>     rebalance-threshold = 3
>>
>>  
>>
>>     # The number of ongoing rebalancing processes is limited to this 
>> number.
>>
>>     max-simultaneous-rebalance = 4
>>
>>   }
>>
>>  
>>
>>   # Timeout for waiting for the initial distributed state (the initial 
>> state will be queried again if the timeout is hit)
>>
>>   # works only for state-store-mode = "ddata"
>>
>>   waiting-for-state-timeout = 5 s
>>
>>  
>>
>>   # Timeout for waiting for an update of the distributed state (the update 
>> will be retried if the timeout is hit)
>>
>>   # works only for state-store-mode = "ddata"
>>
>>   updating-state-timeout = 5 s
>>
>>  
>>
>>   # Settings for the coordinator singleton. Same layout as 
>> akka.cluster.singleton.
>>
>>   coordinator-singleton = ${akka.cluster.singleton}
>>
>>  
>>
>>  
>>
>>   # The shard uses this strategy to determine how to recover the 
>> underlying entity actors. The strategy is only used
>>
>>   # by the persistent shard when rebalancing or restarting. The value can 
>> either be "all" or "constant". The "all"
>>
>>   # strategy starts all the underlying entity actors at the same time. The 
>> "constant" strategy will start the underlying
>>
>>   # entity actors at a fixed rate. The default strategy is "all".
>>
>>   entity-recovery-strategy = "all"
>>
>>  
>>
>>   # Default settings for the constant rate entity recovery strategy
>>
>>   entity-recovery-constant-rate-strategy {
>>
>>     # Sets the frequency at which a batch of entity actors is started.
>>
>>     frequency = 100 ms
>>
>>     # Sets the number of entity actors to be restarted at a particular 
>> interval
>>
>>     number-of-entities = 5
>>
>>   }
>>
>>  
>>
>> }
>>
>>  
>>
>>  
>>
>> I don’t understand what is wrong.
>>
>>  
>>
>> Thanks
>>
>>  
>>
>> Victor
>>
>> -- 
>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>> >>>>>>>>>> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com <javascript:>.
>> To post to this group, send email to akka...@googlegroups.com 
>> <javascript:>.
>> Visit this group at https://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Akka Tech Lead
> Lightbend <http://www.lightbend.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>
