Hi Patrik,

Thank you for your answer. It really helps me clarify the things I was 
unsure about. In the meantime, I managed to make it work with one seed only.

The reason I'm using only one seed is to save myself some extra work. My 
application is in use, but I'm still developing it and change the code 
often. I run it on the Azure cloud, and the nodes communicate over 
Azure-internal IP addresses, which are not permanent: an instance gets a 
new IP if it is restarted.

I have 1 master node and many workers. The master is used as the seed, and 
I create an image of a worker to auto-scale based on the workload. Each 
time I change the source code, the procedure is as follows:
- update the code on the master and one worker
- create an image from the worker (in Azure, as opposed to Amazon AWS, this 
process terminates the worker instance you were using as a template)
- initialize 2 worker instances
- configure 1 worker to be a seed and leave it running, and configure the 
second worker to use the master and the first worker as seeds
- create an image from the second worker so I can auto-scale further 
workers when I need them.
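
For reference, here is roughly what the seed list of that second worker 
looks like (the first-worker address is a placeholder for the actual 
Azure-internal IP, not my real configuration):

```hocon
akka {
  cluster {
    # Master plus the first worker, so instances created from the image
    # can join through either one. Replace with real Azure-internal IPs.
    seed-nodes = [
      "akka.tcp://ClusterSystem@100.71.88.118:2551", # master
      "akka.tcp://ClusterSystem@100.71.96.54:2552"   # first worker (placeholder)
    ]
  }
}
```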

If I have only 1 seed, I don't have to create the image twice, which saves 
me some work and time.

> However, using only one seed node is a single point of failure, so I don't 
> see why you would use that.

What do you mean by "single point of failure"? Does it mean that if I 
initialize node100 with seed-nodes=[node1], and node1 is not reachable, it 
will fail to join the cluster? Is there any other risk?

In order to avoid frequent re-configuration of the nodes and make things as 
dynamic as possible, I thought of exposing a REST service on the master 
node that returns the seed IPs; each worker would then load the seeds 
dynamically at application startup. Do you see any problems with this 
approach?
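
To make the idea concrete, here is a rough sketch of the worker side (the 
/seeds endpoint, its one-host:port-per-line response format, and the 
class/method names are all made up for illustration, not an existing API):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

class SeedFetcher {

    // Turn "host:port" entries returned by the master's REST service into
    // fully qualified Akka addresses for the ClusterSystem actor system.
    static List<String> toSeedAddresses(List<String> hostPorts) {
        List<String> seeds = new ArrayList<String>();
        for (String hp : hostPorts) {
            seeds.add("akka.tcp://ClusterSystem@" + hp);
        }
        return seeds;
    }

    // Fetch one "host:port" per line from the (hypothetical) /seeds endpoint.
    static List<String> fetchSeeds(String url) throws Exception {
        List<String> hostPorts = new ArrayList<String>();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            if (!line.trim().isEmpty()) {
                hostPorts.add(line.trim());
            }
        }
        in.close();
        return hostPorts;
    }

    // On startup the worker would then join programmatically, e.g.:
    //   List<Address> addresses = new ArrayList<Address>();
    //   for (String s : toSeedAddresses(fetchSeeds("http://master/seeds")))
    //       addresses.add(AddressFromURIString.parse(s));
    //   Cluster.get(system).joinSeedNodes(addresses);

    public static void main(String[] args) {
        // Demo with static data, so no master needs to be running.
        List<String> hostPorts = new ArrayList<String>();
        hostPorts.add("100.71.88.118:2551");
        for (String seed : toSeedAddresses(hostPorts)) {
            System.out.println(seed);
        }
    }
}
```

The commented lines show how the fetched list could be handed to Akka's 
programmatic joinSeedNodes instead of the static seed-nodes setting, which 
is what would let the worker image stay generic.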

Thanks,
Zoran

On Monday, 17 February 2014 02:12:00 UTC-8, Patrik Nordwall wrote:
>
> Hi Zoran,
>
> Glad that you got it working.
>
> Using only one seed should be possible, but only one can have itself as 
> seed node.
> E.g.
> node1 may have seed-nodes=[node1]
> node2 may have seed-nodes=[node1]
> node3 may have seed-nodes=[node1]
>
> then, later on, when the cluster is running, you could use any other node 
> as seed
> node100 may have seed-nodes=[node3]
>
> However, using only one seed node is a single point of failure, so I don't 
> see why you would use that.
>
> /Patrik
>
>
>
> On Sat, Feb 15, 2014 at 3:14 AM, Zoran Jeremic <zoran....@gmail.com> 
> wrote:
>
>> Hi,
>>
>> I solved this. It seems the problem was that I was using only one seed. 
>> I thought it wasn't necessary to have 2 seeds for a cluster.
>>
>> Zoran
>>
>>
>> On Thursday, 13 February 2014 16:41:50 UTC-8, Zoran Jeremic wrote:
>>>
>>> Hi Roland,
>>>
>>> Thank you for your advice. I think I tried this and it didn't work for 
>>> some reason, but I don't remember why now. Anyway, I solved this issue by 
>>> dynamically discovering the IP address at node startup and setting up the 
>>> configuration from Java code rather than from application.conf. 
>>> I'm able to connect new worker nodes without having to know their IPs and 
>>> ports. However, shortly after the second worker is connected, all nodes 
>>> are disconnected and the error message is:
>>>
>>> [WARN] [02/13/2014 23:50:25.736] 
>>> [ClusterSystem-akka.remote.default-remote-dispatcher-5] 
>>> [akka.tcp://ClusterSystem@100.71.52.66:41754/system/endpointManager/
>>> reliableEndpointWriter-akka.tcp%3A%2F%2FClusterSystem%
>>> 40100.71.54.43%3A39932-1] Association with remote system [akka.tcp://
>>> ClusterSystem@100.71.54.43:39932] has failed, address is now gated for 
>>> [5000] ms. Reason is: [Association failed with 
>>> [akka.tcp://ClusterSystem@100.71.54.43:39932]].
>>>
>>> [INFO] [02/13/2014 23:50:25.748] 
>>> [ClusterSystem-akka.actor.default-dispatcher-4] 
>>>> [akka://ClusterSystem/deadLetters] Message 
>>>> [com.inextweb.crawler.akka.messages.GeneralJobMessage] 
>>>> from Actor[akka://ClusterSystem/remote/akka.tcp/ClusterSystem@
>>>> 100.71.110.33:2551/user/clusterController/crawlerManager/c2#-993579407] 
>>>> to Actor[akka://ClusterSystem/deadLetters] was not delivered. [1] dead 
>>>> letters encountered. This logging can be turned off or adjusted with 
>>>> configuration settings 'akka.log-dead-letters' and 
>>>> 'akka.log-dead-letters-during-shutdown'.
>>>>
>>>> [WARN] [02/13/2014 23:50:29.399] 
>>>> [ClusterSystem-akka.cluster.cluster-dispatcher-16] 
>>>> [akka.tcp://ClusterSystem@100.71.52.66:41754/system/cluster/core/daemon] 
>>>> Cluster Node [akka.tcp://ClusterSystem@100.71.52.66:41754] - Marking 
>>>> node(s) as UNREACHABLE [Member(address = akka.tcp://ClusterSystem@100.
>>>> 71.54.43:39932, status = Up)]
>>>> [WARN] [02/13/2014 23:50:46.271] 
>>>> [ClusterSystem-akka.remote.default-remote-dispatcher-6] 
>>>> [akka.tcp://ClusterSystem@100.71.52.66:41754/system/endpointManager/
>>>> reliableEndpointWriter-akka.tcp%3A%2F%2FClusterSystem%
>>>> 40100.71.54.43%3A39932-1] Association with remote system [akka.tcp://
>>>> ClusterSystem@100.71.54.43:39932] has failed, address is now gated for 
>>>> [5000] ms. Reason is: [Association failed with 
>>>> [akka.tcp://ClusterSystem@100.71.54.43:39932]].
>>>>
>>>
>>> Do you have any idea what could cause this failure? It doesn't happen 
>>> until I try to connect the second worker.
>>>
>>> Thanks
>>>
>>> On Thursday, 13 February 2014 03:45:15 UTC-8, rkuhn wrote:
>>>>
>>>> Hi Zoran,
>>>>
>>>> on the worker nodes you can configure
>>>>
>>>> hostname=""
>>>> port=0
>>>>
>>>> since they just need to find the master.
>>>>
>>>> Regards,
>>>>
>>>> Roland
>>>>
>>>> On 12 Feb 2014, at 03:17, Zoran Jeremic <zoran....@gmail.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I've implemented an Akka cluster where I have one master node that 
>>>> initializes the cluster in the following way:
>>>>
>>>>
>>>>> AdaptiveLoadBalancingPool pool = new AdaptiveLoadBalancingPool(
>>>>>         MixMetricsSelector.getInstance(), 0);
>>>>> ClusterRouterPoolSettings settings = new ClusterRouterPoolSettings(
>>>>>         totalInstances, maxInstancesPerNode, allowLocalRoutees, useRole);
>>>>> crawlerManager = getContext().actorOf(
>>>>>         new ClusterRouterPool(pool, settings).props(
>>>>>                 Props.create(CrawlerManagerActor.class, getSelf())),
>>>>>         "crawlerManager");
>>>>
>>>>
>>>> It's configured like this:
>>>>
>>>>> akka {
>>>>>     actor {
>>>>>         provider = "akka.cluster.ClusterActorRefProvider"
>>>>>     }
>>>>>     remote {
>>>>>         log-remote-lifecycle-events = off
>>>>>         netty.tcp {
>>>>>             hostname = "100.71.88.118"
>>>>>             port = 2551
>>>>>         }
>>>>>     }
>>>>>     cluster {
>>>>>         seed-nodes = [
>>>>>             "akka.tcp://ClusterSystem@100.71.88.118:2551"
>>>>>         ]
>>>>>     }
>>>>> }
>>>>
>>>>
>>>> And I have a worker node which is configured as:
>>>>
>>>>
>>>>> akka {
>>>>>     actor {
>>>>>         provider = "akka.cluster.ClusterActorRefProvider"
>>>>>     }
>>>>>     remote {
>>>>>         log-remote-lifecycle-events = off
>>>>>         netty.tcp {
>>>>>             hostname = "100.71.96.54"
>>>>>             port = 2552
>>>>>         }
>>>>>     }
>>>>>     cluster {
>>>>>         seed-nodes = [
>>>>>             "akka.tcp://ClusterSystem@100.71.88.118:2551"
>>>>>         ]
>>>>>     }
>>>>> }
>>>>
>>>>
>>>> Each node is on a different instance in the Microsoft Azure cloud, but 
>>>> what I want is to create an image from a worker and, based on that image, 
>>>> create new worker instances when the system load increases, so the 
>>>> hostname and port should be assigned dynamically. However, in the Akka 
>>>> documentation and previous discussions I couldn't find anything that 
>>>> explains how to make this work. Could you give me some reference or 
>>>> describe what has to be done and how to configure my nodes once they're 
>>>> loaded?
>>>>
>>>> Thanks,
>>>> Zoran
>>>>
>>>> -- 
>>>> >>>>>>>>>> Read the docs: http://akka.io/docs/
>>>> >>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
>>>> --- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Akka User List" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to akka-user+...@googlegroups.com.
>>>> To post to this group, send email to akka...@googlegroups.com.
>>>> Visit this group at http://groups.google.com/group/akka-user.
>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>>
>>>>
>>>>
>>>>
>>>> *Dr. Roland Kuhn*
>>>> *Akka Tech Lead*
>>>> Typesafe <http://typesafe.com/> – Reactive apps on the JVM.
>>>> twitter: @rolandkuhn
>>>>  <http://twitter.com/#!/rolandkuhn>
>>>>  
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
> 

