Okay, so it was a configuration mistake on my part. But the
start-cluster.sh script still won't work for me: it only starts the
JobManager on the master node. As a workaround I had to start TaskManagers
manually on every node, and then everything worked fine. Is anyone
familiar with this issue?
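
For reference, the workaround looked roughly like this (a sketch only;
/opt/flink is a placeholder for the actual install path, and both helper
scripts ship with the standard Flink distribution):

    # on the master node: bring up the JobManager only
    /opt/flink/bin/jobmanager.sh start cluster

    # on each worker node: start a TaskManager by hand
    /opt/flink/bin/taskmanager.sh start

    # verify on every node that the expected JVMs are running
    jps | grep -E 'JobManager|TaskManager'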

On Wed, May 4, 2016 at 1:33 PM, Punit Naik <naik.puni...@gmail.com> wrote:

> Passwordless SSH has been set up across all the machines. And when I
> execute the start-cluster.sh script, I can see the master logging into the
> slaves, but it does not start anything. It just logs in and logs out.
>
> I followed the setup documentation on the official site:
>
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
>
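> For reference, a quick manual check of what start-cluster.sh does under
> the hood (slave1 and /opt/flink are placeholders for the real host name
> and install path):
>
>     # should print the slave's hostname without asking for a password
>     ssh slave1 'hostname'
>
>     # launch a TaskManager remotely, the way start-cluster.sh would
>     ssh slave1 '/opt/flink/bin/taskmanager.sh start'
>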
> On Wed, May 4, 2016 at 12:43 PM, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> I think your slaves didn't come up... Have you configured password-less
>> SSH login between the master node (the one running start-cluster.sh) and
>> the task managers (listed in the conf/slaves file)?
>>
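>> A minimal sketch of that setup, assuming the workers are named slave1
>> and slave2 (placeholders; use your actual host names):
>>
>>     # conf/slaves on the master: one TaskManager host per line
>>     slave1
>>     slave2
>>
>>     # on the master: generate a key (if needed) and copy it to each slave
>>     ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
>>     ssh-copy-id slave1
>>     ssh-copy-id slave2
>>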
>> Best,
>> Flavio
>>
>> On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
>> balaji.rajagopa...@olacabs.com> wrote:
>>
>>> Which Flink documentation were you following to set up your cluster?
>>> Can you point to it?
>>>
>>> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <naik.puni...@gmail.com>
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> I did all the settings required for the cluster setup. But when I ran
>>>> the start-cluster.sh script, it only started one JobManager, on the
>>>> master node. Logs are written only on the master node; the slaves don't
>>>> have any logs. And when I ran a program it said:
>>>>
>>>> Resources available to scheduler: Number of instances=0, total number
>>>> of slots=0, available slots=0
>>>>
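>>>> In case it is relevant: the task managers locate the master through
>>>> jobmanager.rpc.address in conf/flink-conf.yaml, so on a multi-node
>>>> setup it must be the master's host name rather than localhost. A
>>>> minimal excerpt (flink-master is a placeholder):
>>>>
>>>>     jobmanager.rpc.address: flink-master
>>>>     jobmanager.rpc.port: 6123
>>>>     taskmanager.numberOfTaskSlots: 1
>>>>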
>>>> Can anyone help please?
>>>>
>>>> --
>>>> Thank You
>>>>
>>>> Regards
>>>>
>>>> Punit Naik
>>>>
>>>
>>
>
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>



-- 
Thank You

Regards

Punit Naik