For a cluster that size, having the job manager also be a task manager is not 
recommended.
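
As a minimal sketch (for a Flink standalone cluster; the host names below are placeholders, and the file layout follows Flink's standalone setup), keeping the master out of the slaves file is what prevents it from also running a TaskManager:

```
# conf/masters -- the JobManager host only
master-host:8081

# conf/slaves -- worker hosts only; master-host is deliberately absent,
# so no TaskManager is started on the JobManager node
worker-1
worker-2

# conf/flink-conf.yaml (excerpt)
jobmanager.rpc.address: master-host
taskmanager.numberOfTaskSlots: 8
```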

Michael

> On Apr 26, 2018, at 11:47 AM, Makis Pap <makisnt...@gmail.com> wrote:
> 
> OK Michael!
> 
> I will look into it and will get back to you! Thanks for the help. I agree 
> that the parallelism = 8 limit is quite suspicious.
> 
> Jps? Meaning?
> 
> Oh I should mention that the JobManager node is also a TaskManager.
> 
> Best,
> Max
> 
>> On 27 Apr 2018, at 01:39, TechnoMage <mla...@technomage.com> wrote:
>> 
>> Check that you have slaves and masters set correctly on all machines, in 
>> particular on the one submitting jobs. Make sure that the machine submitting 
>> the job is talking to the correct job manager (jobmanager.rpc.address). It 
>> really sounds like you are somehow submitting jobs to only one TaskManager.
>> 
>> You should also use jps to verify that only one JobManager is running and 
>> that the worker machines have only a TaskManager running.
>> 
>> Michael
>> 
>>> On Apr 26, 2018, at 11:34 AM, Makis Pap <makisnt...@gmail.com> wrote:
>>> 
>>> So what should the correct configs be then?
>>> 
>>> I have set numOfSlotsPerTaskManager = 8, which is reasonable as each machine 
>>> has 8 CPUs.
>>> 
>>> Best,
>>> Makis
>>> 
>>> On Fri, 27 Apr 2018, 01:26 TechnoMage <mla...@technomage.com> wrote:
>>> You need to verify that your configs are correct. Check that the local 
>>> machine sees all the task managers; that is the most likely reason it would 
>>> reject a higher parallelism. I use a Java program to submit to a 3-node, 
>>> 18-slot cluster without issue, on a job with parallelism 18. I have not used 
>>> the command line to do this, however.
>>> 
>>> Michael
>>> 
>>> > On Apr 26, 2018, at 11:16 AM, m@xi <makisnt...@gmail.com> wrote:
>>> > 
>>> > No man. I have 17 TaskManagers, each with 8 slots.
>>> > 
>>> > Do you think it is better to have 8 TaskManagers (1 slot each)?
>>> > 
>>> > Best,
>>> > Max
>>> > 
>>> > 
>>> > 
>>> > --
>>> > Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>>> 
>> 
> 
