(re-adding user mailing list)

A non-serializable function object should cause the job to fail, but it
should not cause a parallelism setting to be ignored.
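
As an aside, the usual way to use a non-serializable object inside a user
function is to keep it out of the serialized function state: mark the field
transient and create it in open(), which runs on the TaskManager after the
function has been deserialized. A minimal sketch (MyClient stands in for any
non-serializable helper):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    public class MyFlatMap extends RichFlatMapFunction<String, String> {

        // Not shipped with the function; rebuilt per parallel instance.
        private transient MyClient client;

        @Override
        public void open(Configuration parameters) {
            client = new MyClient(); // hypothetical non-serializable helper
        }

        @Override
        public void flatMap(String value, Collector<String> out) {
            out.collect(client.process(value));
        }
    }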

This might be a bug. Most users specify the parallelism directly in the
application code (via StreamExecutionEnvironment) or when submitting the
application.
Which version are you using?
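
For reference, setting it in the program looks roughly like this (a minimal
sketch):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ParallelismExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Default parallelism for all operators of this job.
            env.setParallelism(6);

            env.fromElements("a", "b", "c")
               .map(String::toUpperCase)
               .setParallelism(6) // can also be overridden per operator
               .print();

            env.execute("parallelism example");
        }
    }

When submitting with the CLI, "flink run -p 6 <jar>" has the same effect as
the job-wide setting.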

Best, Fabian

2018-04-14 15:07 GMT+02:00 Michael Latta <mla...@technomage.com>:

> Parallelism in config. I think the issue is that some objects used in the
> stream are not serializable (which I just discovered). I am surprised it
> supports that.
>
>
> Michael
>
> On Apr 14, 2018, at 6:12 AM, Fabian Hueske <fhue...@gmail.com> wrote:
>
> Hi,
>
> The number of TaskManagers is irrelevant for the parallelism of a job or
> operator. The scheduler only cares about the number of slots.
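>
> For reference, a minimal flink-conf.yaml sketch (values illustrative):
>
>     taskmanager.numberOfTaskSlots: 6
>     parallelism.default: 6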
>
> How did you set the default parallelism? In the config or in the program /
> StreamExecutionEnvironment?
>
> Best, Fabian
>
>
> TechnoMage <mla...@technomage.com> schrieb am Fr., 13. Apr. 2018, 04:30:
>
>> I am pretty new to Flink.  I have a Flink job that has 10 transforms
>> (mostly CoFlatMap, with some simple filters and key extractors as well).  I
>> have the config set for 6 slots and a default parallelism of 6, but all my
>> stages show parallelism of 1.  Is that because there is only one task
>> manager?  Some of what I have read suggested separate slots were needed to
>> use multiple threads on a single box?  I have read the section in the docs
>> several times and am still not totally sure about the execution model.
>>
>> Michael
>
>
