Hi Matthew,

For 1: Beam does not compute the right configuration for the pipeline, so
it's recommended to tune it manually, as you would for a regular Spark job.
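
For example, here is a minimal sketch of submitting a pipeline to the Spark
runner, assuming the Python SDK and a Beam Spark job server already running
at localhost:8099 (the endpoint and the LOOPBACK environment type are
illustrative, not prescriptive). Note that the cluster resources themselves
are tuned on the Spark side, not through these options:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Illustrative endpoint; point this at your own Spark job server.
    options = PipelineOptions([
        "--runner=PortableRunner",
        "--job_endpoint=localhost:8099",
        "--environment_type=LOOPBACK",  # run workers in the launching process
    ])

    with beam.Pipeline(options=options) as p:
        (p
         | "Create" >> beam.Create([1, 2, 3])
         | "Double" >> beam.Map(lambda x: x * 2)
         | "Print" >> beam.Map(print))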

For 2: The recommendation is the same as for a regular Spark job; see the
sketch below.
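
To make "the same as a regular Spark job" concrete, the usual Spark sizing
properties are the ones to tune. The property names below are standard Spark
configuration keys, typically passed to spark-submit as --conf key=value
when launching the job server or the pipeline; the values are placeholders
to size against your machines, not recommendations:

    # Placeholder values; size these to your cluster's nodes and workload.
    spark_conf = {
        "spark.executor.instances": "10",   # total executors across the cluster
        "spark.executor.cores": "4",        # cores per executor
        "spark.executor.memory": "8g",      # heap per executor
        "spark.default.parallelism": "80",  # Spark's tuning guide suggests ~2-3 tasks per core
    }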

Thanks,
Ankur

On Tue, Dec 10, 2019 at 2:46 PM Matthew K. <softm...@gmx.com> wrote:

> Hi,
>
> To run a beam job on a spark cluster with some number of nodes running:
>
> 1. Is it recommended to set pipeline parameters such as --num_workers,
> --max_num_workers, --autoscaling_algorithm, --worker_machine_type, etc., or
> will Beam (Spark) figure that out?
>
> 2. If it is recommended to set those params, what are the recommended
> values based on the machines and resources in the cluster?
>
> Thanks
>
