> run the job there?
> This way we can ensure that the issue is not infrastructure and YARN
> configuration related.
>
> Kind regards
>
> Jochen Hebbrecht wrote on Fri, Oct 4, 2019
> at 19:27:
>
>> Hi Roland,
>>
>> I switched to the default security groups,
> run
> your application? It is requesting 208G of RAM
>
> Thanks,
>
> Sent from Yahoo Mail for iPhone
> <https://overview.mail.yahoo.com/?.src=iOS>
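[Editor's note: one plausible way a driver-memory setting can turn into a 208G YARN request. This is only a sketch, not Spark's exact code, and the 189g figure below is a guess chosen for illustration: Spark adds an AM/driver memory overhead of max(10% of heap, 384 MiB), and YARN then rounds the total up to a multiple of yarn.scheduler.minimum-allocation-mb.]

{code}
import math

def yarn_request_mb(heap_mb, min_alloc_mb=1024):
    # Spark's default overhead rule: max(10% of heap, 384 MiB)
    overhead = max(int(heap_mb * 0.10), 384)
    total = heap_mb + overhead
    # YARN rounds each container request up to the minimum allocation
    return math.ceil(total / min_alloc_mb) * min_alloc_mb

# e.g. a hypothetical --driver-memory 189g:
print(yarn_request_mb(189 * 1024) // 1024)   # -> 208 (GiB)
{code}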
>
> On Friday, October 4, 2019, 2:31 PM, Jochen Hebbrecht <
> jochenhebbre...@gmail.com> wrote:
>
> Hi Igor,
>
> wrote:
>
> These are dynamic port ranges and they depend on the configuration of your
> cluster. Each job gets a separate application master, so there can't be just
> one port.
> If I remember correctly, the default EMR setup creates worker security
> groups with unrestricted traffic on all ports. I suggest that you start with a
> default-like setup and determine ports and port ranges from the docs
> afterwards to further restrict traffic between the nodes.
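[Editor's note: if it helps, Spark's otherwise-random ports can be pinned so that the security groups only need a small, known range open. The port numbers below are purely illustrative; the property names are from the standard Spark configuration.]

{code}
# spark-defaults.conf (example values only)
spark.driver.port        40000
spark.blockManager.port  40100
spark.port.maxRetries    32     # each service retries upward from its base port
{code}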
>
> Kind regards
>
> Jochen Hebbrecht wrote on Fri, Oct 4, 2019
> at 17:16:
>
>> Hi Roland,
>>
> did you set up the EMR cluster with custom security groups? Can you confirm
> that the relevant EC2 instances can connect through relevant ports?
>
> Best regards
>
> Jochen Hebbrecht wrote on Fri, Oct 4, 2019
> at 17:09:
>
>> Hi Jeff,
>>
>> Thanks
> initializing
> SparkContext, which causes the timeout.
>
> See this property here
> http://spark.apache.org/docs/latest/running-on-yarn.html
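[Editor's note: the property being referred to is spark.yarn.am.waitTime (cluster mode only, default 100s) from the linked "Running on YARN" page; the 500s value below is just an illustration.]

{code}
# spark-defaults.conf, or pass via --conf on spark-submit
spark.yarn.am.waitTime  500s
{code}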
>
>
> Jochen Hebbrecht wrote on Fri, Oct 4, 2019 at 10:08 PM:
>
>> Hi,
>>
>> I'm using Spark 2.4.2 on AWS EMR 5.24.0. I'm trying to send a Spark job
>> towards the cluster. The job gets accepted, but the YARN application fails
>> with:
>> {code}
>> 19/09/27 14:33:35 ERROR ApplicationMaster: Uncaught exception:
>> java.util.concurrent.TimeoutException: Futures timed out after [1000