Thank you for the answer.
I have now set these properties as you suggested:

SparkConf sparkConf = new SparkConf()
        .setAppName("simpleTest2")
        .setMaster("yarn")
        .set("spark.executor.memory", "1g")
        .set("deploy.mode",

How can I check it?
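One thing worth noting: Spark does not read a bare `deploy.mode` key; the property it actually consults is `spark.submit.deployMode`. A minimal sketch of the configuration, plus one way to verify what the driver actually picked up (this assumes a Spark classpath; the Environment tab of the driver UI, by default on port 4040, shows the same list):

```java
import org.apache.spark.SparkConf;
import scala.Tuple2;

// Sketch only: "client" is an illustrative choice that keeps the driver
// inside Eclipse; "cluster" would run the driver on YARN instead.
SparkConf sparkConf = new SparkConf()
        .setAppName("simpleTest2")
        .setMaster("yarn")
        .set("spark.executor.memory", "1g")
        .set("spark.submit.deployMode", "client");

// Print every property the conf currently holds, to confirm the
// values actually reached the configuration object.
for (Tuple2<String, String> kv : sparkConf.getAll()) {
    System.out.println(kv._1() + " = " + kv._2());
}
```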
On 2021/09/28 03:29:45, Stelios Philippou wrote:
It might be possible that you do not have the resources on the cluster, so
your job will remain waiting for them, as they cannot be provided.
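Since the job is stuck waiting, it may be worth checking what YARN believes it can offer. A couple of stock YARN CLI checks, run on the cluster (the ResourceManager web UI, by default on port 8088, shows the same numbers under Cluster Metrics):

```shell
# NodeManagers that have registered with the ResourceManager;
# zero nodes here means no executors can ever be allocated.
yarn node -list

# Applications stuck in the ACCEPTED state are still waiting for resources.
yarn application -list -appStates ACCEPTED
```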
On Tue, 28 Sep 2021, 04:26 davvy benny wrote:
How can I solve the problem?
On 2021/09/27 23:05:41, Thejdeep G wrote:
Hi,
That would usually mean that the application has not yet been allocated
executor resources by the resource manager.
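One common cause of this on a single machine is that the requested executor container simply does not fit within what YARN is allowed to allocate. Illustrative `yarn-site.xml` values for a small pseudo-distributed setup (the numbers are examples, not recommendations; `spark.executor.memory` plus its memory overhead must fit under the maximum allocation):

```xml
<!-- Illustrative values only. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- total memory the NodeManager may hand out -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value> <!-- largest single container the scheduler will grant -->
</property>
```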
On 2021/09/27 21:37:30, davvy benny wrote:
Hi,
I am trying to run Spark programmatically from Eclipse with these
configurations for a local Hadoop cluster:

SparkConf sparkConf = new SparkConf()
        .setAppName("simpleTest2")
        .setMaster("yarn")
        .set("spark.executor.memory", "1g")
Yesterday night I ran the jar in pseudo-distributed mode without any WARN or
ERROR. Today, however, I am getting the WARN below, which leads directly to the
ERROR. My computer has 8 GB of memory, so I don't think the issue is what the
WARN describes. What's wrong? The code hasn't changed yet. And the
>>> *15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources*
That means you don't have resources for your application; please check your
Hadoop web UI.
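On a pseudo-distributed setup, the Hadoop web UI in question is normally the ResourceManager UI at port 8088; the same cluster metrics can also be read over its REST API (host and port below are the defaults, adjust as needed):

```shell
# totalMB/availableMB and activeNodes show whether any capacity is registered.
curl http://localhost:8088/ws/v1/cluster/metrics
```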
On Wed, Dec 16,