On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:
Your spark master should be spark://swetha:7077 :)
Thanks
Best Regards
On Mon, Apr 20, 2015 at 2:44 PM, madhvi madhvi.gu...@orkash.com wrote:
PFA screenshot of my cluster UI
Thanks
On Monday 20 April 2015 02:27 PM, Akhil Das wrote:
Are you seeing your task being submitted to the UI?
On Monday 20 April 2015 03:18 PM, Archit Thakur wrote:
There are a lot of similar problems shared and resolved by users on this
same portal. I have been part of those discussions before. Search for
those, please try them, and let us know if you still face problems.
Thanks and Regards,
Archit
Are you seeing your task being submitted to the UI? Under completed or
running tasks? How many resources are you allocating to your job? Can you
share a screenshot of your cluster UI and the code snippet that you are
trying to run?
Thanks
Best Regards
On Mon, Apr 20, 2015 at 12:37 PM, madhvi
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher than
what is available.
Please request for 256 MB explicitly while creating Spark Context and
try again.
Thanks and Regards,
Suraj Sheth
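Suraj's suggestion maps to the spark.executor.memory setting on the
SparkConf. A minimal sketch of requesting 256 MB explicitly while creating
the context (the class and app names are placeholders; the master URI is
the one suggested elsewhere in this thread):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SmallMemoryJob {
    public static void main(String[] args) {
        // Request only 256 MB per executor so the job fits within
        // the memory the cluster actually has free.
        SparkConf conf = new SparkConf()
                .setAppName("JavaWordCount")            // placeholder app name
                .setMaster("spark://swetha:7077")       // master URI from the web UI
                .set("spark.executor.memory", "256m");  // explicit 256 MB request
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job code goes here ...
        sc.stop();
    }
}
```

The same value can also be passed as --executor-memory 256m to
spark-submit; setting it on the SparkConf just keeps it in the code, as
Suraj suggests.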
Tried the same but still
Hi,
I did as you told, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler:
All masters are unresponsive! Giving up.
On the UI it shows that the master is working.
Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
No, I am not getting any task on the UI for the job I am running. Also, I
have set instances=1, but the UI is showing 2 workers. I am running the
Java word count code exactly, but I have the text file in HDFS. The
following is the part of my code I wrote to make the connection:
SparkConf sparkConf = new
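The snippet is cut off above. For readers following along, a typical
connection block for a standalone cluster looks roughly like the
following. This is a hypothetical reconstruction, not madhvi's actual
code: the app name, master URI, and HDFS path are all assumptions.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class WordCountConnection {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setAppName("JavaWordCount")        // assumed app name
                .setMaster("spark://swetha:7077");  // master URI as shown in the web UI
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);

        // The input lives in HDFS, as described above; the path is a placeholder.
        JavaRDD<String> lines = ctx.textFile("hdfs://namenode:9000/user/madhvi/input.txt");
        System.out.println("lines: " + lines.count());
        ctx.stop();
    }
}
```

As for seeing 2 workers after setting instances=1: in standalone mode the
worker count is normally controlled by SPARK_WORKER_INSTANCES in
conf/spark-env.sh, and a worker daemon left over from an earlier start can
stay registered until it is stopped, so checking for stray Worker
processes is worth doing.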
Hi All,
I am new to Spark and have installed a Spark cluster on a system that
already runs a Hadoop cluster. I want to process data stored in HDFS
through Spark. When I run the code in Eclipse, it gives the following
warning repeatedly:
scheduler.TaskSchedulerImpl: Initial job has not accepted any
In your Eclipse project, while you create your SparkContext, set the
master URI as shown in the top-left corner of the web UI, e.g.
spark://someIPorHost:7077, and it should be fine.
Thanks
Best Regards
On Mon, Apr 20, 2015 at 12:22 PM, madhvi madhvi.gu...@orkash.com wrote: