Re: spark jobs don't require the master/worker to startup?

2022-03-09 Thread Sean Owen
You can run Spark in local mode and not require any standalone master or worker. Are you sure you're not using local mode? Are you sure the daemons aren't running? What is the Spark master you pass? On Wed, Mar 9, 2022 at 7:35 PM wrote: > What I tried to say is, I didn't start spark
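For context, a minimal local-mode session looks like this (a sketch, assuming PySpark is installed; the app name is illustrative):

    from pyspark.sql import SparkSession

    # Local mode runs the driver and executors inside one JVM,
    # so no standalone master or worker daemons are needed.
    spark = (SparkSession.builder
             .master("local[*]")          # use all local cores
             .appName("local-mode-demo")  # illustrative name
             .getOrCreate())

    print(spark.sparkContext.master)      # "local[*]" confirms local mode
    spark.stop()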

Re: spark jobs don't require the master/worker to startup?

2022-03-09 Thread capitnfrakass
What I tried to say is, I didn't start the Spark master/worker at all for a standalone deployment, but I can still log in to pyspark and run jobs. I don't know why. $ ps -efw | grep spark $ netstat -ntlp Neither command above shows any Spark-related output. And this machine is managed by myself, I
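One way to see why jobs still run without daemons: the pyspark shell creates a SparkContext automatically, and with no --master argument it defaults to local mode. A quick check from inside the shell (a sketch; sc is the context the shell provides):

    >>> sc.master        # effective master URL of this shell
    'local[*]'

A value like 'local[*]' means everything runs inside the shell's own JVM, which is why ps and netstat show no separate Spark processes.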

Re: spark jobs don't require the master/worker to startup?

2022-03-09 Thread Artemis User
To be specific: 1. Check the log files on both the master and the worker and see if there are any errors. 2. If your browser is not running on the same machine as the Spark cluster, use the host's external IP instead of the localhost IP when launching the worker (see the sketch below). Hope this helps... -- ND On 3/9/22
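On point 2, the same applies when pointing a driver at the master: build the connection string on the external IP, not localhost (a sketch; the IP is a placeholder and 7077 is assumed to be the default master RPC port):

    from pyspark.sql import SparkSession

    # 192.0.2.10 is a placeholder for the host's external IP;
    # 7077 is the standalone master's default RPC port.
    spark = (SparkSession.builder
             .master("spark://192.0.2.10:7077")
             .appName("external-ip-demo")
             .getOrCreate())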

Re: spark jobs don't require the master/worker to startup?

2022-03-09 Thread Sean Owen
Did it start successfully? What do you mean, the ports were not opened? On Wed, Mar 9, 2022 at 3:02 AM wrote: > Hello > > I have Spark 3.2.0 deployed on localhost in standalone mode. > I didn't even run the start master and worker commands: > > start-master.sh > start-worker.sh
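A quick way to check whether the standalone daemons actually opened their ports (a sketch; 7077 and 8080 are assumed to be the default master RPC and web UI ports of an unmodified install):

    import socket

    def port_open(host, port, timeout=1.0):
        # True if a TCP connection to host:port succeeds.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in (7077, 8080):
        print(port, port_open("localhost", port))

If both print False, no master is listening, and any job that still runs must be using local mode.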

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread Deepak Sharma
Yes, agreed. It seems to be an issue with mapping the text file contents to case classes, though I'm not sure. On Thu, Aug 4, 2016 at 8:17 PM, $iddhe$h Divekar wrote: > Hi Deepak, > > My files are always > 50MB. > I would think there would be a small config to overcome this.

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread $iddhe$h Divekar
Hi Deepak, My files are always > 50MB. I would think there would be a small config to overcome this. I've tried almost everything I could find after searching online. Any help from the mailing list would be appreciated. On Thu, Aug 4, 2016 at 7:43 AM, Deepak Sharma wrote: > I am
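For what it's worth, PermGen exhaustion on the pre-Java-8 JVMs of the Spark 1.x era is usually addressed by raising -XX:MaxPermSize rather than heap size. A hedged sketch of one way to pass it (the 512m value is illustrative, not from this thread; the driver-side flag must be set before the driver JVM starts, so it goes on spark-submit rather than in code):

    from pyspark import SparkConf, SparkContext

    # Executor-side PermGen can be raised programmatically:
    conf = (SparkConf()
            .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=512m"))
    sc = SparkContext(conf=conf)

    # Driver-side PermGen has to be set at launch time, e.g.:
    #   spark-submit --driver-java-options "-XX:MaxPermSize=512m" job.py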

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread Deepak Sharma
I am facing the same issue with Spark 1.5.2. If the file being processed by Spark is 10-12 MB, it throws an out-of-memory error, but if the same file is within a 5 MB limit, it runs fine. I am using a Spark configuration with 7GB of memory and 3 cores for executors in a cluster of 8

RE: Spark jobs

2016-06-30 Thread Joaquin Alzola
29 June 2016 14:51 To: Joaquin Alzola <joaquin.alz...@lebara.com>; user <user@spark.apache.org> Subject: Re: Spark jobs > check if this helps: from multiprocessing import Process def training(): print("Training Workflow") cmd = "spark/bin/spark-submit ./ml.p

Re: Spark jobs

2016-06-29 Thread sujeet jog
check if this helps: from multiprocessing import Process def training(): print("Training Workflow") cmd = "spark/bin/spark-submit ./ml.py &" os.system(cmd) w_training = Process(target=training) On Wed, Jun 29, 2016 at 6:28 PM, Joaquin Alzola
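A cleaned-up, runnable version of that sketch (assumptions: the missing import, string quoting, and Process start/join were added here; the spark-submit path and ml.py are kept from the original message as-is):

    import os
    from multiprocessing import Process

    def training():
        print("Training Workflow")
        # Launch the Spark job; the trailing & backgrounds it in the shell.
        cmd = "spark/bin/spark-submit ./ml.py &"
        os.system(cmd)

    w_training = Process(target=training)
    w_training.start()
    w_training.join()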

Re: Spark jobs without a login

2016-06-16 Thread Ted Yu
Can you describe the container in more detail? Please show the complete stack trace for the exception. Thanks. On Thu, Jun 16, 2016 at 1:32 PM, jay vyas wrote: > Hi spark: > > Is it possible to avoid reliance on a login user when running a spark job? > > I'm running out a
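One workaround sometimes used in containers that have no login entry for the running UID is to supply a user name through environment variables before the context starts (a sketch; SPARK_USER and HADOOP_USER_NAME are read by Spark and Hadoop respectively, and the value "spark" is a placeholder):

    import os

    # Must be set before the SparkContext is created.
    os.environ.setdefault("SPARK_USER", "spark")
    os.environ.setdefault("HADOOP_USER_NAME", "spark")

    from pyspark import SparkContext
    sc = SparkContext("local[*]", "no-login-user-demo")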

Re: Spark jobs run extremely slow on yarn cluster compared to standalone spark

2016-02-14 Thread Yuval.Itzchakov
Your question lacks sufficient information for us to actually provide help. Have you looked at the Spark UI to see which part of the graph is taking the longest? Have you tried logging your methods? -- View this message in context: