Thank you for your help! I was able to resolve it by changing my working
directory to a local one; the default was a mapped drive.
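For anyone hitting the same symptom, a minimal sketch of that workaround in R, assuming an illustrative local path (not the poster's actual directory):

# Make sure the R session's working directory is on a local disk, not a mapped
# network drive, before initializing SparkR.
setwd("C:/Temp/spark-work")   # illustrative local path, an assumption

library(SparkR)
sc <- sparkR.init(master = "local")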
From: Khandeshi, Ami
Sent: Friday, October 09, 2015 11:23 AM
To: 'Sun, Rui'; Hossein
Cc: akhandeshi; user@spark.apache.org
Subject: RE: SparkR Error in sparkR.init(master="local") in RStudio
It seems the problem is with creating the RBackend; the launch output shows only "Usage: RBackend".
From: Sun, Rui [mailto:rui@intel.com]
Sent: Wednesday, October 07, 2015 10:23 PM
To: Khandeshi, Ami; Hossein
Cc: akhandeshi; user@spark.apache.org
Subject: RE: SparkR Error in sparkR.init(master="local") in RStudio
Can you extract the
5.1/bin/spark-submit.cmd sparkr-shell
C:\Users\a554719\AppData\Local\Temp\RtmpkXZVBa\backend_port45ac487f2fbd
Error in sparkR.init(master = "local") :
JVM is not ready after 10 seconds
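For reference, a minimal sketch of the usual SparkR 1.5.x setup from RStudio on Windows, assuming Spark is installed at C:/DevTools/spark-1.5.1 (the path from the verbose launch output quoted below); this is not the poster's exact script:

# Point R at the Spark installation and load the bundled SparkR package
Sys.setenv(SPARK_HOME = "C:/DevTools/spark-1.5.1")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)

# Print the spark-submit command that sparkR.init() builds, which helps when
# the backend JVM fails to come up within the 10-second wait
Sys.setenv(SPARK_PRINT_LAUNCH_COMMAND = 1)
sc <- sparkR.init(master = "local")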
From: Sun, Rui [mailto:rui@intel.com]
Sent: Wednesday, October 07, 2015 2:35 AM
To: Hossein
> Sys.setenv(SPARKR_SUBMIT_ARGS="--verbose sparkr-shell")
> Sys.setenv(SPARK_PRINT_LAUNCH_COMMAND=1)
>
> sc <- sparkR.init(master="local")
Launching java with spark-submit command
/C/DevTools/spark-1.5.1/bin/spark-submit.cmd --verbose sparkr-shell
C:\Users\a554719\AppData\Local\Temp\Rtmpw11KJ
I am seeing the same behavior. I have enough resources. How do I resolve
it?
Thanks,
Ami
I am running locally in client mode. I have allocated as high as 85g to the
driver, executor, and daemon. When I look at the Java processes, I see two:
20974 SparkSubmitDriverBootstrapper
21650 Jps
21075 SparkSubmit
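A minimal sketch of one way those memory figures can be passed when launching SparkR from R; the 85g values mirror the allocation described above, but the exact flags used in this thread are an assumption:

# In client mode the driver JVM starts before sparkEnvir is applied, so driver
# memory is normally passed on the spark-submit command line instead
Sys.setenv(SPARKR_SUBMIT_ARGS = "--driver-memory 85g sparkr-shell")

library(SparkR)
sc <- sparkR.init(master = "local",
                  sparkEnvir = list(spark.executor.memory = "85g"))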
I have tried repartition before, but my understanding is that it comes with a
cost. When it begins iterating through the iterator it seems extremely slow;
why is that the case?
By the way, I don't see qexec.hasNext() as you suggest - what am I missing?
From: Khandeshi, Ami D
Sent: Friday, March 18, 2011 3:10 PM
To: jena-users@incubator.apache.org
Subject
Hi all,
I noticed that iterating through a StmtIterator (via hasNext) seems to take a
long time. Can you suggest a better/more efficient way to get the information
in the statements?
Here is a code base where executing the query is fast but iterating through
the result is time-consuming.
Thanks,
Ami