Thanks to both of you!
You solved the problem.
Thanks
Erik Stensrud
Sent from my iPhone

On 16 Jun 2015, at 20:23, Guru Medasani gdm...@gmail.com wrote:
Hi Esten,

Looks like your sqlContext is connected to a Hadoop/Spark cluster, but the file
path you specified is local:

mydf <- read.df(sqlContext, "/home/esten/ami/usaf.json", source="json",

The error below shows that the input path you specified does not exist on the
cluster. Pointing to the right path should fix it.
The error you are running into is that the input file does not exist. You can
see it in the following line:
Input path does not exist: hdfs://smalldata13.hdp:8020/
home/esten/ami/usaf.json
Thanks
Shivaram
On Tue, Jun 16, 2015 at 1:55 AM, esten erik.stens...@dnvgl.com wrote:
Hello,
Is the JSON file in HDFS or local? Is /home/esten/ami/usaf.json an HDFS path?
Suggestions:
1) Specify file:/home/esten/ami/usaf.json
2) Move the usaf.json file into HDFS, since the application is looking for the
file there.
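For option 2, a minimal sketch of copying the file into HDFS from a node with Hadoop configured (the paths come from the error message; the target directory and your write access to it are assumptions):

```shell
# Create the target directory in HDFS and copy the local JSON file there,
# so that read.df resolves the same path on the cluster.
hdfs dfs -mkdir -p /home/esten/ami
hdfs dfs -put /home/esten/ami/usaf.json /home/esten/ami/

# Verify the file is now visible in HDFS.
hdfs dfs -ls /home/esten/ami
```

After this, the original read.df call (without the file: prefix) should resolve against HDFS.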
Please let me know if that helps.
Thank you.
--
Hi,
In SparkR shell, I invoke:
mydf <- read.df(sqlContext, "/home/esten/ami/usaf.json", source="json",
header="false")
I have tried various file types (csv, txt); all fail.
RESPONSE: ERROR RBackendHandler: load on 1 failed
BELOW THE WHOLE RESPONSE:
15/06/16 08:09:13 INFO MemoryStore: