Hi

I am trying to run Spark 2.0, prebuilt with Hadoop 2.7, on Windows. I do not
have Hadoop installed, as I wanted to test Spark on its own.

When I run pyspark it starts up fine, but reading any file through the
DataFrame API fails. I recall this was doable in earlier versions of Spark;
is it no longer possible?
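
For context, here is a minimal sketch of the kind of read I am attempting
(the path and the CSV format are just placeholders; any DataFrame read seems
to fail the same way):

    from pyspark.sql import SparkSession

    # In the pyspark shell a SparkSession named `spark` already exists;
    # building one here just makes the example self-contained.
    spark = SparkSession.builder \
        .appName("windows-read-test") \
        .getOrCreate()

    # Placeholder local file -- reading it via the DataFrame reader fails.
    df = spark.read.csv("C:/tmp/sample.csv", header=True)
    df.show()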



-- 
Best Regards,
Ayan Guha
