Hi Roni,

For parquetFile(), it is just a warning; you can still get the DataFrame 
successfully, right? It is a bug that has been fixed in the latest repo: 
https://issues.apache.org/jira/browse/SPARK-8952

For S3, it is not related to SparkR. I guess the S3 filesystem jars are 
missing from your Hadoop installation; see 
http://stackoverflow.com/questions/28029134/how-can-i-access-s3-s3n-from-a-local-hadoop-2-6-installation
and https://issues.apache.org/jira/browse/SPARK-7442
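As a rough sketch (untested, and assuming your bucket name and credentials in place of the placeholders below), you can usually work around the "No FileSystem for scheme" error by putting the hadoop-aws jars on the classpath and setting the s3n credentials through the Hadoop configuration from SparkR:

```r
# Launch SparkR with the AWS jars on the classpath, e.g.:
#   sparkR --packages org.apache.hadoop:hadoop-aws:2.7.1
# (version must match your Hadoop build; this is an assumption, adjust as needed)

# Set the s3n credentials on the underlying Hadoop configuration.
# SparkR:::callJMethod is an internal SparkR helper for calling JVM methods.
hConf <- SparkR:::callJMethod(sc, "hadoopConfiguration")
SparkR:::callJMethod(hConf, "set", "fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
SparkR:::callJMethod(hConf, "set", "fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

# Then read with the s3n:// scheme (bucket/path are placeholders):
ddf <- parquetFile(sqlContext, "s3n://your-bucket/IPF_14_1.parquet")
```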


From: roni [mailto:roni.epi...@gmail.com]
Sent: Friday, September 11, 2015 3:05 AM
To: user@spark.apache.org
Subject: reading files on HDFS /s3 in sparkR -failing


I am trying this -
 ddf <- parquetFile(sqlContext, 
"hdfs://ec2-52-26-180-130.us-west-2.compute.amazonaws.com:9000/IPF_14_1.parquet")
and I get 
path[1]="hdfs://ec2-52-26-180-130.us-west-2.compute.amazonaws.com:9000/IPF_14_1.parquet":
 No such file or directory

When I read a file on S3, I get - java.io.IOException: No FileSystem for 
scheme: s3

Thanks in advance.
-Roni
