I tried the following, but it still failed the same way. It ran on YARN, CDH 5.8.0.

from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName('s3 ---')
sc = SparkContext(conf=conf)

# Set the S3 credentials on the underlying Hadoop configuration
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "...")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "...")

# Read a single gzipped JSON file from S3 (bucket name masked)
myRdd = sc.textFile("s3n://****/y=2016/m=5/d=26/h=20/2016.5.26.21.9.52.6d53180a-28b9-4e65-b749-b4a2694b9199.json.gz")

count = myRdd.count()
print "The count is", count


