org.jets3t.service.S3ServiceException: S3 HEAD request failed for
'/user%2Fdidi' - ResponseCode=400, ResponseMessage=Bad Request

What does the user have to do here? I am using a key & secret!
How can I simply create an RDD from a text file on S3?
Thanks
Didi
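For the second question, the usual pattern is to put the S3 credentials on the SparkContext's Hadoop configuration and then call textFile. A minimal spark-shell sketch — the bucket name, path, and placeholder credentials are assumptions, not from this thread:

```scala
// spark-shell sketch: `sc` is the SparkContext the shell provides.
// Replace the placeholder credentials and bucket/path with your own.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

// Each element of the resulting RDD is one line of the remote text file.
val lines = sc.textFile("s3n://your-bucket/path/to/file.txt")
println(lines.count())
```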
How did you solve the problem with V4?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/s3-bucket-access-read-file-tp23536p28688.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
A good place to start debugging from; the full list is here: https://hortonworks.github.io/hdp-aws/s3-configure/index.html
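On the V4 question above: the jets3t-based s3n connector only speaks the older V2 signing protocol, so buckets in V4-only regions typically answer with 400 Bad Request. The common workaround is to switch to the s3a connector and point it at the region-specific endpoint. A hedged spark-shell sketch — the endpoint shown is an example for eu-central-1, and the credentials and bucket are placeholders:

```scala
// spark-shell sketch: s3a supports V4 signing (Hadoop 2.7+).
// Use the endpoint that matches your bucket's region.
sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

// Note the s3a:// scheme rather than s3n://.
val lines = sc.textFile("s3a://your-bucket/path/to/file.txt")
```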
hadoopConf.set("fs.s3n.awsAccessKeyId", "key")
hadoopConf.set("fs.s3n.awsSecretAccessKey", "secret")

Try setting them to s3n as opposed to just s3.

Good luck!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/s3-bucket-access-read-file-tp23536p23560.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I think Hadoop 2.6 failed to abruptly close streams that weren't fully read, which we observed as a huge performance hit. We had to backport the 2.7 improvements before being able to use it.