the error does not go away, but the JSON
files can be saved successfully.
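The comparison described above can be sketched as follows (a hedged sketch assuming spark-shell, where `sc` is the pre-built SparkContext; the bucket and paths are illustrative):

```scala
import org.apache.spark.sql.hive.HiveContext

// Assumes spark-shell provides `sc` (SparkContext).
val sqlContext = new HiveContext(sc)
import sqlContext.implicits._
val df = sc.parallelize(1 to 1000).toDF()

// JSON to S3 is reported to succeed...
df.write.format("json").save("s3://logs/dummy_json")
// ...while ORC to the same bucket fails with the error discussed here.
df.write.format("orc").save("s3://logs/dummy_orc")
```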
On Sun, Aug 23, 2015 at 5:51 AM, Ted Yu yuzhih...@gmail.com wrote:
You may have seen this:
http://search-hadoop.com/m/q3RTtdSyM52urAyI
On Aug 23, 2015, at 1:01 AM, lostrain A donotlikeworkingh...@gmail.com
wrote:
On Sun, Aug 23, 2015 at 11:03 AM, lostrain A
donotlikeworkingh...@gmail.com wrote:
Hi Ted,
Thanks for the reply. I tried setting both the access key ID and the secret
access key via

sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "***")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "***")

However, the error still persists.
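One detail worth checking here (an observation about Hadoop configuration, not something stated in the thread): the credential property prefix must match the URI scheme used in the path. With an `s3://` path the `fs.s3.*` keys apply; with `s3n://`, the `fs.s3n.*` keys. A sketch setting both, with redacted placeholder values:

```scala
// Set credentials for both schemes so a mismatch between the
// property prefix and the URI scheme can be ruled out.
sc.hadoopConfiguration.set("fs.s3.awsAccessKeyId", "***")
sc.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", "***")
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "***")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "***")
```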
Hi,
I'm trying to save a simple DataFrame to S3 in ORC format. The code is as
follows:

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
import sqlContext.implicits._
val df = sc.parallelize(1 to 1000).toDF()
df.write.format("orc").save("s3://logs/dummy")
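One way to narrow down a failure like this (a debugging sketch, not a step taken in the thread; the local path is illustrative) is to write the same DataFrame as ORC to the local filesystem first:

```scala
// If the local write succeeds but the S3 write fails, the problem
// lies in the S3 path handling rather than in the ORC writer itself.
df.write.format("orc").save("file:///tmp/dummy_orc")
df.write.format("orc").save("s3://logs/dummy")
```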
It is probably caused by SPARK-8458:
https://issues.apache.org/jira/browse/SPARK-8458
Thanks.
Zhan Zhang
On Aug 23, 2015, at 12:49 PM, lostrain A donotlikeworkingh...@gmail.com
wrote:
Ted,
Thanks for the suggestions. Actually I tried both s3n and s3, and the
result remains the same.
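Switching between the two schemes only changes the URI prefix. A small illustrative helper for retrying the same path under a different scheme (the helper name is hypothetical, not part of any Spark or Hadoop API):

```scala
// Swap the filesystem scheme on a URI, e.g. s3:// <-> s3n://,
// so the same bucket path can be retried under another scheme.
def withScheme(uri: String, scheme: String): String =
  uri.replaceFirst("^[A-Za-z][A-Za-z0-9+.-]*://", scheme + "://")
```

For example, `withScheme("s3://logs/dummy", "s3n")` yields `"s3n://logs/dummy"`.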