I have a small CSV file in S3, which I access as s3a://key:seckey@bucketname/a.csv
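
(As an aside, the same credentials can also be supplied through the Hadoop configuration instead of being embedded in the URL; a minimal sketch with placeholder values, assuming ctx is the SparkContext used below:)

    // Placeholder credentials; the real values are not from this post.
    ctx.hadoopConfiguration.set("fs.s3a.access.key", "key")
    ctx.hadoopConfiguration.set("fs.s3a.secret.key", "seckey")
    val s3pathOrg = "s3a://bucketname/a.csv" // URL no longer carries credentials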

It works with SparkContext:

    val pixelsStr: RDD[String] = ctx.textFile(s3pathOrg)

It works with spark-csv from Java as well:

    DataFrame careerOneDF = sqlContext.read()
        .format("com.databricks.spark.csv")
        .option("inferSchema", "true")
        .option("header", "true")
        .load(s3pathOrg);

However, it does not work from Scala; the error message is shown below:

    val careerOneDF: DataFrame = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("inferSchema", "true")
      .option("header", "true")
      .load(s3pathOrg)
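
Before the trace, a quick sanity check is whether the Scala context has any s3a credentials configured at all (a sketch, assuming ctx is the SparkContext behind sqlContext; both prints return null when only URL-embedded credentials are in use):

    println(ctx.hadoopConfiguration.get("fs.s3a.access.key"))
    println(ctx.hadoopConfiguration.get("fs.s3a.secret.key"))

The full stack trace: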

com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: F2E11C10E6D35BF3), S3 Extended Request ID: 0tdESZAHmROgSJem6P3gYnEZs86rrt4PByrTYbxzCw0xyM9KUMCHEAX3x4lcoy5O3A8qccgHraQ=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1160)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:748)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:467)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:302)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1050)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1027)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:688)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:71)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1255)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1250)
    at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1290)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.first(RDD.scala:1289)
    at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:129)
    at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:127)
    at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:109)
    at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:62)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:115)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
    at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:269)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:104)
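
One well-known pitfall with credentials embedded in the URL: AWS secret keys containing characters such as / or + can break the parsing of the s3a URL and produce exactly this kind of 403 Forbidden. If the real secret key contains such characters, percent-encoding it before embedding may help (a sketch; accessKey and rawSecretKey are placeholder names, not from this post):

    import java.net.URLEncoder

    // Percent-encode the secret so '/' and '+' survive URL parsing.
    val encodedSecret = URLEncoder.encode(rawSecretKey, "UTF-8")
    val s3pathOrg = s"s3a://$accessKey:$encodedSecret@bucketname/a.csv"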


Thanks
