Have a look at this SO question <http://stackoverflow.com/questions/24048729/how-to-read-input-from-s3-in-a-spark-streaming-ec2-cluster-application>; it discusses the various ways of supplying S3 credentials to a Spark job.
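One thing worth ruling out is whether the executors ever see your keys: environment variables exported only on the master don't always make it into the Hadoop configuration the tasks use. Here is a minimal sketch of one alternative from that thread -- setting the keys directly on the SparkContext's Hadoop configuration (placeholder values, and it assumes the s3n:// filesystem that appears in your trace):

    from pyspark import SparkContext

    sc = SparkContext(appName="S3Read")

    # Placeholder credentials -- substitute your own. Values set here are
    # shipped with the job configuration instead of being read from the
    # master node's environment.
    hadoop_conf = sc._jsc.hadoopConfiguration()
    hadoop_conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID")
    hadoop_conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")

    # Any s3n:// read from here on picks up the keys set above.
    rdd = sc.textFile("s3n://your-bucket/some/path")
    print(rdd.count())

The s3n://KEY:SECRET@bucket/path URL form discussed in that question also works, but only when the secret contains no '/' -- which you've already checked, so the configuration route above is the safer bet. (I've also sketched my reading of your failing step below your quoted message, in case I've misunderstood it.)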
Thanks
Best Regards

On Fri, May 8, 2015 at 1:21 AM, in4maniac <sa...@skimlinks.com> wrote:

> Hi Guys,
>
> I think this problem is related to:
> http://apache-spark-user-list.1001560.n3.nabble.com/AWS-Credentials-for-private-S3-reads-td8689.html
>
> I am running pyspark 1.2.1 in AWS with my AWS credentials exported to the
> master node as environment variables.
>
> Halfway through my application, I get thrown an
> org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
> S3 HEAD request failed for "file path" - ResponseCode=403,
> ResponseMessage=Forbidden
>
> Here is some important information about my job:
> + My AWS credentials are exported to the master node as environment
>   variables.
> + There are no '/'s in my secret key.
> + The earlier steps that use this parquet file complete successfully.
> + The step before the count() does the following:
>   + reads the parquet file (SELECT statement)
>   + maps it to an RDD
>   + runs a filter on the RDD
> + The filter works as follows:
>   + extracts one field from each RDD line
>   + checks it against a list of 40,000 hashes for presence
>     (if field in LIST_OF_HASHES.value)
>   + LIST_OF_HASHES is a broadcast object
>
> The weirdness is that I use this parquet file in earlier steps and it
> works fine. The other confusing part is that it only starts failing
> halfway through the stage: it completes a fraction of the tasks and then
> starts failing.
>
> Hoping to hear something positive. Many thanks in advance,
>
> Sahanbull
>
> The stack trace is as follows:
>
> >>> negativeObs.count()
> [Stage 9:==========================>                    (161 + 240) / 800]
>
> 15/05/07 07:55:59 ERROR TaskSetManager: Task 277 in stage 9.0 failed 4
> times; aborting job
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/root/spark/python/pyspark/rdd.py", line 829, in count
>     return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
>   File "/root/spark/python/pyspark/rdd.py", line 820, in sum
>     return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
>   File "/root/spark/python/pyspark/rdd.py", line 725, in reduce
>     vals = self.mapPartitions(func).collect()
>   File "/root/spark/python/pyspark/rdd.py", line 686, in collect
>     bytesInJava = self._jrdd.collect().iterator()
>   File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
>   File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling o139.collect.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task
> 277 in stage 9.0 failed 4 times, most recent failure: Lost task 277.3 in
> stage 9.0 (TID 4832, ip-172-31-1-185.ec2.internal):
> org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
> S3 HEAD request failed for
> '/subbucket%2Fpath%2F2Fpath%2F2Fpath%2F2Fpath%2F2Fpath%2Ffilename.parquet%2Fpart-r-349.parquet'
> - ResponseCode=403, ResponseMessage=Forbidden
>         at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:122)
>         at sun.reflect.GeneratedMethodAccessor116.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at org.apache.hadoop.fs.s3native.$Proxy9.retrieveMetadata(Unknown Source)
>         at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:326)
>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:381)
>         at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:155)
>         at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:138)
>         at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:135)
>         at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:107)
>         at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
>         at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
>         at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
>         at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
>         at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>         at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1550)
>         at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
> Caused by: org.jets3t.service.S3ServiceException: S3 HEAD request failed for
> '/sahan%2Fiisee%2FMonthlyProfiles%2Fyear%3D2015%2Fmonth%3D05%2Fday%3D02%2FMonthlyAggregates.parquet%2Fpart-r-349.parquet'
> - ResponseCode=403, ResponseMessage=Forbidden
>         at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:477)
>         at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestHead(RestS3Service.java:718)
>         at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectImpl(RestS3Service.java:1599)
>         at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectDetailsImpl(RestS3Service.java:1535)
>         at org.jets3t.service.S3Service.getObjectDetails(S3Service.java:1987)
>         at org.jets3t.service.S3Service.getObjectDetails(S3Service.java:1332)
>         at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:111)
>         ... 30 more
> Caused by: org.jets3t.service.impl.rest.HttpException
>         at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:475)
>         ... 36 more
>
> Driver stacktrace:
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/AWS-Credentials-fails-with-org-apache-hadoop-fs-s3-S3Exception-FORBIDDEN-tp22800.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
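For reference, my reading of your failing step is roughly the following (a hypothetical reconstruction -- the bucket, column name, and hash values are placeholders, and I've used a set for the broadcast membership test):

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="FilterStep")
    sqlContext = SQLContext(sc)

    # Stand-in for the 40,000 hashes; a set keeps the membership test O(1).
    LIST_OF_HASHES = sc.broadcast(set(["hash1", "hash2", "hash3"]))

    rows = sqlContext.parquetFile("s3n://your-bucket/path/file.parquet")
    fields = rows.map(lambda row: row.some_field)  # extract one field per line
    negativeObs = fields.filter(lambda f: f in LIST_OF_HASHES.value)

    # count() is the action that actually reads every partition from S3,
    # which is why the failure only surfaces here and partway through.
    negativeObs.count()

If that matches what you're doing, nothing in the filter itself can produce a 403; the suspect is the fresh S3 reads the count() triggers, which points back at how the credentials reach the executors.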