On 2 May 2016, at 19:24, Gourav Sengupta wrote:
Jorn,
What aspects are you speaking about?
My response was absolutely pertinent to Jinan, because he would not even face the
problem if he used Scala. So it was along the lines of
Date: Thu, 28 Apr 2016 11:19:08 +0100
Subject: Re: Reading from Amazon S3
From: gourav.sengu...@gmail.com
To: ste...@hortonworks.com
CC: yuzhih...@gmail.com; j.r.alhaj...@hot
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
at org.apache.spark.api.java.JavaPairRDD.reduceByKey(JavaPairRDD.scala:526)
Why would you use JAVA (create a problem and then try to solve it)? Have
you tried using Scala or Python or even R?
Regards,
Gourav
On Thu, Apr 28, 2016 at 10:07 AM, Steve Loughran wrote:
>
> On 26 Apr 2016, at 18:49, Ted Yu wrote:
>
Looking at the cause of the error, it seems hadoop-aws-xx.jar
(corresponding to the version of Hadoop you use) was missing from the classpath.
FYI
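For anyone hitting the same missing-jar error: one common way to bring hadoop-aws onto the classpath is spark-submit's --packages flag. A minimal sketch, not taken from this thread -- the 2.7.2 version, the S3Reader class name, and the jar path are all placeholders you would adjust to your own build:

```shell
# Fetch hadoop-aws (and its transitive AWS SDK dependency) at submit time.
# The version should match the Hadoop build your Spark was compiled against;
# 2.7.2 here is an assumption, not a prescription.
spark-submit \
  --packages org.apache.hadoop:hadoop-aws:2.7.2 \
  --class S3Reader \
  target/s3-reader.jar
```

Bundling the jar into the application assembly, or dropping it into Spark's jars directory, achieves the same thing.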
On Tue, Apr 26, 2016 at 9:06 AM, Jinan Alhajjaj wrote:
Hi All,
I am trying to read a file stored in Amazon S3. I wrote this code:

import
java.util.List;
import java.util.Scanner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import
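Jinan's snippet is cut off after the imports. For context, a minimal sketch of what such a reader might look like -- an assumption for illustration, not his actual code. The bucket, path, and credentials are placeholders, and it presumes the s3n:// connector from hadoop-aws is on the classpath (the missing jar Ted identified; without it, the job fails at RDD.partitions much as in the trace above):

```java
// Hypothetical sketch of a Spark 1.x Java job reading a text file from S3.
// Placeholders: bucket name, object path, and the two credential values.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class S3Reader {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("S3Reader");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Credentials may also come from core-site.xml or the environment;
        // these property names are the ones the s3n connector reads.
        sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // Any RDD action here triggers partition computation -- the point at
        // which a missing hadoop-aws jar surfaces as a ClassNotFoundException.
        JavaRDD<String> lines = sc.textFile("s3n://your-bucket/path/to/file.txt");
        System.out.println("line count: " + lines.count());

        sc.stop();
    }
}
```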