Hello, everyone! I'm new to Spark. I have already written programs in
Hadoop 2.5.2, where I defined my own InputFormat and OutputFormat. Now I
want to move my code to Spark using the Java language. The first problem I
encountered is how to transform a big txt file in local storage into an RDD,
which is compat
Did you happen to try this?
JavaPairRDD<LongWritable, Text> hadoopFile = sc.hadoopFile(
    "/sigmoid", DataInputFormat.class, LongWritable.class, Text.class);
Thanks
Best Regards
On Tue, Jun 23, 2015 at 6:58 AM, 付雅丹 wrote:
> Hello, everyone! I'm new in spark. I have already written programs i
Hi, Akhil. Thank you for your reply. I tried what you suggested, but it
raises the following error.
The source code is:
JavaPairRDD<LongWritable, Text> distFile = sc.hadoopFile(
    "hdfs://cMaster:9000/wcinput/data.txt",
    DataInputFormat.class, LongWritable.class, Text.class);
where the DataInputFormat class is defined as follows:
cl
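For reference, here is a minimal, self-contained sketch of the `hadoopFile` pattern being discussed. Since the custom `DataInputFormat` is not shown in full, this sketch substitutes Hadoop's built-in `TextInputFormat` (from the old `org.apache.hadoop.mapred` API, which is what `JavaSparkContext.hadoopFile` expects); any `FileInputFormat<K, V>` subclass such as the custom one above can be plugged in the same way. The file name and class name here are illustrative, not from the thread.

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class HadoopFileExample {
    public static void main(String[] args) throws IOException {
        // Write a small sample file so the example runs without HDFS.
        try (FileWriter w = new FileWriter("sample.txt")) {
            w.write("hello spark\nhello hadoop\n");
        }

        SparkConf conf = new SparkConf()
                .setAppName("hadoopFile-example")
                .setMaster("local[*]"); // local mode for a standalone demo
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Built-in TextInputFormat stands in for the custom DataInputFormat;
            // keys are byte offsets (LongWritable), values are lines (Text).
            JavaPairRDD<LongWritable, Text> pairs = sc.hadoopFile(
                    "sample.txt", TextInputFormat.class,
                    LongWritable.class, Text.class);

            // Hadoop Writable objects are reused between records, so convert
            // them to plain Java types before collecting or caching.
            JavaRDD<String> lines = pairs.map(t -> t._2.toString());
            List<String> result = lines.collect();
            result.forEach(System.out::println);
        }
    }
}
```

One common pitfall worth noting: because the `RecordReader` reuses the same `Writable` instances, calling `collect()` or `cache()` directly on the pair RDD can yield duplicated values; mapping to immutable types first, as above, avoids this.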