I am trying to create a Spark JavaRDD using newAPIHadoopFile and the
FixedLengthInputFormat. Here is my code snippet:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;
import org.apache.spark.api.java.JavaPairRDD;

Configuration config = new Configuration();
config.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, JPEG_INDEX_SIZE);
config.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
String fileFilter = config.get("fs.defaultFS") + "/A/B/C/*.idx";
JavaPairRDD<LongWritable, BytesWritable> inputRDD =
    sparkContext.newAPIHadoopFile(fileFilter, FixedLengthInputFormat.class,
        LongWritable.class, BytesWritable.class, config);

At this point I get the following exception:

Error executing mapreduce job: 
com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion 
(StackOverflowError)

Any idea what I am doing wrong? I am new to Spark. David

David R Robison
Senior Systems Engineer
O. +1 512 247 3700
M. +1 757 286 0022
david.robi...@psgglobal.net
www.psgglobal.net

Prometheus Security Group Global, Inc.
3019 Alvin Devane Boulevard
Building 4, Suite 450
Austin, TX 78741
