Re: method newAPIHadoopFile

2015-02-25 Thread patcharee

I tried
val pairVarOriRDD = sc.newAPIHadoopFile(path,
  classOf[NetCDFFileInputFormat].asSubclass(
    classOf[org.apache.hadoop.mapreduce.lib.input.FileInputFormat[WRFIndex, WRFVariable]]),
  classOf[WRFIndex],
  classOf[WRFVariable],
  jobConf)

The compiler does not complain. Please let me know if this solution is 
not good enough.
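
For what it's worth, a rough sketch of why the cast satisfies scalac (a hedged reading of the types involved, using the class and package names from the error message quoted below in the thread):

import no.uni.computing.io.{WRFIndex, WRFVariable}
import no.uni.computing.io.input.NetCDFFileInputFormat
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

// asSubclass returns a Class whose static type carries the key/value parameters,
// so scalac can find an F that satisfies newAPIHadoopFile's InputFormat[K, V] bound
// even though the Java class itself is declared with raw types.
val fClass: Class[_ <: FileInputFormat[WRFIndex, WRFVariable]] =
  classOf[NetCDFFileInputFormat].asSubclass(classOf[FileInputFormat[WRFIndex, WRFVariable]])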


Patcharee


On 25. feb. 2015 10:57, Sean Owen wrote:

OK, from the declaration you sent me separately:

public class NetCDFFileInputFormat extends ArrayBasedFileInputFormat
public abstract class ArrayBasedFileInputFormat extends
org.apache.hadoop.mapreduce.lib.input.FileInputFormat

It looks like you do not declare any generic types that
FileInputFormat declares for the key and value type. I think you can
get away with this in the Java API with warnings, but scalac is
correct that you have not given an InputFormat that matches the bounds
required by the API.

That is, you need to extend something like ArrayBasedFileInputFormat<WRFIndex, WRFVariable>.

On Wed, Feb 25, 2015 at 9:15 AM, patcharee patcharee.thong...@uni.no wrote:

Hi,

I am new to Spark and Scala. I have a custom InputFormat (used before with
MapReduce) and I am trying to use it in Spark.

In the Java API (the syntax is correct):

JavaPairRDD<WRFIndex, WRFVariable> pairVarOriRDD = sc.newAPIHadoopFile(
    path,
    NetCDFFileInputFormat.class,
    WRFIndex.class,
    WRFVariable.class,
    jobConf);

But in Scala:

val pairVarOriRDD = sc.newAPIHadoopFile(path,
  classOf[NetCDFFileInputFormat],
  classOf[WRFIndex],
  classOf[WRFVariable],
  jobConf)

The compiler complained:
inferred type arguments
[no.uni.computing.io.WRFIndex,no.uni.computing.io.WRFVariable,no.uni.computing.io.input.NetCDFFileInputFormat]
do not conform to method newAPIHadoopFile's type parameter bounds
[K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
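
For context, the bound this error refers to comes from newAPIHadoopFile's declaration in SparkContext; roughly (paraphrased from the Spark 1.x API, so treat it as a sketch rather than the exact source):

def newAPIHadoopFile[K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K, V]](
    path: String,
    fClass: Class[F],
    kClass: Class[K],
    vClass: Class[V],
    conf: org.apache.hadoop.conf.Configuration): org.apache.spark.rdd.RDD[(K, V)]
// scalac must pick an F that it can see extends InputFormat[K, V];
// a Java InputFormat declared with raw types gives it nothing to match against.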

What is the correct syntax for the Scala API?

Best,
Patcharee




Re: method newAPIHadoopFile

2015-02-25 Thread patcharee

This is the declaration of my custom InputFormat:

public class NetCDFFileInputFormat extends ArrayBasedFileInputFormat
public abstract class ArrayBasedFileInputFormat extends 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat


Best,
Patcharee


On 25. feb. 2015 10:15, patcharee wrote:

Hi,

I am new to Spark and Scala. I have a custom InputFormat (used before
with MapReduce) and I am trying to use it in Spark.


In the Java API (the syntax is correct):

JavaPairRDD<WRFIndex, WRFVariable> pairVarOriRDD = sc.newAPIHadoopFile(
    path,
    NetCDFFileInputFormat.class,
    WRFIndex.class,
    WRFVariable.class,
    jobConf);

But in Scala:

val pairVarOriRDD = sc.newAPIHadoopFile(path,
  classOf[NetCDFFileInputFormat],
  classOf[WRFIndex],
  classOf[WRFVariable],
  jobConf)

The compiler complained:
inferred type arguments
[no.uni.computing.io.WRFIndex,no.uni.computing.io.WRFVariable,no.uni.computing.io.input.NetCDFFileInputFormat]
do not conform to method newAPIHadoopFile's type parameter bounds
[K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]


What is the correct syntax for the Scala API?

Best,
Patcharee





Re: method newAPIHadoopFile

2015-02-25 Thread Sean Owen
OK, from the declaration you sent me separately:

public class NetCDFFileInputFormat extends ArrayBasedFileInputFormat
public abstract class ArrayBasedFileInputFormat extends
org.apache.hadoop.mapreduce.lib.input.FileInputFormat

It looks like you do not declare any generic types that
FileInputFormat declares for the key and value type. I think you can
get away with this in the Java API with warnings, but scalac is
correct that you have not given an InputFormat that matches the bounds
required by the API.

That is, you need to extend something like ArrayBasedFileInputFormat<WRFIndex, WRFVariable>.
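
A minimal sketch of that change (the real classes are Java; this Scala outline only shows where the type parameters need to appear, and the record reader body is a placeholder, not the actual NetCDF logic):

import no.uni.computing.io.{WRFIndex, WRFVariable}
import org.apache.hadoop.mapreduce.{InputSplit, RecordReader, TaskAttemptContext}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

// The key/value generics reach FileInputFormat, so NetCDFFileInputFormat is
// visibly an InputFormat[WRFIndex, WRFVariable] to scalac.
abstract class ArrayBasedFileInputFormat extends FileInputFormat[WRFIndex, WRFVariable]

class NetCDFFileInputFormat extends ArrayBasedFileInputFormat {
  override def createRecordReader(split: InputSplit,
      context: TaskAttemptContext): RecordReader[WRFIndex, WRFVariable] = {
    ??? // placeholder for the existing NetCDF record reader
  }
}

Once the bound is visible this way, the plain newAPIHadoopFile call from the original message should compile without the asSubclass cast.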

On Wed, Feb 25, 2015 at 9:15 AM, patcharee patcharee.thong...@uni.no wrote:
 Hi,

 I am new to Spark and Scala. I have a custom InputFormat (used before with
 MapReduce) and I am trying to use it in Spark.

 In the Java API (the syntax is correct):

 JavaPairRDD<WRFIndex, WRFVariable> pairVarOriRDD = sc.newAPIHadoopFile(
     path,
     NetCDFFileInputFormat.class,
     WRFIndex.class,
     WRFVariable.class,
     jobConf);

 But in Scala:

 val pairVarOriRDD = sc.newAPIHadoopFile(path,
   classOf[NetCDFFileInputFormat],
   classOf[WRFIndex],
   classOf[WRFVariable],
   jobConf)

 The compiler complained:
 inferred type arguments
 [no.uni.computing.io.WRFIndex,no.uni.computing.io.WRFVariable,no.uni.computing.io.input.NetCDFFileInputFormat]
 do not conform to method newAPIHadoopFile's type parameter bounds
 [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]

 What is the correct syntax for the Scala API?

 Best,
 Patcharee

