Hi,

I am new to Spark and Scala. I have a custom InputFormat (previously used with MapReduce) and I am trying to use it in Spark.

With the Java API, the following compiles fine:

JavaPairRDD<WRFIndex, WRFVariable> pairVarOriRDD = sc.newAPIHadoopFile(
            path,
            NetCDFFileInputFormat.class,
            WRFIndex.class,
            WRFVariable.class,
            jobConf);

But the Scala equivalent:

val pairVarOriRDD = sc.newAPIHadoopFile(path,
        classOf[NetCDFFileInputFormat],
        classOf[WRFIndex],
        classOf[WRFVariable],
        jobConf)

fails with this compiler error:
inferred type arguments [no.uni.computing.io.WRFIndex,no.uni.computing.io.WRFVariable,no.uni.computing.io.input.NetCDFFileInputFormat] do not conform to method newAPIHadoopFile's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]

What is the correct syntax for the Scala API? Would supplying the type parameters explicitly, as sketched below, be the right approach?
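
A sketch of what I mean (this assumes NetCDFFileInputFormat really extends the new-API org.apache.hadoop.mapreduce.InputFormat[WRFIndex, WRFVariable], and that jobConf is a Hadoop Configuration; otherwise the bound F <: InputFormat[K,V] can never be satisfied):

import no.uni.computing.io.{WRFIndex, WRFVariable}
import no.uni.computing.io.input.NetCDFFileInputFormat

// Type parameters given explicitly, so the compiler does not have to
// infer K, V and F from the classOf arguments alone.
val pairVarOriRDD = sc.newAPIHadoopFile[WRFIndex, WRFVariable, NetCDFFileInputFormat](
        path,
        classOf[NetCDFFileInputFormat],
        classOf[WRFIndex],
        classOf[WRFVariable],
        jobConf)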

Best,
Patcharee

