On Thursday 23 April 2015 12:22 PM, Akhil Das wrote:
Here's a complete Scala example:
https://github.com/bbux-proteus/spark-accumulo-examples/blob/1dace96a115f29c44325903195c8135edf828c86/src/main/scala/org/bbux/spark/AccumuloMetadataCount.scala
Thanks
Best Regards
On Thu, Apr 23, 2015 at 12:19 PM, Akhil Das
<ak...@sigmoidanalytics.com> wrote:
Change your import from mapred to mapreduce, like:
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
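A minimal sketch of the corrected call, assuming sc is your JavaSparkContext and accumuloJob is the Job you already configured; only the import changes:

    // Use the new Hadoop API (mapreduce) variant of AccumuloInputFormat;
    // it satisfies the <F extends InputFormat<K,V>> bound of newAPIHadoopRDD.
    import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.spark.api.java.JavaPairRDD;

    JavaPairRDD<Key, Value> accumuloRDD =
        sc.newAPIHadoopRDD(accumuloJob.getConfiguration(),
            AccumuloInputFormat.class, Key.class, Value.class);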
Thanks
Best Regards
On Wed, Apr 22, 2015 at 2:42 PM, madhvi <madhvi.gu...@orkash.com> wrote:
Hi,
I am creating a Spark RDD from Accumulo like this:
JavaPairRDD<Key, Value> accumuloRDD =
    sc.newAPIHadoopRDD(accumuloJob.getConfiguration(),
        AccumuloInputFormat.class, Key.class, Value.class);
But it does not compile, and I am getting the following error:
Bound mismatch: The generic method newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>) of type JavaSparkContext is not applicable for the arguments (Configuration, Class<AccumuloInputFormat>, Class<Key>, Class<Value>). The inferred type AccumuloInputFormat is not a valid substitute for the bounded parameter <F extends InputFormat<K,V>>
I am using the following import statements:
import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
I am not able to figure out what the problem is.
Thanks
Madhvi
Hi,
Thanks. I got that solved :)
madhvi