Hi Team,

I'm not able to print the values from a Spark SQL JavaSchemaRDD. Please find
my code below:

    JavaSQLContext sqlCtx = new JavaSQLContext(sc);

    NewHadoopRDD<ImmutableBytesWritable, Result> rdd =
            new NewHadoopRDD<ImmutableBytesWritable, Result>(
                    JavaSparkContext.toSparkContext(sc),
                    TableInputFormat.class, ImmutableBytesWritable.class,
                    Result.class, conf);

    JavaRDD<Tuple2<ImmutableBytesWritable, Result>> jrdd = rdd.toJavaRDD();

    ForEachFunction f = new ForEachFunction();

    JavaRDD<ANAInventory> retrdd = jrdd.map(f);

    JavaSchemaRDD schemaPeople = sqlCtx.applySchema(retrdd, Test.class);
    schemaPeople.registerAsTable("retrdd");

    JavaSchemaRDD teenagers = sqlCtx.sql("SELECT * FROM retrdd");

When I add the code below, it fails to compile. Could you please help me
resolve this issue?


    List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
        public String call(Row row) {
            return null;
        }
    }).collect();

    for (String name : teenagerNames) {
        System.out.println(name);
    }


Compilation error:

The method map(Function<Row,R>) in the type JavaSchemaRDD is not applicable
for the arguments (new Function<Row,String>(){})
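My own guess is an import clash: both HBase (org.apache.hadoop.hbase.client.Row) and Spark SQL (org.apache.spark.sql.api.java.Row) define a class named Row, and since my file already imports HBase classes, the Row in my anonymous Function may be resolving to the wrong one. Below is a small self-contained sketch (simplified stand-in classes, not the real Spark/HBase types) showing how two types sharing the simple name Row produce exactly this shape of "not applicable for the arguments" error:

```java
// SqlRow stands in for org.apache.spark.sql.api.java.Row (hypothetical,
// simplified); HbaseRow stands in for org.apache.hadoop.hbase.client.Row.
class SqlRow {
    String getString(int i) { return "value" + i; }
}

class HbaseRow { }

// Simplified version of org.apache.spark.api.java.function.Function.
interface Function<T, R> {
    R call(T t) throws Exception;
}

public class RowClashDemo {
    // Typed against SqlRow, just as JavaSchemaRDD.map is typed against the
    // Spark SQL Row.
    static <R> R applyToRow(Function<SqlRow, R> f, SqlRow row) throws Exception {
        return f.call(row);
    }

    public static void main(String[] args) throws Exception {
        // Compiles: the anonymous class is parameterized on SqlRow.
        String s = applyToRow(new Function<SqlRow, String>() {
            public String call(SqlRow row) {
                return row.getString(0);
            }
        }, new SqlRow());
        System.out.println(s);

        // If this were written as new Function<HbaseRow, String>() instead
        // (e.g. because an HBase import shadowed the SQL Row), javac would
        // reject it with: "The method applyToRow(Function<SqlRow,R>) ... is
        // not applicable for the arguments (new Function<HbaseRow,String>(){})".
    }
}
```

If that is the cause, fully qualifying the Row type in the anonymous class (new Function<org.apache.spark.sql.api.java.Row, String>() { ... }) should make it compile.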


Thank you for your help

Regards,
Rajesh
