Hello,

Where in the Spark APIs can I get access to the Hadoop Context instance? I
am trying to implement the Spark equivalent of this:

    public void reduce(Text key, Iterable<DoubleWritable> values, Context context)
        throws IOException, InterruptedException {
      // record is a VerticaRecord field, initialized elsewhere (e.g. in setup())
      if (record == null) {
        throw new IOException("No output record found");
      }
      record.set("a", 125);
      record.set("b", true);
      record.set("c", 'c');
      record.set("d", new java.sql.Date(Calendar.getInstance().getTimeInMillis()));
      record.set("f", 234.526);
      record.set("t", new java.sql.Timestamp(Calendar.getInstance().getTimeInMillis()));
      record.set("v", "foobar string");
      record.set("z", new byte[10]);
      context.write(new Text("mrtarget"), record);
    }

where record is a VerticaRecord.
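
For reference, below is a minimal, untested sketch of one possible Spark
equivalent. There is no Hadoop Context in Spark; instead, the effect of
context.write() can be obtained by building the (Text, VerticaRecord) pairs
as a JavaPairRDD and handing the whole RDD to the connector's
VerticaOutputFormat via saveAsNewAPIHadoopDataset(). The no-arg VerticaRecord
constructor and the com.vertica.hadoop package names are assumptions here; in
the real connector the record layout comes from the job configuration, so
that part would need to be adapted to however the reducer obtained its record.

    import java.util.Calendar;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.spark.api.java.JavaPairRDD;

    import scala.Tuple2;

    // Assumed imports from the Vertica Hadoop connector; adjust to your version.
    import com.vertica.hadoop.VerticaOutputFormat;
    import com.vertica.hadoop.VerticaRecord;

    public class SparkVerticaWriteSketch {

      // `grouped` mirrors the reducer's input: one Iterable of values per key,
      // e.g. the result of a groupByKey() on a JavaPairRDD<Text, DoubleWritable>.
      public static void write(JavaPairRDD<Text, Iterable<DoubleWritable>> grouped,
                               Configuration conf) throws Exception {

        JavaPairRDD<Text, VerticaRecord> records = grouped.mapToPair(kv -> {
          VerticaRecord record = new VerticaRecord(); // hypothetical: see note above
          record.set("a", 125);
          record.set("b", true);
          record.set("c", 'c');
          record.set("d", new java.sql.Date(Calendar.getInstance().getTimeInMillis()));
          record.set("f", 234.526);
          record.set("t", new java.sql.Timestamp(Calendar.getInstance().getTimeInMillis()));
          record.set("v", "foobar string");
          record.set("z", new byte[10]);
          return new Tuple2<>(new Text("mrtarget"), record);
        });

        // Configure the same OutputFormat a MapReduce job would use; Spark then
        // drives it much as the reducer's context.write() would have.
        Job job = Job.getInstance(conf);
        job.setOutputFormatClass(VerticaOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(VerticaRecord.class);
        records.saveAsNewAPIHadoopDataset(job.getConfiguration());
      }
    }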
