Here are two ways of doing that:

Without the filter function:

// The key/value types must match what the InputFormat produces
// (Writable subclasses such as Text for a SequenceFile of text pairs).
JavaPairInputDStream<Text, Text> foo =
      ssc.<Text, Text, SequenceFileInputFormat<Text, Text>>fileStream("/tmp/foo");



With the filter function:

JavaPairInputDStream<LongWritable, Text> foo = ssc.fileStream("/tmp/foo",
      LongWritable.class,
      Text.class,
      TextInputFormat.class,
      new Function<Path, Boolean>() {
        @Override
        public Boolean call(Path v1) throws Exception {
          // Accept every file; put your path-based filtering logic here.
          return Boolean.TRUE;
        }
      },
      true); // newFilesOnly: process only files created after the stream starts
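The filter above accepts everything. In practice you usually want to skip Hadoop's hidden/temporary files (names starting with "_" or "."), which is what Hadoop's own input formats do by default. A minimal sketch of that predicate in plain Java, so the logic is easy to test in isolation (the class name and the string-based path handling are illustrative, not part of the Spark API):

```java
import java.util.function.Predicate;

public class PathFilterExample {
    // Hadoop's convention: files whose names start with "_" or "." are
    // hidden (e.g. _SUCCESS markers, .tmp files); a fileStream filter
    // typically rejects them. Inside a real Spark filter you would call
    // path.getName() on the org.apache.hadoop.fs.Path instead.
    static final Predicate<String> VISIBLE = path -> {
        String name = path.substring(path.lastIndexOf('/') + 1);
        return !name.startsWith("_") && !name.startsWith(".");
    };

    public static void main(String[] args) {
        System.out.println(VISIBLE.test("/tmp/foo/part-00000")); // true
        System.out.println(VISIBLE.test("/tmp/foo/_SUCCESS"));   // false
        System.out.println(VISIBLE.test("/tmp/foo/.hidden"));    // false
    }
}
```

The same check would go inside the `call(Path v1)` method of the anonymous Function above, returning the predicate's result instead of `Boolean.TRUE`.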



Thanks
Best Regards

On Mon, Jul 20, 2015 at 11:10 PM, unk1102 <umesh.ka...@gmail.com> wrote:

> Hi, I am trying to find the correct way to use the Spark Streaming API
> streamingContext.fileStream(String,Class<K>,Class<V>,Class<F>)
>
> I tried to find an example but could not find one anywhere in the Spark
> documentation. I have to stream files from HDFS which are in a custom
> Hadoop format.
>
>   JavaPairDStream<Void,MyRecordWritable> input = streamingContext.
> fileStream("/path/to/hdfs/stream/dir/",
>     Void.class,
>     MyRecordWritable.class,
>     MyInputFormat.class,
>     ??);
>
> How do I implement the fourth argument, the Function type marked as ??
> above? Please guide me; I am new to Spark Streaming. Thanks in advance.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/What-is-the-correct-syntax-of-using-Spark-streamingContext-fileStream-tp23916.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
