How are you submitting your job? Make sure HADOOP_CONF_DIR points to your
Hadoop configuration directory (the one containing core-site.xml and
hdfs-site.xml). If that is set correctly, then also make sure you are giving
the full HDFS URL, like:

dStream.saveAsTextFiles("hdfs://sigmoid-cluster:9000/twitter/","status-")
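
For the Java API used in the question below, something along these lines
should work. This is only a minimal sketch: the namenode host/port and the
socket source are placeholders so it compiles on its own; substitute the
fs.defaultFS value from your core-site.xml and your real Twitter stream.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class SaveToHdfsExample {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("SaveToHdfsExample");
            JavaStreamingContext ssc =
                new JavaStreamingContext(conf, Durations.seconds(10));

            // Stand-in source so the sketch compiles without the Twitter
            // dependencies; replace with the JavaDStream<Status> returned by
            // your TwitterFilterQueryUtils.createStream(...) call.
            JavaDStream<String> tweets = ssc.socketTextStream("localhost", 9999);

            // Fully qualified HDFS prefix. "namenode-host:8020" is an
            // assumption; use the fs.defaultFS value from your core-site.xml.
            String prefix = "hdfs://namenode-host:8020/twitter/MyPrefix";
            String suffix = "json";

            // Same call as in the question, but with an hdfs:// prefix so the
            // files land on HDFS instead of the driver's working directory.
            tweets.dstream().saveAsTextFiles(prefix, suffix);

            ssc.start();
            ssc.awaitTermination();
        }
    }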


Thanks
Best Regards

On Sun, Oct 25, 2015 at 1:57 AM, Andy Davidson <
a...@santacruzintegration.com> wrote:

> Hi
>
> I am using Spark Streaming in Java. One of the problems I have is that I
> need to save Twitter statuses in JSON format as I receive them.
>
> When I run the following code on my local machine it works, however all
> the output files are created in the current directory of the driver
> program. Clearly not a good cluster solution. Any idea how I can configure
> Spark so that it will write the output to HDFS?
>
>         JavaDStream<Status> tweets = TwitterFilterQueryUtils.createStream(
> ssc, twitterAuth);
>
>         DStream<Status> dStream = tweets.dstream();
>
>         String prefix = "MyPrefix";
>
>         String suffix = "json";
>
> //      this works; when I test locally the files are created in the
> //      current directory of the driver program
>
>         dStream.saveAsTextFiles(prefix, suffix);
>
>
> Is there something I need to set on my SparkConf object?
>
>
> Kind regards
>
>
> andy
>
