That kinda dodges the problem by ignoring generic types. But it may be
simpler than the 'real' solution, which is a bit ugly.

(But first, to double-check: are you importing the correct
TextOutputFormat? There are two versions:
org.apache.hadoop.mapred.TextOutputFormat for the old API and
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat for the new
API. saveAsHadoopFiles expects the old one; saveAsNewAPIHadoopFiles
expects the new one.)

Here's how I've forcibly cast around it in similar code:

@SuppressWarnings("unchecked")
Class<? extends OutputFormat<?,?>> outputFormatClass =
    (Class<? extends OutputFormat<?,?>>) (Class<?>) TextOutputFormat.class;

and then pass that as the final argument.
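To show the idiom in isolation, here's a minimal, self-contained sketch of that double cast. The OutputFormat and TextOutputFormat below are hypothetical stand-ins declared in the example itself (not the Hadoop classes), so it compiles and runs without Hadoop on the classpath; with the real classes the cast works the same way.

```java
// Sketch of the double-cast idiom. The direct cast from
// Class<TextOutputFormat> to Class<? extends OutputFormat<?,?>> is
// rejected by the compiler, but widening to the wildcard Class<?>
// first makes it legal (with an unchecked warning we suppress).
public class CastSketch {
    // Stand-ins for the Hadoop types, so the example is self-contained.
    interface OutputFormat<K, V> {}
    static class TextOutputFormat<K, V> implements OutputFormat<K, V> {}

    @SuppressWarnings("unchecked")
    static Class<? extends OutputFormat<?, ?>> outputFormatClass() {
        // Two-step cast: raw class literal -> Class<?> -> bounded wildcard.
        return (Class<? extends OutputFormat<?, ?>>) (Class<?>) TextOutputFormat.class;
    }

    public static void main(String[] args) {
        Class<? extends OutputFormat<?, ?>> c = outputFormatClass();
        System.out.println(c.getSimpleName()); // prints "TextOutputFormat"
    }
}
```

The intermediate `(Class<?>)` cast erases the specific type argument, after which the compiler can only warn (not error) about the narrowing to the bounded wildcard, which is why the `@SuppressWarnings("unchecked")` is needed.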

On Wed, Feb 11, 2015 at 6:35 AM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
> Did you try :
>
> temp.saveAsHadoopFiles("DailyCSV",".txt", String.class, String.class,(Class)
> TextOutputFormat.class);
>
> Thanks
> Best Regards
>
> On Wed, Feb 11, 2015 at 9:40 AM, Bahubali Jain <bahub...@gmail.com> wrote:
>>
>> Hi,
>> I am facing issues while writing data from a streaming rdd to hdfs..
>>
>> JavaPairDstream<String,String> temp;
>> ...
>> ...
>> temp.saveAsHadoopFiles("DailyCSV",".txt", String.class,
>> String.class,TextOutputFormat.class);
>>
>>
>> I see compilation issues as below...
>> The method saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<?
>> extends OutputFormat<?,?>>) in the type JavaPairDStream<String,String> is
>> not applicable for the arguments (String, String, Class<String>,
>> Class<String>, Class<TextOutputFormat>)
>>
>> I see same kind of problem even with saveAsNewAPIHadoopFiles API .
>>
>> Thanks,
>> Baahu
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
