Why is writeStream needed to consume the data?

When I tried it I got this exception:

INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
> org.apache.spark.sql.AnalysisException: Complete output mode not supported
> when there are no streaming aggregations on streaming DataFrames/Datasets;
> at
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:173)
> at
> org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.checkForStreaming(UnsupportedOperationChecker.scala:65)
> at
> org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:236)
> at
> org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:287)
> at .<init>(<console>:59)
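
For what it's worth, the exception is telling you that "complete" output mode only works when the streaming query contains an aggregation. A minimal sketch of the difference (assuming a Spark 2.x session; "mysource" here is a placeholder for a custom source provider, not a real format):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("demo").getOrCreate()

    val lines = spark.readStream
      .format("mysource")   // placeholder: your custom source provider
      .load()

    // Without an aggregation, outputMode("complete") throws the
    // AnalysisException above; a plain projection only supports "append".
    val counts = lines.groupBy("value").count()

    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()

If you don't aggregate, use .outputMode("append") instead.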




2016-08-01 18:44 GMT+02:00 Amit Sela <amitsel...@gmail.com>:

> I think you're missing:
>
> val query = wordCounts.writeStream
>
>   .outputMode("complete")
>   .format("console")
>   .start()
>
> Did it help?
>
> On Mon, Aug 1, 2016 at 2:44 PM Jacek Laskowski <ja...@japila.pl> wrote:
>
>> On Mon, Aug 1, 2016 at 11:01 AM, Ayoub Benali
>> <benali.ayoub.i...@gmail.com> wrote:
>>
>> > the problem now is that when I consume the dataframe for example with
>> count
>> > I get the stack trace below.
>>
>> Mind sharing the entire pipeline?
>>
>> > I followed the implementation of TextSocketSourceProvider to implement
>> my
>> > data source, and the Text Socket source is the one used in the official
>> > documentation here.
>>
>> Right. Completely forgot about the provider. Thanks for reminding me
>> about it!
>>
>> Pozdrawiam,
>> Jacek Laskowski
>> ----
>> https://medium.com/@jaceklaskowski/
>> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
>> Follow me at https://twitter.com/jaceklaskowski
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>