Just reached this thread. +1 on creating a simple reproducer app, and I
suggest creating a JIRA with the full driver and executor logs attached.
Ping me on the JIRA and I'll pick this up right away...

Thanks!

G


On Wed, Jan 13, 2021 at 8:54 AM Jungtaek Lim <kabhwan.opensou...@gmail.com>
wrote:

> Would you mind if I ask for a simple reproducer? It would be nice if you
> could create a repository on GitHub and push the code, including the build
> script.
>
> Thanks in advance!
>
> On Wed, Jan 13, 2021 at 3:46 PM Eric Beabes <mailinglist...@gmail.com>
> wrote:
>
>> I tried both. First tried 3.0.0. That didn't work so I tried 3.1.0.
>>
>> On Wed, Jan 13, 2021 at 11:35 AM Jungtaek Lim <
>> kabhwan.opensou...@gmail.com> wrote:
>>
>>> Which exact Spark version did you use? Did you make sure the version of
>>> Spark and the version of the spark-sql-kafka artifact are the same? (I ask
>>> because you said you used Spark 3.0 but the spark-sql-kafka dependency
>>> points to 3.1.0.)
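>>>
>>> If they differ, aligning them is usually the first fix to try. Here is a
>>> minimal sketch, assuming sbt and Spark 3.0.1 on the cluster (with Maven,
>>> the same idea is to point the spark-sql-kafka-0-10 <version> at the exact
>>> Spark version you run):
>>>
>>> // build.sbt (sketch): keep both artifacts on the exact Spark version
>>> // of the cluster. "3.0.1" is an assumption; substitute whatever
>>> // `spark-submit --version` reports.
>>> val sparkVersion = "3.0.1"
>>>
>>> libraryDependencies ++= Seq(
>>>   "org.apache.spark" %% "spark-sql"            % sparkVersion % "provided",
>>>   "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
>>> )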
>>>
>>> On Tue, Jan 12, 2021 at 11:04 PM Eric Beabes <mailinglist...@gmail.com>
>>> wrote:
>>>
>>>> org.apache.spark.sql.streaming.StreamingQueryException: Data source v2 streaming sinks does not support Update mode.
>>>> === Streaming Query ===
>>>> Identifier: [id = 1f342043-29de-4381-bc48-1c6eef99232e, runId = 62410f05-db59-4026-83fc-439a79b01c29]
>>>> Current Committed Offsets: {}
>>>> Current Available Offsets: {}
>>>> Current State: INITIALIZING
>>>> Thread State: RUNNABLE
>>>>     at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:353)
>>>>     at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
>>>> Caused by: java.lang.IllegalArgumentException: Data source v2 streaming sinks does not support Update mode.
>>>>     at org.apache.spark.sql.execution.streaming.StreamExecution.createStreamingWrite(StreamExecution.scala:635)
>>>>     at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:130)
>>>>     at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:61)
>>>>     at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:320)
>>>>     ... 1 more
>>>>
>>>>
>>>> *Please see the attached image for more information.*
>>>>
>>>>
>>>> On Tue, Jan 12, 2021 at 6:01 PM Jacek Laskowski <ja...@japila.pl>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Can you post the whole message? I'm trying to find what might be
>>>>> causing it. A small reproducible example would be of help too. Thank you.
>>>>>
>>>>> Pozdrawiam,
>>>>> Jacek Laskowski
>>>>> ----
>>>>> https://about.me/JacekLaskowski
>>>>> "The Internals Of" Online Books <https://books.japila.pl/>
>>>>> Follow me on https://twitter.com/jaceklaskowski
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jan 12, 2021 at 6:35 AM Eric Beabes <mailinglist...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I'm trying to port my Spark 2.4-based Structured Streaming application
>>>>>> to Spark 3.0. I compiled it using the dependency given below:
>>>>>>
>>>>>> <dependency>
>>>>>>     <groupId>org.apache.spark</groupId>
>>>>>>     <artifactId>spark-sql-kafka-0-10_${scala.binary.version}</artifactId>
>>>>>>     <version>3.1.0</version>
>>>>>> </dependency>
>>>>>>
>>>>>>
>>>>>> Every time I run it under Spark 3.0, I get this message: *Data
>>>>>> source v2 streaming sinks does not support Update mode*
>>>>>>
>>>>>> I am using '*mapGroupsWithState*', and as per this link (
>>>>>> https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-modes),
>>>>>> the only supported output mode for it is "*Update*".
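>>>>>>
>>>>>> The stateful part of the job looks roughly like this (a trimmed sketch
>>>>>> with placeholder types and logic, not the real code):
>>>>>>
>>>>>> import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}
>>>>>>
>>>>>> // Placeholder schemas; the real job's types differ.
>>>>>> case class Event(key: String, n: Long)
>>>>>> case class Agg(key: String, total: Long)
>>>>>>
>>>>>> // Sum `n` per key, carrying the running total in state.
>>>>>> def update(key: String, rows: Iterator[Event],
>>>>>>            state: GroupState[Agg]): Agg = {
>>>>>>   val prev = state.getOption.map(_.total).getOrElse(0L)
>>>>>>   val next = Agg(key, prev + rows.map(_.n).sum)
>>>>>>   state.update(next)
>>>>>>   next
>>>>>> }
>>>>>>
>>>>>> // events: Dataset[Event] read from Kafka upstream;
>>>>>> // assumes `import spark.implicits._` is in scope for the encoders.
>>>>>> val updated = events
>>>>>>   .groupByKey(_.key)
>>>>>>   .mapGroupsWithState(GroupStateTimeout.NoTimeout)(update)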
>>>>>>
>>>>>> My Sink is a Kafka topic so I am using this:
>>>>>>
>>>>>> .writeStream
>>>>>> .format("kafka")
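>>>>>>
>>>>>> Continuing the sketch above, the write side in full is roughly this
>>>>>> (brokers, topic, and checkpoint path are placeholders):
>>>>>>
>>>>>> updated
>>>>>>   // The Kafka sink expects string/binary `key` and `value` columns.
>>>>>>   .selectExpr("key", "CAST(total AS STRING) AS value")
>>>>>>   .writeStream
>>>>>>   .format("kafka")
>>>>>>   .option("kafka.bootstrap.servers", "broker:9092") // placeholder
>>>>>>   .option("topic", "out-topic")                     // placeholder
>>>>>>   .option("checkpointLocation", "/tmp/ckpt")        // placeholder
>>>>>>   .outputMode("update") // the only mode mapGroupsWithState supports
>>>>>>   .start()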
>>>>>>
>>>>>>
>>>>>> What am I missing?
>>>>>>
>>>>>>
>>>>>>
>>>
>>>
