Raghu and I spent some time on a hangout looking at this issue. It looks like
there is a problem reading unbounded collections from KafkaIO
on the InProcessPipelineRunner.

We changed the code to create a bounded collection with withMaxNumRecords and
used the DirectPipelineRunner. That worked and processed the messages.
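
For reference, here's roughly what the bounded version looks like (a minimal
sketch: the broker address, coders, and record limit are placeholders, the
topic name is taken from the logs below, and exact KafkaIO method names may
differ between Beam SDK versions):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.StringUtf8Coder;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import com.google.common.collect.ImmutableList;

    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    // runner chosen on the command line, e.g. --runner=DirectPipelineRunner
    Pipeline p = Pipeline.create(options);

    // withMaxNumRecords() turns the unbounded Kafka source into a bounded
    // PCollection; this is the case that worked on both runners.
    p.apply(KafkaIO.read()
        .withBootstrapServers("localhost:9092")    // placeholder broker
        .withTopics(ImmutableList.of("eventsim"))  // topic from the logs below
        .withKeyCoder(StringUtf8Coder.of())
        .withValueCoder(StringUtf8Coder.of())
        .withMaxNumRecords(1000)                   // arbitrary bound
        .withoutMetadata());

    p.run();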

Next, we ran the InProcessPipelineRunner with the same bounded collection.
That also worked and processed the messages.
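
The only change for that run was the runner, something like the following
(the package name here is an assumption based on the pre-release SDK):

    import org.apache.beam.runners.direct.InProcessPipelineRunner;

    // Same bounded read as above; only the runner changes.
    options.setRunner(InProcessPipelineRunner.class);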

Finally, we changed it back to an unbounded collection, still using the
InProcessPipelineRunner. That didn't work and kept producing error messages
similar to the ones I've shown earlier in the thread.
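
The failing configuration is the same read with the bound removed (again,
just a sketch with placeholder values):

    // No withMaxNumRecords(), so the source stays a genuinely unbounded
    // PCollection. This is the combination that fails on the
    // InProcessPipelineRunner with the errors quoted below.
    p.apply(KafkaIO.read()
        .withBootstrapServers("localhost:9092")
        .withTopics(ImmutableList.of("eventsim"))
        .withKeyCoder(StringUtf8Coder.of())
        .withValueCoder(StringUtf8Coder.of())
        .withoutMetadata());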

Thanks,

Jesse

On Wed, Jun 8, 2016 at 7:12 PM Jesse Anderson <[email protected]> wrote:

> I tried a 0.9.0 broker and got the same error. Not sure if it makes a
> difference, but I'm using Confluent Platform 2.0 and 3.0 for this testing.
>
> On Wed, Jun 8, 2016 at 5:20 PM Jesse Anderson <[email protected]>
> wrote:
>
>> Still open to screensharing and resolving over a hangout.
>>
>> On Wed, Jun 8, 2016 at 5:19 PM Raghu Angadi <[email protected]> wrote:
>>
>>> On Wed, Jun 8, 2016 at 1:56 PM, Jesse Anderson <[email protected]>
>>> wrote:
>>>
>>>> [pool-2-thread-1] INFO org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>>> resuming eventsim-0 at default offset
>>>>
>>> [...]
>>>>
>>> [pool-2-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser -
>>>> Kafka commitId : 23c69d62a0cabf06
>>>> [pool-2-thread-1] WARN org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>>> getWatermark() : no records have been read yet.
>>>> [pool-77-thread-1] INFO org.apache.beam.sdk.io.kafka.KafkaIO -
>>>> Reader-0: Returning from consumer pool loop
>>>> [pool-78-thread-1] WARN org.apache.beam.sdk.io.kafka.KafkaIO -
>>>> Reader-0: exception while fetching latest offsets. ignored.
>>>>
>>>
>>> This reader was closed before the exception. The exception comes from an
>>> action during close and can be ignored. The main question is why the
>>> reader was closed...
>>>
>>
