Yes, I am connecting to a Kafka 0.10.0.1 broker.  The default configuration
for the PublishKafka_0_10 processor is to connect to localhost:9092 with
PLAINTEXT, which is how this local cluster is currently set up.
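For reference, a minimal sketch of the connection settings being discussed. The kafka-python-style config dict here is purely illustrative (the PublishKafka_0_10 processor configures its own embedded client); the point is the broker-port-vs-ZooKeeper distinction raised below:

```python
# Illustrative sketch only: PublishKafka_0_10 manages its own Kafka client,
# but these are the two settings the thread is discussing.

# Kafka 0.9+ clients connect directly to the broker port (9092 by default),
# not to ZooKeeper (2181) as pre-0.9 clients did.
producer_config = {
    "bootstrap_servers": "localhost:9092",  # broker port, NOT ZooKeeper
    "security_protocol": "PLAINTEXT",       # matches this local cluster
}

# An older, ZooKeeper-based client would instead have pointed at something
# like "localhost:2181"; using that address with a modern client fails.
```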

On Sun, Oct 30, 2016 at 1:53 PM Andrew Grande <[email protected]> wrote:

> 2 things to check:
>
> 1. Are you connecting to Kafka 0.10 broker?
> 2. Which port are you using? Recent Kafka clients must point to the Kafka
> broker port directly. Older clients connected through ZooKeeper and used a
> different host/port.
>
> Andrew
>
> On Sun, Oct 30, 2016, 1:11 PM Daniel Einspanjer <
> [email protected]> wrote:
>
> These db rows are fairly small. Five or six small text fields and five or
> six integer fields plus a timestamp.  Looking at the queue in NiFi, about
> 440 bytes each.
>
>
> If I exported the template correctly, the flow should show that I am
> trying to generate a set of query-statement flow files of 10k records
> each, and then run ExecuteSQL on them.  Next I use SplitAvro and
> AvroToJSON to get a set of flow files, each holding one record in JSON
> format.  Those are what I'm feeding into Kafka.
>
> -Daniel
>
> On Oct 30, 2016 11:00 AM, "Joe Witt" <[email protected]> wrote:
>
> Daniel
>
> How large is each object you are trying to write to Kafka?
>
> Since it was working with a different source of data but is now
> problematic, that is the direction I am looking in for changes.  The
> output of the db processors could need demarcation, for example.
>
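Regarding the demarcation point raised above: a minimal sketch of what a message demarcator does when publishing to Kafka. The function name is hypothetical, written only to illustrate the behavior; the processor's actual implementation differs.

```python
# Hypothetical illustration of message demarcation: a single flow file's
# content is split on a demarcator byte sequence, and each piece is
# published as its own Kafka message rather than one large blob.

def split_on_demarcator(content: bytes, demarcator: bytes) -> list:
    """Split flow-file content into individual Kafka messages."""
    if not demarcator:
        return [content]  # no demarcator: the whole file is one message
    return [m for m in content.split(demarcator) if m]

flow_file = b'{"id": 1}\n{"id": 2}\n{"id": 3}\n'
messages = split_on_demarcator(flow_file, b"\n")
# Three separate JSON messages instead of one three-record payload.
```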
