Okay, I swear I thought I saw it work once, but now even the simple test
case of GenerateFlowFile feeding PublishKafka_0_10 isn't working.

I'm wondering if the producer config that NiFi is generating is not
suitable for the plain, no-auth configuration my Kafka is running with.
Here is what NiFi dumps in the logs when the processor starts up:

2016-10-30 14:14:37,129 INFO [Timer-Driven Process Thread-6]
o.a.k.clients.producer.ProducerConfig ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 2000
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-18
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 0
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
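(Editorial sketch, not from the thread: one way to rule NiFi out is to mirror the dumped values in a standalone producer and see whether it can publish. The function below only builds a config dict from the log above; the keyword names follow kafka-python conventions, which is an assumption, and actually passing it to `KafkaProducer(**cfg)` requires kafka-python and a running broker.)

```python
def nifi_like_producer_config(bootstrap="localhost:9092"):
    """Mirror the key settings from the ProducerConfig dump above.

    Keyword names follow kafka-python conventions (an assumption);
    the values are taken directly from the NiFi log.
    """
    return {
        "bootstrap_servers": bootstrap,
        "security_protocol": "PLAINTEXT",  # no auth, matching the log
        "acks": 0,                         # fire-and-forget, as logged
        "retries": 0,
        "max_block_ms": 2000,              # NiFi's short metadata/send timeout
        "request_timeout_ms": 30000,
        "linger_ms": 0,
        "batch_size": 16384,
    }

cfg = nifi_like_producer_config()
# e.g. KafkaProducer(**cfg).send("test", b"hello")
# (requires kafka-python installed and a broker listening on localhost:9092)
```

One thing that stands out in the dump: max.block.ms is only 2000 ms, so if the initial metadata fetch from the broker is slow, sends will fail quickly.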

On Sun, Oct 30, 2016 at 2:06 PM Daniel Einspanjer <
[email protected]> wrote:

> Yes I am connecting to a Kafka 0.10.0.1 broker.  The default configuration
> for the PublishKafka_0_10 step is to connect to localhost:9092 with
> PLAINTEXT which is how this local cluster is currently set up.
>
> On Sun, Oct 30, 2016 at 1:53 PM Andrew Grande <[email protected]> wrote:
>
> 2 things to check:
>
> 1. Are you connecting to Kafka 0.10 broker?
> 2. Which port are you using? Recent Kafka clients must point to the Kafka
> broker port directly. Older clients connected through ZooKeeper and used a
> different host/port.
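(Editorial sketch for point 2, not from the thread: a plain TCP check tells you whether anything is listening on the broker port at all. The function name and defaults here are my own, not NiFi's or Kafka's.)

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Kafka 0.10 clients talk to the broker directly (default port 9092),
# not to ZooKeeper (default port 2181) the way very old clients did.
# print(port_is_open("localhost", 9092))
```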
>
> Andrew
>
> On Sun, Oct 30, 2016, 1:11 PM Daniel Einspanjer <
> [email protected]> wrote:
>
> These db rows are fairly small. Five or six small text fields and five or
> six integer fields plus a timestamp.  Looking at the queue in NiFi, about
> 440 bytes each.
>
>
> If I exported the template correctly, the flow should show that I am
> trying to generate a set of query-statement flow files, each covering 10k
> records, and then run ExecuteSQL on them.  Next I use SplitAvro and
> AvroToJSON to get a set of flow files, each containing one record in JSON
> format.  Those are what I'm feeding into Kafka.
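(Editorial sketch: the SplitAvro + AvroToJSON stage boils down to turning one multi-record file into many one-record JSON payloads. A rough pure-Python stand-in for that step, with invented record fields for illustration and no actual Avro handling:)

```python
import json

def records_to_json_messages(records):
    """One JSON document per record -- roughly the shape SplitAvro +
    AvroToJSON produce before the records reach PublishKafka."""
    return [json.dumps(r, sort_keys=True).encode("utf-8") for r in records]

rows = [
    {"id": 1, "name": "alpha", "ts": "2016-10-30T14:00:00"},
    {"id": 2, "name": "beta",  "ts": "2016-10-30T14:00:01"},
]
messages = records_to_json_messages(rows)
# each element is a small standalone payload (~440 bytes each in Daniel's case)
```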
>
> -Daniel
>
> On Oct 30, 2016 11:00 AM, "Joe Witt" <[email protected]> wrote:
>
> Daniel
>
> How large is each object you are trying to write to Kafka?
>
> Since it was working with a different source of data but is now
> problematic, that is the direction I am looking in terms of changes.
> The output of the db stuff could need demarcation, for example.
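(Editorial sketch on the demarcation point: PublishKafka can split one incoming flow file into multiple Kafka messages on a configured Message Demarcator. A toy version of that splitting logic, my own code rather than NiFi's:)

```python
def split_on_demarcator(payload, demarcator):
    """Split a flow-file payload (bytes) into individual Kafka messages.

    Without a demarcator the whole payload becomes a single message,
    which matters if the db output arrives as one large blob.
    """
    if not demarcator:
        return [payload]
    return [m for m in payload.split(demarcator) if m]

msgs = split_on_demarcator(b'{"id":1}\n{"id":2}\n', b"\n")
# newline-demarcated JSON becomes two separate messages
```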
>
>
>
