The broker is 0.10.1.0-SNAPSHOT too.
My procedure is the same as for 0.9.0.1. I do a signed tgz build and then a
local Maven install. I unpack the tgz into a Docker image, just like
spotify/kafka, and run that as my broker with ports 9092 and 2181 exposed.
I link my producer code in the gist to
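For reference, a spotify/kafka-style run of an image like the one described above might look as follows (the image name is hypothetical and the ADVERTISED_* variables are the spotify/kafka conventions, not necessarily the exact command used here):

```shell
# Sketch: run a broker image built from the unpacked tgz, spotify/kafka style.
# "my-kafka-snapshot" is a hypothetical image name.
docker run -d \
  -p 2181:2181 -p 9092:9092 \
  --env ADVERTISED_HOST=localhost \
  --env ADVERTISED_PORT=9092 \
  my-kafka-snapshot
```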
Hi Greg,
What is the broker version? Have you checked that there are no errors
logged in the broker?
Ismael
On Fri, Apr 15, 2016 at 11:55 PM, Greg Zoller
wrote:
> ok... one more attempt for a working link...
> https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
> (If this doesn't
ok... one more attempt for a working link...
https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
(If this doesn't work please copy/paste link.)
From: Greg Zoller
To: "users@kafka.apache.org" ; Greg Zoller
Sent: Friday, April 15, 2016 5:53 PM
Subject: Re: Producer Bug? Moving from 0.9.0.1 to 0.10.1.0-SNAPSHOT (latest)
GIST: https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
From: Greg Zoller
To: "users@kafka.apache.org"
Sent: Friday, April 15, 2016 5:51 PM
Subject: Producer Bug? Moving from 0.9.0.1 to 0.10.1.0-SNAPSHOT (latest)
I have built 0.10.1.0-SNAPSHOT from scratch and used it with my KafkaProducer
code from 0.9.0.1. It compiled just fine, but when run it hangs (actually,
times out) on send(). I've created a gist below with a clip from the output in
comments at the end of the file.
Remember, this code worked flawle
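A minimal sketch of the kind of producer call that hangs, with a bounded block time and an error-logging callback to surface the cause (topic name and broker address are placeholders; this assumes the kafka-clients jar on the classpath and is not the code from the gist):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Fail fast instead of blocking for the default 60s while metadata is fetched.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace(); // e.g. TimeoutException on metadata fetch
                    } else {
                        System.out.println("acked offset " + metadata.offset());
                    }
                });
            producer.flush();
        }
    }
}
```

A metadata-fetch timeout here usually points at the broker's advertised host/port rather than the client code.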
Awesome, that is what I thought. The answer seems simple: speed up flush :-D,
which we should be able to do.
On Fri, Apr 15, 2016 at 10:15 AM Liquan Pei wrote:
> Hi Scott,
>
> It seems that your flush takes longer than
> consumer.session.timeout.ms.
> The consumers used in SinkTasks for a SinkCo
Hi Scott,
It seems that your flush takes longer than consumer.session.timeout.ms.
The consumers used in SinkTasks for a SinkConnector are in the same
consumer group. If your flush method takes longer than
consumer.session.timeout.ms, the consumer for a SinkTask may be kicked out
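One hedged way to give flush more headroom is in the Connect worker properties; the values below are illustrative, and the `consumer.`-prefixed override reaching the sink tasks' consumers should be verified against your Connect version:

```properties
# Connect worker properties (a sketch; values are illustrative)
offset.flush.interval.ms=60000
offset.flush.timeout.ms=30000
# passed through to the sink tasks' consumers
consumer.session.timeout.ms=60000
```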
List,
We are struggling with Kafka Connect settings. The processes start up,
handle a bunch of messages, and flush. Then, slowly, the group coordinator
removes them.
This has to be an interplay between Connect's flush interval and the call
to poll for each of these tasks. Here is my current setti
Yes that worked! I missed that that was an option. Thank you Liquan.
On Thu, Apr 14, 2016 at 5:51 PM, Liquan Pei wrote:
> Hi Alexander,
>
> The group management of the new Kafka consumer does not use ZooKeeper. Can
> you add --new-consumer to the command line arguments of
> /kafka-consumer-groups.
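Concretely, the check Liquan suggests looks something like the following (bootstrap server and group name are placeholders; `--new-consumer` is the flag from the 0.9/0.10-era tool):

```shell
# list groups known to the broker-based (new-consumer) coordinator
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 --list

# describe one group's offsets and lag
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 --describe --group my-group
```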
Hi Jeff,
We currently do not expose the TimestampExtractor, as it is always
applied automatically to all records polled from the consumer.
As for your case, do you have JSON-formatted messages along with non-JSON
messages on the same topic? In that case, I agree with you that you could
do the filt
1) There are three types of KTable-KTable joins; they follow the
same semantics as SQL joins:
KTable.join(KTable): when a new record is received from the outer table and
there is no matching record in the inner table, there is no output; and vice versa.
KTable.leftjoin(KTable): when there is no matching re
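The three variants can be sketched with the 0.10-era Streams API as follows (topic names and the joiner are illustrative; this assumes kafka-streams on the classpath):

```java
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class KTableJoins {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KTable<String, String> left  = builder.table("topic1");
        KTable<String, String> right = builder.table("topic2");

        // inner: output only when both sides have a record for the key
        KTable<String, String> inner = left.join(right, (l, r) -> l + "|" + r);
        // left: output for every left-side record; r is null when right has no match
        KTable<String, String> lj = left.leftJoin(right, (l, r) -> l + "|" + r);
        // outer: output for records from either side; the missing side is null
        KTable<String, String> oj = left.outerJoin(right, (l, r) -> l + "|" + r);
    }
}
```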
The only hook I see for specifying a TimestampExtractor is in the
Properties that you pass when creating a KafkaStreams instance. Is it
possible to modify the timestamp while processing a stream, or does the
timestamp need to be extracted immediately upon entry into the topology?
I have a case whe
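The Properties hook mentioned above looks roughly like this (the extractor class name is hypothetical; the config key is the one defined in StreamsConfig):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsTimestampConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Applied to every record as it enters the topology; there is no
        // per-processor hook to change the timestamp later in the stream.
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  "com.example.MyJsonTimestampExtractor"); // hypothetical class
    }
}
```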
Yes, sounds good, Guozhang, thanks. I'll create a jira today.
-josh
On Thu, Apr 14, 2016, 1:37 PM Guozhang Wang wrote:
> Hi Josh,
>
> As we chatted offline, would you like to summarize your proposed Transform
> APIs in a separate JIRA so we can continue our discussions there?
>
> Guozhang
>
> O
Hi Florian,
This may be helpful
https://github.com/omkreddy/kafka-examples/blob/master/consumer/src/main/java/kafka/examples/consumer/advanced/AdvancedConsumer.java
--Kamal
On Fri, Apr 15, 2016 at 2:57 AM, Jason Gustafson wrote:
> Hi Florian,
>
> It's actually OK if processing takes longer tha
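The pattern in the linked AdvancedConsumer example is pause/resume: keep calling poll() so heartbeats continue while long processing runs on another thread. A hedged sketch, with topic, group, and timings as placeholders (the pause/resume signatures shown are the 0.9/0.10.0 varargs form; newer clients take a Collection, e.g. consumer.pause(consumer.assignment())):

```java
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PausedPolling {
    static void processSlowly(ConsumerRecords<String, String> records) {
        // long-running work goes here
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("my-topic"));
            Future<?> inFlight = null;
            while (true) {
                // poll() keeps heartbeating even while partitions are paused
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (!records.isEmpty()) {
                    // stop fetching, but keep polling for liveness
                    consumer.pause(consumer.assignment().toArray(new TopicPartition[0]));
                    inFlight = worker.submit(() -> processSlowly(records));
                }
                if (inFlight != null && inFlight.isDone()) {
                    consumer.resume(consumer.assignment().toArray(new TopicPartition[0]));
                    inFlight = null;
                }
            }
        }
    }
}
```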
Hi Fredo,
This may help:
http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption
Ismael
On Fri, Apr 15, 2016 at 4:50 AM, Fredo Lee wrote:
> How do I configure Kafka security with PLAINTEXT and ACLs? I just want to deny
> some IPs.
>
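As a sketch of the approach from the linked post for IP-based denial on a PLAINTEXT listener (host, topic, and principal are illustrative; check the kafka-acls.sh options for your version):

```shell
# server.properties: enable the ACL authorizer
#   authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
#   allow.everyone.if.no.acl.found=true

# Deny a specific IP for the anonymous PLAINTEXT principal on one topic
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --deny-principal User:ANONYMOUS --deny-host 203.0.113.7 \
  --operation All --topic my-topic
```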
Hi Guozhang,
Thank you very much for your reply, and sorry for the generic question; I'll
try to explain with some pseudocode.
I have two KTable with a join:
ktable1: KTable[String, String] = builder.table("topic1")
ktable2: KTable[String, String] = builder.table("topic2")
result: KTable[String,
Hi,
The log-compaction-related JMX metric object names are given below.
kafka.log:type=LogCleaner,name=cleaner-recopy-percent
kafka.log:type=LogCleaner,name=max-buffer-utilization-percent
kafka.log:type=LogCleaner,name=max-clean-time-secs
kafka.log:type=LogCleanerManager,name=max-dirty-percent
Afte
Hi,
kafka.log:type=LogCleaner,name=cleaner-recopy-percent
kafka.log:type=LogCleanerManager,name=max-dirty-percent
kafka.log:type=LogCleaner,name=max-clean-time-secs
After every compaction cycle, we also print some useful statistics to
logs/log-cleaner.log file.
On Wed, Apr 13, 2016 at 7:16
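These object names can be read over JMX with a plain getAttribute call; the same call works for the LogCleaner beans when connected to the broker's JMX port. The sketch below is demonstrated against a JVM built-in bean so it runs standalone; the assumption that the Kafka gauges expose their number under a "Value" attribute should be verified against your broker.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxPeek {
    // Read one attribute from an MBean by object name. Against a broker's JMX
    // connection, name could be
    // "kafka.log:type=LogCleaner,name=max-clean-time-secs" with attribute "Value".
    static Object read(MBeanServerConnection conn, String name, String attr) throws Exception {
        return conn.getAttribute(new ObjectName(name), attr);
    }

    public static void main(String[] args) throws Exception {
        MBeanServerConnection local = ManagementFactory.getPlatformMBeanServer();
        // Built-in bean used here so the sketch runs without a broker.
        Object threads = read(local, "java.lang:type=Threading", "ThreadCount");
        System.out.println("ThreadCount = " + threads);
    }
}
```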