I am running a 5-broker Kafka cluster on Docker, using Mesos as the cluster
manager and the Marathon framework.
Hi,
So we specifically kept the consumers path world-writable in secure
mode. This is to allow ZooKeeper-based consumers to create their own child
nodes under /consumers and add their own SASL-based ACLs on top of
them. From the looks of it, in the case of a ZooKeeper digest-based connection it
Hey guys,
I have a unit test that runs an embedded Kafka server. I want to
skip all DEBUG and INFO logs from the Kafka server, but having this set in
log4j.properties doesn't work. Some INFO logs still keep showing up, like this:
2016-07-08 18:01:14,288 [kafka-request-handler-4] INFO
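One thing worth checking (a sketch only, assuming the log4j.properties on the test classpath is actually the file being picked up) is an explicit logger entry for the kafka packages, since the request-handler loggers live under `kafka.*` rather than the root logger:

```properties
# Silence DEBUG/INFO noise from the embedded broker.
# "kafka" covers kafka.server, kafka.network, the request handlers, etc.
log4j.logger.kafka=WARN
log4j.logger.org.apache.kafka=WARN

# Keep the root logger as-is for the rest of the test output.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```

If the INFO lines still appear, another log4j.properties earlier on the classpath (e.g. from a test dependency) may be winning.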
For “first partition”, I was speaking specifically of your example - Burrow
doesn’t care about partition 0 vs. any other partition. Looking at that
output from the groups tool, it looks like there are a lot of partitions
with no committed offsets. There's even one partition with a committed
offset
We're using AWS ECS for our Kafka cluster of six nodes. We did some
performance testing on a three-node cluster, and the results were as good as
the LinkedIn-published results on bare-metal machines.
We are using EBS st1 drives. The bottleneck is the network to the EBS
volumes. So for about 25%
Thanks, Christian.
I am currently reading about kafka-on-mesos.
I will hack something together this weekend to see if I can bring up a Kafka
scheduler on Mesos using dockerized brokers.
--
κρισhναν
On Thu, Jul 7, 2016 at 7:29 PM, Christian Posta
wrote:
> One thing I can
Hi,
The consumer.subscribe(Pattern p, ..) method implementation tries to get
metadata for all topics.
This will throw TopicAuthorizationException on internal topics and other
unauthorized topics.
We may need to move the pattern matching to the server side.
Is this a known issue? If not, I will raise
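A minimal sketch of the client-side matching the snippet describes: the consumer first fetches the names of all topics (which requires Describe authorization on each of them, hence the exception), and only then filters them locally against the pattern. The helper below stands in for that local filtering step; the class and method names are illustrative, not Kafka API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PatternSubscribeSketch {
    // Hypothetical helper mirroring what the consumer does client-side:
    // given the full topic list from metadata, keep only pattern matches.
    static List<String> matchTopics(List<String> allTopics, Pattern p) {
        List<String> matched = new ArrayList<>();
        for (String topic : allTopics) {
            if (p.matcher(topic).matches()) {
                matched.add(topic);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String> topics = List.of("orders", "orders-dlq", "__consumer_offsets");
        // Matches "orders" and "orders-dlq"; the internal topic is skipped by
        // the pattern, but its metadata was still fetched beforehand.
        System.out.println(matchTopics(topics, Pattern.compile("orders.*")));
    }
}
```

Because the topic-name fetch happens before this filtering, restricting the pattern does not avoid the authorization check on unrelated topics, which is why server-side matching would help.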
Hi All,
For a few weeks now I've been working for fun on a CEP library on top of
Kafka Streams.
There has been some progress and I think my project is starting to look like
something, or at least I hope so ;)
https://github.com/fhussonnois/kafkastreams-cep
So I'm pleased to share it with you (I already shared it with dev
Sorry, I should say only partition 1 had something at first, then zero:
Toms-iMac:betwave-server tomdearman$
/Users/tomdearman/software/kafka_2.11-0.10.0.0/bin/kafka-consumer-groups.sh
--new-consumer --bootstrap-server localhost:9092 --describe --group
voidbridge-oneworks-dummy
GROUP
When you say ‘for the first partition’, do you literally mean partition zero, or
do you mean any partition? It is true that when I had only 1 user there were only
messages on partition 15, but the second user happened to go to partition zero.
Is it the case that partition zero must have a consumer
If you open up an issue on the project, I'd be happy to dig into this in
more detail if needed. Excluding the ZK offset checking, Burrow doesn't
enumerate consumer groups - it learns about them from offset commits. It
sounds like maybe your consumer had not committed offsets for the first
I should mention this was using the web server to check status.
> On 8 Jul 2016, at 16:56, Tom Dearman wrote:
>
> Todd,
>
> Thanks for that I am taking a look.
>
> Is there a bug whereby if you only have a couple of messages on a topic, both
> with the same key, that
Todd,
Thanks for that I am taking a look.
Is there a bug whereby, if you only have a couple of messages on a topic, both
with the same key, Burrow doesn’t return the correct info? I was finding that
http://localhost:8100/v2/kafka/betwave/consumer
Hello,
I am developing a service where all clustered nodes form a consumer
group. I also need to run some logic on only one of the nodes. Can I use a
special single-partition topic for leader election? That is, in my node I
can use ConsumerRebalanceListener to make sure that if the "leader"
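The idea above can be sketched without a broker: with a one-partition topic, the group coordinator assigns that partition to exactly one group member, so "am I the leader" reduces to "was I assigned partition 0 of the election topic". The `Partition` record below is a stand-in for `org.apache.kafka.common.TopicPartition` so the sketch runs without the kafka-clients jar, and the topic name "election" is illustrative.

```java
import java.util.Collection;
import java.util.List;

public class LeaderElectionSketch {
    // Stand-in for org.apache.kafka.common.TopicPartition.
    record Partition(String topic, int partition) {}

    // This is the check a ConsumerRebalanceListener's onPartitionsAssigned
    // callback would perform: leader iff the single partition of the
    // election topic is in this member's assignment.
    static boolean isLeader(Collection<Partition> assigned, String electionTopic) {
        return assigned.stream()
                .anyMatch(p -> p.topic().equals(electionTopic) && p.partition() == 0);
    }

    public static void main(String[] args) {
        var assigned = List.of(new Partition("work", 3), new Partition("election", 0));
        System.out.println(isLeader(assigned, "election"));
    }
}
```

The matching `onPartitionsRevoked` callback would clear the leader flag, since the partition can move to another node on any rebalance.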
I am running ZooKeeper and Kafka on my local machine.
These are the user permissions on ZooKeeper:
[zk: localhost:2181(CONNECTED) 0] getAcl /
'digest,'broker:TqgUewyrgBbYEWTfsNStYmIfD2Q=
: cdrwa
I am using the same user in kafka to connect to this local zookeeper
@Ismael: thanks for the clarification. I didn't understand the question correctly...
@Vivek: You can also go with processing-time (or ingestion-time)
semantics if you cannot embed a timestamp in the data itself.
See http://docs.confluent.io/3.0.0/streams/concepts.html#time for more
details.
-Matthias
On
Vivek,
in this case you should manually embed a timestamp within the payload of
the produced messages (e.g. as a Long field in an Avro-encoded message
value). This would need to be done by the producer.
Then, in Kafka Streams, you'd need to implement a custom
TimestampExtractor that can
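A sketch of that extractor idea. The field name `payloadTimestamp` and the `EventValue` record are illustrative stand-ins for the deserialized message value; a real implementation would implement `org.apache.kafka.streams.processor.TimestampExtractor` and read the field from the `ConsumerRecord` value.

```java
public class PayloadTimestampSketch {
    // Stand-in for a deserialized message value; in practice this would be
    // the Avro/JSON object carrying the embedded event-time field.
    record EventValue(long payloadTimestamp, String data) {}

    // Mirrors the shape of TimestampExtractor#extract: prefer the embedded
    // event time, fall back to the previous timestamp when the field is
    // missing or invalid.
    static long extract(EventValue value, long previousTimestamp) {
        if (value != null && value.payloadTimestamp() > 0) {
            return value.payloadTimestamp();
        }
        return previousTimestamp;
    }

    public static void main(String[] args) {
        System.out.println(extract(new EventValue(1467993674288L, "x"), -1L));
        System.out.println(extract(null, 42L));
    }
}
```

The fallback branch matters because with event-time semantics a negative or garbage timestamp can stall or misorder downstream windowed operations.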