Thu, Aug 23, 2018, 7:29 PM Jack S wrote:
Hello,
We have a requirement for opening Kafka on WAN where external producers and
consumers need to be able to talk to Kafka. I was able to get Zookeeper and
Kafka working with two way SSL and SASL for authentication and ACL for
authorization.
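For reference, the broker side of such a setup looks roughly like the
following server.properties fragment (hosts, paths, and the SASL mechanism
here are placeholders for this sketch, not the poster's actual values):

```properties
# Hypothetical broker settings for two-way SSL + SASL + ACLs
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=required
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
```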
However, my concern with this approach was opening u
least, I think this would have allowed a clean and
automatic recovery. Has this idea been considered before? Does it have
fatal flaws?
Thanks,
--
Jack Foy
to 10.179.165.7
at 20170621
delete.topic.enable=true #added at 20170621
zookeeper zoo.cfg
server.0=10.179.165.7:2287:3387 #added at 20170621
From: Caokun (Jack, Platform)
Sent: 20 June 2017 23:07
To: 'users@kafka.apache.org'
Cc: Xukeke
Subject: kafka version 0.10.2.1 consumer can not get the message
Hello experts,
I wrote a Kafka demo in Java.
The producer can send the message, but the consumer cannot get the message.
My Kafka configuration is OK:
./kafka-console-producer.sh --broker-list localhost:9080 --topic testkun
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic testkun
--
cluster B
- Producers gradually migrate from A to B
We've found the following, which seems to suggest no, but doesn't
address the point directly:
http://events.linuxfoundation.org/sites/events/files/slides/Kafka%20At%20Scale.pdf
--
Jack Foy
g with the under-replicated partitions JMX metric. If the
partitions aren't being moved (which makes sense), then it makes sense that
it would claim that the partition is under-replicated, because, well, it
is, at least briefly.
>
>
-James
>
>
Thanks!
-Jack
as and the leadership
election process.
This is causing us some pain because it means that we get pages whenever we
roll out changes to Zookeeper.
Does anybody have any ideas why this would be happening, and how we can
avoid it?
Thanks.
-Jack Lund
Braintree Payments
No worries.
I figured that out already.
Thanks all.
Best regards,
Jack
-Original Message-
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Monday, 29 August 2016 10:13 AM
To: users@kafka.apache.org
Subject: RE: consumer with version 0.10.0
Hi there,
My fault. When I produce messages
consumer, we can decide on a previous
offset and then ask the consumer to start from it.
How about the new one?
Best regards,
Jack
-Original Message-
From: Jaikiran Pai [mailto:jai.forums2...@gmail.com]
Sent: Friday, 26 August 2016 11:30 PM
To: users@kafka.apache.org
Subject: Re: consumer
// needs: import scala.collection.JavaConverters._
try {
  val records: ConsumerRecords[String, String] = kafkaConsumer.poll(timeout)
  records.asScala.foreach(record =>
    println(record.key() + ":" + record.value() + ":" + record.offset()))
} catch {
  case e: Exception => e.printStackTrace()
} finally {
  kafkaConsumer.close()
}
Best regards,
Jack
I see, thanks for the clarification.
On Tue, Aug 2, 2016 at 10:07 PM, Ewen Cheslack-Postava
wrote:
> Jack,
>
> The partition is always selected by the client -- if it weren't the brokers
> would need to forward requests since different partitions are handled by
> differe
/master/lib/partitioner.js). Does
Kafka delegate the task of partitioning to the client? From their documentation
it doesn't seem like they provide an option to select the "default Kafka
partitioner".
Thanks,
Jack
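As Ewen notes above, the Kafka Java producer does pick the partition
client-side: the default partitioner murmur2-hashes the serialized key and
takes it modulo the partition count. A simplified sketch of that scheme
(`Arrays.hashCode` standing in for murmur2, and `ClientSidePartitioner` an
illustrative name, not an actual Kafka class):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ClientSidePartitioner {
    // Same key -> same partition, always in [0, numPartitions).
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        int hash = Arrays.hashCode(keyBytes);       // stand-in for murmur2
        return (hash & 0x7fffffff) % numPartitions; // drop sign bit, then mod
    }

    public static void main(String[] args) {
        byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
        int p1 = partitionFor(key, 12);
        int p2 = partitionFor(key, 12);
        assert p1 == p2 : "same key must map to the same partition";
        assert p1 >= 0 && p1 < 12;
        System.out.println("key user-42 -> partition " + p1);
    }
}
```

The broker never re-routes a produce request to a different partition, which
is why any non-Java client has to reimplement the hash if it wants the same
key-to-partition mapping.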
On Fri, Jul 29, 2016 at 7:42 AM, Gerard Klijs
wrote:
> The defaul
both
topics are the same. I can't find any documentation on this point though.
Does anyone know if this is indeed the case?
Thanks,
Jack
want to keep
the flexibility to add persistence back if we need to.
Thanks,
Jack
ld.
>
> -Ewen
>
> On Tue, May 31, 2016 at 9:47 AM, Jack Lund <
> jack.l...@braintreepayments.com>
> wrote:
>
Yes, the one that the SinkConnector uses is the WorkerSinkTaskContext, but,
unfortunately, it creates it and uses it internally, but doesn't expose any
accessors for it, nor does the constructor allow me to pass one in for it
to use.
-Jack
On Tue, May 31, 2016 at 11:34 AM Dean Arnold
ter has accessors to reset the
offsets with. Without those, I'm not sure what the purpose of the rewind
method is, since it doesn't seem to be possible to set the offsets at all.
Is this by design?
Thanks.
-Jack Lund
Braintree Payment Systems
Thanks, that helps! We are using the new consumer, but we aren't sure yet
whether it will be easier in our case to reset the offsets directly in the
consumers or do it externally, so we wanted to experiment with both.
-Jack
On Tue, Nov 24, 2015 at 7:18 PM Jason Gustafson wrote:
> The
seem to be valid any more (for instance,
kafka.javaapi.ConsumerMetadataRequest and Response don’t seem to exist any
more).
Are there any updated code snippets for fetching/committing offsets in Kafka?
Thanks
-Jack Lund
Braintree Payments
I also observed, in my case, that MBeans on different brokers give out
different numbers. Was it reporting per-partition metrics?
On Mon, Jun 29, 2015 at 7:52 PM, Jack Zhou wrote:
> Thanks Raj,
> So for each second there is a rate. For one minute we have 60 such
> rates. So for on
messages on
the topic? But my number does not reflect that.
Thanks again for your time,
Jack
On Mon, Jun 29, 2015 at 7:33 PM, Raj Chudasama
wrote:
> Sorry, I missed this: the number of messages received into the broker per
> XXX seconds gives the mean average rate.
>
>
>
>
> On Mon,
Hi,
Could someone point me to the documentation for the attributes of the MBean:
Count, MeanRate, OneMinuteRate, FiveMinuteRate, FifteenMinuteRate.
I just can't figure out their meaning.
Thanks for help
Jack
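As far as I can tell, these attributes follow the Yammer/Codahale Metrics
conventions Kafka uses: Count is the total number of events since broker
start, MeanRate is Count divided by elapsed time, and the One/Five/Fifteen
minute rates are exponentially weighted moving averages (like Unix load
averages), ticked every 5 seconds. A minimal sketch of the one-minute EWMA
(illustrative, not the library's exact code):

```java
public class OneMinuteRate {
    static final double INTERVAL = 5.0;                 // seconds per tick
    static final double ALPHA = 1.0 - Math.exp(-INTERVAL / 60.0);

    private double rate = 0.0;                          // events per second
    private long uncounted = 0;
    private boolean initialized = false;

    public void mark(long n) { uncounted += n; }        // record n events

    public void tick() {                                // call every 5 seconds
        double instant = uncounted / INTERVAL;          // rate in this window
        uncounted = 0;
        if (initialized) {
            rate += ALPHA * (instant - rate);           // exponential smoothing
        } else {
            rate = instant;                             // first window seeds it
            initialized = true;
        }
    }

    public double getRate() { return rate; }

    public static void main(String[] args) {
        OneMinuteRate r = new OneMinuteRate();
        r.mark(300);                                    // 300 events in 5 s
        r.tick();
        System.out.println("rate after first tick: " + r.getRate()); // 60.0
        r.tick();                                       // idle window decays it
        System.out.println("rate after idle tick: " + r.getRate());
    }
}
```

This also explains why brokers started at different times report different
numbers for the same traffic: the smoothed rates converge, they don't match
instantly.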
That would be really useful. Thanks for your writing, Guozhang. I will give
it a shot and let you know.
On Tue, Apr 7, 2015 at 10:06 AM, Guozhang Wang wrote:
> Jack,
>
> Okay I see your point now. I was originally thinking that in each run, you
> 1) first create the topic, 2) start
set" to "smallest", which we
know works, but it took forever to get the data (since the log is long).
Thanks again.
-Jack
On Mon, Apr 6, 2015 at 5:34 PM, Guozhang Wang wrote:
> Did you turn on automatic offset committing? If yes then this issue should
> not happen as l
Hi Guozhang,
When I switch auto.offset.reset to smallest, it works. However, it
generates a lot of data and slows down the verification.
Thanks,
-Jack
On Mon, Apr 6, 2015 at 5:07 PM, Guozhang Wang wrote:
> Jack,
>
> Could you just change "auto.offset.rese
ge.max.bytes -> 10M
So it seems like we need to make sure the submitted future returns before
performing actions which eventually generate the message we expect.
Cheers,
-Jack
On Mon, Apr 6, 2015 at 4:04 PM, Guozhang Wang wrote:
> Jack,
>
> Your theory is correct if your con
ssion is
asynchronous (we don't wait for each submission before producing the message
to Kafka), that could get us a later offset, which might not contain the
message we want. One possible solution is to perform any action which
produces messages to Kafka only after all the submitted tasks return.
Any thoughts?
Thanks,
-Jack
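The ordering fix described above can be sketched with a plain
ExecutorService standing in for the async work (all names are illustrative;
the "produce" step is simulated with a println):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WaitThenProduce {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int n = i;
            futures.add(pool.submit(() -> n * n));  // async unit of work
        }
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();                       // block until each is done
        }
        // Only now is it safe to produce the message the verifier expects,
        // so its offset is guaranteed to come after all the work's effects.
        System.out.println("all tasks done, total=" + total);
        pool.shutdown();
    }
}
```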
r={immutableInfo=true,
interfaceClassName=com.yammer.metrics.reporting.JmxReporter$GaugeMBean,
mxbean=false}]
We can work around this by using asInstanceOf[Long], but that usually indicates
we're doing something wrong.
Am I missing something here? I would expect types like Scala Long to be
preserved across JMX.
--
Jack Foy <j...@whitepages.com>
n SSDs and 2) Turning sync on every write off
> (zookeeper.forceSync). I'm not sure if #2 negatively affected the
> consistency of the zookeeper data ever but it did help with speeding up the
> rebalancing.
OK, thanks very much for the guidance.
--
Jack Foy
workable strategy with high-level consumers? Can we actually deploy a
consumer group with this many consumers and partitions?
We see throughput of more than 500,000 messages per second with our 512
consumers, but we need greater parallelism to meet our performance needs.
--
Jack Foy