But they have always been up. I mean, when I was testing, all the ZooKeeper
nodes were up, and all the Kafka nodes were up. It's just that I changed the
number of ZooKeeper nodes in my first test iteration; the second and third were
still the same. Not sure why the topics were losing some messages.
On Thu,
This is probably due to KAFKA-1642, which is fixed in 0.8.2.0. Could you
try that version, or 0.8.2.1, which is being voted on now.
Thanks,
Jun
On Thu, Feb 19, 2015 at 10:42 AM, Steven Wu stevenz...@gmail.com wrote:
forgot to mention in case it matters
producer: 0.8.2-beta
broker: 0.8.1.1
On
Tao,
I updated the mirrorconsumer.properties config file as you suggested, and
upped the MM's log level to DEBUG. I have the output of the DEBUG logger
here in this pastebin. If you could take a minute to look for anything in
its contents that would indicate a problem, that would be extremely
I have noticed some strange patterns when testing with the 0.8.1 and
0.8.2 builds, which are listed below.
1. So I set up a brand new cluster [3 Kafka nodes with 3 ZooKeepers],
created 2 topics via the API calls, everything went fine and I was
successfully able to view my messages in my
Joel/All,
The SimpleConsumer constructor requires a specific host and port.
Can this be any broker?
If it needs to be a specific broker, for 0.8.2, should this be the offset
coordinator? For 0.8.1, does it matter?
-Suren
On Thursday, February 19, 2015 10:43 AM, Joel Koshy
+1 binding.
Checked the md5, and quick start.
Some minor comments:
1. It would be better if the quickstart section included the build step after
the download and before starting the server.
2. There seems to be a bug in Gradle 1.1x with Java 8 that causes the Gradle
initialization to fail:
-
FAILURE:
Will try 0.8.2.1 on the producer and report back the result.
On Thu, Feb 19, 2015 at 11:52 AM, Jun Rao j...@confluent.io wrote:
This is probably due to KAFKA-1642, which is fixed in 0.8.2.0. Could you
try that version, or 0.8.2.1, which is being voted on now.
Thanks,
Jun
On Thu, Feb 19, 2015 at 10:42
Hi
I could not find a way to customize the Partitioner class in the new
KafkaProducer class. Is that intentional?
tx
SunilKalva
Hi,
In the new producer, we can specify the partition number as part of the
ProducerRecord.
From the javadocs:
"If a valid partition number is specified that partition will be used when
sending the record. If no partition is specified but a key is present a
partition will be chosen using a hash of the key."
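The key-hashing rule the javadoc describes can be illustrated with a toy sketch in plain Java (no Kafka classes). Note this is only an illustration: the real DefaultPartitioner in 0.8.2 hashes the serialized key bytes with murmur2, not `String.hashCode()`.

```java
public class PartitionSketch {
    // Toy stand-in for "a partition will be chosen using a hash of the key".
    // The real 0.8.2 partitioner uses murmur2 on the serialized key bytes;
    // String.hashCode() here is purely illustrative.
    static int partitionFor(String key, int numPartitions) {
        if (key == null) {
            throw new IllegalArgumentException(
                "records with no key are distributed differently by the real producer");
        }
        // Mask the sign bit rather than Math.abs(), which overflows for Integer.MIN_VALUE.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3;
        for (String key : new String[] {"user-1", "user-2", "user-3"}) {
            System.out.println(key + " -> partition " + partitionFor(key, partitions));
        }
    }
}
```

The important property is that the same key always maps to the same partition, so per-key ordering is preserved.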
Hi
I could not find a way to customize the Partitioner class in the new
KafkaProducer class. Is that intentional?
tx
SunilKalva
Thanks Mani for the quick response; sorry, somehow I missed this javadoc :)
t
SunilKalva
On Thu, Feb 19, 2015 at 6:14 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
Hi,
In the new producer, we can specify the partition number as part of
ProducerRecord.
From javadocs :
*If a valid partition
We are using the High Level Consumer API to interact with Kafka for our
normal use cases.
However, on consumer restart in the case of consumer failures, we want to
be able to manually reset offsets in certain situations.
And ideally we'd like to use the same api in 0.8.1 and 0.8.2. :-)
It
Not sure what you mean by using the SimpleConsumer on failure
recovery. Can you elaborate on this?
On Thu, Feb 19, 2015 at 03:04:47PM +, Suren wrote:
Haven't used either one now. Sounds like 0.8.2.1 will help.
We are using the High Level Consumer generally but are thinking to use the
I see - yes, you can use the SimpleConsumer for that. However, your
high-level consumers need to be shut down while you do that (otherwise
they may auto-commit while you are resetting offsets).
Thanks,
Joel
On Thu, Feb 19, 2015 at 03:29:19PM +, Suren wrote:
We are using the High Level
I think this is an issue caused by KAFKA-1788.
I was trying to test producer resiliency to broker outages. In this
experiment, I shut down all the brokers and watched the producer behavior.
Here are the observations:
1) The Kafka producer can recover from a Kafka outage, i.e. sends resumed
after the brokers came back
forgot to mention in case it matters
producer: 0.8.2-beta
broker: 0.8.1.1
On Thu, Feb 19, 2015 at 10:34 AM, Steven Wu stevenz...@gmail.com wrote:
I think this is an issue caused by KAFKA-1788.
I was trying to test producer resiliency to broker outages. In this
experiment, I shut down all
If I consumed up to the log end offset and log compaction happens in
between, I would have missed some messages.
Compaction actually only runs on the rolled-over segments (not the
active, i.e. latest, segment). The log-end-offset will be in the
latest segment, which does not participate in
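The key-retention behavior of compaction can be sketched as a toy simulation in plain Java (the `Msg` class and the data are invented for illustration): compaction keeps only the latest value per key, and only ever touches rolled-over segments, so the active segment containing the log-end-offset is never rewritten.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    static class Msg {
        final String key, value;
        Msg(String key, String value) { this.key = key; this.value = value; }
    }

    // Compact a rolled-over segment: keep only the latest value per key.
    // The active (latest) segment is never passed through this.
    static List<Msg> compact(List<Msg> rolledSegment) {
        Map<String, Msg> latest = new LinkedHashMap<>();
        for (Msg m : rolledSegment) {
            latest.remove(m.key);   // remove and re-insert so order reflects recency
            latest.put(m.key, m);
        }
        return new ArrayList<>(latest.values());
    }

    public static void main(String[] args) {
        List<Msg> rolled = new ArrayList<>();
        rolled.add(new Msg("k1", "v1"));
        rolled.add(new Msg("k2", "v2"));
        rolled.add(new Msg("k1", "v3"));   // supersedes ("k1","v1")

        List<Msg> compacted = compact(rolled);
        System.out.println("rolled segment: " + rolled.size()
                + " messages -> " + compacted.size() + " after compaction");
        // "k1" survives only with its latest value, "v3".
    }
}
```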
Well, are there any measurement techniques for memory configuration in
brokers? We do have a large load, with a max throughput of 200 MB/s. What do
you suggest as the recommended memory configuration for 5 brokers to handle
such loads?
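One back-of-the-envelope approach (an assumption on my part, not official guidance) is to size the OS page cache so that a few tens of seconds of incoming writes stay resident, so slightly-lagging consumers are still served from memory rather than disk. The 30-second lag window below is an assumed parameter.

```java
public class BrokerMemoryEstimate {
    // Rough rule of thumb: page cache per broker =
    // (cluster write throughput / broker count) * assumed consumer-lag window.
    static double pageCacheGB(double clusterMBps, int brokers, int bufferSeconds) {
        double perBrokerMBps = clusterMBps / brokers;
        return perBrokerMBps * bufferSeconds / 1024.0;
    }

    public static void main(String[] args) {
        // Numbers from the question above: 200 MB/s spread across 5 brokers,
        // with an assumed 30-second lag window.
        double gb = pageCacheGB(200.0, 5, 30);
        System.out.printf("~40 MB/s per broker -> ~%.2f GB of page cache each%n", gb);
    }
}
```

Replication fetch traffic adds to the per-broker write rate, so the real figure would be somewhat higher.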
On Wed, Feb 18, 2015 at 7:13 PM, Jay Kreps jay.kr...@gmail.com wrote:
40G is
Hello,
We're having the following issue with Kafka and/or Zookeeper:
If a broker (id=1) is running, and you start another broker with id=1, the new
broker will exit saying "A broker is already registered on the path
/brokers/ids/1". However, I noticed when I query zookeeper /brokers/ids/1
The log end offset is just the end of the committed messages in the log
(the last thing the consumer has access to). It isn't the same as the
cleaner point but is always later than it so it would work just as well.
Isn't this just roughly the same value as using c.getOffsetsBefore() with a
Hi Folks,
At the recent Kafka Meetup in Mountain View there was interest expressed
about the encryption through Kafka proof of concept that Symantec did a
few months ago, so I have created a blog post with some details about it.
You can find that here:
http://goo.gl/sjYGWN
Let me know if you
Looks like you only have 4 messages in your topic and no more messages got
sent:
[2015-02-19 20:09:34,661] DEBUG initial fetch offset of consolemm:0: fetched
offset = 4: consumed offset = 4 is 4 (kafka.consumer.PartitionTopicInfo
You can try sending more messages to the topic or give the MM a
If I may use the same thread to discuss the exact same issue
Assuming one can store the offset in an external location (Redis/DB etc.),
along with the rest of the state that a program requires, wouldn't it be
possible to manage things such that you use the High Level API with auto commit
Yes it is supported in 0.8.2-beta. It is documented on the site - you
will need to set offsets.storage to kafka.
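Assuming the 0.8.2-style consumer property names, the relevant config would look roughly like this sketch (the `dual.commit.enabled` line is only needed while migrating existing offsets off ZooKeeper):

```properties
# consumer.properties - sketch, assuming 0.8.2 consumer property names
offsets.storage=kafka
# during migration from ZooKeeper-based offsets, commit to both stores:
dual.commit.enabled=true
```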
On Thu, Feb 19, 2015 at 03:57:31PM -0500, Matthew Butt wrote:
I'm having a hard time figuring out if the new Kafka-based offset
management in the high-level Scala Consumer is
Haven't used either one now. Sounds like 0.8.2.1 will help.
We are using the High Level Consumer generally but are thinking to use the
SimpleConsumer on failure recovery to set the offsets.
Is that the recommended approach for this use case?
Thanks.
-Suren
On Thursday, February 19, 2015
Are you using it from Java or Scala? i.e., are you using the
javaapi.SimpleConsumer or kafka.consumer.SimpleConsumer
In 0.8.2 javaapi we explicitly set version 0 of the
OffsetCommitRequest/OffsetFetchRequest which means it will
commit/fetch to/from ZooKeeper only. If you use the scala API you can
I'm not sure if I misunderstood Jay's suggestion, but I think it is
along the lines of: we expose the log-end-offset (actually the high
watermark) of the partition in the fetch response. However, this is
not exposed to the consumer (either in the new ConsumerRecord class
or the existing
The log end offset is just the end of the committed messages in the log
(the last thing the consumer has access to). It isn't the same as the
cleaner point but is always later than it so it would work just as well.
-Jay
On Thu, Feb 19, 2015 at 8:54 AM, Will Funnell w.f.funn...@gmail.com wrote:
[2015-02-05 14:21:09,708] ERROR [ReplicaFetcherThread-2-1], Error in fetch
Name: FetchRequest; Version: 0; CorrelationId: 147301; ClientId:
ReplicaFetcherThread-2-1; ReplicaId: 3; MaxWait: 500 ms; MinBytes: 1 bytes;
RequestInfo: [site.db.people,6] ->
PartitionFetchInfo(0,1048576), [site.db.main,4] ->
Yeah that is a good point - will do the update as part of the doc
changes in KAFKA-1729
On Thu, Feb 19, 2015 at 09:26:30PM -0500, Evan Huus wrote:
On Thu, Feb 19, 2015 at 8:43 PM, Joel Koshy jjkosh...@gmail.com wrote:
If you are using v0 of OffsetCommit/FetchRequest then you can issue
that
Jun,
I am already using the latest release 0.8.2.1.
-Zakee
On Thu, Feb 19, 2015 at 2:46 PM, Jun Rao j...@confluent.io wrote:
Could you try the 0.8.2.1 release being voted on now? It fixes a CPU issue
and should reduce the CPU load in network thread.
Thanks,
Jun
On Thu, Feb 19, 2015 at
If you are using v0 of OffsetCommit/FetchRequest then you can issue
that to any broker. For v1 and above you will need to issue it to the
coordinator. You can discover the coordinator by sending a
ConsumerMetadataRequest to any broker.
On Thu, Feb 19, 2015 at 07:55:16PM +, Suren wrote:
The log end offset (of a partition) changes when messages are appended
to the partition. (It is not correlated with the consumer's offset).
On Thu, Feb 19, 2015 at 08:58:10PM +, Will Funnell wrote:
So at what point does the log end offset change? When you commit?
On 19 February 2015 at
On Thu, Feb 19, 2015 at 8:43 PM, Joel Koshy jjkosh...@gmail.com wrote:
If you are using v0 of OffsetCommit/FetchRequest then you can issue
that to any broker. For v1 and above you will need to issue it to the
coordinator. You can discover the coordinator by sending a
ConsumerMetadataRequest to
Is there any error in the producer log? Is there any pattern in the
messages being lost?
Thanks,
Jun
On Thu, Feb 19, 2015 at 4:20 PM, Karts kartad...@gmail.com wrote:
Yes, I did.
On Thu, Feb 19, 2015 at 2:42 PM, Jun Rao j...@confluent.io wrote:
Did you consume the messages from the