Yes, your understanding is correct. The reason we have to recompress the
messages is to assign a unique offset to each message inside a compressed
message set. Some preliminary load testing shows a 30% increase in CPU, but
that is using GZIP, which is known to be CPU intensive. By this week, we
will know the
Do you mind trying the DumpLogSegment tool on the log segment for the
corrupted topic? That will verify whether the log data is corrupted. Also,
is your test reproducible? We ran into a similar issue in production but
could not reproduce it.
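For reference, a hypothetical invocation of that tool from a Kafka checkout (the exact class name and flags vary across versions, and the segment path is an assumption):

```shell
# Dump a log segment in human-readable form to check for corruption;
# --deep-iteration (where available) also walks into compressed message sets.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /tmp/kafka-logs/mytopic-0/00000000000000000000.log
```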
Thanks,
Neha
On Monday, March 18, 2013, Helin Xiang wrote:
Thanks Jun.
We are using the Java producer.
Does the last exception
"java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:266)
"
also mean the broker received corrupted messages? Sorry, I am not
familiar with Java NIO.
On Tue, Mar 19, 2013 at 12:58 PM, Jun Rao wrote:
> H
Hmm, both log4j messages suggest that the broker received some corrupted
produce requests. Are you using the Java producer? Also, we have seen
network router problems cause corrupted requests before.
Thanks,
Jun
On Mon, Mar 18, 2013 at 8:22 PM, Helin Xiang wrote:
> Hi,
> We were doing so
Hi,
I have just started looking at moving from 0.7 to 0.8 and wanted to confirm
my understanding of code in the message server/broker.
In the code for 0.8, KafkaApis.appendToLocalLog calls log.append(...,
assignOffsets = true), which then calls ByteBufferMessageSet.assignOffsets.
This method seem
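To tie this to the recompression discussion earlier in the thread, here is a self-contained sketch (these are NOT Kafka's real classes; the batch format, the '\n' delimiter, and the "offset:payload" encoding are simplified assumptions) showing why assigning offsets to messages inside a GZIP-compressed set forces a full decompress/recompress cycle:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class OffsetAssignSketch {

    // A "compressed message set" here is just the payloads joined by '\n'
    // and GZIP-compressed.
    public static byte[] compress(List<String> payloads) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(String.join("\n", payloads).getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static List<String> decompress(byte[] batch) throws IOException {
        List<String> out = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(batch)),
                StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) out.add(line);
        }
        return out;
    }

    // To tag each inner message with a unique offset, the broker must
    // decompress the whole set, rewrite every message, and recompress --
    // the source of the observed CPU overhead.
    public static byte[] assignOffsets(byte[] batch, long baseOffset)
            throws IOException {
        List<String> payloads = decompress(batch);
        List<String> withOffsets = new ArrayList<>();
        long offset = baseOffset;
        for (String p : payloads) withOffsets.add((offset++) + ":" + p);
        return compress(withOffsets);
    }

    public static void main(String[] args) throws IOException {
        byte[] batch = compress(Arrays.asList("a", "b", "c"));
        System.out.println(decompress(assignOffsets(batch, 42L)));
        // prints [42:a, 43:b, 44:c]
    }
}
```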
One other clue is that although those topics still show up under
/brokers/topics/<topic>, they contain empty sub-nodes. So ZK knows
they don't exist on any broker. Maybe that's the issue: brokers are
successfully removing themselves as serving a topic, but the topic itself
remains. The consumers
In 0.8, the delete topic command will be able to remove a topic in an
online cluster, without having to shut down anything.
Thanks,
Neha
On Mon, Mar 18, 2013 at 1:37 PM, Jason Rosenberg wrote:
> Thanks Jun,
>
> I have nothing listed under /brokers/topics/deletedtopic, etc., so that
> doesn't a
Thanks Jun,
I have nothing listed under /brokers/topics/deletedtopic, etc., so that
doesn't appear to be the issue.
I will now try removing the unwanted topics under /brokers/topics/.
Jason
On Mon, Mar 18, 2013 at 9:31 AM, Jun Rao wrote:
> Jason,
>
> This is mainly a problem that we don't h
Ok,
So we'll leave it at this (I won't file a bug). It is of course less than
ideal to require everything to be brought down when deleting topics, since
we consider the cluster to be a high-availability resource. I assume in
0.8 it won't be necessary to bring everything down to delete a topic?
scala-tools.org is no longer serving any jar files.
On Mar 18, 2013 11:03 AM, "Sijo Mathew" wrote:
> Hi,
>
> I just downloaded 0.7.2 and followed the quick start documentation but it
> failed because the script uses a url (scala-tools.org) internally which
> is not available any more. Could you sug
Re "So, how did you get the data from the local broker out without ZK"...
We didn't use MirrorMaker itself. We wrote a simple application, inspired by
MirrorMaker but written in Java, that understands our topology and uses
external information to locate the source brokers from which to consume data.
So, it looks like in 0.7, the way to correctly delete topics from the
brokers and ZooKeeper is as follows:
1. Shut down producers, consumers and brokers.
2. Delete the topic logs from the brokers.
3. Delete the /brokers/topics/<topic> nodes from ZooKeeper.
4. Restart the brokers.
5. Restart producers and consumers.
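As a concrete (hypothetical) sketch of steps 2 and 3 above, assuming a topic named `mytopic`, the default `/tmp/kafka-logs` log directory, and ZooKeeper 3.4's `zkCli.sh` (paths and the `rmr` command may differ in your deployment):

```shell
# Run only after producers, consumers and brokers are stopped (step 1).

# Step 2: delete the topic's log segments on each broker
rm -rf /tmp/kafka-logs/mytopic-*

# Step 3: recursively remove the topic's ZooKeeper nodes
bin/zkCli.sh -server localhost:2181 rmr /brokers/topics/mytopic
```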
What's the error you saw?
Thanks,
Jun
On Mon, Mar 18, 2013 at 8:03 AM, Sijo Mathew wrote:
> Hi,
>
> I just downloaded 0.7.2 and followed the quick start documentation but it
> failed because the script uses a url (scala-tools.org) internally which is
> not available any more. Could you suggest
Phil,
In 0.8, the broker always depends on ZK, so this is no longer optional.
You can still run a single-node broker with the replication factor set to 1,
though.
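For illustration, a hypothetical single-node 0.8 setup along those lines (script names and flags follow the 0.8 quick start of that era and may differ in your build):

```shell
# ZK is still required, but one broker is enough when topics use
# replication factor 1.
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
bin/kafka-create-topic.sh --zookeeper localhost:2181 \
  --topic test --replica 1 --partition 1
```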
Actually, in 0.7, MirrorMaker depends on ZK to consume data from a broker.
So, how did you get the data from the local broker out without ZK?
Jason,
This is mainly a problem in that we don't have a formal way of deleting a
topic in 0.7, which we are trying to fix in 0.8.
The extra watchers on those deleted topics are likely registered by the
producers. They should be gone once the /brokers/topics/deletedtopic nodes
are removed from ZK.
You probabl
Not sure if this helps, but I had no trouble running the quick start
step by step from the documentation. You may have another issue.
On 3/18/2013 11:03 AM, Sijo Mathew wrote:
Hi,
I just downloaded 0.7.2 and followed the quick start documentation but i
The error you saw on the broker is for consumer requests, not producer
requests. For the issues in the producer, are you using a VIP? Is there any
firewall between the producer and broker? The typical "connection reset"
issues that we have seen are caused by the load balancer or the firewall
killing idle connections.
Thanks Neha,
I use one Kafka server with 4 partitions and 3 consumers (SenseiDB).
The Kafka server's producer input rate is about 10k.
And each consumer's consumption rate is about 3k.
I see these exceptions many times; Kafka has this exception on each
consumer, but I didn't find an error log on the consumer side
While the replication features in 0.8 are very desirable for us, one aspect
of 0.7 that was also appealing was that, in specific scenarios, a single
broker instance could run by itself without an accompanying ZooKeeper.
This provided a lightweight "entry point" for log flows by running lots of