Are you consuming more than 2GB of data for a single fetch request (for all
partitions)? If so, this can cause overflow in the request (response) size
since it's represented as an int. The solution is to reduce the fetch size.
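As an illustration (not a definitive recipe), the relevant knob on the 0.7 high-level consumer is the fetch.size property; the ZooKeeper address and group id below are hypothetical:

    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;

    public class SmallerFetchConfig {
        public static void main(String[] args) {
            // Illustrative 0.7 consumer settings: keep the per-partition fetch size
            // small enough that the total bytes requested across all partitions
            // stays well under 2GB, the maximum value of a signed int.
            Properties props = new Properties();
            props.put("zk.connect", "zkhost:2181");       // hypothetical ZooKeeper connect string
            props.put("groupid", "my-group");             // hypothetical consumer group
            props.put("fetch.size", String.valueOf(1024 * 1024)); // 1MB per fetch instead of a large value
            ConsumerConfig config = new ConsumerConfig(props);
            // ... create the ConsumerConnector with this config as usual ...
        }
    }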
Thanks,
Jun
On Sun, Jun 23, 2013 at 9:30 PM, anand nalya wrote:
Hi,
I'm using the high-level consumer with Kafka 0.7.2. The consumer is able to
consume messages from Kafka the first time, but when I restart the
consumer process, I'm getting the following error:
2013-06-24 09:48:25 FetchRunnable-0 [ERROR] FetcherRunnable:89 - error in
FetcherRunnable
kafka.netw
Jason,
Are you using ack = 0 in the producer? This mode doesn't work well with
controlled shutdown (this is explained in the FAQ at
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#).
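To make that concrete, a minimal sketch assuming the 0.8 Java producer API; the broker list is hypothetical and the values should be tuned for your setup:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.ProducerConfig;

    public class AckOneProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // With request.required.acks=0 the producer never hears back from the broker,
            // so messages in flight during a controlled shutdown can be lost silently.
            // acks=1 makes the producer wait for the leader to acknowledge each request.
            props.put("request.required.acks", "1");
            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            producer.close();
        }
    }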
Thanks,
Jun
On Sun, Jun 23, 2013 at 1:45 AM, Jason Rosenberg wrote:
> I'm working on tryi
Hi
I am Sphinx Jiang, a beginner with Kafka, glad to be added to the Kafka user
mailing list. I hope to discuss problems and share experience with you all.
I am in Beijing, China; if you are in the same city, we could have a Kafka
user meeting here~~
All the best
Sphinx
Howdy,
Is the replication factor system in Kafka 0.8 suitable for creating a single
cluster which spans data centers? (up to 3)
I am looking for a system where I don't lose messages and can effectively
'fail over' to a different datacenter for processing if/when the primary goes
down.
It seems the producer has been designed to be initialized and pointed at
one Kafka cluster only.
Is it not possible to change the Kafka cluster (i.e. use a new value for
topic metadata and force a re-initialization) of an initialized producer?
If I want the producer to start sending to region #2 (Ka
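To make the question concrete, the only approach I can see is tearing the producer down and building a new one against the other region's brokers; a rough sketch assuming the 0.8 Java producer API, with hypothetical broker names:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.ProducerConfig;

    public class RegionFailover {
        static Producer<String, String> producerFor(String brokerList) {
            Properties props = new Properties();
            props.put("metadata.broker.list", brokerList);
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            return new Producer<String, String>(new ProducerConfig(props));
        }

        public static void main(String[] args) {
            // "Switch" clusters by replacing the producer instance rather than re-pointing it.
            Producer<String, String> producer = producerFor("region1-broker1:9092"); // hypothetical
            producer.close();
            producer = producerFor("region2-broker1:9092"); // hypothetical failover target
            producer.close();
        }
    }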
It's also worth mentioning why new slave machines need to truncate
back to a known good point.
When a new server joins the cluster and already has some data on disk,
we cannot blindly trust its log, as it may have messages that were
never committed (for example, if it was the master and then crashed
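A conceptual illustration of that truncation step (this is not Kafka's actual code; the method and the high-watermark bookkeeping are simplified stand-ins):

    import java.util.List;

    public class TruncateExample {
        // A rejoining replica keeps its log only up to the last offset it knows was
        // committed (the high watermark) and discards anything after it, because the
        // tail may contain messages that were never acknowledged; the discarded range
        // is then re-fetched from the current leader.
        static <T> List<T> truncateToHighWatermark(List<T> log, int highWatermark) {
            int safeLength = Math.min(log.size(), highWatermark);
            return log.subList(0, safeLength);
        }
    }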
A JSON payload can be seen from several perspectives:
- as some kind of object (e.g. a Jackson JsonNode)
- as a string
- as a byte array
IMHO it all depends on which "perspective" you are using.
Assuming that you are not working directly with byte arrays (in this case
you could write a simple Byte array
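For example, a minimal sketch of such an encoder against the 0.7 serializer API (kafka.serializer.Encoder / kafka.message.Message), assuming Jackson for the object view; the class name is illustrative:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import kafka.message.Message;
    import kafka.serializer.Encoder;

    // Illustrative: serialize the Jackson tree to bytes and wrap them in a Kafka Message.
    public class JsonEncoder implements Encoder<JsonNode> {
        private final ObjectMapper mapper = new ObjectMapper();

        public Message toMessage(JsonNode event) {
            try {
                return new Message(mapper.writeValueAsBytes(event));
            } catch (Exception e) {
                throw new RuntimeException("Failed to serialize JSON payload", e);
            }
        }
    }

You would plug something like this in via the producer's serializer.class property; on the consumer side the reverse mapping is just reading the message bytes back with the same ObjectMapper.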
Hi,
I need to produce/consume JSON to/from Kafka.
Can you please point me to an example of how to do this?
I am using Java and Kafka 0.7.2.
Thanks
Oleg.
Hi Jason,
A rolling bounce will create an imbalance in the leader distribution
across the brokers, and this is not ideal. We do plan to have the
preferred leader election tool integrated into Kafka so that it periodically
balances the leader count across the brokers in the cluster. For now yo
So, I'm running into the case where after issuing a rolling restart, with
controlled shutdown enabled, the last server restarted ends up without any
partitions that it's the leader of. This is more pronounced of course if I
have only 2 servers in the cluster (during testing). I presume it's kind
Hi Sriram,
I don't see any indication at all on the producer that there's a problem.
Only the above logging on the server (and it repeats continually). I
think what may be happening is that the producer for that topic did not
actually try to send a message between the start of the controlled shu
Hey Jason,
The producer on failure initiates a metadata request to refresh its state
and should issue subsequent requests to the new leader. The errors that
you see should only happen once per topic partition per producer. Let me
know if this is not what you see. On the producer end you should see
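For reference, the retry and metadata-refresh behaviour is tunable on the 0.8 producer; a sketch with illustrative values (check the defaults in your build):

    import java.util.Properties;

    public class ProducerRetrySettings {
        public static void main(String[] args) {
            // Illustrative 0.8 producer settings that govern behaviour after a leader change:
            Properties props = new Properties();
            props.put("message.send.max.retries", "3");    // how many times a failed send is retried
            props.put("retry.backoff.ms", "100");          // pause before refreshing metadata and retrying
            props.put("topic.metadata.refresh.interval.ms", "600000"); // periodic metadata refresh
            // ... merge these into your ProducerConfig ...
        }
    }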
I'm working on trying to have seamless rolling restarts for my Kafka
servers, running 0.8. I have it so that each server will be restarted
sequentially. Each server takes itself out of the load balancer (e.g. sets
a status that the LB will recognize, and then waits more than long enough
for the
That was it!
I didn't have logging set up correctly, so I missed those extra clues.
Not using the Kafka instance hostname in the producer caused the topic
metadata requests to fail.
Thank you!
On Thu, Jun 20, 2013 at 3:01 PM, Jun Rao wrote:
> Before that log entry, you should see why the send
Offset-preserving mirroring would be a great addition, allowing for offsite
backups which closely match production. It would be much cleaner than
running rsync repeatedly.
Regarding the broader discussion of maximizing availability while
minimizing operational complexity, I've been considering th