Hi,
I am new to Kafka and I have a question about how Kafka handles scenarios where
no consumer is available. Can I configure Kafka in such a way that
messages are dropped after x seconds? Otherwise I would be afraid that
the queues would overflow...
Cheers,
Klaus
--
Klaus Schaefers
Message retention in Kafka is disconnected from message consumption.
Messages are all persisted to disk and the queues do not need to fit in RAM
unlike some other systems. There are configuration values that control
maximum log size in terms of MB and the duration of retention which is
typically
As a data point on one system: while Snappy compression is significantly
better than gzip, for our system it wasn't enough to offset the
decompress/compress on the broker. No matter how fast the compression is,
doing it on the broker will always be slower than not doing it.
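For context, with the 0.8 producer the compression codec is chosen in the producer config; a sketch with illustrative values (the topic names are hypothetical):

```properties
# producer.properties: compress message sets on the producer side
compression.codec=snappy
# optionally restrict compression to specific topics
compressed.topics=mytopic1,mytopic2
```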
We went the route the
I am using gzip compression. "Too big" is really difficult to define
because it always depends (for example, on what your hardware can handle), but
I would say no more than a few megabytes. Having said that, we are still
successfully using a 50MB size in production for some things, but it comes
at a
This is great!
-Jay
On Thu, Jun 26, 2014 at 5:47 PM, Joe Stein joe.st...@stealth.ly wrote:
Hi folks, I just started a new Meetup specifically for Apache Kafka in NYC
(everyone is welcome of course) http://www.meetup.com/Apache-Kafka-NYC/
For the last couple of years we have been piggy
You can control retention using log.retention.hours,
log.retention.minutes or log.retention.bytes.
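For reference, a sketch of those retention settings as they would appear in the broker's server.properties (the values here are illustrative, not recommendations):

```properties
# Delete log segments older than 7 days...
log.retention.hours=168
# ...or once a partition's log exceeds 1 GB, whichever comes first
log.retention.bytes=1073741824
```

Note that retention is applied per partition, and segments are only deleted once they are closed, so enforcement is approximate.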
On Fri, Jun 27, 2014 at 2:06 AM, cac...@gmail.com cac...@gmail.com wrote:
Message retention in Kafka is disconnected from message consumption.
Messages are all persisted to disk and the queues
Neha,
Apologies for the slow response. I was out yesterday.
To answer your questions
-- Is the LeaderNotAvailableException repeatable? Yes. It happens whenever I
send a message to that topic.
-- Are you running Kafka in the cloud? No.
Does this problem indicate that the topic is corrupt?
Hi,
I was testing out Kafka 0.8.1.1 and found that I get the following
exception during partition reassignment:
./kafka-reassign-partitions.sh --path-to-json-file ritest.json --zookeeper
localhost:2181
Partitions reassignment failed due to Partition reassignment currently in
progress for
Hi Kashyap,
If a previous reassignment is still ongoing, the current one cannot
proceed. Did you trigger another reassignment before this one?
Guozhang
On Fri, Jun 27, 2014 at 10:05 AM, Kashyap Mhaisekar kashya...@gmail.com
wrote:
Hi,
I was testing out Kafka 0.8.1.1 and found that I get the
Partitions reassignment failed due to Partition reassignment currently in
progress for Map().
The map is empty, so this seems like a bug. Can you please file a JIRA and
attach your server's log4j logs to it?
On Fri, Jun 27, 2014 at 10:05 AM, Kashyap Mhaisekar kashya...@gmail.com
wrote:
Hi,
I'm not so sure what is causing those exceptions. When you send data, do
you see any errors in the server logs? Could you send them around?
On Fri, Jun 27, 2014 at 10:00 AM, England, Michael mengl...@homeadvisor.com
wrote:
Neha,
Apologies for the slow response. I was out yesterday.
To
but I found one message (5.1MB in size) which
is clogging my pipeline up
Have you ensured that fetch.message.max.bytes in the consumer config
is set to at least 5.1 MB?
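For reference, message size limits in 0.8.x span both broker and consumer configs, and all of them must accommodate the largest message; a sketch with illustrative values:

```properties
# broker (server.properties): largest message the broker will accept
message.max.bytes=10485760
# broker: replicas must also be able to fetch the largest message
replica.fetch.max.bytes=10485760

# consumer (consumer.properties): must be >= the largest message
fetch.message.max.bytes=10485760
```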
On Thu, Jun 26, 2014 at 6:14 PM, Louis Clark sfgypsy...@gmail.com wrote:
in the consumer.properties file, I've got (default?):
I believe so. I have set
fetch.message.max.bytes=10485760
in both the consumer.properties and the server.properties config files,
then restarted kafka - same problem. I'm following up on some of
Guozhang's other suggestions now.
One thing I'm confused about (I should read the docs again) is
Hey Ravi,
I think what you want is available via log4j and JMX. Log4j is
pluggable: you can plug in any Java code at runtime to handle
the log events. JMX can be queried in any way you like, too.
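As an illustration of the log4j route, a custom handler can be dropped in via log4j.properties; the appender class name below is a hypothetical placeholder for your own code (it would extend org.apache.log4j.AppenderSkeleton):

```properties
# Route the server logs through a custom appender in addition to stdout
log4j.rootLogger=INFO, stdout, CUSTOM
log4j.appender.CUSTOM=com.example.MyKafkaLogAppender
log4j.appender.CUSTOM.layout=org.apache.log4j.PatternLayout
log4j.appender.CUSTOM.layout.ConversionPattern=[%d] %p %m (%c)%n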
-Jay
On Mon, Jun 23, 2014 at 11:51 PM, ravi singh rrs120...@gmail.com wrote:
Primarily we
Excellent decision
Sent from my iPhone
On Jun 27, 2014, at 12:06 PM, Jay Kreps jay.kr...@gmail.com wrote:
This is great!
-Jay
On Thu, Jun 26, 2014 at 5:47 PM, Joe Stein joe.st...@stealth.ly wrote:
Hi folks, I just started a new Meetup specifically for Apache Kafka in NYC
(everyone
Neha,
In state-change.log I see lots of logging from when I last started up kafka,
and nothing after that. I do see a bunch of errors of the form:
[2014-06-25 13:21:37,124] ERROR Controller 1 epoch 11 initiated state change
for partition [lead.indexer,37] from OfflinePartition to
Running a mixed two-broker cluster. Mixed as in broker1 is
running 0.8.0 and broker2 0.8.1.1 (from the Apache release link;
directly using the tarball, no local build).
I have set log.retention.minutes=10. However, the broker is not
honoring the setting. I see it's not
Thanks for the help. For others who happen upon this thread, the problem
was indeed on the consumer side. Spark (0.9.1) needs a bit of help setting
the Kafka properties for big messages.
// setup Kafka with manual parameters to allow big messaging
//see
Hi,
I have a quick question. Say I have a custom REST API with data in JSON format.
Can the Kafka Producer read directly from the REST API and write to a topic?
If so, can you point me to the code base?
Thanks,
Sonali
Sonali Parthasarathy
RD Developer, Data Insights
Accenture Technology Labs
The answer is no, it doesn't work that way. You would have to write a process
that reads from the REST API and writes what it fetches to Kafka.
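A minimal sketch of such a bridge process, in Python. The topic name and the polling example are hypothetical, and the `send` callable is injected so the forwarding logic stays testable without a broker (in production it would be a real producer's send method, e.g. kafka-python's `KafkaProducer.send`):

```python
import json

TOPIC = "rest-events"  # hypothetical topic name


def forward_records(records, send):
    """Serialize each JSON record and hand it to send(topic, payload_bytes).

    Returns the number of records forwarded.
    """
    count = 0
    for record in records:
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        send(TOPIC, payload)
        count += 1
    return count


# A bridge process would poll the REST API in a loop, e.g.:
#   records = requests.get("https://example.com/api/events").json()
#   forward_records(records, producer.send)
```

Injecting `send` also makes it easy to swap in batching or retries later without touching the fetch-and-serialize logic.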
On Jun 27, 2014, at 19:35, sonali.parthasara...@accenture.com wrote:
Hi,
I have a quick question. Say I have a custom REST API with data in