One caveat: if you are relying on log.segment.ms to roll the current log
segment, it will not roll until both the time elapses and something new
arrives for the log.
In other words, if your topic/log segment is idle, no rolling will happen.
The segment, though theoretically eligible to roll, will still be the
active segment.
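As a concrete illustration of the settings involved (values here are hypothetical, not recommendations), the broker-level defaults live in server.properties:

```properties
# server.properties -- broker-level defaults (example values only)
# A segment rolls only after BOTH the age limit passes AND a new
# message arrives for the log; an idle topic never rolls.
log.segment.ms=3600000       # target: roll segments roughly hourly
log.retention.ms=604800000   # delete rolled segments after ~7 days
```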
Using -1 for log.retention.ms should only work as of 0.8.3 (
https://issues.apache.org/jira/browse/KAFKA-1990).
2015-07-13 17:08 GMT-03:00 Shayne S shaynest...@gmail.com:
Did this work for you? I set the topic settings to retention.ms=-1 and
retention.bytes=-1 and it looks like
If I recall correctly, setting log.retention.ms and log.retention.bytes
to -1 disables both.
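For reference, the topic-level equivalents (retention.ms / retention.bytes, as used earlier in this thread) can be set with a 0.8.2-era alter command. The topic name below is hypothetical, and note that per KAFKA-1990 a value of -1 may only be honored from 0.8.3 on:

```shell
# Topic-level retention overrides (topic name is a placeholder)
kafka-topics.sh --zookeeper localhost:2181 --alter --topic my_topic \
  --config retention.ms=-1 \
  --config retention.bytes=-1
```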
Thanks!
On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
On 10. jul. 2015, at 15.16, Shayne S shaynest...@gmail.com wrote:
Your payload is so small that I suspect it's an encoding issue. Is your
producer set to expect a byte array and you're passing a string? Or vice
versa?
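A minimal sketch of the check being suggested, in pure Python with no Kafka client involved (the helper name here is hypothetical):

```python
def to_bytes(payload):
    """Normalize a payload to bytes: a producer configured for byte
    arrays will choke on (or mangle) a str passed straight through."""
    if isinstance(payload, bytes):
        return payload
    if isinstance(payload, str):
        return payload.encode("utf-8")
    raise TypeError("payload must be str or bytes, got %r" % type(payload))

print(to_bytes("hello"))  # b'hello'
print(to_bytes(b"raw"))   # b'raw'
```

Normalizing at the edge like this makes the str-vs-bytes mismatch fail loudly instead of producing a garbled payload on the wire.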
On Sat, Jul 11, 2015 at 11:08 PM, David Montgomery
davidmontgom...@gmail.com wrote:
I can't send this simple payload using Python.
The console producer will read from STDIN. Assuming you are using 0.8.2,
you can pipe the file right in like this:
kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic \
  --new-producer < my_file.txt
On Sat, Jul 11, 2015 at 6:32 PM, tsoli...@gmail.com tsoli...@gmail.com
wrote:
There are two ways you can configure your topics: with log compaction or with
no cleaning. The choice depends on your use case. Are the records uniquely
identifiable and will they receive updates? Then log compaction is the way
to go. If they are truly read-only, you can go without log compaction.
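The two configurations might be created like this with the 0.8.2-era CLI (topic names are placeholders, partition counts arbitrary):

```shell
# Compacted topic: records with the same key are reduced to the latest value
kafka-topics.sh --zookeeper localhost:2181 --create --topic user_profiles \
  --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact

# No cleaning at all: keep every record indefinitely
kafka-topics.sh --zookeeper localhost:2181 --create --topic audit_log \
  --partitions 1 --replication-factor 1 \
  --config retention.ms=-1 --config retention.bytes=-1
```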
The problem is gone, but I'm unsure of the root cause. The client library I
use recently added support for the new producer. Switching to that seems to
have sidestepped the problem.
On Tue, Jun 30, 2015 at 12:53 PM, Shayne S shaynest...@gmail.com wrote:
Thanks for responding, Gwen.
when this happens?
On Tue, Jun 30, 2015 at 10:14 AM, Shayne S shaynest...@gmail.com wrote:
This problem is intermittent, not sure what is causing it. Some days
everything runs non-stop with no issues, some days I get the following.
Setup:
- Single broker
- Running 0.8.2.1
I'm running a single broker. When the problem occurs, anywhere from
5,000 to 30,000 messages may be
Duplicate messages might be due to network issues, but it is worthwhile to
dig deeper.
It sounds like the problem happens when you have 3 partitions and 3
consumers. Based on my understanding (still learning), each consumer should
have its own partition to consume. Can you verify this while your
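The one-partition-per-consumer idea above can be sketched as a range-style assignment; this is a simplified illustration, not Kafka's actual rebalancing code:

```python
def assign_partitions(num_partitions, consumers):
    """Divide partitions evenly across consumers, giving any
    remainder to the first consumers (range-assignment style)."""
    per, extra = divmod(num_partitions, len(consumers))
    assignment = {}
    start = 0
    for i, consumer in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        assignment[consumer] = list(range(start, start + count))
        start += count
    return assignment

print(assign_partitions(3, ["c1", "c2", "c3"]))
# {'c1': [0], 'c2': [1], 'c3': [2]}
```

With 3 partitions and 3 consumers each consumer owns exactly one partition; a fourth consumer would simply sit idle.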
kafka-server-start.sh has a -daemon option, but I don't think Zookeeper has
it.
On Tue, Jun 16, 2015 at 11:32 PM, Su She suhsheka...@gmail.com wrote:
It seems like nohup has solved this issue; even when the PuTTY window
becomes inactive the processes are still running (I didn't need to
Some further information, and is this a bug? I'm using 0.8.2.1.
Log compaction will only occur on the non-active segments. Intentional or
not, it seems that the last segment is always the active segment. In other
words, an expired segment will not be cleaned until a new segment has been
created.
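Given that behavior, one workaround (a sketch; the value is hypothetical, and a roll still requires new traffic to arrive) is to lower the topic's segment.ms so segments roll sooner and become eligible for compaction:

```shell
# Force more frequent segment rolls on the topic from this thread
kafka-topics.sh --zookeeper localhost:2181 --alter --topic beer_archive \
  --config segment.ms=60000
```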
Manikumar
On Tue, Jun 16, 2015 at 5:35 PM, Shayne S shaynest...@gmail.com wrote:
Hi, I'm new to Kafka and having trouble with log compaction.
I'm attempting to set up topics that will aggressively compact, but so far
I'm having trouble getting complete compaction at all. The topic is
configured like so:
Topic:beer_archive PartitionCount:20 ReplicationFactor:1