I am very new to Kafka; we are using Kafka 0.8.1.
What I need to do is consume a message from a topic. For that, I will have
to write a consumer in Java which will consume a message from the topic and
then save that message to a database. After a message is saved, some
acknowledgement will
I want to delete the message from a Kafka broker after consuming it (Java
consumer). How can I do that?
Kafka is a log, not a queue. The client remembers a position in the
log rather than working with individual messages.
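For illustration, here is a minimal sketch of that model with the 0.8 high-level consumer: the consumer saves each message and then commits its position, rather than deleting anything from the broker. The ZooKeeper address, topic name, and saveToDatabase helper are placeholders, not part of the original thread.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class DbSavingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // placeholder ZooKeeper address
        props.put("group.id", "db-writer");
        props.put("auto.commit.enable", "false");          // commit only after the DB write succeeds

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

        while (it.hasNext()) {
            byte[] message = it.next().message();
            saveToDatabase(message);     // hypothetical helper for the actual DB write
            connector.commitOffsets();   // advances the consumer's position; the log itself is untouched
        }
    }

    private static void saveToDatabase(byte[] message) {
        // placeholder for the real JDBC/ORM write
    }
}

The broker keeps the data until its retention settings expire it; "acknowledging" a message only moves the stored offset forward.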
On Fri, Aug 1, 2014 at 4:02 PM, anand jain anandjain1...@gmail.com wrote:
I want to delete the message from a Kafka broker after consuming it (Java
consumer). How can I
Hi all!
I think I already saw this question on the mailing list, but I'm not able
to find it again...
I'm using Kafka 0.8.1.1; I have 3 brokers, a default replication
factor of 2, and a default partition count of 2.
My partitions are distributed fairly across all the brokers.
My problem
Dear Kafka team,
Would you mind adding us at
https://cwiki.apache.org/confluence/display/KAFKA/Powered+By ?
We're using it as part of our ticket sequencing system for our helpdesk
software.
--
Vitaliy Verbenko - Business Development at Helprace
vitaliy.verbe...@helprace.com
Customer Service
You have to remember that statsd uses UDP, which can be lossy; that might
account for the errors.
-Steve
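For context, a statsd update is a single plain-text UDP datagram sent with no acknowledgement, so a dropped packet is simply a lost count. A rough sketch of what that send looks like; the host, port, and metric name are made up for illustration.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsdUdpExample {
    public static void main(String[] args) throws Exception {
        // statsd counter increment in its plain-text line protocol: "<name>:<value>|c"
        byte[] payload = "kafka.audit.messages:1|c".getBytes(StandardCharsets.UTF_8);

        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("statsd.example.com"), 8125); // hypothetical statsd host
            // Fire-and-forget: UDP gives no delivery guarantee, so counts can silently go missing.
            socket.send(packet);
        }
    }
}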
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg guy.doulb...@perion.com
wrote:
Hey,
After a year or so of having Kafka as my streaming layer in production, I
decided it is time to audit it, and to
Thanks Guozhang,
I was looking for actual real-world workflows. I realize you can commit
after each message, but if you're using ZK for offsets, for instance, you'll
put too much write load on the nodes and crush your throughput. So I was
interested in batching strategies people have used that
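One commonly described pattern, sketched below assuming the 0.8 high-level consumer with auto.commit.enable=false, is to commit once per batch (every N messages, or every few seconds) instead of per message, trading a small replay window after a restart for far fewer ZooKeeper writes. The interval values and the process helper are illustrative, not from the thread.

import kafka.consumer.ConsumerIterator;
import kafka.javaapi.consumer.ConsumerConnector;

public class BatchedCommitLoop {
    private static final int COMMIT_INTERVAL_MESSAGES = 1000;  // illustrative; tune to taste
    private static final long COMMIT_INTERVAL_MS = 10000L;

    /** Consumes with auto.commit.enable=false and commits once per batch instead of per message. */
    public static void run(ConsumerConnector connector, ConsumerIterator<byte[], byte[]> it) {
        int sinceLastCommit = 0;
        long lastCommit = System.currentTimeMillis();

        while (it.hasNext()) {
            process(it.next().message());
            sinceLastCommit++;

            boolean batchFull = sinceLastCommit >= COMMIT_INTERVAL_MESSAGES;
            boolean timedOut = System.currentTimeMillis() - lastCommit >= COMMIT_INTERVAL_MS;
            if (batchFull || timedOut) {
                connector.commitOffsets();  // one ZooKeeper write per batch, not per message
                sinceLastCommit = 0;
                lastCommit = System.currentTimeMillis();
            }
        }
    }

    private static void process(byte[] message) {
        // placeholder for the real per-message work (e.g. the DB write)
    }
}

If the process dies between commits, at most one batch of messages is re-delivered on restart, so the per-message work needs to tolerate replays.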
One seed broker should be enough, and the number of partitionMetadata
entries should be the same as the number of partitions. One note here is
that the metadata is propagated asynchronously to the brokers, and hence the
metadata returned by any broker may occasionally be stale, so you need
to
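For reference, a rough sketch of fetching that metadata from one seed broker with the 0.8 SimpleConsumer API; the broker host, port, and topic name are placeholders, and in practice you would keep a list of seed brokers and refresh on error, since any single broker's view can lag.

import java.util.Collections;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class MetadataLookup {
    public static void main(String[] args) {
        // One seed broker is enough to ask; host, port, and topic below are placeholders.
        SimpleConsumer consumer =
                new SimpleConsumer("broker1.example.com", 9092, 100000, 64 * 1024, "metadata-lookup");
        try {
            TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList("my-topic"));
            TopicMetadataResponse response = consumer.send(request);

            for (TopicMetadata topic : response.topicsMetadata()) {
                // One PartitionMetadata entry per partition of the topic.
                for (PartitionMetadata partition : topic.partitionsMetadata()) {
                    System.out.println("partition " + partition.partitionId()
                            + " leader " + partition.leader());
                }
            }
        } finally {
            consumer.close();
        }
    }
}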
Do you have producer retries (due to broker failure) in those minutes when
you see a diff?
Thanks,
Jun
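For anyone checking this on their side: with the 0.8 producer, resends on broker failure are governed by the retry settings shown in this rough sketch, and a retried send whose first attempt actually reached the broker can show up as a duplicate in an audit count. The broker address and topic name are placeholders.

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class RetryingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1.example.com:9092");  // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        props.put("message.send.max.retries", "3");  // resends on failure can duplicate messages
        props.put("retry.backoff.ms", "100");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("audit-topic", "hello"));
        } finally {
            producer.close();
        }
    }
}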
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg guy.doulb...@perion.com
wrote:
Hey,
After a year or so of having Kafka as my streaming layer in production, I
decided it is time to audit,
Sure, can you give me the blurb you want?
-Jay
On Fri, Aug 1, 2014 at 6:58 AM, Vitaliy Verbenko
vitaliy.verbe...@helprace.com wrote:
Dear Kafka team,
Would you mind adding us at
https://cwiki.apache.org/confluence/display/KAFKA/Powered+By ?
We're using it as part of our ticket sequencing system
Howdy,
I was wondering if it would be possible to update the release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
so that it is aligned with the feature roadmap:
https://cwiki.apache.org/confluence/display/KAFKA/Index
We have several projects actively underway and are planning to
I too could benefit from an updated roadmap.
We're in a similar situation where some components in our stream processing
stack could use an overhaul, but I'm waiting for the offset API to be fully
realized before doing any meaningful planning.
On Fri, Aug 1, 2014 at 11:52 AM, Jonathan Weeks
Hello Weide,
That should be doable via the high-level consumer; you can take a look at
this page:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
Guozhang
On Fri, Aug 1, 2014 at 3:20 PM, Weide Zhang weo...@gmail.com wrote:
Hi,
I have a use case for a master-slave