Hello!
I'm looking for open-source solutions for consuming Kafka messages in the
browser.
I found two interesting articles:
1. https://www.confluent.io/blog/consuming-messages-out-of-apache-kafka-in-a-browser/
2. https://ably.com/topic/websockets-kafka
Unfortunately:
- the first is from 2019 and does
Hello!
Please check whether the segment.ms configuration on topic will help you
to solve your problem.
https://kafka.apache.org/documentation/
https://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached
Regards,
Florin
segment.ms This
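The segment.ms / segment.bytes interplay can be sketched as the following roll condition (function name and the defaults here are illustrative, not the broker's actual code; the real defaults live in the broker docs):

```python
def should_roll_segment(segment_size_bytes, segment_age_ms,
                        segment_bytes=1024 * 1024 * 1024,
                        segment_ms=7 * 24 * 60 * 60 * 1000):
    # A new log segment is rolled when the active one is full
    # (segment.bytes) OR old enough (segment.ms), whichever comes first.
    return segment_size_bytes >= segment_bytes or segment_age_ms >= segment_ms

day_ms = 24 * 60 * 60 * 1000
# A week-old, nearly empty segment still rolls, which is why retention
# can delete data "even before segment size is reached".
should_roll_segment(1024, 8 * day_ms)
```

This is exactly the behavior the Stack Overflow question above runs into: time-based rolling makes small segments eligible for deletion once retention kicks in.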
Hello!
I've read about the KSQL streaming SQL for Kafka. In my opinion, this is a
big step for performing complex event processing on streams.
In the provided examples there is a TUMBLING window function.
By default, I assume that the windowing is performed on the timestamp
generated by the
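The bucketing that a TUMBLING window performs can be sketched as follows (epoch-aligned, non-overlapping windows are an assumption here; the function name is illustrative):

```python
def tumbling_window(timestamp_ms, window_size_ms):
    # Assign an event to the single tumbling window that contains it.
    # Windows are aligned to the epoch and cover [start, start + size).
    start = timestamp_ms - (timestamp_ms % window_size_ms)
    return (start, start + window_size_ms)

# An event at t=65s with a 1-minute window lands in [60s, 120s).
tumbling_window(65_000, 60_000)
```

Because the windows do not overlap, every record timestamp maps to exactly one window, unlike HOPPING windows where a record can fall into several.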
Hello!
For me it seems that a rebalance is the reason that some of the messages are
consumed by either one consumer or another. If you are using a random client
id, then it could happen that during the rebalance a client id will get a
"lower" position than the consumer that was previously consuming
Hi!
Regarding the configurations:
you are using the retention.ms property, which should be used for a topic, not
for the broker via server.properties. If you'd like to set this at the broker
level, then you have to use log.retention.ms.
Kafka separates the configuration for brokers and for topics.
> Is there any mechanism by which I can make sure a consumer consumes from a
> particular partition for sufficient amount of time which is configurable
> provided none of the consumers goes down triggering rebalance.
>
>
>
>
> On Wed, Jun 29, 2016 at 3:02 PM
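The two levels can be sketched like this (hosts and topic name are placeholders; the kafka-configs.sh form matches the 0.9-era ZooKeeper-based tooling):

```shell
# Broker-wide default, set in server.properties:
#   log.retention.ms=604800000        # 7 days

# Per-topic override, which takes precedence over the broker default:
bin/kafka-configs.sh --zookeeper zkhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=86400000
```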
Hi!
By default, Kafka uses a partitioner that sends each message to a partition
based on the message key: keyed messages are hashed to a fixed partition,
while messages without a key are spread round robin. Each of your consumers
will receive messages only for the partitions assigned to it through its
subscription.
In case of rebalance, if you add more consumers than the
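The default partitioning logic can be sketched roughly like this (the real Java client hashes keys with murmur2; Python's built-in hash() stands in here, and the function name is illustrative):

```python
import itertools

_round_robin = itertools.count()

def pick_partition(key, num_partitions):
    # Keyed messages always hash to the same partition, so ordering per
    # key is preserved. Unkeyed messages are rotated round robin.
    if key is not None:
        return hash(key) % num_partitions
    return next(_round_robin) % num_partitions
```

The important consequence: all messages with the same key land on the same partition, and therefore on the same consumer, as long as the assignment is stable.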
Hello!
I would like to know what the configurations/properties are for the
producer/consumer in order to fail fast when the connection to the entire
broker cluster is lost.
For example, can we set a parameter so that when the number of connection
attempts reaches a threshold, the client disconnects and throws an
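A hedged sketch of the knobs usually involved; the exact property names depend on the client version, so double-check them against your client's documentation:

```properties
# Illustrative fail-fast tuning (verify names for your client version):
request.timeout.ms=5000   # give up on an in-flight request after 5s
retries=0                 # producer: do not retry, surface the error
max.block.ms=5000         # producer: cap time blocked waiting for metadata
```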
Hello!
I'm working with Kafka 0.9.1 new consumer API.
The consumer is manually assigned to a partition. For this consumer I would
like to see its progress (meaning the lag).
Since I added the group id consumer-tutorial as property, I assumed that I
can use the command
Hi!
If you have subscribed to a topic via a consumer group you can use:
./kafka-consumer-groups.sh --new-consumer --bootstrap-server brokerhost:brokerport --describe --group your_group_id
You can see the group list via the command:
./kafka-consumer-groups.sh --new-consumer --bootstrap-server
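The lag number that --describe reports is just the gap between two offsets; a minimal sketch of that arithmetic (function name is illustrative):

```python
def consumer_lag(log_end_offset, committed_offset):
    # Lag = how far the committed position trails the partition's
    # latest offset; this is the LAG column that --describe prints.
    return max(log_end_offset - committed_offset, 0)

# A consumer committed at 1200 on a partition whose log ends at 1500
# is 300 messages behind.
consumer_lag(1500, 1200)
```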
Hi!
What version of Kafka are you using? What do you mean by "Kafka needs
rebalancing"? Rebalancing of what? Can you please be more specific?
Regards,
Florin
On Tue, May 31, 2016 at 4:58 PM, Hafsa Asif
wrote:
> Hello Folks,
>
> Today , my team members shows
Hello!
I'm using Kafka 0.9.1 as a service in Hortonworks Ambari.
I have installed on one machine M1 Kafka manager that needs the JMX_PORT
for getting the consumers for a specific topic.
If I'm running the kafka scripts such as kafka-consumer-groups or
kafka-topics from the same machine where
Hi!
Here is a great article about the consumer API:
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
In my opinion, the consumer will not be able to send the heartbeat to the
group coordinator (due to the fact that poll calls on
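In the 0.9-era consumer, heartbeats are sent from inside poll(), so the time spent processing one batch must stay well under the session timeout; a hedged fragment (value is illustrative):

```properties
# If the gap between poll() calls exceeds this, the coordinator
# considers the consumer dead and triggers a rebalance.
session.timeout.ms=30000
```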
per to find out about partition leaders
> and there is some handling required there.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
>
> I think Gwen's answer is complete in that respect. Hope that helps.
>
> On Wed, May 11, 2016 at 3:23 AM, Spico
Hello!
I'm using Kafka 0.9.1. Suppose that I have created a topic "my-topic" with
1 partition. With the following code, I got a StaleMetadataException in the
Fetcher->listOffset method and the thread is blocked in an infinite while
loop (while true).
I came to this error by mistake, so what to do in
wrote:
> And where is the documentation for this topic: "__consumers_offsets"
>
> On Tue, May 10, 2016 at 1:16 AM, Spico Florin <spicoflo...@gmail.com>
> wrote:
>
> > Hi!
> > Yes both are possible. The new versions 0.9 and above store the offsets
> in
I have 3 topics A,B,C with same number of partitions. I use the same group
name for all the consumers to this topics.
My questions are:
1. If a consumer for one of the topics/partitions fails, will a rebalance be
triggered for the other two topics' consumers?
2. The same question if adding a new partition for one
:17 AM, Mudit Kumar <mudit.ku...@askme.in> wrote:
> so zookeeper not needed anymore?
>
> > On May 10, 2016, at 1:46 PM, Spico Florin <spicoflo...@gmail.com> wrote:
> >
> > Hi!
> > Yes both are possible. The new versions 0.9 and above store the of
Hi!
Yes, both are possible. The new versions 0.9 and above store the offsets in
a special Kafka topic named __consumer_offsets.
Regards,
florin
On Tue, May 10, 2016 at 8:33 AM, Gerard Klijs
wrote:
> Both are possible, but the 'new' consumer stores the offset in an
hi!
please have a look at this article. it help me touse the log compaction
feature mechanism
i hope thtat it helps.
regards,
florin
http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka-part-2/
On Thursday, May 5, 2016, Behera, Himansu (Contractor) <
> KTable's changelog is using log compaction by default instead of log
> deletion. But we are adding a mechanism to let users configure their
> changelog topics.
>
> Guozhang
>
>
> On Thu, May 5, 2016 at 12:00 PM, Spico Florin <spicoflo...@gmail.com
> <javascr
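The effect of log compaction can be sketched in a few lines (this mimics the behavior, not the broker's cleaner implementation; record shapes are illustrative):

```python
def compact(records):
    # records: list of (key, value) in offset order. Compaction keeps
    # only the newest value per key; a None value (a "tombstone")
    # deletes the key entirely.
    latest = {}
    for offset, (key, value) in enumerate(records):
        if value is None:
            latest.pop(key, None)
        else:
            latest[key] = (offset, value)
    # Survivors keep their original offsets and relative order.
    return sorted((off, key, val) for key, (off, val) in latest.items())
```

This is why a compacted topic (like a KTable changelog) retains at least the last value for every key, rather than deleting whole segments by time or size.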
Hi!
If you produce your messages with a key (optional) and a value of type
String, then you can use Kafka Tool: http://www.kafkatool.com/
I hope that it helps.
Regards,
Florin
On Fri, May 6, 2016 at 12:10 AM, Henry Cai
wrote:
> When we are on kafka 0.8, all the
hello!
i would like to ask you if ktable is using as backend storage a compacted
topic ? i have read here
http://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple
thtat a ktable is at its base a compacted topic
if i would like to have the messages from ktable be
Hello!
We are using Kafka 0.9.1. We have created a class CustomKafkaConsumer
whose receive method
has the pseudocode:
public OurClassStructure[] receiveFromKafka()
{
// get the messages from the topic
ConsumerRecords received = org
Hello!
I'm using Kafka 0.9.0 high level consumer API. I have the following
questions:
1. Where are the consumer group and its corresponding clients kept?
2. Is the group kept in Zookeeper?
3. If the group is kept in ZK the data structure remains the same as
described here:
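For the old ZooKeeper-based (high-level) consumer, the znode layout is roughly the following; with the 0.9 group coordinator, groups and offsets live in the __consumer_offsets topic instead:

```
/consumers/[group_id]/ids/[consumer_id]             (group membership)
/consumers/[group_id]/owners/[topic]/[partition]    (partition ownership)
/consumers/[group_id]/offsets/[topic]/[partition]   (committed offset)
```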
Hi!
You can set up your Kafka process to dump the heap in case of an OOM by
providing the flags (
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html
):
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=path
The second option can be used to specify a location for the heap dump, see
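One way to pass these to the broker scripts (the dump path is a placeholder; kafka-run-class.sh picks up KAFKA_OPTS):

```shell
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka/dumps"
```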
and maintain group membership are done in
> the same thread as the caller.
>
> Also, poll() is a blocking method with timeout and you can interrupt it
> with the wakeup method in Kafka Consumer.
>
> Thanks,
> Liquan
>
>
> On Sun, Apr 24, 2016 at 11:43 AM, Spico Florin &
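The threading contract described above (poll() blocks in the caller's own thread; wakeup() is the one method safe to call from another thread) can be sketched with a toy stand-in that does not use the Kafka client at all:

```python
import queue
import threading
import time

class WakeupException(Exception):
    pass

class ToyConsumer:
    # Mimics the Kafka consumer's contract: poll() runs and blocks in
    # the caller's thread until a record arrives or the timeout passes;
    # wakeup(), callable from any thread, makes a blocked poll() raise.
    def __init__(self):
        self._records = queue.Queue()
        self._wakeup = threading.Event()

    def poll(self, timeout_s):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if self._wakeup.is_set():
                raise WakeupException()
            try:
                return self._records.get(timeout=0.01)
            except queue.Empty:
                pass
        return None

    def wakeup(self):
        self._wakeup.set()
```

So poll() is synchronous and blocking from the caller's point of view, even though the real client does network I/O under the hood during the call.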
hi!
i would like to ask if the kafka consumer poll method is done in aseprated
thread than the caller or in the same thread as the caller?
it is syncriunous blocking method or asynch?
thank you
florin