Hi
Thanks for the response!
I do not think my use case is general enough, so I will spare you the
details, unless you "insist", in which case I will be happy to share.
Regards, Per Steffensen
On 20/02/18 20:57, Andras Beni wrote:
Just a heads up that an issue with upgrades was found
https://issues.apache.org/jira/browse/KAFKA-6238 There's a PR in progress
to address the underlying issue. There was a workaround, but it looked like
this would cause more pain than doing one more RC, so we'll have an RC2 up
soon after the PR is
Hello all,
after reading about several properties in the Kafka documentation, I asked
myself some questions ...
The following two options are available:
log.dir: The directory in which the log data is kept (supplemental for the
log.dirs property). Type: string; default: /tmp/kafka-logs; importance: high.
log.dirs
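The relationship between the two properties can be sketched in a broker config; `log.dirs` takes a comma-separated list and, when set, takes precedence over the singular `log.dir`. The paths below are hypothetical:

```properties
# Option 1: a single log directory (the older, singular property).
log.dir=/var/kafka/logs

# Option 2: one or more directories; if set, this takes precedence
# over log.dir. Paths shown are placeholders.
log.dirs=/var/kafka/logs-1,/var/kafka/logs-2
```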
Hi Per,
Unfortunately you cannot extract this information from the client. Even if
you implement your own PartitionAssignor to supply this information to all
the consumers, KafkaConsumer has its ConsumerCoordinator implementation
hard-wired, so you cannot extract that info. Here there is room for
Could you elaborate a bit more?
a) what version of Kafka?
b) What do you mean by "it can still send/receive"? It can't be node 3, as
it is down.
- Affan
On Tue, Feb 20, 2018 at 3:47 PM, Shatadeep Banerjee <
shata_d...@yahoo.co.in.invalid> wrote:
> I am running Kafka in a multi-node, multi-broke
I am running Kafka in a multi-node, multi-broker setup. I have 3 machines (with
Windows OS) connected in a cluster. When I close node 3, it can still send and
receive messages, while node 2 cannot receive messages but can send messages.
Why is this happening?
Shatadeep Banerjee
Dear Apache Enthusiast,
(You’re receiving this message because you’re subscribed to a user@ or
dev@ list of one or more Apache Software Foundation projects.)
We’re pleased to announce the upcoming ApacheCon [1] in Montréal,
September 24-27. This event is all about you — the Apache project com
Thanks, sounds good.
On 19 February 2018 at 21:04, Matthias J. Sax wrote:
> Using Kafka's Streams API sounds like a very good solution to your
> problem. I'd recommend checking out the docs and examples:
>
> https://kafka.apache.org/10/documentation/streams/
>
> https://github.com/confluentinc/kafk
As a Kafka consumer I know which topic-partitions I am assigned to
within my consumer-group. What is the easiest way, as a client, to get a
complete overview of the assignments within the consumer-group -
specifically who (client-id, host/IP, port, etc., whatever is available)
is assigned to the
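Outside the client API, one common way to get such a group-wide view is the `kafka-consumer-groups.sh` tool that ships with Kafka; it reports, per partition, the offsets, lag, consumer id, host, and client id of the assigned member. The broker address and group id below are placeholders, and the sketch only echoes the command since running it requires a live broker:

```shell
# "my-group" and localhost:9092 are placeholders; running the command
# requires a reachable broker, so this sketch only prints it.
CMD='kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group'
echo "$CMD"
# --describe output columns include: TOPIC, PARTITION, CURRENT-OFFSET,
# LOG-END-OFFSET, LAG, CONSUMER-ID, HOST, CLIENT-ID
```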
Hi Sandor,
Thanks for your reply. I am not at work right now, but I still am a bit
confused about what happened at work:
1- One thing that I confirmed was that one of the 3 nodes was definitely down.
We were unable to telnet into its Kafka port from anywhere. The other two
nodes were up and we could
Thanks Sharat.
Working with offsets is my last option. Checking if we have any other
option available.
Cheers,
Siva
On Tue, Feb 20, 2018 at 12:54 PM, Sharath Gururaj
wrote:
> There are no statistics on a per-day basis. Kafka exposes metrics for
> producer throughput per second, both in terms of b
Hi Behrang,
All reads and writes of a partition go through the leader of that
partition.
If the leader of a partition is down, you will not be able to
produce/consume data from it until a new leader is elected. Typically this
happens within a few seconds; after that you should be able to use that
partition
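While a new leader is being elected, sends can fail transiently; a producer can be configured to ride this out by retrying. A minimal sketch of the relevant producer settings, with illustrative values rather than recommendations:

```properties
# Retry transient send failures (such as those seen during leader
# election); values below are illustrative, not tuned recommendations.
retries=5
retry.backoff.ms=500
```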
Why are the per-second metrics for messages and bytes not useful for
capacity planning? I can't think of a situation where knowing a per-day
number would be more useful. If you really want that, you can always
extrapolate the per second number and get an approximation.
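The extrapolation mentioned above is just arithmetic; a minimal sketch using a hypothetical rate of 1,500 messages per second:

```shell
# Approximate per-day volume from a per-second metric.
# The rate is hypothetical, for illustration only.
MSGS_PER_SEC=1500
SECS_PER_DAY=$((60 * 60 * 24))                 # 86400
MSGS_PER_DAY=$((MSGS_PER_SEC * SECS_PER_DAY))
echo "$MSGS_PER_DAY messages/day"              # 129600000 messages/day
```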
--
Sharninder
On Tue,
Hi,
I have a Kafka cluster with 3 nodes.
I pass the nodes in the cluster to a consumer app I am building as
bootstrap servers.
When one of the nodes in the cluster is down, the consumer group sometimes
CAN read records from the server but sometimes CAN NOT.
In both cases, the same Kafka node is
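Listing every node as a bootstrap server, as described above, looks like this in a consumer config so the client can still bootstrap when one node is down. Hostnames, ports, and the group id are placeholders:

```properties
# All three cluster nodes listed; hostnames/ports are placeholders.
bootstrap.servers=node1:9092,node2:9092,node3:9092
group.id=my-consumer-group
```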