Hi Pushkar,
The best way to see what metrics are available is to connect to a broker via
JConsole and look at the exposed MBeans.
You can also iterate over them programmatically using the MBean API.
I'd also recommend chapter 10 of Kafka: The Definitive Guide; it covers the
metrics really well.
Cheers,
Liam
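Liam's suggestion to iterate over MBeans programmatically could look roughly like this (a minimal sketch; the class name is made up, and for a real broker you would connect remotely as shown in the comment, with host and port 9999 being assumptions):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class MBeanLister {
    // Returns all MBean names matching the given object-name pattern,
    // e.g. "kafka.server:*" on a broker's MBean server.
    static Set<ObjectName> listMBeans(MBeanServerConnection conn, String pattern)
            throws Exception {
        return conn.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        // For a remote broker you would obtain the connection via
        // JMXConnectorFactory.connect(new JMXServiceURL(
        //     "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi"))
        // and then connector.getMBeanServerConnection().
        // Here we query the local JVM's platform MBean server instead.
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name : listMBeans(conn, "java.lang:*")) {
            System.out.println(name);
        }
    }
}
```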
Thanks Liam...
A few questions: in your pattern, the topic parameter is appended to the pattern:
'kafka.server<type=BrokerTopicMetrics, name=(.+), topic=(.+)><>OneMinuteRate'
however the Kafka docs don't mention that for
kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
Is the topic parameter available in all BrokerTopicMetrics, and can the
bro
Whoops, just spotted a typo - the second $1 in the above snippet should of
course be $2.
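For reference, the rule with that fix applied would read as follows (a reconstruction; the archive stripped the angle-bracket part of the pattern, and the object-name regex below assumes the standard BrokerTopicMetrics MBean naming):

```yaml
- pattern: 'kafka.server<type=BrokerTopicMetrics, name=(.+), topic=(.+)><>OneMinuteRate'
  name: kafka_broker_per_topic_$1_one_minute_rate
  labels:
    topic: $2
  type: GAUGE
```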
On Thu, Jul 16, 2020 at 4:33 PM Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
Hi Pushkar,
There are broker side metrics for messages in / bytes in / bytes out per
topic per second. I use this jmx_exporter rule to export them:
- pattern: 'kafka.server<type=BrokerTopicMetrics, name=(.+), topic=(.+)><>OneMinuteRate'
  name: kafka_broker_per_topic_$1_one_minute_rate
  labels:
    topic: $1
  type: GAUGE
You can't
When I check the Kafka logs, it is there:
Invalid record due to REST client error
(io.confluent.kafka.schemaregistry.validator.RecordSchemaValidator)
io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException:
This ID is banned; error code: 40403
at
io.conflue
Thanks a lot Ricardo
I will try that 😊
On Wed, Jul 15, 2020, 19:11 Ricardo Ferreira wrote:
> The `tls.private.key` type is indeed modeled as a password but for the
> sake of how to assign values; it is just a string. Therefore, you can
> provide any valid string to it regardless if it is long o
Hi all
My schema is:
{
  "fields": [
    {
      "name": "ID",
      "type": "long"
    },
    {
      "name": "Group",
      "type": [
        "null",
        {
          "avro.java.string": "String",
          "type": "string"
        }
      ]
    },
    {
      "name": "Key",
Thanks Claudia! For broker level metrics, we are also using the same jmx
exporter to export those metrics to Prometheus.
Are you fetching any per topic metrics from broker? e.g. messages produced
on a certain topic or messages consumed from a certain topic. I am mainly
interested in these metrics.
I r
Hi,
I use https://github.com/prometheus/jmx_exporter for collecting broker metrics
and integrating them into prometheus.
Hope this helps.
Greetings,
Claudia
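As a sketch of how jmx_exporter is typically attached to a broker (the agent jar path, port 7071, and config file name are assumptions):

```shell
# Attach the exporter as a Java agent before starting the broker;
# Prometheus then scrapes metrics from http://<broker>:7071/metrics
export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/kafka-jmx.yml"
bin/kafka-server-start.sh config/server.properties
```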
-Original Message-
From: Pushkar Deole
Sent: Wednesday, 15 July 2020 09:07
To: users@kafka.apache.org
Subject: Re: kaf
The `tls.private.key` type is indeed modeled as a password but for the
sake of how to assign values; it is just a string. Therefore, you can
provide any valid string to it regardless if it is long or not.
Regarding escaping, I understand how this can be a PITA. I would
recommend either:
1. *
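One common workaround for the escaping pain Ricardo mentions is to collapse the multi-line PEM into a single line with literal `\n` escapes before assigning it as a string value (a hedged sketch; the class name, helper name, and sample key are made up):

```java
public class PemEscaper {
    // Collapses a multi-line PEM body into a single line with literal
    // "\n" sequences, suitable for a one-string config value.
    static String escapePem(String pem) {
        return pem.trim().replace("\n", "\\n");
    }

    public static void main(String[] args) {
        String pem = "-----BEGIN PRIVATE KEY-----\n"
                + "MIIEvFAKEKEYBODY\n"          // placeholder, not a real key
                + "-----END PRIVATE KEY-----\n";
        System.out.println(escapePem(pem));
    }
}
```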
Hello kafka community,
Hi, in the KTable-KTable join documentation from an older version, the cwiki
mentions:
“Pay attention, that the KTable lookup is done on the current KTable state,
and thus, out-of-order records can yield non-deterministic result.
Furthermore, in practice Kafka Streams does not guarant
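That non-determinism can be seen with a toy model (plain Java, not the Kafka Streams API; a sketch assuming last-write-wins table state that emits a join result on every update):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinOrderDemo {
    // Applies a sequence of (side, key, value) updates to two table states,
    // emitting a joined value after each update, roughly like a
    // KTable-KTable join would.
    static List<String> run(List<String[]> updates) {
        Map<String, String> left = new HashMap<>();
        Map<String, String> right = new HashMap<>();
        List<String> emitted = new ArrayList<>();
        for (String[] u : updates) {
            (u[0].equals("L") ? left : right).put(u[1], u[2]);
            emitted.add(u[1] + "=" + left.get(u[1]) + "|" + right.get(u[1]));
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Same three updates, two different arrival orders.
        List<String[]> inOrder = List.of(
                new String[]{"L", "k", "a1"},
                new String[]{"R", "k", "b1"},
                new String[]{"L", "k", "a2"});
        List<String[]> reordered = List.of(
                new String[]{"L", "k", "a1"},
                new String[]{"L", "k", "a2"},
                new String[]{"R", "k", "b1"});
        System.out.println(run(inOrder));    // [k=a1|null, k=a1|b1, k=a2|b1]
        System.out.println(run(reordered));  // [k=a1|null, k=a2|null, k=a2|b1]
        // The final state converges, but the intermediate join results differ.
    }
}
```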
Hello kafka community,
Putting this down in black and white to make it more visible:
this is a thought on making the join clearer to me, and less prone to
the concurrency issues that would be risky without knowing the underlying
implementation of the join:
Awaiting your feedback,
Thanks,
1. kafka streams 1:
map topic1 in: key: th
Hello kafka community,
Refining step 2, and some questions:
- is the indeterminism of the ktable join a real problem?
- how is the ktable join implemented?
- do you think the solution outlined is a step in the right direction?
- does the ktable join implement such a strategy in a future version
We are using Prometheus as the metrics collection and storage system and
Grafana for displaying those metrics, so integration with them is required.
On Wed, Jul 15, 2020 at 11:11 AM rohit garg wrote:
> You can try using kafka manager and check it; it will fulfill most of
> your requirements.
>
> Than