Hi community,
I need to grant a user consumer permissions on all topics (a "super
consumer"), including new topics that haven't been created yet. I have tried this:
kafka-acls.sh --add --allow-principal User:$1 --operation Read --topic "*"
kafka-acls.sh --add --allow-principal User:$1 --operation
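For completeness, a sketch of the full consumer grant under a ZooKeeper-based authorizer; the principal, ZooKeeper address, and group wildcard below are illustrative assumptions, not from the original mail:

```shell
# Wildcard ACLs match current and future topics. Quote '*' so the
# shell does not glob it. Principal and addresses are placeholders.
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read --topic '*'

# A consumer additionally needs Read on its consumer group:
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read --group '*'
```

Newer Kafka versions can also target the brokers directly with --bootstrap-server instead of going through ZooKeeper.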
Hi, I am using Ambari to manage Kafka, info as listed below:
Ambari version: 2.7.4.0
Kafka version: 2.0.0
The problem I ran into is that one broker restarted without a shutdown
log, which makes it difficult to track down the reason. The related logs
are as follows, in which I cannot find "shut
I am using Ambari to manage Kafka, info listed below:
Ambari version: 2.7.4.0
Kafka version: 2.0.0
broker number: 10
On every broker, authorizer logger keeps outputting following logs:
[2020-04-14 07:56:40,214] INFO Principal = User:xxx is Denied Operation =
Describe from host = 10.90.1.213 on
, even resetting them
> anywhere.
>
> Under these circumstances, MessagesOutPerSec is meaningless.
>
> On Mon, Apr 13, 2020 at 11:38 AM 张祥 wrote:
>
> > Hi,
> >
> > I am wondering why there isn't a metric called MessagesOutPerSec in Kafka
> > JMX metrics to de
Hi,
I am wondering why there isn't a metric called MessagesOutPerSec in Kafka
JMX metrics to describe how many messages are consumed by clients and
fetched by followers per second since there are already metrics like
MessagesInPerSec, BytesInPerSec and BytesOutPerSec. Thanks.
Hi,
I notice that there are JMX metrics for deleted topics when using Java code
and jmxterm. Has anyone else run into this? If so, what is the reason
behind it, and how can I filter out these expired metrics? Thanks.
Hi,
I want to know what the best practice for collecting Kafka JMX metrics is. I
haven't found a decent way to collect and parse JMX in Java (because there is
too much of it), and I have learned that there are tools like jmxtrans to do
this. I wonder if there are more. Thanks. Regards.
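As a minimal sketch of non-interactive collection with the jmxterm mentioned earlier (the jar name, JMX port, and MBean attribute are assumptions):

```shell
# Pipe a single command into jmxterm and print one attribute of a
# broker metric; assumes the broker exposes JMX on localhost:9999.
echo "get -b kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec OneMinuteRate" \
  | java -jar jmxterm-1.0.2-uber.jar -l localhost:9999 -n -v silent
```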
Hi community,
I understand that there are two configs regarding segment file size:
log.segment.bytes for the broker and segment.bytes for the topic. The default
values are both 1 GB, and they are required to be an int, so they cannot
be larger than 2 GB. My question is, assuming I am not making any
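If the goal is a different segment size for one topic (within the int limit discussed above), a hedged example of a per-topic override; the topic name and value are made up:

```shell
# segment.bytes is an int, so values must stay below 2^31 - 1 (~2 GB).
kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config segment.bytes=536870912   # 512 MB segments
```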
nce).
>
> There is also an upcoming webinar on how Kafka is integrated in your
> application/architecture.
>
> I hope it helps.
>
> Regards,
> M. MAnna
>
> On Thu, 12 Mar 2020 at 00:51, 张祥 wrote:
>
> > Thanks, very helpful !
> >
> > Peter Bukowi
partition availability is more important to you than data
> integrity, you should allow unclean leader election.
>
>
> > On Mar 11, 2020, at 6:11 AM, 张祥 wrote:
> >
> > Hi, Peter, following what we talked about before, I want to understand
> what
> > will happen when one b
right? Thanks.
张祥 wrote on Thu, Mar 5, 2020 at 9:25 AM:
> Thanks Peter, really appreciate it.
>
> Peter Bukowinski wrote on Wed, Mar 4, 2020 at 11:50 PM:
>
>> Yes, you should restart the broker. I don’t believe there’s any code to
>> check if a Log directory previously marked as failed has returne
dware repair. I treat broker
> restarts as a normal, non-disruptive operation in my clusters. I use a
> minimum of 3x replication.
>
> -- Peter (from phone)
>
> > On Mar 4, 2020, at 12:46 AM, 张祥 wrote:
> >
> > Another question, according to my memory, the broker
Another question: according to my memory, the broker needs to be restarted
after replacing the disk to recover. Is that correct? If so, I take it
that Kafka cannot know by itself that the disk has been replaced, so a
manual restart is necessary.
张祥 wrote on Wed, Mar 4, 2020 at 2:48 PM:
> Thanks Peter, it ma
ers can still consume data from the online partitions.
>
> -- Peter
>
> > On Mar 2, 2020, at 7:00 PM, 张祥 wrote:
> >
> > Hi community,
> >
> > I ran into disk failure when using Kafka, and fortunately it did not
> crash
> > the entire cluster. So I am wond
Hi community,
I ran into a disk failure when using Kafka, and fortunately it did not crash
the entire cluster. So I am wondering how Kafka handles multiple disks and
how it manages to keep working in case of a single disk failure. The more
detail, the better. Thanks!
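For context, multiple disks are configured as a JBOD via log.dirs; a sketch of the relevant server.properties line (the paths are placeholders):

```properties
# One log directory per physical disk; since Kafka 1.0 (KIP-112) a failed
# directory is taken offline while the broker keeps serving the others.
log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs
```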
erent partition counts and use
> kafka’s performance testing tools (kafka-producer-perf-test.sh and
> kafka-consumer-perf-test.sh) to test throughput in different scenarios and
> see the results for yourself.
>
> —
> Peter
>
> > On Feb 27, 2020, at 1:28 AM, 张祥 wrote:
>
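The perf-test invocations mentioned above might look roughly like this; the topic name, record count, and sizes are arbitrary:

```shell
# Producer side: send 1M records of 1 KB each with no throttle (-1).
kafka-producer-perf-test.sh --topic perf-test --num-records 1000000 \
  --record-size 1024 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092

# Consumer side: read the same number of messages back.
kafka-consumer-perf-test.sh --broker-list localhost:9092 \
  --topic perf-test --messages 1000000
```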
I believe that even when the partition count exceeds the broker count, we can
always have the same number of consumer instances as partitions. So what I
want to know is: when two partitions exist on the same broker, two consumer
instances will be talking to the same broker. Is that bad?
张祥 wrote on
her reason is if you need more
> ingest or output throughout for your topic data. If your producers aren’t
> able to send data to kafka fast enough or your consumers are lagging, you
> might benefit from more brokers and more partitions.
>
> -- Peter
>
> > On Feb 26, 2
In documentation, it is described how to expand cluster:
https://kafka.apache.org/20/documentation.html#basic_ops_cluster_expansion.
But I am wondering what the criteria for expansion are. I can only think of
a disk-usage threshold. For example, suppose several disks' usage exceeds 80%.
Is this correct and
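For reference, expansion itself is done with the partition reassignment tool; a sketch, where the JSON file and broker ids are placeholders:

```shell
# New brokers get no existing partitions until a reassignment is run.
# --generate proposes a plan; feed it back with --execute to apply it.
kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json \
  --broker-list "0,1,2,3" --generate
```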
this, otherwise, SASL/PLAIN is less useful.
张祥
18133622...@163.com
a lot less performance impact when using
ssl.
-- Peter
On Sun, 29 Sep 2019, 08:28 张祥 <18133622...@163.com> wrote:
Hi everyone !
I am enabling SASL/PLAIN authentication for our Kafka, and I am aware it
should be used with SSL encryption. But SSL may bring a performance impact.
So I am won
. Thanks.
somebody point out
the right way to do this? Thanks. P.S., I am using CDK 4.1.0.