[Kafka Config] Setting multiple timestamp.column.name

2020-03-04 Thread Hung X. Pham
Hi guys,
Sorry if this mail has bothered you. Currently, I want to set up a topic that 
listens to two timestamp columns so that data is consumed when either column changes.
Example: timestamp.column.name = createddate or modifieddate
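
If the connector in question is the Confluent JDBC source connector (an assumption, 
since the connector is not named above), its documentation describes 
timestamp.column.name as a comma-separated list of columns, roughly:

  mode = timestamp
  timestamp.column.name = createddate,modifieddate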

Thanks for your help; glad to hear from you soon.


Thanks,
Hung Pham
Application Developer | Electronic Transaction Consultants Corporation (ETC)
1600 N. Collins Boulevard, Suite 4000, Richardson, TX 75080
(o) 214.615.2320



Problems when Consuming from multiple Partitions

2020-03-04 Thread James Olsen
I’m seeing behaviour that I don’t understand when I have Consumers fetching 
from multiple Partitions of the same Topic.  There are two different 
conditions arising:

1. A subset of the Partitions allocated to a given Consumer not being consumed 
at all.  The Consumer appears healthy, the Thread is running and logging 
activity and is successfully processing records from some of the Partitions it 
has been assigned.  I don’t think this is due to the first Partition fetched 
filling a Batch (KIP-387).  The problem does not occur if we have a particular 
number of Consumers (3 in this case) but it has failed with a range of other 
larger values.  I don’t think there is anything special about 3 - it just 
happens to work OK with that value although it is the same as the Broker and 
Replica count.  When we tried 6, 5 Consumers were fine but 1 exhibited this 
issue.

2. Up to a half-second delay between Producer sending and Consumer receiving a 
message.  This looks suspiciously like fetch.max.wait.ms=500, but we also have 
fetch.min.bytes=1, so we should get messages as soon as something is available.  
The only explanation I can think of is that fetch.max.wait.ms is applied in full 
to the first Partition checked while it remains empty for the duration, and only 
then does the fetch move on to a subsequent non-empty Partition and deliver 
messages from there.
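
For reference, those two settings in a minimal consumer-construction sketch 
(all other property values here are placeholders, not our real configuration):

  import java.util.Properties
  import org.apache.kafka.clients.consumer.KafkaConsumer

  val props = new Properties()
  props.put("bootstrap.servers", "broker1:9092")   // placeholder
  props.put("group.id", "example-group")           // placeholder
  props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("fetch.max.wait.ms", "500")  // the 500 ms wait suspected above
  props.put("fetch.min.bytes", "1")      // should return as soon as any data is available
  val consumer = new KafkaConsumer[String, String](props)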

Our environment is AWS MSK (Kafka 2.2.1) and Kafka Java client 2.4.0.

All environments appear healthy and under light load, e.g. clients operating at 
only 1-2% CPU and Brokers (3) at 5-10% CPU. No swap, no crashes, no dead 
threads, etc.

Typical scenario is a Topic with 60 Partitions, 3 Replicas and a single 
ConsumerGroup with 5 Consumers.  The Partitioning is for semantic purposes with 
the intention being to add more Consumers as the business grows and load 
increases.  Some of the Partitions are always empty due to using short string 
keys and the default Partitioner - we will probably implement a custom 
Partitioner to achieve better distribution in the near future.
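
The kind of custom Partitioner we have in mind is roughly the following sketch 
(the class name and the rehash are purely illustrative, not a finished 
implementation):

  import java.util.{Map => JMap}
  import org.apache.kafka.clients.producer.Partitioner
  import org.apache.kafka.common.Cluster

  class SpreadingPartitioner extends Partitioner {
    override def configure(configs: JMap[String, _]): Unit = ()

    override def partition(topic: String, key: Any, keyBytes: Array[Byte],
                           value: Any, valueBytes: Array[Byte], cluster: Cluster): Int = {
      val numPartitions = cluster.partitionsForTopic(topic).size
      if (keyBytes == null) {
        // no key: pick any partition
        scala.util.Random.nextInt(numPartitions)
      } else {
        // rehash the short string key to spread a small key space more evenly
        (new String(keyBytes, "UTF-8").hashCode & 0x7fffffff) % numPartitions
      }
    }

    override def close(): Unit = ()
  }

  // producer side: props.put("partitioner.class", classOf[SpreadingPartitioner].getName)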

I don’t have access to the detailed JMX metrics yet but am working on that in 
the hope it will help diagnose.

Thoughts and advice appreciated!

Re: what happened in case of single disk failure

2020-03-04 Thread 张祥
Thanks Peter, really appreciate it.

Peter Bukowinski wrote on Wed, Mar 4, 2020 at 11:50 PM:

> Yes, you should restart the broker. I don’t believe there’s any code to
> check if a Log directory previously marked as failed has returned to
> healthy.
>
> I always restart the broker after a hardware repair. I treat broker
> restarts as a normal, non-disruptive operation in my clusters. I use a
> minimum of 3x replication.
>
> -- Peter (from phone)
>
> > On Mar 4, 2020, at 12:46 AM, 张祥  wrote:
> >
> > Another question, according to my memory, the broker needs to be
> restarted
> > after replacing disk to recover this. Is that correct? If so, I take that
> > Kafka cannot know by itself that the disk has been replaced, manually
> > restart is necessary.
> >
> > 张祥 wrote on Wed, Mar 4, 2020 at 2:48 PM:
> >
> >> Thanks Peter, it makes a lot of sense.
> >>
> >> Peter Bukowinski wrote on Tue, Mar 3, 2020 at 11:56 AM:
> >>
> >>> Whether your brokers have a single data directory or multiple data
> >>> directories on separate disks, when a disk fails, the topic partitions
> >>> located on that disk become unavailable. What happens next depends on
> how
> >>> your cluster and topics are configured.
> >>>
> >>> If the topics on the affected broker have replicas and the minimum ISR
> >>> (in-sync replicas) count is met, then all topic partitions will remain
> >>> online and leaders will move to another broker. Producers and consumers
> >>> will continue to operate as usual.
> >>>
> >>> If the topics don’t have replicas or the minimum ISR count is not met,
> >>> then the topic partitions on the failed disk will be offline.
> Producers can
> >>> still send data to the affected topics — it will just go to the online
> >>> partitions. Consumers can still consume data from the online
> partitions.
> >>>
> >>> -- Peter
> >>>
> > On Mar 2, 2020, at 7:00 PM, 张祥  wrote:
> >
> > Hi community,
> >
> > I ran into disk failure when using Kafka, and fortunately it did not
> >>> crash
>  the entire cluster. So I am wondering how Kafka handles multiple disks
> >>> and
>  it manages to work in case of single disk failure. The more detailed,
> >>> the
>  better. Thanks !
> >>>
> >>
>


Dynamic Loading of Truststore Issue

2020-03-04 Thread Darshan
Hi

We are on Kafka 1.1.1. We add a bunch of new entries (say ~10 new entries) to
the truststore and restart so that Kafka re-reads the truststore file. Everything
works fine.

We wanted to move to Kafka 2.0.x to get the new feature whereby we can
dynamically remove something from the truststore. Let's say we want to remove
1 entry from the truststore; this feature works fine. But if we restart Kafka,
all 9 previously added entries no longer work. Is this by design?

I also saw that in Kafka 1.1.1, when we added a bunch of new entries to the
truststore, the file size of the truststore went up. But in Kafka 2.0.1, the
truststore file size stays constant.

Can someone please comment:
1. Is the issue that we are seeing by design?
2. Do we need to re-add all entries to the keystore every time Kafka restarts?

Thanks.


Re: Integrating Kafka with Stateful Spark Streaming

2020-03-04 Thread Something Something
Yes, I have. No response from them. I thought someone in the Kafka community
might know the answer. Thanks.

On Wed, Mar 4, 2020 at 9:49 AM Boyang Chen 
wrote:

> Hey there,
>
> have you already sought help from Spark community? Currently I don't think
> we could attribute the symptom to Kafka.
>
> Boyang
>
> On Wed, Mar 4, 2020 at 7:37 AM Something Something <
> mailinglist...@gmail.com>
> wrote:
>
> > Need help integrating Kafka with 'Stateful Spark Streaming' application.
> >
> > In a Stateful Spark Streaming application I am writing the 'OutputRow' in
> > the 'updateAcrossEvents' but I keep getting this error (*Required
> attribute
> > 'value' not found*) while it's trying to write to Kafka. I know from the
> > documentation that 'value' attribute needs to be set but how do I do that
> > in the 'Stateful Structured Streaming'? Where & how do I add this 'value'
> > attribute in the following code? *Note: I am using Spark 2.3.1*
> >
> > withEventTime
> >   .as[R00tJsonObject]
> >   .withWatermark("event_time", "5 minutes")
> >   .groupByKey(row => (row.value.Id ,
> > row.value.time.toString, row.value.cId))
> >
> >
> .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout)(updateAcrossEvents)
> >   .writeStream
> >   .format("kafka")
> >   .option("kafka.bootstrap.servers", "localhost:9092")
> >   .option("topic", "myTopic")
> >   .option("checkpointLocation", "/Users/username/checkpointLocation")
> >   .outputMode("update")
> >   .start()
> >   .awaitTermination()
> >
>


Re: Integrating Kafka with Stateful Spark Streaming

2020-03-04 Thread Boyang Chen
Hey there,

have you already sought help from the Spark community? Currently I don't think
we can attribute the symptom to Kafka.

Boyang

On Wed, Mar 4, 2020 at 7:37 AM Something Something 
wrote:

> Need help integrating Kafka with 'Stateful Spark Streaming' application.
>
> In a Stateful Spark Streaming application I am writing the 'OutputRow' in
> the 'updateAcrossEvents' but I keep getting this error (*Required attribute
> 'value' not found*) while it's trying to write to Kafka. I know from the
> documentation that 'value' attribute needs to be set but how do I do that
> in the 'Stateful Structured Streaming'? Where & how do I add this 'value'
> attribute in the following code? *Note: I am using Spark 2.3.1*
>
> withEventTime
>   .as[R00tJsonObject]
>   .withWatermark("event_time", "5 minutes")
>   .groupByKey(row => (row.value.Id ,
> row.value.time.toString, row.value.cId))
>
> .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout)(updateAcrossEvents)
>   .writeStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", "localhost:9092")
>   .option("topic", "myTopic")
>   .option("checkpointLocation", "/Users/username/checkpointLocation")
>   .outputMode("update")
>   .start()
>   .awaitTermination()
>


Re: what happened in case of single disk failure

2020-03-04 Thread Peter Bukowinski
Yes, you should restart the broker. I don’t believe there’s any code to check 
if a Log directory previously marked as failed has returned to healthy.

I always restart the broker after a hardware repair. I treat broker restarts as 
a normal, non-disruptive operation in my clusters. I use a minimum of 3x 
replication.

-- Peter (from phone)

> On Mar 4, 2020, at 12:46 AM, 张祥  wrote:
> 
> Another question, according to my memory, the broker needs to be restarted
> after replacing disk to recover this. Is that correct? If so, I take that
> Kafka cannot know by itself that the disk has been replaced, manually
> restart is necessary.
> 
> 张祥 wrote on Wed, Mar 4, 2020 at 2:48 PM:
> 
>> Thanks Peter, it makes a lot of sense.
>> 
>> Peter Bukowinski wrote on Tue, Mar 3, 2020 at 11:56 AM:
>> 
>>> Whether your brokers have a single data directory or multiple data
>>> directories on separate disks, when a disk fails, the topic partitions
>>> located on that disk become unavailable. What happens next depends on how
>>> your cluster and topics are configured.
>>> 
>>> If the topics on the affected broker have replicas and the minimum ISR
>>> (in-sync replicas) count is met, then all topic partitions will remain
>>> online and leaders will move to another broker. Producers and consumers
>>> will continue to operate as usual.
>>> 
>>> If the topics don’t have replicas or the minimum ISR count is not met,
>>> then the topic partitions on the failed disk will be offline. Producers can
>>> still send data to the affected topics — it will just go to the online
>>> partitions. Consumers can still consume data from the online partitions.
>>> 
>>> -- Peter
>>> 
> On Mar 2, 2020, at 7:00 PM, 张祥  wrote:
> 
> Hi community,
> 
> I ran into disk failure when using Kafka, and fortunately it did not
>>> crash
 the entire cluster. So I am wondering how Kafka handles multiple disks
>>> and
 it manages to work in case of single disk failure. The more detailed,
>>> the
 better. Thanks !
>>> 
>> 


Integrating Kafka with Stateful Spark Streaming

2020-03-04 Thread Something Something
Need help integrating Kafka with a 'Stateful Spark Streaming' application.

In a Stateful Spark Streaming application I am writing the 'OutputRow' in
the 'updateAcrossEvents' but I keep getting this error (*Required attribute
'value' not found*) while it's trying to write to Kafka. I know from the
documentation that a 'value' attribute needs to be set, but how do I do that
in 'Stateful Structured Streaming'? Where & how do I add this 'value'
attribute in the following code? *Note: I am using Spark 2.3.1*

withEventTime
  .as[R00tJsonObject]
  .withWatermark("event_time", "5 minutes")
  .groupByKey(row => (row.value.Id, row.value.time.toString, row.value.cId))
  .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout)(updateAcrossEvents)
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "myTopic")
  .option("checkpointLocation", "/Users/username/checkpointLocation")
  .outputMode("update")
  .start()
  .awaitTermination()
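
A possible direction (just a sketch, and whether it fits the 'updateAcrossEvents' 
output type is an assumption): serialize each output row to JSON so the Kafka sink 
sees a single string column named "value", e.g. by adding .toJSON before .writeStream:

withEventTime
  .as[R00tJsonObject]
  .withWatermark("event_time", "5 minutes")
  .groupByKey(row => (row.value.Id, row.value.time.toString, row.value.cId))
  .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout)(updateAcrossEvents)
  .toJSON                       // Dataset[String]; its single column is named "value"
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "myTopic")
  .option("checkpointLocation", "/Users/username/checkpointLocation")
  .outputMode("update")
  .start()
  .awaitTermination()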


Incremental topic subscription

2020-03-04 Thread Pedro Cardoso
Hello,

Do the KafkaConsumer subscribe methods allow for incremental topic
subscriptions?

By incremental I mean that only the added and removed topics are
subscribed/unsubscribed respectively, and the other topics are not
unsubscribed and then subscribed back.

From the javadoc API on the subscribe method with a list of topics, it is
mentioned that topic subscription is not incremental. Does the same apply for
pattern-based subscriptions?

As an example, take the following scenario:
Step 1: consumer.subscribe(list("A","B","C"))

// Now we subscribe to topic D and no longer care about C
Step 2: consumer.subscribe(list("A","B","D"))
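
In code, the scenario is roughly the following sketch (placeholder properties; 
per the javadoc, the list passed to subscribe() replaces the current subscription):

  import java.util.{Arrays => JArrays, Properties}
  import org.apache.kafka.clients.consumer.KafkaConsumer

  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")  // placeholder values
  props.put("group.id", "example-group")
  props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")

  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(JArrays.asList("A", "B", "C"))  // Step 1: subscription is {A, B, C}
  consumer.subscribe(JArrays.asList("A", "B", "D"))  // Step 2: the list replaces the previous one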

What happens in this scenario with pattern-based subscription?
Are all topics from Step 1 unsubscribed and then all Step 2 topics are
subscribed?

Thank you and best regards,

Pedro Cardoso

Research Data Engineer

pedro.card...@feedzai.com





-- 
The content of this email is confidential and intended for the recipient 
specified in this message only. It is strictly prohibited to share any part of 
this message with any third party without the written consent of the sender. 
If you received this message by mistake, please reply to this message and 
follow with its deletion, so that we can ensure such a mistake does not occur 
in the future.


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-04 Thread Eno Thereska
Hi Bill,

I built from source and ran unit and integration tests. They passed.
There was a large number of skipped tests, but I'm assuming that is
intentional.

Cheers
Eno

On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde  wrote:
>
> Hi,
>
> I ran:
> $ https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh 2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
>
> All checksums and signatures are good and all unit and integration tests that 
> were executed passed successfully.
>
> - Eric
>
> > On Mar 2, 2020, at 6:39 PM, Bill Bejeck  wrote:
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.4.1.
> >
> > This is a bug fix release and it includes fixes and improvements from 38
> > JIRAs, including a few critical bugs.
> >
> > Release notes for the 2.4.1 release:
> > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> >
> > *Please download, test and vote by Thursday, March 5, 9 am PT*
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.4 branch) is the 2.4.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.4.1-rc0
> >
> > * Documentation:
> > https://kafka.apache.org/24/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/24/protocol.html
> >
> > * Successful Jenkins builds for the 2.4 branch:
> > Unit/integration tests: Links to successful unit/integration test build to
> > follow
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/2.4/152/
> >
> >
> > Thanks,
> > Bill Bejeck
>


Kafka broker using too much CPU?

2020-03-04 Thread Péter Sinóros-Szabó
Hi,

I read here and there that Kafka is not CPU intensive, but mostly disk and
network bound. That seems reasonable, but it's not what I see in my monitoring.

Could anyone help me see whether the CPU usage I observe is about the expected
usage, or whether there is something in how we use Kafka that makes it more CPU
hungry than it should be?

We are running 6 Kafka brokers v1.1.1, each on AWS EC2 m5 instances with 32GB RAM
and 8 CPUs.
We have 830 topics that in sum have 3050 partitions with varying usage
patterns.
Inbound traffic: 3-8MB/sec, 8-15k msgs/sec
Outbound traffic: 10-50MB/sec
I see a total of 21k client connections (TCP connections to the broker port), so
I guess that is 21k Kafka clients connecting to the cluster.

So CPU usage is at minimum 2-3.5 CPUs on baseline traffic, and 4-5 CPUs with
occasional spikes to 7 CPUs.

I feel that this CPU usage is more than what is expected, but I may
be wrong.

What do you think?

Thanks,
Peter


SV: Does the response to a OffsetFetch include non-committed transactional offsets?

2020-03-04 Thread Reftel, Magnus
Oh, that's nice! Thank you!

Best Regards
Magnus Reftel

-----Original Message-----
From: Matthias J. Sax 
Sent: Wednesday, March 4, 2020 05:27
To: users@kafka.apache.org
Subject: Re: Does the response to a OffsetFetch include non-committed 
transactional offsets?


A fetch-offset request would return the latest "stable" offset, ie, either 
non-transactional or transactional+committed.

If there is a pending transaction, the corresponding offset would not be 
returned.

Btw: Kafka 2.5 allows you to block a fetch-offset request for this
case: ie, if there is a pending transaction, you can wait until the transaction 
is either committed (and the committed offset would be
returned) or aborted (and the "old" offset would be returned).

Check out KIP-447 for more details:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics

The broker side changes will be included in 2.5 release.
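
For reference, a minimal sketch (topic, group, and offset values are made up) of 
how a consumed offset ends up inside a producer transaction in the first place:

  import java.util.{Collections, Properties}
  import org.apache.kafka.clients.consumer.OffsetAndMetadata
  import org.apache.kafka.clients.producer.KafkaProducer
  import org.apache.kafka.common.TopicPartition

  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")  // made-up address
  props.put("transactional.id", "example-txn-id")   // made-up id
  props.put("key.serializer",
    "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer",
    "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  producer.initTransactions()
  producer.beginTransaction()
  // ... produce the processed records here ...
  producer.sendOffsetsToTransaction(
    Collections.singletonMap(new TopicPartition("input-topic", 0), new OffsetAndMetadata(42L)),
    "example-consumer-group")
  // until commitTransaction() the offset above is pending, not "stable"
  producer.commitTransaction()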


-Matthias



On 3/3/20 6:11 AM, Reftel, Magnus wrote:
> If a consumer sends its offset for a topic-partition as part of a
> transaction, and someone sends an OffsetFetch request for that
> consumer group and topic-partition before the transaction is
> committed, is the OffsetFetch response meant to include that pending
> offset, or only the last offset sent outside of a non-committed
> transaction? I find no discussion of it in the Kafka protocol guide,
> and the code in GroupMetadata.scala seems to indicate that pending
> offsets are not included, but I'm observing some behavior that might
> suggest otherwise.
>
> Best Regards Magnus Reftel
>
>
>  Denne e-posten og eventuelle vedlegg
> er beregnet utelukkende for den institusjon eller person den er rettet
> til og kan vaere belagt med lovbestemt taushetsplikt.
> Dersom e-posten er feilsendt, vennligst slett den og kontakt
> Skatteetaten. The contents of this email message and any attachments
> are intended solely for the addressee(s) and may contain confidential
> information and may be legally protected from disclosure. If you are
> not the intended recipient of this message, please immediately delete
> the message and alert the Norwegian Tax Administration.
>


Denne e-posten og eventuelle vedlegg er beregnet utelukkende for den 
institusjon eller person den er rettet til og kan være belagt med lovbestemt 
taushetsplikt. Dersom e-posten er feilsendt, vennligst slett den og kontakt 
Skatteetaten.
The contents of this email message and any attachments are intended solely for 
the addressee(s) and may contain confidential information and may be legally 
protected from disclosure. If you are not the intended recipient of this 
message, please immediately delete the message and alert the Norwegian Tax 
Administration.


RE: Please help: How to print --reporting-interval in the perf metrics?

2020-03-04 Thread Sunil CHAUDHARI
Hello Experts,
Any thoughts on this?

From: Sunil CHAUDHARI
Sent: Tuesday, March 3, 2020 5:46 PM
To: users@kafka.apache.org
Subject: Please help: How to print --reporting-interval in the perf metrics?

Hi,
I want to test consumer performance using kafka-consumer-perf-test.sh.
I am running the below command:

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic MB3P3R 
--messages 65495 --num-fetch-threads 9 --print-metrics --reporting-interval 
5000 --show-detailed-stats > MB3P3R-consumer-Perf2.log

I am getting the metrics output below. It's not printing anything under the 
highlighted columns.
Another question: whatever number of partitions and replicas my topic has, it 
always gives an approximate value of "2159" for records-consumed-rate.

time, threadId, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, 
rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec

Metric Name                                                                       : Value
consumer-coordinator-metrics:assigned-partitions:{client-id=consumer-1}          : 2.000
consumer-coordinator-metrics:commit-latency-avg:{client-id=consumer-1}           : 6.000
consumer-coordinator-metrics:commit-latency-max:{client-id=consumer-1}           : 6.000
consumer-coordinator-metrics:commit-rate:{client-id=consumer-1}                  : 0.033
consumer-coordinator-metrics:commit-total:{client-id=consumer-1}                 : 1.000
consumer-coordinator-metrics:heartbeat-rate:{client-id=consumer-1}               : 0.000
consumer-coordinator-metrics:heartbeat-response-time-max:{client-id=consumer-1}  : NaN
consumer-coordinator-metrics:heartbeat-total:{client-id=consumer-1}              : 0.000
consumer-coordinator-metrics:join-rate:{client-id=consumer-1}                    : 0.033

And so on...


CONFIDENTIAL NOTE:
The information contained in this email is intended only for the use of the 
individual or entity named above and may contain information that is 
privileged, confidential and exempt from disclosure under applicable law. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any dissemination, distribution or copying of this communication 
is strictly prohibited. If you have received this message in error, please 
immediately notify the sender and delete the mail. Thank you.


Re: what happened in case of single disk failure

2020-03-04 Thread 张祥
Another question: according to my memory, the broker needs to be restarted
after replacing the disk to recover from this. Is that correct? If so, I take it
that Kafka cannot know by itself that the disk has been replaced, and a manual
restart is necessary.

张祥 wrote on Wed, Mar 4, 2020 at 2:48 PM:

> Thanks Peter, it makes a lot of sense.
>
> Peter Bukowinski wrote on Tue, Mar 3, 2020 at 11:56 AM:
>
>> Whether your brokers have a single data directory or multiple data
>> directories on separate disks, when a disk fails, the topic partitions
>> located on that disk become unavailable. What happens next depends on how
>> your cluster and topics are configured.
>>
>> If the topics on the affected broker have replicas and the minimum ISR
>> (in-sync replicas) count is met, then all topic partitions will remain
>> online and leaders will move to another broker. Producers and consumers
>> will continue to operate as usual.
>>
>> If the topics don’t have replicas or the minimum ISR count is not met,
>> then the topic partitions on the failed disk will be offline. Producers can
>> still send data to the affected topics — it will just go to the online
>> partitions. Consumers can still consume data from the online partitions.
>>
>> -- Peter
>>
>> > On Mar 2, 2020, at 7:00 PM, 张祥  wrote:
>> >
>> > Hi community,
>> >
>> > I ran into disk failure when using Kafka, and fortunately it did not
>> crash
>> > the entire cluster. So I am wondering how Kafka handles multiple disks
>> and
>> > it manages to work in case of single disk failure. The more detailed,
>> the
>> > better. Thanks !
>>
>