KIP idea: Separate publish request from the subscribe request

2020-08-20 Thread Ming Liu
Hi Kafka community,
   I'd like to surface a KIP idea, which is to separate the publish request
from the subscribe request by using different ports.

   The context: we have some workloads with over 5000 subscribers, where
publish latency can be as high as 3000 ms. After investigation, we found the
reason is mainly that there are too many connections on the socket server,
and the multiplexing slows down publish latency.

   The proposal is somewhat similar to KIP-291: Separating controller
connections and requests from the data plane

   I'd like to check with the experts here whether this is a viable idea to
continue pursuing.

Thanks!
Ming
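For context, a Kafka broker can already bind multiple listeners on separate ports, and KIP-291 added `control.plane.listener.name` to route controller traffic to its own listener and thread pools. A sketch of what an analogous produce/fetch split might look like in `server.properties` — the `consume.plane.listener.name` key is purely hypothetical and does not exist in Kafka today:

```properties
# Real, existing configuration: bind two listeners on separate ports.
listeners=PRODUCE://0.0.0.0:9092,CONSUME://0.0.0.0:9093
listener.security.protocol.map=PRODUCE:PLAINTEXT,CONSUME:PLAINTEXT
advertised.listeners=PRODUCE://broker1:9092,CONSUME://broker1:9093

# Hypothetical key, by analogy with KIP-291's control.plane.listener.name:
# route Fetch requests to a dedicated listener/processor pool so that
# thousands of subscriber connections cannot starve Produce handling.
#consume.plane.listener.name=CONSUME
```

The existing keys above are standard broker configuration; only the commented-out key would be new.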


Re: Requesting to add to contributor list and write access for wiki

2020-08-20 Thread Matthias J. Sax
What are your Jira and wiki IDs?


On 8/20/20 8:01 AM, Prithvi S wrote:
> Hello Team,
> 
> I am interested in contributing to the Apache Kafka codebase. Kindly add me to 
> the contributor list. It would help me in assigning Kafka Jiras to myself.
> Also, kindly grant me wiki write access to create KIP.
> 
> Thanks in advance,
> 
> Regards,
> Prithvi
> 





Re: virtual KIP meeting for KIP-405

2020-08-20 Thread Adam Bellemare
Hello

I am interested in attending, mostly just to listen and observe.

Thanks ! 

> On Aug 20, 2020, at 6:20 PM, Jun Rao  wrote:
> 
> Hi, everyone,
> 
> We plan to have weekly virtual meetings for KIP-405 to discuss progress and
> outstanding issues, starting from this coming Tuesday at 9am PT. If you are
> interested in attending, please let Harsha or me know.
> 
> The recording of the meeting will be posted in
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> .
> 
> Thanks,
> 
> Jun


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #24

2020-08-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #25

2020-08-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10379: Implement the KIP-478 StreamBuilder#addGlobalStore() 
(#9148)


--
[...truncated 3.22 MB...]

[remaining test output elided: org.apache.kafka.streams.test.OutputVerifierTest and ConsumerRecordFactoryTest results, all listed tests PASSED]

virtual KIP meeting for KIP-405

2020-08-20 Thread Jun Rao
Hi, everyone,

We plan to have weekly virtual meetings for KIP-405 to discuss progress and
outstanding issues, starting from this coming Tuesday at 9am PT. If you are
interested in attending, please let Harsha or me know.

The recording of the meeting will be posted in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
.

Thanks,

Jun


Re: [DISCUSS] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-08-20 Thread Walker Carlson
Hi Leah,

Could you explain a bit more why we do not wish to
let TimeWindowedDeserializer and WindowedSerdes be created without a
specified time as a parameter?

I understand the Long.MAX_VALUE default could cause problems, but would it not
be a good idea to have a usable default, or to fetch it from the config if
available? After all, you are proposing to add "window.size.ms".

We definitely need a fix to this problem and adding "window.size.ms" makes
sense to me.

Thanks for the KIP,
Walker

On Thu, Aug 20, 2020 at 2:22 PM Leah Thomas  wrote:

> Hi all,
>
> I'd like to start a discussion for KIP-659:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size
>
>
> The goal of the KIP is to ensure that window size is passed to the consumer
> when needed, which will generally be for testing purposes, and to avoid
> runtime errors when the *TimeWindowedSerde* is created without a window
> size.
>
> Looking forward to hearing your feedback.
>
> Cheers,
> Leah
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #23

2020-08-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10379: Implement the KIP-478 StreamBuilder#addGlobalStore() 
(#9148)


--
[...truncated 3.20 MB...]

[remaining test output elided: org.apache.kafka.streams.test.ConsumerRecordFactoryTest results, all listed tests PASSED]

[DISCUSS] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-08-20 Thread Leah Thomas
Hi all,

I'd like to start a discussion for KIP-659:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size


The goal of the KIP is to ensure that window size is passed to the consumer
when needed, which will generally be for testing purposes, and to avoid
runtime errors when the *TimeWindowedSerde* is created without a window
size.

Looking forward to hearing your feedback.

Cheers,
Leah
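A toy sketch of the problem being discussed (Python; illustrative only — the function name, the saturating overflow guard, and the numbers are assumptions of this sketch, not the Streams implementation): the serialized windowed key carries only the window start, so the deserializer must be told the window size to reconstruct the [start, end) interval. With a Long.MAX_VALUE default, every reconstructed window is effectively unbounded:

```python
LONG_MAX = 2**63 - 1  # Java's Long.MAX_VALUE

def reconstruct_window(start_ms: int, window_size_ms: int):
    """The windowed key bytes carry only start_ms; the size must come
    from somewhere else (a constructor argument today, the proposed
    window.size.ms config per this KIP)."""
    end_ms = start_ms + window_size_ms
    if end_ms > LONG_MAX:  # guard overflow past Long.MAX_VALUE (assumed here)
        end_ms = LONG_MAX
    return (start_ms, end_ms)

# With a real window size, the interval is meaningful:
print(reconstruct_window(60_000, 30_000))   # (60000, 90000)
# With a Long.MAX_VALUE default, it degenerates to "ends never":
print(reconstruct_window(60_000, LONG_MAX))
```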


Re: Requesting to add to contributor list and write access for wiki

2020-08-20 Thread Boyang Chen
Added both permissions, you are good to go.

On Thu, Aug 20, 2020 at 8:05 AM sasilekha  wrote:

> Hello Team,
>
> I am interested in contributing to Apache Kafka codebase. Kindly add me to
> contributor list. It would help me in assigning Jiras to myself on Kafka.
>
> Also, kindly grant me wiki write access to create KIP.
>
> Thanks in advance,
>
> Regards,
> Sasilekha
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #24

2020-08-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #23

2020-08-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix typo in LeaderEpochFileCacheTest (#9203)


--
[...truncated 6.45 MB...]

[remaining test output elided: org.apache.kafka.streams.test.OutputVerifierTest results, all listed tests PASSED]

[jira] [Resolved] (KAFKA-9852) Lower block duration in BufferPoolTest to cut down on overall test runtime

2020-08-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-9852.
-
Fix Version/s: 2.6.0
   Resolution: Fixed

> Lower block duration in BufferPoolTest to cut down on overall test runtime
> --
>
> Key: KAFKA-9852
> URL: https://issues.apache.org/jira/browse/KAFKA-9852
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Trivial
> Fix For: 2.6.0
>
>
> In BufferPoolTest we use a global setting of 
> [2000ms|https://github.com/apache/kafka/blob/e032a360708cec2284f714e4cae388066064d61c/clients/src/test/java/org/apache/kafka/clients/producer/internals/BufferPoolTest.java#L54]
> for the maximum duration that calls can block (max.block.ms).
> Since this is wall clock time that might be waited on, and could potentially 
> come into play multiple times while this class is executed, this is a very 
> long timeout for testing.
> We should reduce this timeout to a much lower value to cut back on test 
> runtimes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
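The issue above can be sketched in miniature (Python; this is a toy stand-in, not Kafka's BufferPool — the class and parameter names are illustrative): a pool whose allocate() blocks up to max_block_ms before failing. Any test that deliberately drives allocation into the timeout path pays the full wall-clock timeout, so a 2000 ms setting hit several times adds seconds of pure waiting:

```python
import threading
import time

class ToyBufferPool:
    """Minimal stand-in for a producer buffer pool: allocate() blocks
    until enough memory is free or max_block_ms elapses, then raises."""

    def __init__(self, total: int, max_block_ms: int):
        self.free = total
        self.max_block_s = max_block_ms / 1000.0
        self.cond = threading.Condition()

    def allocate(self, size: int) -> None:
        deadline = time.monotonic() + self.max_block_s
        with self.cond:
            while self.free < size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    raise TimeoutError("could not allocate within max.block.ms")
                self.cond.wait(remaining)
            self.free -= size

    def deallocate(self, size: int) -> None:
        with self.cond:
            self.free += size
            self.cond.notify_all()

# A test exercising the timeout path waits the whole max.block.ms;
# with 2000 ms, every such test adds two seconds of idling.
pool = ToyBufferPool(total=10, max_block_ms=10)  # 10 ms instead of 2000 ms
start = time.monotonic()
try:
    pool.allocate(100)  # can never succeed; must time out
except TimeoutError:
    pass
print(f"timeout path took ~{(time.monotonic() - start) * 1000:.0f} ms")
```

Lowering the timeout only changes how long the deliberate-failure tests idle; the success paths are unaffected, which is why the fix is Trivial-priority.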


[DISCUSS] KIP-661: Expose task configurations in Connect REST API

2020-08-20 Thread Mickael Maison
Hi,

I've created KIP-661 to expose the configuration of individual tasks in
the Connect API:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-661%3A+Expose+task+configurations+in+Connect+REST+API

Please take a look and let me know if you have any feedback.

Thanks


[DISCUSS] KIP-660: Pluggable ReplicaAssignor

2020-08-20 Thread Mickael Maison
Hi,

I've created KIP-660 to make the replica assignment logic pluggable.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-660%3A+Pluggable+ReplicaAssignor

Please take a look and let me know if you have any feedback.

Thanks
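To illustrate the kind of logic the KIP would make pluggable (a hedged Python sketch; the actual KIP defines a Java interface, and the function and parameter names here are illustrative assumptions): given broker IDs, a partition count, and a replication factor, an assignor returns the replica list per partition. Rack awareness, load, or tenancy policies are exactly what a custom implementation would change:

```python
def round_robin_assign(brokers, num_partitions, replication_factor):
    """Toy stand-in for a pluggable replica assignor: partition p gets
    replication_factor consecutive brokers starting at offset p, so
    leadership spreads evenly across the cluster."""
    if replication_factor > len(brokers):
        raise ValueError("replication factor exceeds broker count")
    n = len(brokers)
    return {
        p: [brokers[(p + r) % n] for r in range(replication_factor)]
        for p in range(num_partitions)
    }

print(round_robin_assign([101, 102, 103], num_partitions=4, replication_factor=2))
# {0: [101, 102], 1: [102, 103], 2: [103, 101], 3: [101, 102]}
```

A plugin implementing the real interface would substitute its own policy for the round-robin body while keeping the same contract: every partition maps to replication_factor distinct brokers.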


Requesting to add to contributor list and write access for wiki

2020-08-20 Thread sasilekha
Hello Team,

I am interested in contributing to the Apache Kafka codebase. Kindly add me to
the contributor list. It would help me in assigning Kafka Jiras to myself.

Also, kindly grant me wiki write access to create KIP.

Thanks in advance,

Regards,
Sasilekha


Requesting to add to contributor list and write access for wiki

2020-08-20 Thread Prithvi S
Hello Team,

I am interested in contributing to the Apache Kafka codebase. Kindly add me to 
the contributor list. It would help me in assigning Kafka Jiras to myself.
Also, kindly grant me wiki write access to create KIP.

Thanks in advance,

Regards,
Prithvi

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Ben Stopford
Just adding my 2c.

Whether "cute" is a good route to take for logos is pretty subjective, but
I do think the approach can work. However, a logo should be simple. This
was echoed earlier in Robin's 'can it be shrunk' comment. Visually there's a
lot going on in both of those images. I think simplifying, and basing it more
heavily on the Kafka logo, would help. It's a cool logo, and Michael's
proposal does exactly that. Otherwise, maybe something like this.

[image: image.png]
[image: image.png]



On Thu, 20 Aug 2020 at 10:20, Antony Stubbs  wrote:

> (just my honest opinion)
>
> I strongly oppose the suggested logos. I completely agree with Michael's
> analysis.
>
> The design appears to me to be quite random (regardless of the association
> of streams with otters) and clashes terribly with the embedded Kafka logo
> making it appear quite unprofessional. It looks like KS is trying to jump
> on the cute animal band wagon against the natural resistance of
> its existing style (Kafka). It also looks far too similar to the Firefox
> logo.
>
> As for the process, I think for there to be meaningful
> community deliberation about a logo, there needs to be far more ideas put
> forward, rather than just the two takes on the one concept.
>
> As for any suggestion on what it should be, I'm afraid I won't be of much
> help.
>
> On Thu, Aug 20, 2020 at 7:59 AM Michael Noll  wrote:
>
> > For what it's worth, here is an example sketch that I came up with. Point
> > is to show an alternative direction for the KStreams logo.
> >
> > https://ibb.co/bmZxDCg
> >
> > Thinking process:
> >
> >    - It shows much more clearly (I hope) that KStreams is an official part
> >    of Kafka.
> >    - The Kafka logo is still front and center, and KStreams orbits around
> >    it like electrons around the Kafka core/nucleus. That’s important because
> >    we want users to adopt all of Kafka, not just bits and pieces.
> >    - It uses and builds upon the same ‘simple is beautiful’ style of the
> >    original Kafka logo. That also has the nice side-effect that it alludes to
> >    Kafka’s and KStreams’ architectural simplicity.
> >    - It picks up the good idea in the original logo candidates to convey
> >    the movement and flow of stream processing.
> >    - Execution-wise, and like the main Kafka logo, this logo candidate
> >    works well in smaller sizes, too, because of its simple and clear lines.
> >    (Logo types like the otter ones tend to become undecipherable at smaller
> >    sizes.)
> >    - It uses the same color scheme of the revamped AK website for brand
> >    consistency.
> >
> > I am sure we can come up with even better logo candidates. But the
> > suggestion above is, in my book, certainly a better option than the otters.
> >
> > -Michael
> >
> >
> >
> > On Wed, Aug 19, 2020 at 11:09 PM Boyang Chen  wrote:
> >
> > > Hey Ben,
> > >
> > > that otter was supposed to be a river-otter to connect to "streams". And
> > > of course, it's cute :)
> > >
> > > On Wed, Aug 19, 2020 at 12:41 PM Philip Schmitt <
> > > philip.schm...@outlook.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I’m with Robin and Michael here.
> > > >
> > > > What this decision needs is a good design brief.
> > > > This article seems decent:
> > > >
> > >
> >
> https://yourcreativejunkie.com/logo-design-brief-the-ultimate-guide-for-designers/
> > > >
> > > > Robin is right about the usage requirements.
> > > > It goes a bit beyond resolution. How does the logo work when it’s on a
> > > > sticker on someone’s laptop? Might there be some cases where you want to
> > > > print it in black and white?
> > > > And how would it look if you put the Kafka, ksqlDB, and Streams stickers
> > > > on a laptop?
> > > >
> > > > Of the two, I prefer the first option.
> > > > The brown on black is a bit subdued – it might not work well on a t-shirt
> > > > or a laptop sticker. Maybe that could be improved by using a bolder color,
> > > > but once it gets smaller or lower-resolution, it may not work any longer.
> > > >
> > > >
> > > > Regards,
> > > > Philip
> > > >
> > > >
> > > > P.S.:
> > > > Another article about what makes a good logo:
> > > > https://vanschneider.com/what-makes-a-good-logo
> > > >
> > > > P.P.S.:
> > > >
> > > > If I were to pick a logo for Streams, I’d choose something that fits well
> > > > with Kafka and ksqlDB.
> > > >
> > > > ksqlDB has the rocket.
> > > > I can’t remember (or find) the reasoning behind the Kafka logo (aside from
> > > > representing a K). Was there something about planets orbiting the sun? Or
> > > > was it the atom?
> > > >
> > > > So I might stick with a space/science metaphor.
> > > > Could Streams be a comet? UFO? Star? Eclipse? ...
> > > > Maybe a satellite logo for Connect.
> > > >
> > > > Space inspiration: https://thenounproject.com/term/space/
> > > >
> > > >
> > > >
> > > >
> > > > 
> > > > From: Robin Moffatt 
> > > > Sent: 

Re: [DISCUSS] KIP-639 Move nodeLevelSensor and storeLevelSensor methods from StreamsMetricsImpl to StreamsMetrics

2020-08-20 Thread Bruno Cadonna

Thanks Mohamed for the updates!

I really do not want to rain on your parade, but I am still not sure 
whether moving those methods from the StreamsMetricsImpl to the 
StreamsMetrics is the right approach to get rid of 
ProcessorContextUtils#getMetricsImpl().


I also do not agree about the stability of the methods as described in 
the Jira ticket. We changed the signature last October to implement 
KIP-444, and we might change it again due to some discussions we had 
during the implementation of KIP-607.


I would rather try to pass the StreamsMetricsImpl object around behind 
the scenes. I also have to admit that I haven't had too much time 
recently to look how to accomplish that.


Best,
Bruno



On 14.08.20 21:58, John Roesler wrote:

Thanks Mohamed,

I think Bruno raised a good point in the ticket that the
node name is not well known from within the Processor at the
time of init(), so users would basically have to make up a
name to pass into the sensor. Maybe this is ok, but it
doesn't seem too nice.

Offhand, it seems like the options are:

1. To just expose the node name also in ProcessorContext
(such as with a nullable "processorNodeName()" method).

2. To close over the node name in the view of the context
and metrics that we pass to the Processor when we call
init(). The caller of init() is actually the ProcessorNode
itself, so it can easily insert this information into the
metrics before invoking Processor#init().

Offhand, it seems option 2 gives a simpler and better
experience. My concern would be that it might be "too
fancy". Also, if we favor this route, we should reconsider
many of the other arguments to those methods.

Thanks for the KIP!
-John

On Fri, 2020-08-14 at 01:37 +0100, Mohamed Chebbi wrote:

KIP updated with the comments of Bruno Cadonna.

Le 06/07/2020 à 22:36, Mohamed Chebbi a écrit :

Thanks Bruno for your review.

Changes were added as you suggested.

Le 06/07/2020 à 14:57, Bruno Cadonna a écrit :

Hi Mohamed,

Thank you for the KIP.

Comments regarding the KIP wiki:

1. In section "Public Interface", you should state what you want to
change in interface StreamsMetrics. In your case, you want to add two
methods. You can find a good example how to describe this in KIP-444
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams).


2. In section "Compatibility, Deprecation, and Migration Plan" you
should state if anything is needed to keep backward compatibility.
Since you just want to add two methods to the interface, nothing is
needed. You should describe that under that section.

Regarding the KIP content, I left some comments on the corresponding
Jira ticket.

Best,
Bruno


On Sun, Jul 5, 2020 at 3:48 AM Mohamed Chebbi 
wrote:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-639%3A+Move+nodeLevelSensor+and+storeLevelSensor+methods+from+StreamsMetricsImpl+to+StreamsMetrics
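John's option 2 above — closing over the node name before the metrics object is handed to the Processor — can be sketched like this (Python; the class and method names are illustrative assumptions, not the real Streams classes):

```python
class NodeScopedMetrics:
    """Wrapper handed to Processor#init(): the node name is baked in,
    so user code never has to invent one."""
    def __init__(self, registry, node_name):
        self._registry = registry
        self._node = node_name

    def node_level_sensor(self, sensor_name):
        # The fully-qualified sensor name is derived for the caller.
        full = f"{self._node}.{sensor_name}"
        return self._registry.setdefault(full, [])

registry = {}

class ProcessorNode:
    def __init__(self, name, processor):
        self.name, self.processor = name, processor

    def init(self):
        # The node, not the user, knows its own name (John's option 2):
        # it closes over the name before calling Processor#init().
        self.processor.init(NodeScopedMetrics(registry, self.name))

class MyProcessor:
    def init(self, metrics):
        self.dropped = metrics.node_level_sensor("dropped-records")

node = ProcessorNode("KSTREAM-MAP-0000000001", MyProcessor())
node.init()
print(sorted(registry))  # ['KSTREAM-MAP-0000000001.dropped-records']
```

The point of the sketch is only the wiring: since ProcessorNode is the caller of init(), it can inject its own name without exposing it in the public ProcessorContext.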






[jira] [Created] (KAFKA-10423) Logstash is restarting with invalid_fetch_session_epoch error

2020-08-20 Thread Akshay Sharma (Jira)
Akshay Sharma created KAFKA-10423:
-

 Summary: Logstash is restarting with invalid_fetch_session_epoch 
error
 Key: KAFKA-10423
 URL: https://issues.apache.org/jira/browse/KAFKA-10423
 Project: Kafka
  Issue Type: Bug
  Components: clients, KafkaConnect, logging
Affects Versions: 2.3.0
Reporter: Akshay Sharma


Logstash (using the kafka input plugin) is restarting again and again with the 
error mentioned below

 

logs:

```

 

{{{"log":"[INFO ] 2020-07-27 05:03:43.873 [Ruby-0-Thread-12: 
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.3.1/lib/logstash/inputs/kafka.rb:244]
 FetchSessionHandler - [Consumer clientId=logstash-0, groupId=logstash] Node 
1001 was unable to process the fetch request with (sessionId=2115239606, 
epoch=1128): 
INVALID_FETCH_SESSION_EPOCH.\n","stream":"stdout","time":"2020-07-27T05:03:43.873634303Z"}
\{"log":"[WARN ] 2020-07-27 05:14:18.976 [SIGTERM handler] runner - SIGTERM 
received. Shutting 
down.\n","stream":"stdout","time":"2020-07-27T05:14:18.976808493Z"}
\{"log":"[INFO ] 2020-07-27 05:14:19.030 [Ruby-0-Thread-12: 
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.3.1/lib/logstash/inputs/kafka.rb:244]
 AbstractCoordinator - [Consumer clientId=logstash-0, groupId=logstash] Sending 
LeaveGroup request to coordinator 
ikafka-0.kafka-headless.default.svc.cluster.local:9092 (id: 2147482646 rack: 
null)\n","stream":"stdout","time":"2020-07-27T05:14:19.031121275Z"}}}

```

KAFKA LOGS

```

 

{"log":"[2020-07-27 05:14:19,031] INFO [GroupCoordinator 1001]: Member 
logstash-0-9a59ad3d-c6ab-4dba-9775-5974d17934d1 in group logstash has left, 
removing it from the group 
(kafka.coordinator.group.GroupCoordinator)\n","stream":"stdout","time":"2020-07-27T05:14:19.032132241Z"}
{"log":"[2020-07-27 05:14:19,032] INFO [GroupCoordinator 1001]: Preparing to 
rebalance group logstash in state PreparingRebalance with old generation 85 
(__consumer_offsets-49) (reason: removing member 
logstash-0-9a59ad3d-c6ab-4dba-9775-5974d17934d1 on LeaveGroup) 
(kafka.coordinator.group.GroupCoordinator)\n","stream":"stdout","time":"2020-07-27T05:14:19.032320407Z"}
{"log":"[2020-07-27 05:14:19,032] INFO [GroupCoordinator 1001]: Group logstash 
with generation 86 is now empty (__consumer_offsets-49) 
(kafka.coordinator.group.GroupCoordinator)\n","stream":"stdout","time":"2020-07-27T05:14:19.032619661Z"}
{"log":"[2020-07-27 05:15:10,766] INFO [GroupCoordinator 1001]: Preparing to 
rebalance group logstash in state PreparingRebalance with old generation 86 
(__consumer_offsets-49) (reason: Adding new member 
logstash-0-bde43584-a21e-4a0d-92cc-69196f213f11 with group instanceid None) 
(kafka.coordinator.group.GroupCoordinator)\n","stream":"stdout","time":"2020-07-27T05:15:10.767053896Z"}

```

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10422) Provide a `timesForOffsets` operation in kafka consumer

2020-08-20 Thread Guillaume Bort (Jira)
Guillaume Bort created KAFKA-10422:
--

 Summary: Provide a `timesForOffsets` operation in kafka consumer
 Key: KAFKA-10422
 URL: https://issues.apache.org/jira/browse/KAFKA-10422
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Reporter: Guillaume Bort


The Kafka consumer already provides the `offsetsForTimes` operation to quickly 
look up offsets by timestamp.

However, there are use cases where the inverse operation would be useful: 
given a set of offsets, I would like to look up the ingestion timestamps of 
all these messages. Currently this requires fetching all of these messages by 
random access just to retrieve their timestamps.

I propose to add the `timesForOffsets` operation to the kafka consumer. The 
operation signature would be equivalent to `offsetsForTimes`: given a mapping 
from partition to the offset to look up, it would return a mapping from 
partition to the timestamp and offset of the message at the requested offset. 
`null` would be returned for the partition if there is no message at this 
offset.

I think it would require an API change in `ListOffset`, so maybe it does 
require a KIP?
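For illustration, here is a minimal sketch of the proposed semantics against a
toy in-memory log. This is not the real Kafka API: the actual operation would
live on `Consumer` next to `offsetsForTimes` and be backed by a ListOffsets
request; all class and method names below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the proposed timesForOffsets semantics: given a mapping of
// partition -> offset, return a mapping of partition -> record timestamp,
// with null when no message exists at the requested offset. This mirrors
// offsetsForTimes in reverse. All names here are illustrative only.
public class TimesForOffsetsSketch {

    // partition name -> (offset -> record timestamp)
    private final Map<String, Map<Long, Long>> logs = new HashMap<>();

    void append(String partition, long offset, long timestamp) {
        logs.computeIfAbsent(partition, p -> new HashMap<>()).put(offset, timestamp);
    }

    // Sketched lookup: one timestamp lookup per requested partition/offset.
    Map<String, Long> timesForOffsets(Map<String, Long> offsetsToSearch) {
        Map<String, Long> result = new HashMap<>();
        for (Map.Entry<String, Long> e : offsetsToSearch.entrySet()) {
            Map<Long, Long> log = logs.getOrDefault(e.getKey(), new HashMap<>());
            result.put(e.getKey(), log.get(e.getValue())); // null if absent
        }
        return result;
    }

    public static void main(String[] args) {
        TimesForOffsetsSketch c = new TimesForOffsetsSketch();
        c.append("topic-0", 0L, 1_595_826_000_000L);
        c.append("topic-0", 1L, 1_595_826_001_000L);

        Map<String, Long> query = new HashMap<>();
        query.put("topic-0", 1L);
        Map<String, Long> times = c.timesForOffsets(query);
        if (!Long.valueOf(1_595_826_001_000L).equals(times.get("topic-0")))
            throw new AssertionError();
    }
}
```

The point of the sketch is only that the return-shape question (what to return
when no message exists at the offset) matches the `offsetsForTimes` convention.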

 





Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Antony Stubbs
(just my honest opinion)

I strongly oppose the suggested logos. I completely agree with Michael's
analysis.

The design appears to me to be quite random (regardless of the association
of streams with otters) and clashes terribly with the embedded Kafka logo
making it appear quite unprofessional. It looks like KS is trying to jump
on the cute animal bandwagon against the natural resistance of
its existing style (Kafka). It also looks far too similar to the Firefox
logo.

As for the process, I think for there to be meaningful
community deliberation about a logo, there need to be far more ideas put
forward, rather than just the two takes on the one concept.

As for any suggestion on what it should be, I'm afraid I won't be of much
help.

On Thu, Aug 20, 2020 at 7:59 AM Michael Noll  wrote:

> For what it's worth, here is an example sketch that I came up with. Point
> is to show an alternative direction for the KStreams logo.
>
> https://ibb.co/bmZxDCg
>
> Thinking process:
>
>- It shows much more clearly (I hope) that KStreams is an official part
>of Kafka.
>- The Kafka logo is still front and center, and KStreams orbits around
>it like electrons around the Kafka core/nucleus. That’s important
> because
>we want users to adopt all of Kafka, not just bits and pieces.
>- It uses and builds upon the same ‘simple is beautiful’ style of the
>original Kafka logo. That also has the nice side-effect that it alludes
> to
>Kafka’s and KStreams’ architectural simplicity.
>- It picks up the good idea in the original logo candidates to convey
>the movement and flow of stream processing.
>- Execution-wise, and like the main Kafka logo, this logo candidate
>works well in smaller size, too, because of its simple and clear lines.
>(Logo types like the otter ones tend to become undecipherable at smaller
>sizes.)
>- It uses the same color scheme of the revamped AK website for brand
>consistency.
>
> I am sure we can come up with even better logo candidates.  But the
> suggestion above is, in my book, certainly a better option than the otters.
>
> -Michael
>
>
>
> On Wed, Aug 19, 2020 at 11:09 PM Boyang Chen 
> wrote:
>
> > Hey Ben,
> >
> > that otter was supposed to be a river-otter to connect to "streams". And
> of
> > course, it's cute :)
> >
> > On Wed, Aug 19, 2020 at 12:41 PM Philip Schmitt <
> > philip.schm...@outlook.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I’m with Robin and Michael here.
> > >
> > > What this decision needs is a good design brief.
> > > This article seems decent:
> > >
> >
> https://yourcreativejunkie.com/logo-design-brief-the-ultimate-guide-for-designers/
> > >
> > > Robin is right about the usage requirements.
> > > It goes a bit beyond resolution. How does the logo work when it’s on a
> > > sticker on someone’s laptop? Might there be some cases, where you want
> to
> > > print it in black and white?
> > > And how would it look if you put the Kafka, ksqlDB, and Streams
> stickers
> > > on a laptop?
> > >
> > > Of the two, I prefer the first option.
> > > The brown on black is a bit subdued – it might not work well on a
> t-shirt
> > > or a laptop sticker. Maybe that could be improved by using a bolder
> > color,
> > > but once it gets smaller or lower-resolution, it may not work any
> longer.
> > >
> > >
> > > Regards,
> > > Philip
> > >
> > >
> > > P.S.:
> > > Another article about what makes a good logo:
> > > https://vanschneider.com/what-makes-a-good-logo
> > >
> > > P.P.S.:
> > >
> > > If I were to pick a logo for Streams, I’d choose something that fits
> well
> > > with Kafka and ksqlDB.
> > >
> > > ksqlDB has the rocket.
> > > I can’t remember (or find) the reasoning behind the Kafka logo (aside
> > from
> > > representing a K). Was there something about planets orbiting the sun?
> Or
> > > was it the atom?
> > >
> > > So I might stick with a space/science metaphor.
> > > Could Streams be a comet? UFO? Star? Eclipse? ...
> > > Maybe a satellite logo for Connect.
> > >
> > > Space inspiration: https://thenounproject.com/term/space/
> > >
> > >
> > >
> > >
> > > 
> > > From: Robin Moffatt 
> > > Sent: Wednesday, August 19, 2020 6:24 PM
> > > To: us...@kafka.apache.org 
> > > Cc: dev@kafka.apache.org 
> > > Subject: Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo
> > >
> > > I echo what Michael says here.
> > >
> > > Another consideration is that logos are often shrunk (when used on
> > slides)
> > > and need to work at lower resolution (think: printing swag, stitching
> > > socks, etc) and so whatever logo we come up with needs to not be too
> > fiddly
> > > in the level of detail - something that I think both the current
> proposed
> > > options will fall foul of IMHO.
> > >
> > >
> > > On Wed, 19 Aug 2020 at 15:33, Michael Noll 
> wrote:
> > >
> > > > Hi all!
> > > >
> > > > Great to see we are in the process of creating a cool logo for Kafka
> > > > Streams.  First, I apologize for 

Re: [VOTE] KIP-656: MirrorMaker2 Exactly-once Semantics

2020-08-20 Thread Mickael Maison
Thanks for looking into this, it would be a great feature

Considering it's not a trivial feature, we should have a [DISCUSS]
thread before starting a vote.


On Mon, Aug 17, 2020 at 6:25 PM Ning Zhang  wrote:
>
> Hello everyone,
>
> I'd like to start a vote on KIP-656. This KIP aims to make MirrorMaker 2 
> replicate messages exactly once across clusters:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-656%3A+MirrorMaker2+Exactly-once+Semantics


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-20 Thread Alexandre Dupriez
Hi Jun,

Many thanks for your initiative.

If you like, I am happy to attend at the time you suggested.

Many thanks,
Alexandre

Le mer. 19 août 2020 à 22:00, Harsha Ch  a écrit :

> Hi Jun,
>  Thanks. This will help a lot. Tuesday will work for us.
> -Harsha
>
>
> On Wed, Aug 19, 2020 at 1:24 PM Jun Rao  wrote:
>
> > Hi, Satish, Ying, Harsha,
> >
> > Do you think it would be useful to have a regular virtual meeting to
> > discuss this KIP? The goal of the meeting will be sharing
> > design/development progress and discussing any open issues to
> > accelerate this KIP. If so, will every Tuesday (from next week) 9am-10am
> PT
> > work for you? I can help set up a Zoom meeting, invite everyone who might
> > be interested, have it recorded and shared, etc.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Aug 18, 2020 at 11:01 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Hi  Kowshik,
> > >
> > > Thanks for looking into the  KIP and sending your comments.
> > >
> > > 5001. Under the section "Follower fetch protocol in detail", the
> > > next-local-offset is the offset up to which the segments are copied to
> > > remote storage. Instead, would last-tiered-offset be a better name than
> > > next-local-offset? last-tiered-offset seems to naturally align well
> with
> > > the definition provided in the KIP.
> > >
> > > Both next-local-offset and local-log-start-offset were introduced to
> > > talk about offsets related to local log. We are fine with
> > > last-tiered-offset too as you suggested.
> > >
> > > 5002. After leadership is established for a partition, the leader would
> > > begin uploading a segment to remote storage. If successful, the leader
> > > would write the updated RemoteLogSegmentMetadata to the metadata topic
> > (via
> > > RLMM.putRemoteLogSegmentData). However, for defensive reasons, it seems
> > > useful that before the first time the segment is uploaded by the leader
> > for
> > > a partition, the leader should ensure to catch up to all the metadata
> > > events written so far in the metadata topic for that partition (ex: by
> > > previous leader). To achieve this, the leader could start a lease
> (using
> > an
> > > establish_leader metadata event) before commencing tiering, and wait
> > until
> > > the event is read back. For example, this seems useful to avoid cases
> > where
> > > zombie leaders can be active for the same partition. This can also
> prove
> > > useful to help avoid making decisions on which segments to be uploaded
> > for
> > > a partition, until the current leader has caught up to a complete view
> of
> > > all segments uploaded for the partition so far (otherwise this may
> cause
> > > same segment being uploaded twice -- once by the previous leader and
> then
> > > by the new leader).
> > >
> > > We allow copying segments to remote storage which may have common
> > > offsets. Please go through the KIP to understand the follower fetch
> > > protocol(1) and follower to leader transition(2).
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-FollowerReplication
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-Followertoleadertransition
> > >
> > >
> > > 5003. There is a natural interleaving between uploading a segment to
> > remote
> > > store, and, writing a metadata event for the same (via
> > > RLMM.putRemoteLogSegmentData). There can be cases where a remote
> segment
> > is
> > > uploaded, then the leader fails and a corresponding metadata event
> never
> > > gets written. In such cases, the orphaned remote segment has to be
> > > eventually deleted (since there is no confirmation of the upload). To
> > > handle this, we could use 2 separate metadata events viz.
> copy_initiated
> > > and copy_completed, so that copy_initiated events that don't have a
> > > corresponding copy_completed event can be treated as garbage and
> deleted
> > > from the remote object store by the broker.
> > >
> > > We are already updating the RLMM with RemoteLogSegmentMetadata pre and post
> > > copying of log segments. We had a flag in RemoteLogSegmentMetadata
> > > indicating whether it is copied or not. But we are making changes in
> > > RemoteLogSegmentMetadata to introduce a state field in
> > > RemoteLogSegmentMetadata which will have the respective started and
> > > finished states. This includes for other operations like delete too.
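As a rough illustration of the started/finished state idea described above
(the state names and the cleanup rule below are assumptions for this sketch,
not the KIP's actual definitions):

```java
// Hypothetical sketch of per-segment copy/delete states. A segment left in
// COPY_STARTED (e.g. because the leader failed before recording completion)
// can later be detected and garbage-collected from remote storage, while
// finished copies are treated as live.
public class RemoteSegmentStateSketch {

    enum State { COPY_STARTED, COPY_FINISHED, DELETE_STARTED, DELETE_FINISHED }

    // A segment is an orphan candidate only if its copy never completed.
    static boolean isOrphanCandidate(State state) {
        return state == State.COPY_STARTED;
    }

    public static void main(String[] args) {
        if (!isOrphanCandidate(State.COPY_STARTED)) throw new AssertionError();
        if (isOrphanCandidate(State.COPY_FINISHED)) throw new AssertionError();
        if (isOrphanCandidate(State.DELETE_STARTED)) throw new AssertionError();
    }
}
```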
> > >
> > > 5004. In the default implementation of RLMM (using the internal topic
> > > __remote_log_metadata), a separate topic called
> > > __remote_segments_to_be_deleted is going to be used just to track
> > failures
> > > in removing remote log segments. A separate topic (effectively another
> > > metadata stream) introduces some maintenance overhead and design
> > > complexity. It seems to me that the same can be achieved just by using
> > just
> > > the __remote_log_metadata topic 

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Michael Noll
For what it's worth, here is an example sketch that I came up with. Point
is to show an alternative direction for the KStreams logo.

https://ibb.co/bmZxDCg

Thinking process:

   - It shows much more clearly (I hope) that KStreams is an official part
   of Kafka.
   - The Kafka logo is still front and center, and KStreams orbits around
   it like electrons around the Kafka core/nucleus. That’s important because
   we want users to adopt all of Kafka, not just bits and pieces.
   - It uses and builds upon the same ‘simple is beautiful’ style of the
   original Kafka logo. That also has the nice side-effect that it alludes to
   Kafka’s and KStreams’ architectural simplicity.
   - It picks up the good idea in the original logo candidates to convey
   the movement and flow of stream processing.
   - Execution-wise, and like the main Kafka logo, this logo candidate
   works well in smaller size, too, because of its simple and clear lines.
   (Logo types like the otter ones tend to become undecipherable at smaller
   sizes.)
   - It uses the same color scheme of the revamped AK website for brand
   consistency.

I am sure we can come up with even better logo candidates.  But the
suggestion above is, in my book, certainly a better option than the otters.

-Michael



On Wed, Aug 19, 2020 at 11:09 PM Boyang Chen 
wrote:

> Hey Ben,
>
> that otter was supposed to be a river-otter to connect to "streams". And of
> course, it's cute :)
>
> On Wed, Aug 19, 2020 at 12:41 PM Philip Schmitt <
> philip.schm...@outlook.com>
> wrote:
>
> > Hi,
> >
> > I’m with Robin and Michael here.
> >
> > What this decision needs is a good design brief.
> > This article seems decent:
> >
> https://yourcreativejunkie.com/logo-design-brief-the-ultimate-guide-for-designers/
> >
> > Robin is right about the usage requirements.
> > It goes a bit beyond resolution. How does the logo work when it’s on a
> > sticker on someone’s laptop? Might there be some cases, where you want to
> > print it in black and white?
> > And how would it look if you put the Kafka, ksqlDB, and Streams stickers
> > on a laptop?
> >
> > Of the two, I prefer the first option.
> > The brown on black is a bit subdued – it might not work well on a t-shirt
> > or a laptop sticker. Maybe that could be improved by using a bolder
> color,
> > but once it gets smaller or lower-resolution, it may not work any longer.
> >
> >
> > Regards,
> > Philip
> >
> >
> > P.S.:
> > Another article about what makes a good logo:
> > https://vanschneider.com/what-makes-a-good-logo
> >
> > P.P.S.:
> >
> > If I were to pick a logo for Streams, I’d choose something that fits well
> > with Kafka and ksqlDB.
> >
> > ksqlDB has the rocket.
> > I can’t remember (or find) the reasoning behind the Kafka logo (aside
> from
> > representing a K). Was there something about planets orbiting the sun? Or
> > was it the atom?
> >
> > So I might stick with a space/science metaphor.
> > Could Streams be a comet? UFO? Star? Eclipse? ...
> > Maybe a satellite logo for Connect.
> >
> > Space inspiration: https://thenounproject.com/term/space/
> >
> >
> >
> >
> > 
> > From: Robin Moffatt 
> > Sent: Wednesday, August 19, 2020 6:24 PM
> > To: us...@kafka.apache.org 
> > Cc: dev@kafka.apache.org 
> > Subject: Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo
> >
> > I echo what Michael says here.
> >
> > Another consideration is that logos are often shrunk (when used on
> slides)
> > and need to work at lower resolution (think: printing swag, stitching
> > socks, etc) and so whatever logo we come up with needs to not be too
> fiddly
> > in the level of detail - something that I think both the current proposed
> > options will fall foul of IMHO.
> >
> >
> > On Wed, 19 Aug 2020 at 15:33, Michael Noll  wrote:
> >
> > > Hi all!
> > >
> > > Great to see we are in the process of creating a cool logo for Kafka
> > > Streams.  First, I apologize for sharing feedback so late -- I just
> > learned
> > > about it today. :-)
> > >
> > > Here's my *personal, subjective* opinion on the currently two logo
> > > candidates for Kafka Streams.
> > >
> > > TL;DR: Sorry, but I really don't like either of the proposed "otter"
> > logos.
> > > Let me try to explain why.
> > >
> > >- The choice to use an animal, regardless of which specific animal,
> > >seems random and doesn't fit Kafka. (What's the purpose? To show
> that
> > >KStreams is 'cute'?) In comparison, the O’Reilly books always have
> an
> > >animal cover, that’s their style, and it is very recognizable.
> Kafka
> > >however has its own, different style.  The Kafka logo has clear,
> > simple
> > >lines to achieve an abstract and ‘techy’ look, which also alludes
> > > nicely to
> > >its architectural simplicity. Its logo is also a smart play on the
> > >Kafka-identifying letter “K” and alluding to it being a distributed
> > > system
> > >(the circles and links that make the K).
> > >- The proposed