Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #23

2020-08-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10407: Have KafkaLog4jAppender support `linger.ms` and 
`batch.size` (#9189)


--
[...truncated 3.22 MB...]
org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > repartition should 
repartition a KStream STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > repartition should 
repartition a KStream PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results 

[jira] [Resolved] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender

2020-08-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10407.

Fix Version/s: 2.7.0
   Resolution: Fixed

> add linger.ms parameter support to KafkaLog4jAppender
> -
>
> Key: KAFKA-10407
> URL: https://issues.apache.org/jira/browse/KAFKA-10407
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Yu Yang
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.7.0
>
>
> Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a
> service has an outage that causes excessive error logging, the service can
> issue too many producer requests to the Kafka brokers and overload them.
> Setting a non-zero `linger.ms` allows the Kafka producer to batch records and
> reduce the number of producer requests.
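The effect of the two settings the patch exposes can be sketched with plain producer-style properties. This is a minimal illustration, not a recommendation; the broker address is a placeholder and the values are arbitrary:

```java
import java.util.Properties;

public class BatchingConfigSketch {
    // Producer-style properties illustrating batching: the producer waits
    // up to linger.ms for more records and groups up to batch.size bytes
    // per partition into a single produce request.
    static Properties batchingProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("linger.ms", "100");    // wait up to 100 ms to fill a batch
        props.put("batch.size", "65536"); // 64 KiB per-partition batch limit
        return props;
    }

    public static void main(String[] args) {
        System.out.println(batchingProps().getProperty("linger.ms"));
    }
}
```

With `linger.ms=0` (the previous, implicit behavior) each record can trigger its own request; a small non-zero linger trades a little latency for far fewer requests under heavy logging.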



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] mjsax merged pull request #298: MINOR: remove unreleased versions from CVE page

2020-08-19 Thread GitBox


mjsax merged pull request #298:
URL: https://github.com/apache/kafka-site/pull/298


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: Someone should remove nonexistent versions 2.0.2, 2.1.2 from https://kafka.apache.org/cve-list

2020-08-19 Thread Matthias J. Sax
Franklin,

thanks for raising this. I opened a PR to update the web-page
accordingly: https://github.com/apache/kafka-site/pull/298


-Matthias


On 8/10/20 10:53 AM, Franklin Davis wrote:
> https://kafka.apache.org/cve-list APACHE KAFKA SECURITY VULNERABILITIES 
> incorrectly lists fixed versions 2.0.2 and 2.1.2, but those don't exist (e.g. 
> in https://archive.apache.org/dist/kafka/). I'm not qualified to modify 
> anything -- just letting you know in case someone can fix it.
> 
> --Franklin
> 





[GitHub] [kafka-site] mjsax opened a new pull request #298: MINOR: remove unreleased versions from CVE page

2020-08-19 Thread GitBox


mjsax opened a new pull request #298:
URL: https://github.com/apache/kafka-site/pull/298


   As reported on the mailing list: 
https://lists.apache.org/list.html?dev@kafka.apache.org:lte=1M:cve%202.0.2%202.1.2
   
   Call for review @ijuma 







Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Boyang Chen
Hey Ben,

that otter was supposed to be a river-otter to connect to "streams". And of
course, it's cute :)

On Wed, Aug 19, 2020 at 12:41 PM Philip Schmitt 
wrote:

> Hi,
>
> I’m with Robin and Michael here.
>
> What this decision needs is a good design brief.
> This article seems decent:
> https://yourcreativejunkie.com/logo-design-brief-the-ultimate-guide-for-designers/
>
> Robin is right about the usage requirements.
> It goes a bit beyond resolution. How does the logo work when it’s on a
> sticker on someone’s laptop? Might there be some cases, where you want to
> print it in black and white?
> And how would it look if you put the Kafka, ksqlDB, and Streams stickers
> on a laptop?
>
> Of the two, I prefer the first option.
> The brown on black is a bit subdued – it might not work well on a t-shirt
> or a laptop sticker. Maybe that could be improved by using a bolder color,
> but once it gets smaller or lower-resolution, it may not work any longer.
>
>
> Regards,
> Philip
>
>
> P.S.:
> Another article about what makes a good logo:
> https://vanschneider.com/what-makes-a-good-logo
>
> P.P.S.:
>
> If I were to pick a logo for Streams, I’d choose something that fits well
> with Kafka and ksqlDB.
>
> ksqlDB has the rocket.
> I can’t remember (or find) the reasoning behind the Kafka logo (aside from
> representing a K). Was there something about planets orbiting the sun? Or
> was it the atom?
>
> So I might stick with a space/science metaphor.
> Could Streams be a comet? UFO? Star? Eclipse? ...
> Maybe a satellite logo for Connect.
>
> Space inspiration: https://thenounproject.com/term/space/
>
>
>
>
> 
> From: Robin Moffatt 
> Sent: Wednesday, August 19, 2020 6:24 PM
> To: us...@kafka.apache.org 
> Cc: dev@kafka.apache.org 
> Subject: Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo
>
> I echo what Michael says here.
>
> Another consideration is that logos are often shrunk (when used on slides)
> and need to work at lower resolution (think: printing swag, stitching
> socks, etc) and so whatever logo we come up with needs to not be too fiddly
> in the level of detail - something that I think both the current proposed
> options will fall foul of IMHO.
>
>
> On Wed, 19 Aug 2020 at 15:33, Michael Noll  wrote:
>
> > Hi all!
> >
> > Great to see we are in the process of creating a cool logo for Kafka
> > Streams.  First, I apologize for sharing feedback so late -- I just
> learned
> > about it today. :-)
> >
> > Here's my *personal, subjective* opinion on the currently two logo
> > candidates for Kafka Streams.
> >
> > TL;DR: Sorry, but I really don't like either of the proposed "otter"
> logos.
> > Let me try to explain why.
> >
> >- The choice to use an animal, regardless of which specific animal,
> >seems random and doesn't fit Kafka. (What's the purpose? To show that
> >KStreams is 'cute'?) In comparison, the O’Reilly books always have an
> >animal cover, that’s their style, and it is very recognizable.  Kafka
> >however has its own, different style.  The Kafka logo has clear,
> simple
> >lines to achieve an abstract and ‘techy’ look, which also alludes
> > nicely to
> >its architectural simplicity. Its logo is also a smart play on the
> >Kafka-identifying letter “K” and alluding to it being a distributed
> > system
> >(the circles and links that make the K).
> >- The proposed logos, however, make it appear as if KStreams is a
> >third-party technology that was bolted onto Kafka. They certainly, for
> > me,
> >do not convey the message "Kafka Streams is an official part of Apache
> >Kafka".
> >- I, too, don't like the way the main Kafka logo is obscured (a
> concern
> >already voiced in this thread). Also, the Kafka 'logo' embedded in the
> >proposed KStreams logos is not the original one.
> >- None of the proposed KStreams logos visually match the Kafka logo.
> >They have a totally different style, font, line art, and color scheme.
> >- Execution-wise, the main Kafka logo looks great at all sizes.  The
> >style of the otter logos, in comparison, becomes undecipherable at
> > smaller
> >sizes.
> >
> > What I would suggest is to first agree on what the KStreams logo is
> > supposed to convey to the reader.  Here's my personal take:
> >
> > Objective 1: First and foremost, the KStreams logo should make it clear
> and
> > obvious that KStreams is an official and integral part of Apache Kafka.
> > This applies to both what is depicted and how it is depicted (like font,
> > line art, colors).
> > Objective 2: The logo should allude to the role of KStreams in the Kafka
> > project, which is the processing part.  That is, "doing something useful
> to
> > the data in Kafka".
> >
> > The "circling arrow" aspect of the current otter logos does allude to
> > "continuous processing", which is going in the direction of (2), but the
> > logos do not meet (1) in my opinion.
> >
> > -Michael
> >
> >
> >
> 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-19 Thread Harsha Ch
Hi Jun,
 Thanks. This will help a lot. Tuesday will work for us.
-Harsha


On Wed, Aug 19, 2020 at 1:24 PM Jun Rao  wrote:

> Hi, Satish, Ying, Harsha,
>
> Do you think it would be useful to have a regular virtual meeting to
> discuss this KIP? The goal of the meeting will be sharing
> design/development progress and discussing any open issues to
> accelerate this KIP. If so, will every Tuesday (from next week) 9am-10am PT
> work for you? I can help set up a Zoom meeting, invite everyone who might
> be interested, have it recorded and shared, etc.
>
> Thanks,
>
> Jun
>
> On Tue, Aug 18, 2020 at 11:01 AM Satish Duggana 
> wrote:
>
> > Hi  Kowshik,
> >
> > Thanks for looking into the  KIP and sending your comments.
> >
> > 5001. Under the section "Follower fetch protocol in detail", the
> > next-local-offset is the offset up to which the segments are copied to
> > remote storage. Instead, would last-tiered-offset be a better name than
> > next-local-offset? last-tiered-offset seems to naturally align well with
> > the definition provided in the KIP.
> >
> > Both next-local-offset and local-log-start-offset were introduced to
> > talk about offsets related to local log. We are fine with
> > last-tiered-offset too as you suggested.
> >
> > 5002. After leadership is established for a partition, the leader would
> > begin uploading a segment to remote storage. If successful, the leader
> > would write the updated RemoteLogSegmentMetadata to the metadata topic
> (via
> > RLMM.putRemoteLogSegmentData). However, for defensive reasons, it seems
> > useful that before the first time the segment is uploaded by the leader
> for
> > a partition, the leader should ensure to catch up to all the metadata
> > events written so far in the metadata topic for that partition (ex: by
> > previous leader). To achieve this, the leader could start a lease (using
> an
> > establish_leader metadata event) before commencing tiering, and wait
> until
> > the event is read back. For example, this seems useful to avoid cases
> where
> > zombie leaders can be active for the same partition. This can also prove
> > useful to help avoid making decisions on which segments to be uploaded
> for
> > a partition, until the current leader has caught up to a complete view of
> > all segments uploaded for the partition so far (otherwise this may cause
> > same segment being uploaded twice -- once by the previous leader and then
> > by the new leader).
> >
> > We allow copying segments to remote storage which may have common
> > offsets. Please go through the KIP to understand the follower fetch
> > protocol(1) and follower to leader transition(2).
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-FollowerReplication
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-Followertoleadertransition
> >
> >
> > 5003. There is a natural interleaving between uploading a segment to
> remote
> > store, and, writing a metadata event for the same (via
> > RLMM.putRemoteLogSegmentData). There can be cases where a remote segment
> is
> > uploaded, then the leader fails and a corresponding metadata event never
> > gets written. In such cases, the orphaned remote segment has to be
> > eventually deleted (since there is no confirmation of the upload). To
> > handle this, we could use 2 separate metadata events viz. copy_initiated
> > and copy_completed, so that copy_initiated events that don't have a
> > corresponding copy_completed event can be treated as garbage and deleted
> > from the remote object store by the broker.
> >
> > We are already updating RMM with RemoteLogSegmentMetadata pre and post
> > copying of log segments. We had a flag in RemoteLogSegmentMetadata
> > whether it is copied or not. But we are making changes in
> > RemoteLogSegmentMetadata to introduce a state field in
> > RemoteLogSegmentMetadata which will have the respective started and
> > finished states. This includes for other operations like delete too.
> >
> > 5004. In the default implementation of RLMM (using the internal topic
> > __remote_log_metadata), a separate topic called
> > __remote_segments_to_be_deleted is going to be used just to track
> failures
> > in removing remote log segments. A separate topic (effectively another
> > metadata stream) introduces some maintenance overhead and design
> > complexity. It seems to me that the same can be achieved just by using
> just
> > the __remote_log_metadata topic with the following steps: 1) the leader
> > writes a delete_initiated metadata event, 2) the leader deletes the
> segment
> > and 3) the leader writes a delete_completed metadata event. Tiered
> segments
> > that have delete_initiated message and not delete_completed message, can
> be
> > considered to be a failure and retried.
> >
> > Jun suggested in an earlier mail to keep this simple. We decided not to
> > have this topic as 
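The per-operation started/finished markers discussed above (for both copy and delete) amount to a small segment lifecycle. A minimal sketch follows; the state names are illustrative, not the KIP's final identifiers:

```java
import java.util.EnumSet;
import java.util.Set;

// Hedged sketch: each tiered segment's metadata carries a lifecycle state.
// An entry stuck in a *_STARTED state with no matching *_FINISHED event
// (e.g. after a leader failover mid-upload) is an orphan candidate that
// can be garbage-collected from the remote store.
public enum SegmentState {
    COPY_STARTED, COPY_FINISHED, DELETE_STARTED, DELETE_FINISHED;

    public static final Set<SegmentState> ORPHAN_CANDIDATES =
            EnumSet.of(COPY_STARTED, DELETE_STARTED);

    // Only forward transitions through the lifecycle are legal.
    public boolean canTransitionTo(SegmentState next) {
        switch (this) {
            case COPY_STARTED:   return next == COPY_FINISHED;
            case COPY_FINISHED:  return next == DELETE_STARTED;
            case DELETE_STARTED: return next == DELETE_FINISHED;
            default:             return false;
        }
    }
}
```

Tracking both operations as states on the same metadata record is what lets a single `__remote_log_metadata` topic replace a separate deletion-tracking topic.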

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #9

2020-08-19 Thread Apache Jenkins Server
See 


Changes:

[github] Revert KAFKA-9309: Add the ability to translate Message to JSON (#9197)


--
[...truncated 3.15 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-19 Thread Jun Rao
Hi, Satish, Ying, Harsha,

Do you think it would be useful to have a regular virtual meeting to
discuss this KIP? The goal of the meeting will be sharing
design/development progress and discussing any open issues to
accelerate this KIP. If so, will every Tuesday (from next week) 9am-10am PT
work for you? I can help set up a Zoom meeting, invite everyone who might
be interested, have it recorded and shared, etc.

Thanks,

Jun

On Tue, Aug 18, 2020 at 11:01 AM Satish Duggana 
wrote:

> Hi  Kowshik,
>
> Thanks for looking into the  KIP and sending your comments.
>
> 5001. Under the section "Follower fetch protocol in detail", the
> next-local-offset is the offset up to which the segments are copied to
> remote storage. Instead, would last-tiered-offset be a better name than
> next-local-offset? last-tiered-offset seems to naturally align well with
> the definition provided in the KIP.
>
> Both next-local-offset and local-log-start-offset were introduced to
> talk about offsets related to local log. We are fine with
> last-tiered-offset too as you suggested.
>
> 5002. After leadership is established for a partition, the leader would
> begin uploading a segment to remote storage. If successful, the leader
> would write the updated RemoteLogSegmentMetadata to the metadata topic (via
> RLMM.putRemoteLogSegmentData). However, for defensive reasons, it seems
> useful that before the first time the segment is uploaded by the leader for
> a partition, the leader should ensure to catch up to all the metadata
> events written so far in the metadata topic for that partition (ex: by
> previous leader). To achieve this, the leader could start a lease (using an
> establish_leader metadata event) before commencing tiering, and wait until
> the event is read back. For example, this seems useful to avoid cases where
> zombie leaders can be active for the same partition. This can also prove
> useful to help avoid making decisions on which segments to be uploaded for
> a partition, until the current leader has caught up to a complete view of
> all segments uploaded for the partition so far (otherwise this may cause
> same segment being uploaded twice -- once by the previous leader and then
> by the new leader).
>
> We allow copying segments to remote storage which may have common
> offsets. Please go through the KIP to understand the follower fetch
> protocol(1) and follower to leader transition(2).
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-FollowerReplication
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-Followertoleadertransition
>
>
> 5003. There is a natural interleaving between uploading a segment to remote
> store, and, writing a metadata event for the same (via
> RLMM.putRemoteLogSegmentData). There can be cases where a remote segment is
> uploaded, then the leader fails and a corresponding metadata event never
> gets written. In such cases, the orphaned remote segment has to be
> eventually deleted (since there is no confirmation of the upload). To
> handle this, we could use 2 separate metadata events viz. copy_initiated
> and copy_completed, so that copy_initiated events that don't have a
> corresponding copy_completed event can be treated as garbage and deleted
> from the remote object store by the broker.
>
> We are already updating RMM with RemoteLogSegmentMetadata pre and post
> copying of log segments. We had a flag in RemoteLogSegmentMetadata
> whether it is copied or not. But we are making changes in
> RemoteLogSegmentMetadata to introduce a state field in
> RemoteLogSegmentMetadata which will have the respective started and
> finished states. This includes for other operations like delete too.
>
> 5004. In the default implementation of RLMM (using the internal topic
> __remote_log_metadata), a separate topic called
> __remote_segments_to_be_deleted is going to be used just to track failures
> in removing remote log segments. A separate topic (effectively another
> metadata stream) introduces some maintenance overhead and design
> complexity. It seems to me that the same can be achieved just by using just
> the __remote_log_metadata topic with the following steps: 1) the leader
> writes a delete_initiated metadata event, 2) the leader deletes the segment
> and 3) the leader writes a delete_completed metadata event. Tiered segments
> that have delete_initiated message and not delete_completed message, can be
> considered to be a failure and retried.
>
> Jun suggested in an earlier mail to keep this simple. We decided not to
> have this topic, as mentioned in our earlier replies, and updated the KIP.
> As I mentioned in an earlier comment, we are adding state entries for
> delete operations too.
>
> 5005. When a Kafka cluster is provisioned for the first time with KIP-405
> tiered storage enabled, could you explain in the KIP about how the
> bootstrap for __remote_log_metadata topic 

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Philip Schmitt
Hi,

I’m with Robin and Michael here.

What this decision needs is a good design brief.
This article seems decent: 
https://yourcreativejunkie.com/logo-design-brief-the-ultimate-guide-for-designers/

Robin is right about the usage requirements.
It goes a bit beyond resolution. How does the logo work when it’s on a sticker 
on someone’s laptop? Might there be some cases, where you want to print it in 
black and white?
And how would it look if you put the Kafka, ksqlDB, and Streams stickers on a 
laptop?

Of the two, I prefer the first option.
The brown on black is a bit subdued – it might not work well on a t-shirt or a 
laptop sticker. Maybe that could be improved by using a bolder color, but once 
it gets smaller or lower-resolution, it may not work any longer.


Regards,
Philip


P.S.:
Another article about what makes a good logo: 
https://vanschneider.com/what-makes-a-good-logo

P.P.S.:

If I were to pick a logo for Streams, I’d choose something that fits well with 
Kafka and ksqlDB.

ksqlDB has the rocket.
I can’t remember (or find) the reasoning behind the Kafka logo (aside from 
representing a K). Was there something about planets orbiting the sun? Or was 
it the atom?

So I might stick with a space/science metaphor.
Could Streams be a comet? UFO? Star? Eclipse? ...
Maybe a satellite logo for Connect.

Space inspiration: https://thenounproject.com/term/space/





From: Robin Moffatt 
Sent: Wednesday, August 19, 2020 6:24 PM
To: us...@kafka.apache.org 
Cc: dev@kafka.apache.org 
Subject: Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

I echo what Michael says here.

Another consideration is that logos are often shrunk (when used on slides)
and need to work at lower resolution (think: printing swag, stitching
socks, etc) and so whatever logo we come up with needs to not be too fiddly
in the level of detail - something that I think both the current proposed
options will fall foul of IMHO.


On Wed, 19 Aug 2020 at 15:33, Michael Noll  wrote:

> Hi all!
>
> Great to see we are in the process of creating a cool logo for Kafka
> Streams.  First, I apologize for sharing feedback so late -- I just learned
> about it today. :-)
>
> Here's my *personal, subjective* opinion on the currently two logo
> candidates for Kafka Streams.
>
> TL;DR: Sorry, but I really don't like either of the proposed "otter" logos.
> Let me try to explain why.
>
>- The choice to use an animal, regardless of which specific animal,
>seems random and doesn't fit Kafka. (What's the purpose? To show that
>KStreams is 'cute'?) In comparison, the O’Reilly books always have an
>animal cover, that’s their style, and it is very recognizable.  Kafka
>however has its own, different style.  The Kafka logo has clear, simple
>lines to achieve an abstract and ‘techy’ look, which also alludes
> nicely to
>its architectural simplicity. Its logo is also a smart play on the
>Kafka-identifying letter “K” and alluding to it being a distributed
> system
>(the circles and links that make the K).
>- The proposed logos, however, make it appear as if KStreams is a
>third-party technology that was bolted onto Kafka. They certainly, for
> me,
>do not convey the message "Kafka Streams is an official part of Apache
>Kafka".
>- I, too, don't like the way the main Kafka logo is obscured (a concern
>already voiced in this thread). Also, the Kafka 'logo' embedded in the
>proposed KStreams logos is not the original one.
>- None of the proposed KStreams logos visually match the Kafka logo.
>They have a totally different style, font, line art, and color scheme.
>- Execution-wise, the main Kafka logo looks great at all sizes.  The
>style of the otter logos, in comparison, becomes undecipherable at
> smaller
>sizes.
>
> What I would suggest is to first agree on what the KStreams logo is
> supposed to convey to the reader.  Here's my personal take:
>
> Objective 1: First and foremost, the KStreams logo should make it clear and
> obvious that KStreams is an official and integral part of Apache Kafka.
> This applies to both what is depicted and how it is depicted (like font,
> line art, colors).
> Objective 2: The logo should allude to the role of KStreams in the Kafka
> project, which is the processing part.  That is, "doing something useful to
> the data in Kafka".
>
> The "circling arrow" aspect of the current otter logos does allude to
> "continuous processing", which is going in the direction of (2), but the
> logos do not meet (1) in my opinion.
>
> -Michael
>
>
>
>
> On Tue, Aug 18, 2020 at 10:34 PM Matthias J. Sax  wrote:
>
> > Adding the user mailing list -- I think we should accept votes on both
> > lists for this special case, as it's not a technical decision.
> >
> > @Boyang: as mentioned by Bruno, can we maybe add black/white options for
> > both proposals, too?
> >
> > I also agree that Design B is not ideal with regard to the Kafka logo.
> 

Re: [DISCUSSION] KIP-619: Add internal topic creation support

2020-08-19 Thread Cheng Tan
Hi David,


Thanks for the feedback; it is really helpful.

> Can you clarify a bit more what the difference is between regular topics
> and internal topics (excluding  __consumer_offsets and
> __transaction_state)? Reading your last message, if internal topics
> (excluding the two) can be created, deleted, produced to, consumed from,
> added to transactions, I'm failing to see what is different about them. Is
> it simply that they are marked as "internal" so the application can treat
> them differently?

Yes. The user-defined internal topics (those other than `__consumer_offsets` and 
`__transaction_state`) will behave as normal topics with regard to messaging 
operations and permissions. Topics are marked as “internal” so that the broker 
can recognize user-defined internal topics and better provide metadata 
services, such as the `listTopics` API. I should have added this metadata 
behavior difference to the KIP.
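The metadata behavior described above can be sketched roughly as follows. This is an illustrative model, not Kafka code: the `internal` topic config key, the `metadata` dict shape, and `list_topics` are all assumptions made for the sketch.

```python
# Sketch: an "internal" marker on topics driving a listTopics-style metadata
# service. All names here are illustrative assumptions, not Kafka internals.

BROKER_INTERNAL = {"__consumer_offsets", "__transaction_state"}

def list_topics(metadata, include_internal=False):
    """Return topic names, optionally including topics marked internal.

    `metadata` maps topic name -> dict of topic-level configs; a topic is
    treated as internal if its (hypothetical) 'internal' config is true or
    it is one of the broker-managed internal topics.
    """
    def is_internal(name, configs):
        return name in BROKER_INTERNAL or configs.get("internal", False)

    return sorted(
        name for name, configs in metadata.items()
        if include_internal or not is_internal(name, configs)
    )

metadata = {
    "orders": {},
    "app.changelog": {"internal": True},   # user-defined internal topic
    "__consumer_offsets": {},              # broker-managed internal topic
}

print(list_topics(metadata))                         # ['orders']
print(list_topics(metadata, include_internal=True))  # all three topics
```

For comparison, the Java AdminClient already exposes a similar switch for the broker-managed topics via `ListTopicsOptions#listInternal`.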

> In the "Compatibility, Deprecation, and Migration" section, we should
> detail how users can overcome this incompatibility (i.e., changing the
> config name on their topic and changing their application logic if
> necessary).

Thanks for the suggestion. I updated the section.

> Should we consider adding any configs to constrain the min isr and
> replication factor for internal topics? If a topic is really internal and
> fundamentally required for an application to function, it might need a more
> stringent replication config. Our existing internal topics have their own
> configs in server.properties with a comment saying as much.


I think we should probably give clients the freedom to configure 
`min.insync.replicas`, `replication.factor`, and `log.retention` on 
user-defined internal topics as they do on normal topics.

1. Users may have performance requirements on user-defined internal topics.
2. Potential new defaults / restrictions may change the existing user 
application logic silently. There might be compatibility issues.
3. Since user-defined internal topics act like normal topics and won’t affect 
the messaging functionality (produce, consume, transaction, etc), unoptimized 
log configurations won’t harm the cluster. 
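The compatibility argument in points 1–3 can be sketched like this. The config names and the merging helper are assumptions for illustration; the point being modeled is that marking a topic internal records metadata without silently changing its effective configs.

```python
# Sketch: user-defined internal topics accept the same per-topic overrides as
# normal topics, so the "internal" marker does not alter configuration.

DEFAULTS = {"min.insync.replicas": 1, "replication.factor": 3}

def effective_configs(overrides, internal=False):
    """Merge client overrides over broker defaults; the `internal` flag is
    recorded as metadata but imposes no extra restrictions."""
    configs = dict(DEFAULTS)
    configs.update(overrides)
    configs["internal"] = internal
    return configs

normal = effective_configs({"min.insync.replicas": 2})
internal = effective_configs({"min.insync.replicas": 2}, internal=True)

# Apart from the marker itself, the two topics are configured identically.
assert {k: v for k, v in normal.items() if k != "internal"} == \
       {k: v for k, v in internal.items() if k != "internal"}
```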


Please let me know what you think. Thanks.


Best, - Cheng Tan



> On Aug 14, 2020, at 7:44 AM, David Arthur  wrote:
> 
> Cheng,
> 
> Can you clarify a bit more what the difference is between regular topics
> and internal topics (excluding  __consumer_offsets and
> __transaction_state)? Reading your last message, if internal topics
> (excluding the two) can be created, deleted, produced to, consumed from,
> added to transactions, I'm failing to see what is different about them. Is
> it simply that they are marked as "internal" so the application can treat
> them differently?
> 
> 
> In the "Compatibility, Deprecation, and Migration" section, we should
> detail how users can overcome this incompatibility (i.e., changing the
> config name on their topic and changing their application logic if
> necessary).
> 
> 
> Should we consider adding any configs to constrain the min isr and
> replication factor for internal topics? If a topic is really internal and
> fundamentally required for an application to function, it might need a more
> stringent replication config. Our existing internal topics have their own
> configs in server.properties with a comment saying as much.
> 
> 
> Thanks!
> David
> 
> 
> 
> On Tue, Jul 7, 2020 at 1:40 PM Cheng Tan  wrote:
> 
>> Hi Colin,
>> 
>> 
>> Thanks for the comments. I’ve modified the KIP accordingly.
>> 
>>> I think we need to understand which of these limitations we will carry
>> forward and which we will not.  We also have the option of putting
>> limitations just on consumer offsets, but not on other internal topics.
>> 
>> 
>> In the proposal, I added details about this. I agree that cluster admin
>> should use ACLs to apply the restrictions.
>> Internal topic creation will be allowed.
>> Internal topic deletion will be allowed except for` __consumer_offsets`
>> and `__transaction_state`.
>> Producing to internal topic partitions other than `__consumer_offsets` and
>> `__transaction_state` will be allowed.
>> Adding internal topic partitions to transactions will be allowed.
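The four rules listed above can be condensed into a small predicate. This is a sketch of the proposal's stated permissions only, under the assumption that `__consumer_offsets` and `__transaction_state` are the only specially protected topics; the function name and operation strings are illustrative.

```python
# Sketch of the proposed permission rules for internal topics.

PROTECTED = {"__consumer_offsets", "__transaction_state"}

def is_operation_allowed(operation, topic):
    """Return True if `operation` on `topic` is permitted by the proposal.

    Create and add-to-transaction are always allowed; delete and produce are
    allowed for every topic except the two broker-managed internal ones.
    """
    if operation in ("create", "add_to_transaction"):
        return True
    if operation in ("delete", "produce"):
        return topic not in PROTECTED
    raise ValueError(f"unknown operation: {operation}")
```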
>>> I think there are a fair number of compatibility concerns.  What's the
>> result if someone tries to create a topic with the configuration internal =
>> true right now?  Does it fail?  If not, that seems like a potential problem.
>> 
>> I also added this compatibility issue in the "Compatibility, Deprecation,
>> and Migration Plan" section.
>> 
> >> Please feel free to make any suggestions or comments regarding my
> >> latest proposal. Thanks.
>> 
>> 
>> Best, - Cheng Tan
>> 
>> 
>> 
>> 
>> 
>> 
>>> On Jun 15, 2020, at 11:18 AM, Colin McCabe  wrote:
>>> 
>>> Hi Cheng,
>>> 
>>> The link from the main KIP page is an "edit link" meaning that it drops
>> you into the editor for the wiki page.  I think the link you meant to use
>> is a "view link" that will just 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #21

2020-08-19 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Robin Moffatt
I echo what Michael says here.

Another consideration is that logos are often shrunk (when used on slides)
and need to work at lower resolution (think: printing swag, stitching
socks, etc.), so whatever logo we come up with must not be too fiddly in
its level of detail, something that I think both of the currently proposed
options will fall foul of.


On Wed, 19 Aug 2020 at 15:33, Michael Noll  wrote:

> Hi all!
>
> Great to see we are in the process of creating a cool logo for Kafka
> Streams.  First, I apologize for sharing feedback so late -- I just learned
> about it today. :-)
>
> Here's my *personal, subjective* opinion on the two current logo
> candidates for Kafka Streams.
>
> TL;DR: Sorry, but I really don't like either of the proposed "otter" logos.
> Let me try to explain why.
>
>- The choice to use an animal, regardless of which specific animal,
>seems random and doesn't fit Kafka. (What's the purpose? To show that
>KStreams is 'cute'?) In comparison, the O’Reilly books always have an
>animal cover, that’s their style, and it is very recognizable.  Kafka
>however has its own, different style.  The Kafka logo has clear, simple
>lines to achieve an abstract and ‘techy’ look, which also alludes
> nicely to
>its architectural simplicity. Its logo is also a smart play on the
>Kafka-identifying letter “K” and alluding to it being a distributed
> system
>(the circles and links that make the K).
>- The proposed logos, however, make it appear as if KStreams is a
>third-party technology that was bolted onto Kafka. They certainly, for
> me,
>do not convey the message "Kafka Streams is an official part of Apache
>Kafka".
>- I, too, don't like the way the main Kafka logo is obscured (a concern
>already voiced in this thread). Also, the Kafka 'logo' embedded in the
>proposed KStreams logos is not the original one.
>- None of the proposed KStreams logos visually match the Kafka logo.
>They have a totally different style, font, line art, and color scheme.
>- Execution-wise, the main Kafka logo looks great at all sizes.  The
>style of the otter logos, in comparison, becomes undecipherable at
> smaller
>sizes.
>
> What I would suggest is to first agree on what the KStreams logo is
> supposed to convey to the reader.  Here's my personal take:
>
> Objective 1: First and foremost, the KStreams logo should make it clear and
> obvious that KStreams is an official and integral part of Apache Kafka.
> This applies to both what is depicted and how it is depicted (like font,
> line art, colors).
> Objective 2: The logo should allude to the role of KStreams in the Kafka
> project, which is the processing part.  That is, "doing something useful to
> the data in Kafka".
>
> The "circling arrow" aspect of the current otter logos does allude to
> "continuous processing", which is going in the direction of (2), but the
> logos do not meet (1) in my opinion.
>
> -Michael
>
>
>
>
> On Tue, Aug 18, 2020 at 10:34 PM Matthias J. Sax  wrote:
>
> > Adding the user mailing list -- I think we should accept votes on both
> > lists for this special case, as it's not a technical decision.
> >
> > @Boyang: as mentioned by Bruno, can we maybe add black/white options for
> > both proposals, too?
> >
> > I also agree that Design B is not ideal with regard to the Kafka logo.
> > Would it be possible to change Design B accordingly?
> >
> > I am not a font expert, but the fonts in both designs are different and I
> > am wondering if there is an official Apache Kafka font that we should
> > reuse to make sure that the logos align -- I would expect that both
> > logos (including "Apache Kafka" and "Kafka Streams" names) will be used
> > next to each other and it would look awkward if the font differs.
> >
> >
> > -Matthias
> >
> > On 8/18/20 11:28 AM, Navinder Brar wrote:
> > > Hi,
> > > Thanks for the KIP, really like the idea. I am +1(non-binding) on A
> > mainly because I felt like you have to tilt your head to realize the
> > otter's head in B.
> > > Regards,Navinder
> > >
> > > On Tuesday, 18 August, 2020, 11:44:20 pm IST, Guozhang Wang <
> > wangg...@gmail.com> wrote:
> > >
> > >  I'm leaning towards design B primarily because it reminds me of the
> > Firefox
> > > logo which I like a lot. But I also share Adam's concern that it should
> > > better not obscure the Kafka logo --- so if we can tweak a bit to fix
> it
> > my
> > > vote goes to B, otherwise A :)
> > >
> > >
> > > Guozhang
> > >
> > > On Tue, Aug 18, 2020 at 9:48 AM Bruno Cadonna 
> > wrote:
> > >
> > >> Thanks for the KIP!
> > >>
> > >> I am +1 (non-binding) for A.
> > >>
> > >> I would also like to hear opinions whether the logo should be
> colorized
> > >> or just black and white.
> > >>
> > >> Best,
> > >> Bruno
> > >>
> > >>
> > >> On 15.08.20 16:05, Adam Bellemare wrote:
> > >>> I prefer Design B, but given that I missed the discussion thread, I
> > think
> > >>> it would be 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #22

2020-08-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Include security configs for topic delete in system tests 
(#9142)


--
[...truncated 3.23 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = 

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Michael Noll
Hi all!

Great to see we are in the process of creating a cool logo for Kafka
Streams.  First, I apologize for sharing feedback so late -- I just learned
about it today. :-)

Here's my *personal, subjective* opinion on the two current logo
candidates for Kafka Streams.

TL;DR: Sorry, but I really don't like either of the proposed "otter" logos.
Let me try to explain why.

   - The choice to use an animal, regardless of which specific animal,
   seems random and doesn't fit Kafka. (What's the purpose? To show that
   KStreams is 'cute'?) In comparison, the O’Reilly books always have an
   animal cover, that’s their style, and it is very recognizable.  Kafka
   however has its own, different style.  The Kafka logo has clear, simple
   lines to achieve an abstract and ‘techy’ look, which also alludes nicely to
   its architectural simplicity. Its logo is also a smart play on the
   Kafka-identifying letter “K” and alluding to it being a distributed system
   (the circles and links that make the K).
   - The proposed logos, however, make it appear as if KStreams is a
   third-party technology that was bolted onto Kafka. They certainly, for me,
   do not convey the message "Kafka Streams is an official part of Apache
   Kafka".
   - I, too, don't like the way the main Kafka logo is obscured (a concern
   already voiced in this thread). Also, the Kafka 'logo' embedded in the
   proposed KStreams logos is not the original one.
   - None of the proposed KStreams logos visually match the Kafka logo.
   They have a totally different style, font, line art, and color scheme.
   - Execution-wise, the main Kafka logo looks great at all sizes.  The
   style of the otter logos, in comparison, becomes undecipherable at smaller
   sizes.

What I would suggest is to first agree on what the KStreams logo is
supposed to convey to the reader.  Here's my personal take:

Objective 1: First and foremost, the KStreams logo should make it clear and
obvious that KStreams is an official and integral part of Apache Kafka.
This applies to both what is depicted and how it is depicted (like font,
line art, colors).
Objective 2: The logo should allude to the role of KStreams in the Kafka
project, which is the processing part.  That is, "doing something useful to
the data in Kafka".

The "circling arrow" aspect of the current otter logos does allude to
"continuous processing", which is going in the direction of (2), but the
logos do not meet (1) in my opinion.

-Michael




On Tue, Aug 18, 2020 at 10:34 PM Matthias J. Sax  wrote:

> Adding the user mailing list -- I think we should accept votes on both
> lists for this special case, as it's not a technical decision.
>
> @Boyang: as mentioned by Bruno, can we maybe add black/white options for
> both proposals, too?
>
> I also agree that Design B is not ideal with regard to the Kafka logo.
> Would it be possible to change Design B accordingly?
>
> I am not a font expert, but the fonts in both designs are different and I
> am wondering if there is an official Apache Kafka font that we should
> reuse to make sure that the logos align -- I would expect that both
> logos (including "Apache Kafka" and "Kafka Streams" names) will be used
> next to each other and it would look awkward if the font differs.
>
>
> -Matthias
>
> On 8/18/20 11:28 AM, Navinder Brar wrote:
> > Hi,
> > Thanks for the KIP, really like the idea. I am +1(non-binding) on A
> mainly because I felt like you have to tilt your head to realize the
> otter's head in B.
> > Regards,Navinder
> >
> > On Tuesday, 18 August, 2020, 11:44:20 pm IST, Guozhang Wang <
> wangg...@gmail.com> wrote:
> >
> >  I'm leaning towards design B primarily because it reminds me of the
> Firefox
> > logo which I like a lot. But I also share Adam's concern that it should
> > better not obscure the Kafka logo --- so if we can tweak a bit to fix it
> my
> > vote goes to B, otherwise A :)
> >
> >
> > Guozhang
> >
> > On Tue, Aug 18, 2020 at 9:48 AM Bruno Cadonna 
> wrote:
> >
> >> Thanks for the KIP!
> >>
> >> I am +1 (non-binding) for A.
> >>
> >> I would also like to hear opinions whether the logo should be colorized
> >> or just black and white.
> >>
> >> Best,
> >> Bruno
> >>
> >>
> >> On 15.08.20 16:05, Adam Bellemare wrote:
> >>> I prefer Design B, but given that I missed the discussion thread, I
> think
> >>> it would be better without the Otter obscuring any part of the Kafka
> >> logo.
> >>>
> >>> On Thu, Aug 13, 2020 at 6:31 PM Boyang Chen <
> reluctanthero...@gmail.com>
> >>> wrote:
> >>>
>  Hello everyone,
> 
>  I would like to start a vote thread for KIP-657:
> 
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-657%3A+Add+Customized+Kafka+Streams+Logo
> 
>  This KIP is aiming to add a new logo for the Kafka Streams library.
> And
> >> we
>  prepared two candidates with a cute otter. You could look up the KIP
> to
>  find those logos.
> 
> 
>  Please post your vote against these 

Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-08-19 Thread Navinder Brar
 
Thanks Matthias & John,

I am glad we are converging towards an understanding. So, to summarize: we
will still keep treating this change in the KIP, and instead of providing a
reset strategy we will clean up, reset to earliest, and rebuild the state.
When we hit the exception and are rebuilding the state, we will stop all
processing and change the state of KafkaStreams to something like
“RESTORING_GLOBAL” or the like.

How do we plan to educate users on the undesired effects of using
non-compacted global topics? (via the KIP itself?)

+1 on leaving the KTable behavior change, a custom reset policy for global
stores, and connecting processors to global stores for a later stage, when
demanded.

Regards,
Navinder
On Wednesday, 19 August, 2020, 01:00:58 pm IST, Matthias J. Sax 
 wrote:  
 
 Your observation is correct. Connecting (regular) stores to processors
is necessary to "merge" sub-topologies into single ones if a store is
shared. -- For global stores, the structure of the program does not
change, and thus connecting processors to global stores is not required.

Also, given our experience with restoring regular state stores (i.e.,
partial processing of tasks that don't need restoring), it seems better to
pause processing and move all CPU and network resources to the global
thread to rebuild the global store as soon as possible, instead of
potentially slowing down the restore in order to make progress on some
tasks.
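The stop-the-world behavior described here can be modeled with a few lines of threading code. This is a minimal sketch of the idea, not Kafka Streams code: the event, thread names, and the hypothetical "RESTORING_GLOBAL" gate are assumptions for illustration.

```python
# Sketch: stream threads pause while a global thread rebuilds the global
# store, then resume once restoration completes (models RESTORING_GLOBAL).
import threading
import time

global_store_ready = threading.Event()
global_store_ready.set()  # normally the store is ready (state RUNNING)
processed = []
lock = threading.Lock()

def stream_thread(task_id, records):
    for record in records:
        global_store_ready.wait()  # blocks while state is RESTORING_GLOBAL
        with lock:
            processed.append((task_id, record))

def restore_global_store():
    global_store_ready.clear()     # enter RESTORING_GLOBAL: pause everyone
    time.sleep(0.05)               # ...replay the global topic from earliest...
    global_store_ready.set()       # back to RUNNING

workers = [threading.Thread(target=stream_thread, args=(i, range(3)))
           for i in range(2)]
restorer = threading.Thread(target=restore_global_store)
restorer.start()
for w in workers:
    w.start()
for w in workers:
    w.join()
restorer.join()
print(len(processed))  # 6: all records processed once restore completes
```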

Of course, if we collect real world experience and it becomes an issue,
we could still try to change it?


-Matthias


On 8/18/20 3:31 PM, John Roesler wrote:
> Thanks Matthias,
> 
> Sounds good. I'm on board with no public API change and just
> recovering instead of crashing.
> 
> Also, to be clear, I wouldn't drag KTables into it; I was
> just trying to wrap my head around the congruity of our
> choice for GlobalKTable with respect to KTable.
> 
> I agree that whatever we decide to do would probably also
> resolve KAFKA-7380.
> 
> Moving on to discuss the behavior change, I'm wondering if
> we really need to block all the StreamThreads. It seems like
> we only need to prevent processing on any task that's
> connected to the GlobalStore. 
> 
> I just took a look at the topology building code, and it
> actually seems that connections to global stores don't need
> to be declared. That's a bummer, since it means that we
> really do have to stop all processing while the global
> thread catches up.
> 
> Changing this seems like it'd be out of scope right now, but
> I bring it up in case I'm wrong and it actually is possible
> to know which specific tasks need to be synchronized with
> which global state stores. If we could know that, then we'd
> only have to block some of the tasks, not all of the
> threads.
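If such task-to-global-store connections were ever declared (the thread notes they are not today), the selective pause could be as simple as the following. The mapping shape and function are purely hypothetical.

```python
# Hypothetical sketch: pause only tasks connected to the restoring store,
# assuming a declared task -> global-stores mapping existed.

connections = {"task-0": {"global-store"}, "task-1": set()}

def tasks_to_pause(restoring_store, connections):
    """Return the set of task ids that read the store being restored."""
    return {task for task, stores in connections.items()
            if restoring_store in stores}

print(tasks_to_pause("global-store", connections))  # {'task-0'}
```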
> 
> Thanks,
> -John
> 
> 
> On Tue, 2020-08-18 at 14:10 -0700, Matthias J. Sax wrote:
>> Thanks for the discussion.
>>
>> I agree that this KIP is justified in any case -- even if we don't
>> change public API, as the change in behavior is significant.
>>
>> A better documentation for cleanup policy is always good (even if I am
>> not aware of any concrete complaints atm that users were not aware of
>> the implications). Of course, for a regular KTable, one can
>> enable/disable the source-topic-changelog optimization and thus can use
>> a non-compacted topic for this case, what is quite a difference to
>> global stores/tables; so maybe it's worth to point out this difference
>> explicitly.
>>
>> As mentioned before, the main purpose of the original Jira was to avoid
>> the crash situation but to allow for auto-recovering while it was an
>> open question if it makes sense / would be useful to allow users to
>> specify a custom reset policy instead of using a hard-coded "earliest"
>> strategy. -- It seem it's still unclear if it would be useful and thus
>> it might be best to not add it for now -- we can still add it later if
>> there are concrete use-cases that need this feature.
>>
>> @John: I actually agree that it's also questionable to allow a custom
>> reset policy for KTables... Not sure if we want to drag this question
>> into this KIP though?
>>
>> So it seem, we all agree that we actually don't need any public API
>> changes, but we only want to avoid crashing?
>>
>> For this case, to preserve the current behavior that guarantees that the
>> global store/table is always loaded first, it seems we need to have a
>> stop-the-world mechanism for the main `StreamThreads` for this case --
>> do we need to add a new state to KafkaStreams client for this case?
>>
>> Having a new state might also be helpful for
>> https://issues.apache.org/jira/browse/KAFKA-7380 ?
>>
>>
>>
>> -Matthias
>>
>>
>>
>>
>> On 8/17/20 7:34 AM, John Roesler wrote:
>>> Hi Navinder,
>>>
>>> I see what you mean about the global consumer being similar
>>> to the restore consumer.
>>>
>>> I also agree that automatically performing the recovery
>>> steps should be strictly an improvement over the current
>>> situation.
>>>
>>> Also, yes, 

[jira] [Created] (KAFKA-10421) Kafka Producer deadlocked on get call

2020-08-19 Thread Ranadeep Deb (Jira)
Ranadeep Deb created KAFKA-10421:


 Summary: Kafka Producer deadlocked on get call
 Key: KAFKA-10421
 URL: https://issues.apache.org/jira/browse/KAFKA-10421
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.3.0
 Environment: CentOS7
Reporter: Ranadeep Deb


I have been experiencing a similar issue in 2.3.0

I have a multi-threaded application with each thread sending an individual 
message to the broker. There are instances where I have observed that the 
producer threads get stuck on the `Producer.send().get()` call. I was not sure 
what was causing this issue, but after landing on this thread 
(https://issues.apache.org/jira/browse/KAFKA-8135) I suspect that an 
intermittent network outage might be the reason.

I am curious about how to solve this.
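A common mitigation for this class of hang, sketched below with Python's stdlib futures rather than the Kafka client, is to never wait on a send acknowledgement without a bound. The Kafka producer's future supports a `get(timeout, unit)` overload, and newer clients also bound delivery via the `delivery.timeout.ms` producer config; `fake_send` here is a stand-in for the broker acknowledgement.

```python
# Sketch: bounded wait on a send future instead of a bare, unbounded get().
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def fake_send(record):
    return f"ack:{record}"  # stand-in for a broker acknowledgement

future = executor.submit(fake_send, "msg-1")
try:
    metadata = future.result(timeout=5.0)  # bounded, unlike a bare get()
except concurrent.futures.TimeoutError:
    metadata = None  # handle or retry instead of hanging forever
executor.shutdown()
print(metadata)  # ack:msg-1
```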

 

Following are the stack traces of the Java threads

 

Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode):

"Attach Listener" #15081 daemon prio=9 os_prio=0 tid=0x7f9c50002000 nid=0xe572 waiting on condition [0x]
   java.lang.Thread.State: RUNNABLE

"pool-14658-thread-9" #15071 prio=5 os_prio=0 tid=0x7f9c9842f800 nid=0x397b waiting on condition [0x7f9c378fb000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x0007703e85b8> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
	at com.t100.sender.T100KafkaProducer.runProducer(T100KafkaProducer.java:104)
	at com.t100.sender.T100KafkaProducer.run(T100KafkaProducer.java:165)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

"pool-14658-thread-8" #15070 prio=5 os_prio=0 tid=0x7f9c9842e000 nid=0x397a waiting on condition [0x7f9c379fc000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x0007704dabb0> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
	at com.t100.sender.T100KafkaProducer.runProducer(T100KafkaProducer.java:104)
	at com.t100.sender.T100KafkaProducer.run(T100KafkaProducer.java:165)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

"pool-14658-thread-7" #15069 prio=5 os_prio=0 tid=0x7f9c9842d800 nid=0x3979 waiting on condition [0x7f9c371f4000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x0007705ed590> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at 
[jira] [Created] (KAFKA-10420) group instance id Optional.empty failed due to UNKNOWN_MEMBER_ID, resetting generation when running kafka client 2.6 against broker 2.3.1

2020-08-19 Thread Tomasz Kaszuba (Jira)
Tomasz Kaszuba created KAFKA-10420:
--

 Summary: group instance id Optional.empty failed due to 
UNKNOWN_MEMBER_ID, resetting generation when running kafka client 2.6 against 
broker 2.3.1
 Key: KAFKA-10420
 URL: https://issues.apache.org/jira/browse/KAFKA-10420
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.6.0
Reporter: Tomasz Kaszuba


After upgrading our Kafka clients to 2.6.0 and running them against broker 
version 2.3.1, we got errors where the consumer groups are reset. We did not 
notice this happening with client 2.5.0.
{noformat}
2020-08-17 04:35:27.787  INFO 1 --- [-StreamThread-1] 
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer,
 groupId=ieb-x07-baseline-pc-data-storage-incurred-pattern] Attempt to 
heartbeat with Generation{generationId=11, 
memberId='ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer-3902e2a9-1755-466b-9255-d144be25876f',
 protocol='stream'} and group instance id Optional.empty failed due to 
UNKNOWN_MEMBER_ID, resetting generation2020-08-17 04:35:27.787  INFO 1 --- 
[-StreamThread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer,
 groupId=ieb-x07-baseline-pc-data-storage-incurred-pattern] Giving away all 
assigned partitions as lost since generation has been reset,indicating that 
consumer is no longer part of the group2020-08-17 04:35:27.787  INFO 1 --- 
[-StreamThread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer,
 groupId=ieb-x07-baseline-pc-data-storage-incurred-pattern] Lost previously 
assigned partitions ieb.publish.baseline_pc.incurred_pattern-02020-08-17 
04:35:27.787  INFO 1 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread 
    : stream-thread 
[ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1]
 at state RUNNING: partitions [ieb.publish.baseline_pc.incurred_pattern-0] lost 
due to missed rebalance.    lost active tasks: [0_0]    lost assigned 
standby tasks: []2020-08-17 04:35:27.787  INFO 1 --- [-StreamThread-1] 
o.a.k.s.processor.internals.StreamTask   : stream-thread 
[ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1]
 task [0_0] Suspended running 2020-08-17 04:35:27.788  INFO 1 --- 
[-StreamThread-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-restore-consumer,
 groupId=null] Unsubscribed all topics or patterns and assigned partitions 
2020-08-17 04:35:27.789  INFO 1 --- [-StreamThread-1] 
o.a.k.s.p.internals.RecordCollectorImpl  : stream-thread 
[ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1]
 task [0_0] Closing record collector dirty 2020-08-17 04:35:27.790  INFO 1 --- 
[-StreamThread-1] o.a.k.s.processor.internals.StreamTask   : stream-thread 
[ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1]
 task [0_0] Closed dirty 2020-08-17 04:35:27.790  INFO 1 --- [-StreamThread-1] 
o.a.k.clients.producer.KafkaProducer : [Producer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-0_0-producer,
 transactionalId=ieb-x07-baseline-pc-data-storage-incurred-pattern-0_0] Closing 
the Kafka producer with timeoutMillis = 9223372036854775807 ms. 2020-08-17 
04:35:27.791  INFO 1 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread 
    : stream-thread 
[ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1]
 partitions lost took 4 ms. 2020-08-17 04:35:27.791  INFO 1 --- 
[-StreamThread-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer,
 groupId=ieb-x07-baseline-pc-data-storage-incurred-pattern] (Re-)joining group 
2020-08-17 04:35:27.795  INFO 1 --- [-StreamThread-1] 
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
clientId=ieb-x07-baseline-pc-data-storage-incurred-pattern-36fbee26-0c5f-4993-a203-f34c0cac7caf-StreamThread-1-consumer,
 groupId=ieb-x07-baseline-pc-data-storage-incurred-pattern] Join group failed 
with org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
needs to have a valid member id before actually entering a consumer group. 
2020-08-17 04:35:27.795  INFO 1 
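For context, the "group instance id Optional.empty" in the trace above refers to static membership (KIP-345): the field stays empty unless `group.instance.id` is set on the consumer. A minimal sketch of the relevant properties, with made-up addresses and ids (this illustrates the config key, not a fix for the reported bug):

```java
import java.util.Properties;

public class StaticMembershipConfig {
    public static void main(String[] args) {
        // Hypothetical consumer configuration sketch. Without "group.instance.id",
        // the coordinator reports the member's group instance id as Optional.empty,
        // as seen in the trace above. Setting it (supported by brokers >= 2.3)
        // gives the member a stable identity across restarts.
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");          // placeholder address
        props.put("group.id", "example-group");                 // placeholder group id
        props.put("group.instance.id", "example-instance-1");   // enables static membership
        System.out.println(props.getProperty("group.instance.id"));
    }
}
```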

Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-08-19 Thread Matthias J. Sax
Your observation is correct. Connecting (regular) stores to processors
is necessary to "merge" sub-topologies into single ones if a store is
shared. -- For global stores, the structure of the program does not
change and thus connecting processors to global stores is not required.
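For reference, a sketch of how a global store is wired via the Topology API (against the 2.6 `Processor` API; store, node, and topic names are made up). The store is declared together with its source topic and update processor, and downstream processors read it without declaring any connection:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class GlobalStoreTopology {

    public static Topology build() {
        Topology topology = new Topology();
        topology.addGlobalStore(
            Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("global-store"),  // made-up store name
                    Serdes.String(), Serdes.String())
                .withLoggingDisabled(),       // the source topic itself is the changelog
            "global-source",                  // made-up source node name
            Serdes.String().deserializer(),
            Serdes.String().deserializer(),
            "global-topic",                   // the (compacted) input topic
            "global-processor",               // made-up update-processor name
            () -> new Processor<String, String>() {
                private KeyValueStore<String, String> store;

                @SuppressWarnings("unchecked")
                @Override
                public void init(ProcessorContext context) {
                    store = (KeyValueStore<String, String>) context.getStateStore("global-store");
                }

                @Override
                public void process(String key, String value) {
                    store.put(key, value);    // keep the store in sync with the topic
                }

                @Override
                public void close() { }
            });
        return topology;
    }
}
```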

Also, given our experience with restoring regular state stores (i.e.,
partial processing of tasks that don't need restore), it seems better to
pause processing and move all CPU and network resources to the global
thread to rebuild the global store as soon as possible instead of
potentially slowing down the restore in order to make progress on some
tasks.

Of course, if we collect real world experience and it becomes an issue,
we could still try to change it?


-Matthias


On 8/18/20 3:31 PM, John Roesler wrote:
> Thanks Matthias,
> 
> Sounds good. I'm on board with no public API change and just
> recovering instead of crashing.
> 
> Also, to be clear, I wouldn't drag KTables into it; I was
> just trying to wrap my head around the congruity of our
> choice for GlobalKTable with respect to KTable.
> 
> I agree that whatever we decide to do would probably also
> resolve KAFKA-7380.
> 
> Moving on to discuss the behavior change, I'm wondering if
> we really need to block all the StreamThreads. It seems like
> we only need to prevent processing on any task that's
> connected to the GlobalStore. 
> 
> I just took a look at the topology building code, and it
> actually seems that connections to global stores don't need
> to be declared. That's a bummer, since it means that we
> really do have to stop all processing while the global
> thread catches up.
> 
> Changing this seems like it'd be out of scope right now, but
> I bring it up in case I'm wrong and it actually is possible
> to know which specific tasks need to be synchronized with
> which global state stores. If we could know that, then we'd
> only have to block some of the tasks, not all of the
> threads.
> 
> Thanks,
> -John
> 
> 
> On Tue, 2020-08-18 at 14:10 -0700, Matthias J. Sax wrote:
>> Thanks for the discussion.
>>
>> I agree that this KIP is justified in any case -- even if we don't
>> change public API, as the change in behavior is significant.
>>
>> A better documentation for cleanup policy is always good (even if I am
>> not aware of any concrete complaints atm that users were not aware of
>> the implications). Of course, for a regular KTable, one can
>> enable/disable the source-topic-changelog optimization and thus can use
>> a non-compacted topic for this case, which is quite a difference from
>> global stores/tables; so maybe it's worth pointing out this difference
>> explicitly.
>>
>> As mentioned before, the main purpose of the original Jira was to avoid
>> the crash situation but to allow for auto-recovering while it was an
>> open question if it makes sense / would be useful to allow users to
>> specify a custom reset policy instead of using a hard-coded "earliest"
>> strategy. -- It seems it's still unclear if it would be useful and thus
>> it might be best to not add it for now -- we can still add it later if
>> there are concrete use-cases that need this feature.
>>
>> @John: I actually agree that it's also questionable to allow a custom
>> reset policy for KTables... Not sure if we want to drag this question
>> into this KIP though?
>>
>> So it seems we all agree that we actually don't need any public API
>> changes, but we only want to avoid crashing?
>>
>> For this case, to preserve the current behavior that guarantees that the
>> global store/table is always loaded first, it seems we need to have a
>> stop-the-world mechanism for the main `StreamThreads` for this case --
>> do we need to add a new state to KafkaStreams client for this case?
>>
>> Having a new state might also be helpful for
>> https://issues.apache.org/jira/browse/KAFKA-7380 ?
>>
>>
>>
>> -Matthias
>>
>>
>>
>>
>> On 8/17/20 7:34 AM, John Roesler wrote:
>>> Hi Navinder,
>>>
>>> I see what you mean about the global consumer being similar
>>> to the restore consumer.
>>>
>>> I also agree that automatically performing the recovery
>>> steps should be strictly an improvement over the current
>>> situation.
>>>
>>> Also, yes, it would be a good idea to make it clear that the
>>> global topic should be compacted in order to ensure correct
>>> semantics. It's the same way with input topics for KTables;
>>> we rely on users to ensure the topics are compacted, and if
>>> they aren't, then the execution semantics will be broken.
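The compaction requirement above can be made concrete at topic-creation time. A hedged sketch using the AdminClient API (broker address and topic name are placeholders; this is one way to do it, not taken from the thread):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // cleanup.policy=compact retains the latest value per key, so a
            // rebuilt global store (or KTable) sees a complete snapshot.
            NewTopic topic = new NewTopic("global-topic", 1, (short) 3)  // placeholder name
                .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```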
>>>
>>> Thanks,
>>> -John
>>>
>>> On Sun, 2020-08-16 at 11:44 +, Navinder Brar wrote:
 Hi John,

 Thanks for your inputs. Since global topics are, in a way, their own
 changelog, wouldn't the global consumers be more akin to restore consumers
 than the main consumer?

 I am also +1 on catching the exception and setting it to the earliest for 
 now. Whenever an instance starts,