[jira] [Resolved] (KAFKA-10293) fix flaky streams/streams_eos_test.py

2020-08-25 Thread Bruno Cadonna (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bruno Cadonna resolved KAFKA-10293.
---
Resolution: Fixed

> fix flaky streams/streams_eos_test.py
> -
>
> Key: KAFKA-10293
> URL: https://issues.apache.org/jira/browse/KAFKA-10293
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Priority: Major
>
> {quote}
> Module: kafkatest.tests.streams.streams_eos_test
> Class:  StreamsEosTest
> Method: test_failure_and_recovery_complex
> Arguments:
> {
>   "processing_guarantee": "exactly_once"
> }
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-662: Throw Exception when Source Topics of a Streams App are Deleted

2020-08-25 Thread Bruno Cadonna

Hi Guozhang,

Thank you for pointing this out. I added the package to the KIP.

Best,
Bruno
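
For anyone following along, a minimal sketch of what the class could look
like, assuming (per the discussion below) that it lives in the
org.apache.kafka.streams.errors package and extends StreamsException; the
exact shape is the KIP's to define:

package org.apache.kafka.streams.errors;

// Hypothetical sketch of the exception proposed in KIP-662; the final
// definition is whatever the KIP specifies.
public class MissingSourceTopicException extends StreamsException {

    public MissingSourceTopicException(final String message) {
        super(message);
    }
}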

On 24.08.20 20:55, Guozhang Wang wrote:

Hello Bruno,

Thanks for the KIP, it sounds good to me as well. Just a minor comment: we
should include which package the new "MissingSourceTopicException" class
belongs to.



Guozhang


On Fri, Aug 21, 2020 at 11:53 AM John Roesler  wrote:


Thanks for the KIP, Bruno!

Your proposal sounds good to me.

-John

On Fri, 2020-08-21 at 11:18 -0700, Sophie Blee-Goldman
wrote:

Thanks for the KIP! I'm totally in favor of this approach and to be
honest, have always wondered why we just silently shut down instead of
throwing an exception. This has definitely been a source of confusion
for users in my personal experience.

I was originally hesitant to extend StreamsException since I always
thought that anything extending from KafkaException was supposed to
"indicate Streams internal errors" -- a phrase I'm quoting from Streams
logs directly -- but I now see that we're actually somewhat inconsistent
here. Perhaps "Streams internal errors" does not in fact mean internal
to Streams itself but just any error that occurs during stream
processing?

Anyways, I'm looking forward to cleaning up the exception hierarchy so
we get a clear division of user vs "internal" error, but within the
current framework this SGTM.

On Fri, Aug 21, 2020 at 8:06 AM Bruno Cadonna 
wrote:

Hi,

I would like to propose the following KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-662%3A+Throw+Exception+when+Source+Topics+of+a+Streams+App+are+Deleted

Best,
Bruno








[VOTE] KIP-662: Throw Exception when Source Topics of a Streams App are Deleted

2020-08-25 Thread Bruno Cadonna

Hi,

I would like to start the vote for

https://cwiki.apache.org/confluence/display/KAFKA/KIP-662%3A+Throw+Exception+when+Source+Topics+of+a+Streams+App+are+Deleted

Best,
Bruno


[jira] [Created] (KAFKA-10430) Hook support

2020-08-25 Thread Dennis Jaheruddin (Jira)
Dennis Jaheruddin created KAFKA-10430:
-

 Summary: Hook support
 Key: KAFKA-10430
 URL: https://issues.apache.org/jira/browse/KAFKA-10430
 Project: Kafka
  Issue Type: Improvement
Reporter: Dennis Jaheruddin


Currently, big data storage systems (like HDFS and HBase) allow other tooling,
for instance Atlas, to hook into them.

As data movement tools become more open to Kafka as well, it makes sense to
shift significant amounts of storage to Kafka, for instance when one just needs
a buffer.

However, this may be blocked by governance constraints, as currently producers
and consumers would need to actively make an effort to log governance
information (whereas something like HDFS can guarantee its capture).

Hence I believe we should make it possible to hook into Kafka as well, so one
does not simply depend on the integrity of the producers and consumers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10431) ProducerPerformance with payloadFile arg: add support for sequential or random outputs

2020-08-25 Thread Zaahir Laher (Jira)
Zaahir Laher created KAFKA-10431:


 Summary: ProducerPerformance with payloadFile arg: add support for 
sequential or random outputs
 Key: KAFKA-10431
 URL: https://issues.apache.org/jira/browse/KAFKA-10431
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.5.1
Reporter: Zaahir Laher


When using ProducerPerformance with the --payloadFile argument and a file
containing multiple payloads (the default is one payload per line),
ProducerPerformance randomly chooses payloads from the file.

This can result in the same payload being sent repeatedly, which may not be the
desired behavior in some cases.

It would be useful to have another argument that allows for sequential payload
submission if required. If left blank, this argument would default to false
(i.e. the default random selection).
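
A minimal sketch of how such an argument could drive payload selection; the
class and field names here are hypothetical, not actual ProducerPerformance
code:
{code:java}
import java.util.List;
import java.util.Random;

// Hypothetical sketch: with sequential=true, payloads are sent in file order
// and wrap around at the end; otherwise the existing random pick is kept.
final class PayloadSelector {
    private final List<byte[]> payloads;
    private final boolean sequential;
    private final Random random = new Random();
    private int next = 0;

    PayloadSelector(final List<byte[]> payloads, final boolean sequential) {
        this.payloads = payloads;
        this.sequential = sequential;
    }

    byte[] nextPayload() {
        if (sequential) {
            final byte[] payload = payloads.get(next);
            next = (next + 1) % payloads.size(); // wrap around at end of file
            return payload;
        }
        return payloads.get(random.nextInt(payloads.size())); // current behavior
    }
}
{code}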



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] vvcephei merged pull request #295: MINOR: fix for text wrapping in some tables in docs

2020-08-25 Thread GitBox


vvcephei merged pull request #295:
URL: https://github.com/apache/kafka-site/pull/295





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10432) LeaderEpochCache is incorrectly recovered on segment recovery for epoch 0

2020-08-25 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-10432:


 Summary: LeaderEpochCache is incorrectly recovered on segment 
recovery for epoch 0
 Key: KAFKA-10432
 URL: https://issues.apache.org/jira/browse/KAFKA-10432
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.6.0, 2.5.0, 2.4.0, 2.3.0
Reporter: Lucas Bradstreet


I added some functionality to the system tests to compare epoch cache lineages 
([https://github.com/apache/kafka/pull/9213]), and I found a bug in leader 
epoch cache recovery.

The test hard-kills a broker before the cache has been flushed; the broker then
starts up and goes through log recovery. After recovery there is divergence in
the epoch caches for epoch 0:
{noformat}
AssertionError: leader epochs for output-topic-1 didn't match
 [{0: 9393L, 2: 9441L, 4: 42656L},
 {0: 0L, 2: 9441L, 4: 42656L},
 {0: 0L, 2: 9441L, 4: 42656L}]
{noformat}
The cache is supposed to include the offset for epoch 0, but recovery skips it
([https://github.com/apache/kafka/blob/487b3682ebe0eefde3445b37ee72956451a9d15e/core/src/main/scala/kafka/log/LogSegment.scala#L364])
due to
[https://github.com/apache/kafka/commit/d152989f26f51b9004b881397db818ad6eaf0392].
The epoch is then stamped with a later offset when fetching from the leader.

I'm not sure why the recovery code includes the condition 
`batch.partitionLeaderEpoch > 0`. I discussed this with Jason Gustafson and he 
believes it may have been intended to avoid assigning negative epochs but is 
not sure why it was added. None of the tests fail with this check removed.
{noformat}
  leaderEpochCache.foreach { cache =>
    // The `> 0` check below is what skips epoch 0 during recovery.
    if (batch.partitionLeaderEpoch > 0 && cache.latestEpoch.forall(batch.partitionLeaderEpoch > _))
      cache.assign(batch.partitionLeaderEpoch, batch.baseOffset)
  }
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10433) Reuse the ByteBuffer in validating compressed records

2020-08-25 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10433:
--

 Summary: Reuse the ByteBuffer in validating compressed records 
 Key: KAFKA-10433
 URL: https://issues.apache.org/jira/browse/KAFKA-10433
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


{code:java}
for (batch <- batches) {
  validateBatch(topicPartition, firstBatch, batch, origin, toMagic, brokerTopicStats)
  uncompressedSizeInBytes += AbstractRecords.recordBatchHeaderSizeInBytes(toMagic, batch.compressionType())

  // A non-caching supplier is passed in for every batch, so each batch
  // allocates (and throws away) its own decompression buffer.
  val recordsIterator = if (inPlaceAssignment && firstBatch.magic >= RecordBatch.MAGIC_VALUE_V2)
    batch.skipKeyValueIterator(BufferSupplier.NO_CACHING)
  else
    batch.streamingIterator(BufferSupplier.NO_CACHING)
{code}

This is a hot method, so reusing the ByteBuffer can reduce memory allocation
considerably if the compression type supports a BufferSupplier.
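
A minimal sketch of the reuse idea: BufferSupplier.create() is the real caching
supplier in org.apache.kafka.common.utils, but the surrounding method is an
assumption for illustration, not the actual validation code:
{code:java}
import org.apache.kafka.common.record.MemoryRecords;
import org.apache.kafka.common.record.MutableRecordBatch;
import org.apache.kafka.common.record.Record;
import org.apache.kafka.common.utils.BufferSupplier;
import org.apache.kafka.common.utils.CloseableIterator;

final class ValidationSketch {
    // One caching supplier shared across all batches, so the decompression
    // buffer is recycled instead of re-allocated per batch.
    static void iterateAll(final MemoryRecords records) {
        final BufferSupplier supplier = BufferSupplier.create();
        try {
            for (final MutableRecordBatch batch : records.batches()) {
                try (CloseableIterator<Record> it = batch.streamingIterator(supplier)) {
                    while (it.hasNext()) {
                        final Record record = it.next();
                        // ... validate the record ...
                    }
                }
            }
        } finally {
            supplier.close();
        }
    }
}
{code}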



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-647: Add ability to handle late messages in streams-aggregation

2020-08-25 Thread Igor Piddubnyi

Hi Matthias, Bruno,
Let me elaborate on my suggestions regarding late-record handling.

> why you rejected the more general solution involving a callback
The main reasons why I tend toward the topic approach are API consistency
and cleanness of the user code.
The problem with a callback, in my opinion, is that users of the API
will be forced to define handling for each late item using procedural code.
The same custom handling (statistics, db-insert, etc.) could be achieved
by consuming the topic.
I acknowledge that the topic approach introduces overhead compared to
the callback; however, by paying this price users get all
stream-processing features.
Looking at it from the opposite side: assuming there is a callback and
there is a need to persist data in a topic, one would have to fall
back to the producer API to implement such handling. That would not be
so clean, from my point of view.


>I am wondering if we should try to do a built-in "dead-letter-queue"
feature that would be general purpose?
A generic DLQ might not be a bad idea; however, there could be more than
one aggregation. Assuming the DLQ is generic and contains messages with
other kinds of errors, the API definitely needs an ability to distinguish
between messages with different types of errors.
This would definitely be a significant change. Taking into account that
this is my first experience with kafka-internals, I tried to keep the
suggested change as small as possible.


> I am also wondering, if piping late records into a DLQ is the only
way to handle them
Definitely not, but in my opinion the streams API fits better for any
custom handling that a user can define.
E.g. I don't see any problem defining another processing pipe, which
consumes from the DLQ, performs any necessary side effects, and then gets
merged back anywhere.


>Can you elaborate on the use-case how a user would use the preserved
late records?
As just explained, I see this as another processing pipe, or just a
consumer, reading data from this topic and doing any necessary handling
(see the sketch below). This might happen even in another service, if
required by the logic. The docs should probably be updated with
respective examples.
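
For illustration, a minimal sketch of such a pipe; the topic name and
types are hypothetical, since the KIP's API is still under discussion:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class LateRecordPipe {
    // Consume the (hypothetical) late-record topic like any other stream
    // and apply custom handling: statistics, db-inserts, corrections.
    public static void wire(final StreamsBuilder builder) {
        final KStream<String, String> late = builder.stream("aggregation-late-records");
        late.foreach((key, value) -> {
            // e.g. trigger complex data-correction in a database
        });
    }
}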


E.g. in the system I'm working on, such handling would involve complex 
data-correction in the database and would be executed on a separate 
instance.
My arguments and answers might be quite biased, because I mostly 
consider this use-case for the application currently being developed.

Please share your opinion and feedback.

Regards, Igor.

On 11.08.20 09:25, Bruno Cadonna wrote:

Hi Igor,

Thanks for the KIP!

Similar to Matthias, I am also wondering why you rejected the more
general solution involving a callback. I also think that writing to a
topic is just one of multiple ways to handle late records. For
example, one could compute statistics over the late records before or
instead of writing the records to a topic. Or one could write the
records to a database for analysis.


Best,
Bruno

On 28.07.20 05:14, Matthias J. Sax wrote:

Thanks for the KIP Igor.

What you propose sounds a little bit like a "dead-letter-queue" pattern.
Thus, I am wondering if we should try to do a built-in
"dead-letter-queue" feature that would be general purpose? For example,
users can drop messages in the source node if they don't have a valid
timestamp or if a deserialization error occurs, and they face a similar
issue for those cases (even if it might be a little simpler to handle
those cases, as custom user code is executed).

For a general purpose DLQ, the feature should be exposed at the Processor
API level though, and the DSL would just use this feature (instead of
introducing it as a DSL feature).

Late records are of course only defined at the DSL level, as, for the
PAPI, users need to define custom semantics. Also, late records are not
really corrupted. However, the pattern seems similar enough, i.e., piping
late data into a topic is just a special case of a DLQ?

I am also wondering if piping late records into a DLQ is the only way
to handle them. For example, I could imagine that a user just wants to
trigger a side effect (similar to what you mention in rejected
alternatives)? Or maybe a user might even want to somehow process those
records and feed them back into the actual processing pipeline.

Last, a DLQ is only useful if somebody consumes from the topic and does
something with the data. Can you elaborate on the use-case of how a user
would use the preserved late records?



-Matthias

On 7/27/20 1:45 AM, Igor Piddubnyi wrote:

Hi everybody,
I would like to start off the discussion for KIP-647:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-647%3A+Add+ability+to+handle+late+messages+in+streams-aggregation

This KIP proposes a minor adjustment in the kafka-streams
aggregation-api, adding an ability for processing late messages.
[WIP] PR here: https://github.com/apache/kafka/pu

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-25 Thread Harsha Chintalapani
Thanks everyone for attending the meeting today.
Here is the recording
https://drive.google.com/file/d/14PRM7U0OopOOrJR197VlqvRX5SXNtmKj/view?usp=sharing

Notes:

   1. The KIP is updated with the follower fetch protocol and ready to be reviewed
   2. Satish to capture schema of internal metadata topic in the KIP
   3. We will update the KIP with details of different cases
   4. Test plan will be captured in a doc and added to the KIP
   5. Add a section "Limitations" to capture the capabilities that will be
   introduced with this KIP and what will not be covered in this KIP.

Please add to it if I missed anything. I will produce formal meeting notes
from the next meeting onwards.

Thanks,
Harsha



On Mon, Aug 24, 2020 at 9:42 PM, Ying Zheng  wrote:

> We did some basic feature tests at Uber. The test cases and results are
> shared in this google doc:
> https://docs.google.com/spreadsheets/d/1XhNJqjzwXvMCcAOhEH0sSXU6RTvyoSf93DHF-YMfGLk/edit?usp=sharing
>
> The performance test results were already shared in the KIP last month.
>
> On Mon, Aug 24, 2020 at 11:10 AM Harsha Ch  wrote:
>
> "Understand commitments towards driving design & implementation of the KIP
> further and how it aligns with participant interests in contributing to the
> efforts (ex: in the context of Uber’s Q3/Q4 roadmap)." What is that about?
>
> On Mon, Aug 24, 2020 at 11:05 AM Kowshik Prakasam 
> wrote:
>
> Hi Harsha,
>
> The following google doc contains a proposal for temporary agenda for the
> KIP-405 sync meeting tomorrow:
>
> https://docs.google.com/document/d/1pqo8X5LU8TpwfC_iqSuVPezhfCfhGkbGN2TqiPA3LBU/edit
>
> Please could you add it to the Google calendar invite?
>
> Thank you.
>
> Cheers,
> Kowshik
>
> On Thu, Aug 20, 2020 at 10:58 AM Harsha Ch  wrote:
>
> Hi All,
>
> Scheduled a meeting for Tuesday 9am - 10am. I can record and upload for
> community to be able to follow the discussion.
>
> Jun, please add the required folks on confluent side.
>
> Thanks,
>
> Harsha
>
> On Thu, Aug 20, 2020 at 12:33 AM, Alexandre Dupriez
> < alexandre.dupriez@gmail.com > wrote:
>
> Hi Jun,
>
> Many thanks for your initiative.
>
> If you like, I am happy to attend at the time you suggested.
>
> Many thanks,
> Alexandre
>
> On Wed, Aug 19, 2020 at 22:00, Harsha Ch < harsha.ch@gmail.com > wrote:
>
> Hi Jun,
> Thanks. This will help a lot. Tuesday will work for us.
> -Harsha
>
> On Wed, Aug 19, 2020 at 1:24 PM Jun Rao < jun@confluent.io > wrote:
>
> Hi, Satish, Ying, Harsha,
>
> Do you think it would be useful to have a regular virtual meeting to
> discuss this KIP? The goal of the meeting will be sharing
> design/development progress and discussing any open issues to
> accelerate this KIP. If so, will every Tuesday (from next week)
> 9am-10am PT work for you? I can help set up a Zoom meeting, invite
> everyone who might be interested, have it recorded and shared, etc.
>
> Thanks,
>
> Jun
>
> On Tue, Aug 18, 2020 at 11:01 AM Satish Duggana
> < satish.duggana@gmail.com > wrote:
>
> Hi Kowshik,
>
> Thanks for looking into the KIP and sending your comments.
>
> 5001. Under the section "Follower fetch protocol in detail", the
> next-local-offset is the offset upto which the segments are copied to
> remote storage. Instead, would last-tiered-offset be a better name than
> next-local-offset? last-tiered-offset seems to naturally align well with
> the definition provided in the KIP.
>
> Both next-local-offset and local-log-start-offset were introduced to
> talk about offsets related to local log. We are fine with
> last-tiered-offset too as you suggested.
>
> 5002. After leadership is established for a partition, the leader would
> begin uploading a segment to remote storage. If successful, the leader
> would write the updated RemoteLogSegmentMetadata to the metadata topic
> (via RLMM.putRemoteLogSegmentData). However, for defensive reasons, it
> seems useful that before the first time the segment is uploaded by the
> leader for a partition, the leader should ensure to catch up to all the
> metadata events written so far in the metadata topic for that partition
> (ex: by previous leader). To achieve this, the leader could start a
> lease (using an establish_leader metadata event) before commencing
> tiering, and wait until the event is read back. For example, this seems
> useful to avoid cases where zombie leaders can be active for the same
> partition. This can also prove useful to help avoid making decisions on
> which segments to be uploaded for a partition, until the current leader
> has caught up to a complete view of all segments uploaded for the
> partition so far (otherwise this may cause same segment being uploaded
> twice -- once by the previ

Re: [VOTE] KIP-662: Throw Exception when Source Topics of a Streams App are Deleted

2020-08-25 Thread Guozhang Wang
+1. Thanks Bruno!


Guozhang

On Tue, Aug 25, 2020 at 4:00 AM Bruno Cadonna  wrote:

> Hi,
>
> I would like to start the vote for
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-662%3A+Throw+Exception+when+Source+Topics+of+a+Streams+App+are+Deleted
>
> Best,
> Bruno
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-25 Thread Jun Rao
Hi, Harsha,

Thanks for the summary. Could you add the summary and the recording link to
the last section of
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals ?

Jun

On Tue, Aug 25, 2020 at 11:12 AM Harsha Chintalapani 
wrote:

> Thanks everyone for attending the meeting today.
> Here is the recording
>
> https://drive.google.com/file/d/14PRM7U0OopOOrJR197VlqvRX5SXNtmKj/view?usp=sharing
>
> Notes:
>
>1. The KIP is updated with the follower fetch protocol and ready to be reviewed
>2. Satish to capture schema of internal metadata topic in the KIP
>3. We will update the KIP with details of different cases
>4. Test plan will be captured in a doc and added to the KIP
>5. Add a section "Limitations" to capture the capabilities that will be
>introduced with this KIP and what will not be covered in this KIP.
>
> Please add to it if I missed anything. I will produce formal meeting notes
> from the next meeting onwards.
>
> Thanks,
> Harsha
>
>
>
> On Mon, Aug 24, 2020 at 9:42 PM, Ying Zheng 
> wrote:
>
> > We did some basic feature tests at Uber. The test cases and results are
> > shared in this google doc:
> > https://docs.google.com/spreadsheets/d/1XhNJqjzwXvMCcAOhEH0sSXU6RTvyoSf93DHF-YMfGLk/edit?usp=sharing
> >
> > The performance test results were already shared in the KIP last month.
> >
> > On Mon, Aug 24, 2020 at 11:10 AM Harsha Ch  wrote:
> >
> > "Understand commitments towards driving design & implementation of the
> > KIP further and how it aligns with participant interests in contributing
> > to the efforts (ex: in the context of Uber’s Q3/Q4 roadmap)." What is
> > that about?
> >
> > On Mon, Aug 24, 2020 at 11:05 AM Kowshik Prakasam < kpraka...@confluent.io >
> > wrote:
> >
> > Hi Harsha,
> >
> > The following google doc contains a proposal for temporary agenda for the
> > KIP-405 sync meeting tomorrow:
> >
> > https://docs.google.com/document/d/1pqo8X5LU8TpwfC_iqSuVPezhfCfhGkbGN2TqiPA3LBU/edit
> >
> > Please could you add it to the Google calendar invite?
> >
> > Thank you.
> >
> > Cheers,
> > Kowshik
> >
> > On Thu, Aug 20, 2020 at 10:58 AM Harsha Ch  wrote:
> >
> > Hi All,
> >
> > Scheduled a meeting for Tuesday 9am - 10am. I can record and upload for
> > community to be able to follow the discussion.
> >
> > Jun, please add the required folks on confluent side.
> >
> > Thanks,
> >
> > Harsha
> >
> > On Thu, Aug 20, 2020 at 12:33 AM, Alexandre Dupriez
> > < alexandre.dupriez@gmail.com > wrote:
> >
> > Hi Jun,
> >
> > Many thanks for your initiative.
> >
> > If you like, I am happy to attend at the time you suggested.
> >
> > Many thanks,
> > Alexandre
> >
> > On Wed, Aug 19, 2020 at 22:00, Harsha Ch < harsha.ch@gmail.com > wrote:
> >
> > Hi Jun,
> > Thanks. This will help a lot. Tuesday will work for us.
> > -Harsha
> >
> > On Wed, Aug 19, 2020 at 1:24 PM Jun Rao < jun@confluent.io > wrote:
> >
> > Hi, Satish, Ying, Harsha,
> >
> > Do you think it would be useful to have a regular virtual meeting to
> > discuss this KIP? The goal of the meeting will be sharing
> > design/development progress and discussing any open issues to
> > accelerate this KIP. If so, will every Tuesday (from next week)
> > 9am-10am PT work for you? I can help set up a Zoom meeting, invite
> > everyone who might be interested, have it recorded and shared, etc.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Aug 18, 2020 at 11:01 AM Satish Duggana
> > < satish.duggana@gmail.com > wrote:
> >
> > Hi Kowshik,
> >
> > Thanks for looking into the KIP and sending your comments.
> >
> > 5001. Under the section "Follower fetch protocol in detail", the
> > next-local-offset is the offset upto which the segments are copied to
> > remote storage. Instead, would last-tiered-offset be a better name than
> > next-local-offset? last-tiered-offset seems to naturally align well with
> > the definition provided in the KIP.
> >
> > Both next-local-offset and local-log-start-offset were introduced to
> > talk about offsets related to local log. We are fine with
> > last-tiered-offset too as you suggested.
> >
> > 5002. After leadership is established for a partition, the leader would
> > begin uploading a segment to remote storage. If successful, the leader
> > would write the updated RemoteLogSegmentMetadata to the metadata topic
> > (via RLMM.putRemoteLogSegmentData). However, for defensive reasons, it
> > seems useful that before the first time the segment is uploaded by the
> > leader for a partition, the leader should ensure to catch up to all the
> > metadata events written so far in the metadata topic for that partition
> > (ex: by

[jira] [Created] (KAFKA-10434) Remove deprecated methods on WindowStore

2020-08-25 Thread Jorge Esteban Quilcate Otoya (Jira)
Jorge Esteban Quilcate Otoya created KAFKA-10434:


 Summary: Remove deprecated methods on WindowStore
 Key: KAFKA-10434
 URL: https://issues.apache.org/jira/browse/KAFKA-10434
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Jorge Esteban Quilcate Otoya


From [https://github.com/apache/kafka/pull/9138#discussion_r474985997] and
[https://github.com/apache/kafka/pull/9138#discussion_r474995606]:

WindowStore contains ReadOnlyWindowStore methods.

We could consider:
 * Moving read methods from WindowStore to ReadOnlyWindowStore and/or
 * Removing the long-based methods (see the sketch below)
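
A minimal sketch of the overlap in question, assuming the current signatures
(the store instance is taken as given):
{code:java}
import java.time.Instant;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

final class WindowStoreReads {
    // WindowStore declares long-based reads, while the Instant-based
    // equivalents live on ReadOnlyWindowStore, which WindowStore extends.
    static void bothVariants(final WindowStore<String, Long> store) {
        try (WindowStoreIterator<Long> viaLongs =
                 store.fetch("key", 0L, System.currentTimeMillis())) { // long-based, candidate for removal
            // ...
        }
        try (WindowStoreIterator<Long> viaInstants =
                 store.fetch("key", Instant.EPOCH, Instant.now())) {   // Instant-based, from ReadOnlyWindowStore
            // ...
        }
    }
}
{code}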



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10435) Fetch protocol changes for KIP-595

2020-08-25 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10435:
---

 Summary: Fetch protocol changes for KIP-595
 Key: KAFKA-10435
 URL: https://issues.apache.org/jira/browse/KAFKA-10435
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


KIP-595 makes several changes to the Fetch protocol. Since this affects
inter-broker communication, it is useful to split this work into a separate
change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10379) Implement the KIP-478 StreamBuilder#addGlobalStore()

2020-08-25 Thread John Roesler (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Roesler resolved KAFKA-10379.
--
Resolution: Fixed

> Implement the KIP-478 StreamBuilder#addGlobalStore()
> 
>
> Key: KAFKA-10379
> URL: https://issues.apache.org/jira/browse/KAFKA-10379
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10436) Implement KIP-478 Topology changes

2020-08-25 Thread John Roesler (Jira)
John Roesler created KAFKA-10436:


 Summary: Implement KIP-478 Topology changes
 Key: KAFKA-10436
 URL: https://issues.apache.org/jira/browse/KAFKA-10436
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler
Assignee: John Roesler






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10437) Convert test-utils (and StateStore) for KIP-478

2020-08-25 Thread John Roesler (Jira)
John Roesler created KAFKA-10437:


 Summary: Convert test-utils (and StateStore) for KIP-478
 Key: KAFKA-10437
 URL: https://issues.apache.org/jira/browse/KAFKA-10437
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler
Assignee: John Roesler






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9344) Logged consumer config does not always match actual config values

2020-08-25 Thread huxihx (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

huxihx resolved KAFKA-9344.
---
Resolution: Fixed

> Logged consumer config does not always match actual config values
> -
>
> Key: KAFKA-9344
> URL: https://issues.apache.org/jira/browse/KAFKA-9344
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> Similar to KAFKA-8928, during consumer construction, some configs might be
> overridden (client.id for instance), but the actual values will not be
> reflected in the info log. It'd be better to display the overridden values
> for those configs.
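>
> For illustration, a minimal snippet that reproduces the mismatch; the names
> here are illustrative, and the generated client.id format varies by version:
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.serialization.StringDeserializer;
>
> public class ClientIdLogDemo {
>     public static void main(final String[] args) {
>         final Properties props = new Properties();
>         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
>         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
>         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
>         // No client.id is set, so the constructor generates one internally,
>         // but the "ConsumerConfig values:" INFO log printed during
>         // construction still shows the original (empty) setting.
>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>             // use the consumer ...
>         }
>     }
> }
> {code}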



--
This message was sent by Atlassian Jira
(v8.3.4#803005)