[jira] [Created] (KAFKA-7623) SMT STRUCT to MASK or FILTER

2018-11-13 Thread Chenchu Lakshman kumar (JIRA)
Chenchu Lakshman kumar created KAFKA-7623:
-

 Summary: SMT STRUCT to MASK or FILTER
 Key: KAFKA-7623
 URL: https://issues.apache.org/jira/browse/KAFKA-7623
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Reporter: Chenchu Lakshman kumar


{
  "schema": {
    "type": "struct",
    "fields": [{
      "type": "string",
      "optional": false,
      "doc": "This field stores the value of `Message.getJMSMessageID() `_.",
      "field": "messageID"
    }, {
      "type": "string",
      "optional": false,
      "doc": "This field stores the type of message that was received. This corresponds to the subinterfaces of `Message `_. `BytesMessage `_ = `bytes`, `MapMessage `_ = `map`, `ObjectMessage `_ = `object`, `StreamMessage `_ = `stream` and `TextMessage `_ = `text`. The corresponding field will be populated with the values from the respective Message subinterface.",
      "field": "messageType"
    }, {
      "type": "int64",
      "optional": false,
      "doc": "Data from the `getJMSTimestamp() `_ method.",
      "field": "timestamp"
    }, {
      "type": "int32",
      "optional": false,
      "doc": "This field stores the value of `Message.getJMSDeliveryMode() `_.",
      "field": "deliveryMode"
    }, {
      "type": "string",
      "optional": true,
      "doc": "This field stores the value of `Message.getJMSCorrelationID() `_.",
      "field": "correlationID"
    }, {
      "type": "struct",
      "fields": [{
        "type": "string",
        "optional": false,
        "doc": "The type of JMS Destination, and either ``queue`` or ``topic``.",
        "field": "destinationType"
      }, {
        "type": "string",
        "optional": false,
        "doc": "The name of the destination. This will be the value of `Queue.getQueueName() `_ or `Topic.getTopicName() `_.",
        "field": "name"
      }],
      "optional": true,
      "name": "io.confluent.connect.jms.Destination",
      "doc": "This schema is used to represent a JMS Destination, and is either `queue `_ or `topic `_.",
      "field": "replyTo"
    }, {
      "type": "struct",
      "fields": [{
        "type": "string",
        "optional": false,
        "doc": "The type of JMS Destination, and either ``queue`` or ``topic``.",
        "field": "destinationType"
      }, {
        "type": "string",
        "optional": false,
        "doc": "The name of the destination. This will be the value of `Queue.getQueueName() `_ or `Topic.getTopicName() `_.",
        "field": "name"
      }],
      "optional": true,
      "name": "io.confluent.connect.jms.Destination",
      "doc": "This schema is used to represent a JMS Destination, and is either `queue `_ or `topic `_.",
      "field": "destination"
    }, {
      "type": "boolean",
      "optional": false,
      "doc": "This field stores the value of `Message.getJMSRedelivered() `_.",
      "field": "redelivered"
    }, {
      "type": "string",
      "optional": true,
      "doc": "This field stores the value of `Message.getJMSType() `_.",
      "field": "type"
    }, {
      "type": "int64",
      "optional": false,
      "doc": "This field stores the value of `Message.getJMSExpiration() `_.",
      "field": "expiration"
    }, {
      "type": "int32",
      "optional": false,
      "doc": "This field stores the value of `Message.getJMSPriority() `_.",
      "field": "priority"
    }, {
      "type": "map",
      "keys": {
        "type": "string",
        "optional": false
      },
      "values": {
        "type": "struct",
        "fields": [{
          "type": "string",
          "optional": false,
          "doc": "The java type of the property on the Message. One of ``boolean``, ``byte``, ``short``, ``integer``, ``long``, ``float``, ``double``, or ``string``.",
          "field": "propertyType"
        },

Re: [DISCUSS] KIP-360: Improve handling of unknown producer

2018-11-13 Thread Guozhang Wang
Hello Jason, thanks for the great write-up.

0. One question about the migration plan: "The new GetTransactionState API
and the new version of the transaction state message will not be used until
the inter-broker version supports it." I'm not so clear about the
implementation details here: say a broker is on the newer version while the
txn-coordinator is still on an older version. Today the ApiVersionsRequest can
only help upgrade / downgrade the request version, but it cannot forbid
sending a request altogether. Are you suggesting we add additional logic on
the broker side to handle the "not sending the request" case? If yes, my
concern is that this will be tech-debt code that lives for a long time before
being removed.

Some additional minor comments:

1. "last epoch" and "instance epoch" seem to be referring to the same thing
in your wiki.
2. "The broker must verify after receiving the response that the producer
state is still unknown.": not sure why we have to validate? If the metadata
returned from the txn-coordinator can always be considered the
source-of-truth, can't we just bindly use it to update the cache?


Guozhang



On Thu, Sep 6, 2018 at 9:10 PM Matthias J. Sax 
wrote:

> I am +1 on this :)
>
>
> -Matthias
>
> On 9/4/18 9:55 AM, Jason Gustafson wrote:
> > Bump. Thanks to Magnus for noticing that I forgot to link to the KIP:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-360%3A+Improve+handling+of+unknown+producer
> > .
> >
> > -Jason
> >
> > On Tue, Aug 21, 2018 at 4:37 PM, Jason Gustafson 
> wrote:
> >
> >> Hi All,
> >>
> >> I have a proposal to improve the transactional/idempotent producer's
> >> handling of the UNKNOWN_PRODUCER error, which is the result of losing
> >> producer state following segment removal. The current behavior is both
> >> complex and limited. Please take a look and let me know what you think.
> >>
> >> Thanks in advance to Matthias Sax for feedback on the initial draft.
> >>
> >> -Jason
> >>
> >
>
>

-- 
-- Guozhang


[jira] [Resolved] (KAFKA-4601) Avoid duplicated repartitioning in KStream DSL

2018-11-13 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-4601.

Resolution: Fixed

Marking this resolved with [https://github.com/apache/kafka/pull/5451].

As [~guozhang] said we will address follow up work with individual tickets.
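
For anyone tracking this from the user side, a minimal sketch of how the optimization could be enabled, assuming (as an illustration only, not a statement of the final behavior) that it stays gated behind the existing {{topology.optimization}} config:

{code}
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public final class OptimizedStreamsConfig {
    // Assumption for illustration: the automatic repartition merging stays gated
    // behind the existing topology.optimization config rather than being on by default.
    public static Properties props() {
        Properties p = new Properties();
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        p.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
        return p;
    }
}
{code}

Depending on how the follow-up work lands, the same properties may also need to be passed to {{StreamsBuilder#build(Properties)}} so the optimizer can see them.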

> Avoid duplicated repartitioning in KStream DSL
> --
>
> Key: KAFKA-4601
> URL: https://issues.apache.org/jira/browse/KAFKA-4601
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: performance
>
> Consider the following DSL:
> {code}
> Stream source = builder.stream(Serdes.String(), 
> Serdes.String(), "topic1");
> Stream mapped = source.map(..);
> KTable counts = mapped
> .groupByKey()
> .count("Counts");
> KStream sink = mapped.leftJoin(counts, ..);
> {code}
> The resulting topology looks like this:
> {code}
> ProcessorTopology:
>   KSTREAM-SOURCE-00:
>   topics: [topic1]
>   children:   [KSTREAM-MAP-01]
>   KSTREAM-MAP-01:
>   children:   
> [KSTREAM-FILTER-04, KSTREAM-FILTER-07]
>   KSTREAM-FILTER-04:
>   children:   
> [KSTREAM-SINK-03]
>   KSTREAM-SINK-03:
>   topic:  X-Counts-repartition
>   KSTREAM-FILTER-07:
>   children:   
> [KSTREAM-SINK-06]
>   KSTREAM-SINK-06:
>   topic:  
> X-KSTREAM-MAP-01-repartition
> ProcessorTopology:
>   KSTREAM-SOURCE-08:
>   topics: 
> [X-KSTREAM-MAP-01-repartition]
>   children:   
> [KSTREAM-LEFTJOIN-09]
>   KSTREAM-LEFTJOIN-09:
>   states: [Counts]
>   KSTREAM-SOURCE-05:
>   topics: [X-Counts-repartition]
>   children:   
> [KSTREAM-AGGREGATE-02]
>   KSTREAM-AGGREGATE-02:
>   states: [Counts]
> {code}
> I.e. there are two repartition topics, one for the aggregate and one for the 
> join, which not only introduce unnecessary overhead but also mess up the 
> processing ordering (users expect each record to go through the aggregation 
> first and then the join operator). And to get the simpler topology below, 
> users today need to manually add a {{through}} operator after {{map}} to 
> enforce repartitioning.
> {code}
> Stream source = builder.stream(Serdes.String(), 
> Serdes.String(), "topic1");
> Stream repartitioned = source.map(..).through("topic2");
> KTable counts = repartitioned
> .groupByKey()
> .count("Counts");
> KStream sink = repartitioned.leftJoin(counts, ..);
> {code}
> The resulting topology will then look like this:
> {code}
> ProcessorTopology:
>   KSTREAM-SOURCE-00:
>   topics: [topic1]
>   children:   [KSTREAM-MAP-01]
>   KSTREAM-MAP-01:
>   children:   
> [KSTREAM-SINK-02]
>   KSTREAM-SINK-02:
>   topic:  topic 2
> ProcessorTopology:
>   KSTREAM-SOURCE-03:
>   topics: [topic 2]
>   children:   
> [KSTREAM-AGGREGATE-04, KSTREAM-LEFTJOIN-05]
>   KSTREAM-AGGREGATE-04:
>   states: [Counts]
>   KSTREAM-LEFTJOIN-05:
>   states: [Counts]
> {code} 
> This kind of optimization should be automatic in Streams, which we can 
> consider doing when extending from one-operator-at-a-time translation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-388 Add observer interface to record request and response

2018-11-13 Thread xiongqi wu
Lincong,

Thanks for the KIP.
I have a question about the lifecycle of the request and response.
With the current (requestAdapter, responseAdapter) implementation,
the observer can potentially extend the lifecycle of the request and response
through the adapter.
Anyone can implement their own observer, and some observers may want to do
asynchronous or batched processing.

Could you clarify how we could make sure this does not increase the memory
pressure from potentially holding on to large request/response objects?



Xiongqi (Wesley) Wu
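
As a purely hypothetical sketch of the kind of restricted observer interface the quoted thread below converges on (the names and getters here are assumptions, not the KIP's actual API):

// Hypothetical sketch only: expose selected read-only getters instead of the
// internal RequestChannel.Request and AbstractResponse classes.
public interface RequestResponseObserver extends AutoCloseable {

    // Read-only view of a request, hiding RequestChannel.Request internals.
    interface RequestInfo {
        String apiKeyName();     // e.g. "PRODUCE", "FETCH"
        String clientId();
        long receivedTimeMs();
    }

    // Read-only view of a response, hiding AbstractResponse internals.
    interface ResponseInfo {
        String errorName();      // top-level error, if any
        long sendTimeMs();
    }

    // Invoked by the broker (e.g. from KafkaApis) for every request-response
    // pair. Implementations should copy what they need and return quickly so
    // they do not extend the lifecycle of large request/response objects.
    void observe(RequestInfo request, ResponseInfo response);
}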


On Mon, Nov 12, 2018 at 10:23 PM Lincong Li  wrote:

> Thanks Mayuresh, Ismael and Colin for your feedback!
>
> I updated the KIP based on your feedback. The change is basically that two
> interfaces are introduced to prevent the internal classes from being
> exposed. These two interfaces contain getters that allow users to extract
> information from the request and response in their own implementation(s) of
> the observer interface, and they would not constrain future implementation
> changes in either RequestChannel.Request or AbstractResponse. There could
> be more getters defined in these two interfaces. The implementation of
> these two interfaces will be provided as part of the KIP.
>
> I also expanded on "Add code to the broker (in KafkaApis) to allow Kafka
> servers to invoke any
> observers defined. More specifically, change KafkaApis code to invoke all
> defined observers, in the order in which they were defined, for every
> request-response pair" by providing a sample code block which shows how
> these interfaces are used in the KafkaApis class.
>
> Let me know if you have any question, concern, or comments. Thank you very
> much!
>
> Best regards,
> Lincong Li
>
> On Fri, Nov 9, 2018 at 10:34 AM Mayuresh Gharat <
> gharatmayures...@gmail.com>
> wrote:
>
> > Hi Lincong,
> >
> > Thanks for the KIP.
> >
> > As Colin pointed out, would it be better to expose certain specific pieces
> > of information from the request/response, like the api key, request headers,
> > record counts, and client ID, instead of the entire request/response
> > objects? This enables us to change the request/response APIs independently
> > of this pluggable public API in the future, unless you think we have a
> > strong reason to expose the request and response objects.
> >
> > Also, it would be great if you can expand on :
> > "Add code to the broker (in KafkaApis) to allow Kafka servers to invoke
> any
> > observers defined. More specifically, change KafkaApis code to invoke all
> > defined observers, in the order in which they were defined, for every
> > request-response pair."
> > probably with an example of how you visualize it. It would help the KIP
> to
> > be more concrete and easier to understand the end to end workflow.
> >
> > Thanks,
> >
> > Mayuresh
> >
> > On Thu, Nov 8, 2018 at 7:44 PM Ismael Juma  wrote:
> >
> > > I agree, the current KIP doesn't discuss the public API that we would
> be
> > > exposing and it's extensive if the normal usage would allow for casting
> > > AbstractRequest into the various subclasses and potentially even
> > accessing
> > > Records and related for produce request.
> > >
> > > There are many use cases where this could be useful, but it requires
> > quite
> > > a bit of thinking around the APIs that we expose and the expected
> usage.
> > >
> > > Ismael
> > >
> > > On Thu, Nov 8, 2018, 6:09 PM Colin McCabe  > >
> > > > Hi Lincong Li,
> > > >
> > > > I agree that server-side instrumentation is helpful.  However, I
> don't
> > > > think this is the right approach.
> > > >
> > > > The problem is that RequestChannel.Request and AbstractResponse are
> > > > internal classes that should not be exposed.  These are
> implementation
> > > > details that we may change in the future.  Freezing these into a
> public
> > > API
> > > > would really hold back the project.  For example, for really large
> > > > responses, we might eventually want to avoid materializing the whole
> > > > response all at once.  It would make more sense to return it in a
> > > streaming
> > > > fashion.  But if we need to support this API forever, we can't do
> that.
> > > >
> > > > I think it's fair to say that this is, at best, half a solution to
> the
> > > > problem of tracing requests.  Users still need to write the plugin
> code
> > > and
> > > > arrange for it to be on their classpath to make this work.  I think
> the
> > > > alternative here is not client-side instrumentation, but simply
> making
> > > the
> > > > change to the broker without using a plugin interface.
> > > >
> > > > If a public interface is absolutely necessary here we should expose
> > only
> > > > things like the API key, client ID, time, etc. that don't constrain
> the
> > > > implementation a lot in the future.  I think we should also use java
> > here
> > > > to avoid the compatibility issues we have had with Scala APIs in the
> > > past.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Thu, Nov 8, 2018, at 11:34, 

Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-11-13 Thread Jason Gustafson
Hey Boyang,

Thanks for the updates. From a high level, I think this actually
complements Konstantine's writeup on incremental rebalancing. The gap we're
addressing is providing a way to bind the partition assignment of a
group to a set of user-provided ids so that we are not so reliant on the
group's immediate state. For example, these ids might identify the state
store volume for particular streams instances. This is basically what you
need to work well with k8s stateful sets (as far as I understand them).

One key decision is how we would define and update the expected static
members in a consumer group. The mechanics of the registration and
expansion timeouts feel a little bit clunky. For the sake of discussion, I
was wondering if we could just say that static members do not expire.
Instead, we offer an admin API that lets a user define the expected members
of the group. This API could be used to both grow and shrink a group. This
would solve the rebalancing problems when applications are initially
bootstrapped or when they are restarted because we would always know how
many members should be in a group. What do you think?

By the way, it would be helpful to include the full schema definition for
any protocol changes in the proposal.

Thanks,
Jason
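
As one hypothetical illustration of the idea (the config key and the use of the pod hostname below are assumptions, not the KIP's final API), a consumer running in a k8s stateful set could derive a stable member name from its environment and pass it in as ordinary config:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StaticMemberConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Assumed config key, for illustration only. A StatefulSet pod hostname
        // (e.g. "my-app-0") is stable across restarts, so it can serve as the
        // user-provided id that binds the partition assignment to this instance.
        String memberName = System.getenv().getOrDefault("HOSTNAME", "my-app-0");
        props.put("group.member.name", memberName);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe(...) and poll(...) as usual
        }
    }
}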


On Mon, Nov 12, 2018 at 8:56 AM, Boyang Chen  wrote:

> Thanks Mayuresh for the feedback! Do you have a quick example of passing
> in consumer config dynamically? I mainly use Kafka Streams in my daily work,
> so I'm probably missing how to do it in the current consumer setting.
>
>
> For differentiating the session timeout and the registration timeout, I would
> try to enhance the documentation in the first stage to see how people react to
> the confusion (would be great if they find it straightforward!). Since one
> doesn't have to fully understand the difference unless defining the new
> config "member name", for current users we could buy some time to listen to
> their understanding and improve our documentation accordingly in the
> follow-up KIPs.
>
>
> Boyang
>
> 
> From: Mayuresh Gharat 
> Sent: Sunday, November 11, 2018 1:06 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by
> specifying member id
>
> Hi Boyang,
>
> Thanks for the reply.
>
> Please find the replies inline below :
> For having a consumer config at runtime, I think it's not necessary to
> address in this KIP because most companies run sidecar jobs through daemon
> software like puppet. It should be easy to change the config through script
> or UI without actual code change. We still want to leave flexibility for
> user to define member name as they like.
>  This might be a little different for companies that use configuration
> management tools that do not allow the applications to define/change the
> configs dynamically. For example, if we use something similar to Spring to
> pull in the configs for the KafkaConsumer and pass it to the constructor to
> create the KafkaConsumer object, it will be hard to specify a unique value
> to the "MEMBER_NAME" config unless someone deploying the app generates a
> unique string for this config outside the deployment workflow and copies it
> statically before starting up each consumer instance. Unless we can loosen
> the criteria for uniqueness of this config value, for each consumer
> instance in the consumer group, I am not sure of a better way of
> addressing this. If we don't want to loosen the criteria, then providing a
> dynamic way to pass this in at runtime, would put the onus of having the
> same unique value each time a consumer is restarted, on to the application
> that is running the consumer.
>
> I just updated the kip about having both "registration timeout" and
> "session timeout". The benefit of having two configs instead of one is to
> reduce the mental burden for operation, for example user just needs to
> unset "member name" to cast back to dynamic membership without worrying
> about tuning the "session timeout" back to a smaller value.
> --- That is a good point. I was thinking, if both the configs are
> specified, it would be confusing for the end user without understanding the
> internals of the consumer and its interaction with group coordinator, as
> which takes precedence when and how it affects the consumer behavior. Just
> my 2 cents.
>
> Thanks,
>
> Mayuresh
>
> On Fri, Nov 9, 2018 at 8:27 PM Boyang Chen  wrote:
>
> > Hey Mayuresh,
> >
> >
> > thanks for the thoughtful questions! Let me try to answer your questions
> > one by one.
> >
> >
> > For having a consumer config at runtime, I think it's not necessary to
> > address in this KIP because most companies run sidecar jobs through
> daemon
> > software like puppet. It should be easy to change the config through
> script
> > or UI without actual code change. We still want to leave flexibility for
> > user to define member name as they like.
> >
> >
> > I just updated the kip 

Jenkins build is back to normal : kafka-trunk-jdk8 #3199

2018-11-13 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-386: Standardize on Min/Avg/Max metrics' default values

2018-11-13 Thread Stanislav Kozlovski
Hey everybody,

I've finished with the implementation, here is the PR:
https://github.com/apache/kafka/pull/5908
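
As a small illustrative sketch of the agreed behaviour (not taken from the PR itself): a Min/Avg/Max metric that has recorded no samples now reports NaN, which callers can check for explicitly:

import java.util.Map;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.KafkaMetric;

public final class MinAvgMaxDefaults {
    // With this change, a Min/Avg/Max metric with no recorded samples reports
    // Double.NaN (rather than +/-Infinity or 0), so "no data yet" is detectable.
    public static void printOrNoData(Map<MetricName, KafkaMetric> metrics, MetricName name) {
        double value = (double) metrics.get(name).metricValue();
        if (Double.isNaN(value)) {
            System.out.println(name.name() + ": no samples recorded yet");
        } else {
            System.out.println(name.name() + ": " + value);
        }
    }
}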

On Mon, Nov 12, 2018 at 6:58 PM Stanislav Kozlovski 
wrote:

> Thank you, everybody, for the votes and discussion. The KIP has passed
> with 3 binding votes (Harsha, Jun, Dong) and 1 non-binding vote (Kevin)
>
> @jun, I added a note on the KIP stating that this isn't related to the
> Yammer metrics
>
> Best,
> Stanislav
>
> On Thu, Nov 8, 2018 at 10:54 PM Dong Lin  wrote:
>
>> Thanks for the KIP Stanislav. +1 (binding)
>>
>> On Thu, Nov 8, 2018 at 2:40 PM Jun Rao  wrote:
>>
>> > Hi, Stanislav,
>> >
>> > Thanks for the KIP. +1. I guess this only covers the Kafka metrics (not
>> the
>> > Yammer metrics). It would be useful to make this clear.
>> >
>> > Jun
>> >
>> > On Tue, Nov 6, 2018 at 1:00 AM, Stanislav Kozlovski <
>> > stanis...@confluent.io>
>> > wrote:
>> >
>> > > Hey everybody,
>> > >
>> > > I'm starting a vote thread on KIP-386: Standardize on Min/Avg/Max
>> > metrics'
>> > > default values
>> > > <
>> >
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652345
>> > > >
>> > > In short, after the discussion thread
>> > > > > > 4b9014fbb50f663bf14e5aec67@%3Cdev.kafka.apache.org%3E>,
>> > > we decided to have all min/avg/max metrics output `NaN` as a default
>> > value.
>> > >
>> > > --
>> > > Best,
>> > > Stanislav
>> > >
>> >
>>
>
>
> --
> Best,
> Stanislav
>


-- 
Best,
Stanislav


Build failed in Jenkins: kafka-trunk-jdk8 #3198

2018-11-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Remove unused abstract function in test class (#5888)

--
[...truncated 2.40 MB...]
org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED


Jenkins build is back to normal : kafka-trunk-jdk11 #94

2018-11-13 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3197

2018-11-13 Thread Apache Jenkins Server
See 


Changes:

[colin] Trogdor: Fix /coordinator/tasks parameters to accept long values (#5905)

[colin] KAFKA-7514: Add threads to ConsumeBenchWorker (#5864)

--
[...truncated 2.50 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #3196

2018-11-13 Thread Apache Jenkins Server
See 



[jira] [Created] (KAFKA-7622) Add findSessions functionality to ReadOnlySessionStore

2018-11-13 Thread Di Campo (JIRA)
Di Campo created KAFKA-7622:
---

 Summary: Add findSessions functionality to ReadOnlySessionStore
 Key: KAFKA-7622
 URL: https://issues.apache.org/jira/browse/KAFKA-7622
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: Di Campo


When creating a session store from the DSL, you get a 
{{ReadOnlySessionStore}}: you can fetch by key, but not by key and time as in a 
{{WindowStore}}, even if the key type is a {{Windowed}}. So you would have 
to iterate through it to find the time-related entries, which should be less 
efficient than querying by time.

So the purpose of this ticket is to be able to query the store with (key, time).

The proposal is to add {{SessionStore}}'s {{findSessions}}-like methods (i.e. 
time-bound access) to {{ReadOnlySessionStore}}.
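
A sketch of what this could look like (illustrative only; the interface name is hypothetical and the signatures mirror the existing {{SessionStore#findSessions}} methods):

{code}
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;

// Illustrative sketch only: time-bound lookups added to the read-only view.
public interface ReadOnlySessionStoreWithFind<K, AGG> {

    // Existing behaviour: fetch all sessions for the given key.
    KeyValueIterator<Windowed<K>, AGG> fetch(K key);

    // Proposed addition: sessions for the key whose end/start times fall in the range.
    KeyValueIterator<Windowed<K>, AGG> findSessions(K key,
                                                    long earliestSessionEndTime,
                                                    long latestSessionStartTime);
}
{code}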
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7621) Clients can't deal with server IP address change

2018-11-13 Thread Bastian Voigt (JIRA)
Bastian Voigt created KAFKA-7621:


 Summary: Clients can't deal with server IP address change
 Key: KAFKA-7621
 URL: https://issues.apache.org/jira/browse/KAFKA-7621
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.0.0
Reporter: Bastian Voigt


When the server IP address changes (and the corresponding DNS record changes, 
of course), the client cannot deal with this. It keeps saying:

{{Connection to node 0 could not be established. Broker may not be available}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 2.1.0 RC1

2018-11-13 Thread Andras Beni
+1 (non-binding)

Verified signatures and checksums of release artifacts
Performed quickstart steps on rc artifacts (both scala 2.11 and 2.12)

Andras

On Tue, Nov 13, 2018 at 10:51 AM Eno Thereska 
wrote:

> Built code and ran tests. Getting a single integration test failure:
>
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
> java.lang.AssertionError: Contents of the map shouldn't change
> expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at
>
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
>
> Thanks
> Eno
>
> On Sun, Nov 11, 2018 at 7:34 PM Jonathan Santilli <
> jonathansanti...@gmail.com> wrote:
>
> > Hello,
> >
> > +1
> >
> > I have downloaded the release artifact from
> > http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> > Executed a 3 brokers cluster. (java8 8u192b12)
> > Executed kafka-monitor for about 1 hour without problems.
> >
> > Thanks,
> > --
> > Jonathan
> >
> >
> > On Fri, Nov 9, 2018 at 11:33 PM Dong Lin  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the second candidate for feature release of Apache Kafka 2.1.0.
> > >
> > > This is a major version release of Apache Kafka. It includes 28 new
> KIPs
> > > and
> > >
> > > critical bug fixes. Please see the Kafka 2.1.0 release plan for more
> > > details:
> > >
> > > *
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044*
> > > <
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
> > > >
> > >
> > > Here are a few notable highlights:
> > >
> > > - Java 11 support
> > > - Support for Zstandard, which achieves compression comparable to gzip
> > with
> > > higher compression and especially decompression speeds(KIP-110)
> > > - Avoid expiring committed offsets for active consumer group (KIP-211)
> > > - Provide Intuitive User Timeouts in The Producer (KIP-91)
> > > - Kafka's replication protocol now supports improved fencing of
> zombies.
> > > Previously, under certain rare conditions, if a broker became
> partitioned
> > > from Zookeeper but not the rest of the cluster, then the logs of
> > replicated
> > > partitions could diverge and cause data loss in the worst case
> (KIP-320)
> > > - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353,
> KIP-356)
> > > - Admin script and admin client API improvements to simplify admin
> > > operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> > > - DNS handling improvements (KIP-235, KIP-302)
> > >
> > > Release notes for the 2.1.0 release:
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, Nov 15, 12 pm PT ***
> > >
> > > * Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc1/javadoc/
> > >
> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc1 tag:
> > > https://github.com/apache/kafka/tree/2.1.0-rc1
> > >
> > > * Documentation:
> > > *http://kafka.apache.org/21/documentation.html*
> > > 
> > >
> > > * Protocol:
> > > http://kafka.apache.org/21/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.1 branch:
> > > Unit/integration tests: *
> > https://builds.apache.org/job/kafka-2.1-jdk8/50/
> > > *
> > >
> > > Please test and verify the release artifacts and submit a vote for this
> > RC,
> > > or report any issues so we can fix them and get a new RC out ASAP.
> > Although
> > > this release vote requires PMC votes to pass, testing, votes, and bug
> > > reports are valuable and appreciated from everyone.
> > >
> > > Cheers,
> > > Dong
> > >
> >
> >
> > 

Re: [VOTE] 2.1.0 RC1

2018-11-13 Thread Eno Thereska
Built code and ran tests. Getting a single integration test failure:

kafka.log.LogCleanerParameterizedIntegrationTest >
testCleansCombinedCompactAndDeleteTopic[3] FAILED
java.lang.AssertionError: Contents of the map shouldn't change
expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
(354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
(343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
(348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
(342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
(343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
(299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
(355,355))>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at
kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)

Thanks
Eno

On Sun, Nov 11, 2018 at 7:34 PM Jonathan Santilli <
jonathansanti...@gmail.com> wrote:

> Hello,
>
> +1
>
> I have downloaded the release artifact from
> http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> Executed a 3 brokers cluster. (java8 8u192b12)
> Executed kafka-monitor for about 1 hour without problems.
>
> Thanks,
> --
> Jonathan
>
>
> On Fri, Nov 9, 2018 at 11:33 PM Dong Lin  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for feature release of Apache Kafka 2.1.0.
> >
> > This is a major version release of Apache Kafka. It includes 28 new KIPs
> > and
> >
> > critical bug fixes. Please see the Kafka 2.1.0 release plan for more
> > details:
> >
> > *
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044*
> > <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
> > >
> >
> > Here are a few notable highlights:
> >
> > - Java 11 support
> > - Support for Zstandard, which achieves compression comparable to gzip
> with
> > higher compression and especially decompression speeds(KIP-110)
> > - Avoid expiring committed offsets for active consumer group (KIP-211)
> > - Provide Intuitive User Timeouts in The Producer (KIP-91)
> > - Kafka's replication protocol now supports improved fencing of zombies.
> > Previously, under certain rare conditions, if a broker became partitioned
> > from Zookeeper but not the rest of the cluster, then the logs of
> replicated
> > partitions could diverge and cause data loss in the worst case (KIP-320)
> > - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353, KIP-356)
> > - Admin script and admin client API improvements to simplify admin
> > operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> > - DNS handling improvements (KIP-235, KIP-302)
> >
> > Release notes for the 2.1.0 release:
> > http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, Nov 15, 12 pm PT ***
> >
> > * Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~lindong/kafka-2.1.0-rc1/javadoc/
> >
> > * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc1 tag:
> > https://github.com/apache/kafka/tree/2.1.0-rc1
> >
> > * Documentation:
> > *http://kafka.apache.org/21/documentation.html*
> > 
> >
> > * Protocol:
> > http://kafka.apache.org/21/protocol.html
> >
> > * Successful Jenkins builds for the 2.1 branch:
> > Unit/integration tests: *
> https://builds.apache.org/job/kafka-2.1-jdk8/50/
> > *
> >
> > Please test and verify the release artifacts and submit a vote for this
> RC,
> > or report any issues so we can fix them and get a new RC out ASAP.
> Although
> > this release vote requires PMC votes to pass, testing, votes, and bug
> > reports are valuable and appreciated from everyone.
> >
> > Cheers,
> > Dong
> >
>
>
> --
> Santilli Jonathan
>


Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-11-13 Thread Patrick Huang
Hi Becket, Dong,

Thanks for the discussion. Those are very good points. I think it makes sense 
to send back a STALE_BROKER_EPOCH error to the broker in both cases:

  1.  Broker gets quickly restarted. In this case, the channel has already been 
closed during broker shutdown so the broker will not react to the error.
  2.  Broker gets disconnected from zk and reconnects. In this case, the broker 
will see the error and will resend the ControlledShutdownRequest with a newer 
broker epoch.

I have also updated the KIP accordingly to include what we have discussed. 
Thanks again!

Best,
Zhanxiang (Patrick) Huang
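
To make the check concrete, here is a rough, self-contained sketch (illustrative only; the names and structure are assumptions, not the actual controller code) of how the controller could compare the broker epoch carried in a ControlledShutdownRequest against its current view:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: the broker epoch carried in a control request is
// the czxid of the broker's ephemeral znode, which strictly increases every
// time the broker (re-)registers, so an older value identifies a stale request.
public final class BrokerEpochCheck {
    enum Result { ACCEPT, STALE_BROKER_EPOCH }

    // brokerId -> broker epoch (czxid) from the controller's latest view of /brokers/ids
    private final Map<Integer, Long> liveBrokerEpochs = new ConcurrentHashMap<>();

    public void onBrokerRegistration(int brokerId, long brokerEpoch) {
        liveBrokerEpochs.put(brokerId, brokerEpoch);
    }

    public Result checkControlledShutdown(int brokerId, long requestBrokerEpoch) {
        Long current = liveBrokerEpochs.get(brokerId);
        if (current == null || requestBrokerEpoch < current) {
            // Case 1: the broker bounced, so the channel that carried the old
            // request is gone and the error response is never observed.
            // Case 2: the broker re-registered after a ZK session timeout; it
            // will see STALE_BROKER_EPOCH and resend with the new epoch.
            return Result.STALE_BROKER_EPOCH;
        }
        return Result.ACCEPT;
    }
}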


From: Becket Qin 
Sent: Monday, November 12, 2018 21:56
To: Dong Lin
Cc: dev
Subject: Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced 
brokers using broker generation

Hi Dong,

That is a good point. But I think the STALE_BROKER_EPOCH error may still be
sent to the broker. For example, think about the following case:

1. Broker sends a ControlledShutdownRequest to the controller
2. Broker had a ZK session timeout
3. Broker created the ephemeral node
4. Controller processes the ControlledShutdownRequest in step 1
5. Broker receives a ControlledShutdownResponse with STALE_BROKER_EPOCH.

However, in this case, the broker should probably resend the controlled
shutdown request again with the new epoch. So it looks like returning a
STALE_BROKER_EPOCH is the correct behavior. If the broker has really been
bounced, that response will not be delivered to the broker. If the broker
has not really restarted, it will just resend the ControlledShutdownRequest
with the current epoch again.

It might worth updating the KIP wiki to mention this behavior.

Thanks,

Jiangjie (Becket) Qin


On Tue, Nov 13, 2018 at 2:04 AM Dong Lin  wrote:

> Hey Becket, Patrick,
>
> Currently we expect that the controller can only receive a ControlledShutdownRequest
> with an outdated broker epoch in two cases: 1) the ControlledShutdownRequest
> was sent by the broker before it restarted; and 2) there is a bug.
>
> In case 1), it seems that ControlledShutdownResponse will not be delivered
> to the broker as the channel should have been disconnected. Thus there is
> no confusion to the broker because broker will not receive response
> with STALE_BROKER_EPOCH error.
>
> In case 2), it seems that it can be useful to still deliver a
> ControlledShutdownResponse
> with STALE_BROKER_EPOCH so that this broker at least knows which
> request was not accepted. This helps us debug the issue.
>
> Also, in terms of both design and implementation, it seems simpler to
> still define a response for a given invalid request rather than have a
> special path to skip the response for the invalid request. Does this sound
> reasonable?
>
> Thanks,
> Dong
>
>
> On Mon, Nov 12, 2018 at 9:52 AM Patrick Huang  wrote:
>
>> Hi Becket,
>>
>> I think you are right. STALE_BROKER_EPOCH only makes sense when the
>> broker detects outdated control requests and wants the controller to know
>> about that. For ControlledShutdownRequest, the controller should just
>> ignore the request with stale broker epoch since the broker does not need
>> and will not do anything for STALE_BROKER_EPOCH response. Thanks for
>> pointing it out.
>>
>> Thanks,
>> Zhanxiang (Patrick) Huang
>>
>> 
>> From: Becket Qin 
>> Sent: Monday, November 12, 2018 6:46
>> To: dev
>> Subject: Re: [DISCUSS] KIP-380: Detect outdated control requests and
>> bounced brokers using broker generation
>>
>> Hi Patrick,
>>
>> I am wondering why the controller should send STALE_BROKER_EPOCH error to
>> the broker if the broker epoch is stale? Would this be a little confusing
>> to the current broker if the request was sent by a broker with previous
>> epoch? Should the controller just ignore those requests in that case?
>>
>> Thanks,
>>
>> Jiangjie (Becket) Qin
>>
>> On Fri, Nov 9, 2018 at 2:17 AM Patrick Huang  wrote:
>>
>> > Hi,
>> >
>> > In this KIP, we are also going to add a new exception and a new error
>> code
>> > "STALE_BROKER_EPOCH" in order to allow the broker to respond back the
>> right
>> > error when it sees outdated broker epoch in the control requests. Since
>> > adding a new exception and error code is also considered as public
>> > interface change, I have updated the original KIP accordingly to include
>> > this change. Feel free to comment if there is any concern.
>> >
>> > Thanks,
>> > Zhanxiang (Patrick) Huang
>> >
>> > 
>> > From: Patrick Huang 
>> > Sent: Tuesday, October 23, 2018 6:20
>> > To: Jun Rao; dev@kafka.apache.org
>> > Subject: Re: [DISCUSS] KIP-380: Detect outdated control requests and
>> > bounced brokers using broker generation
>> >
>> > Agreed. I have updated the PR to add czxid in ControlledShutdownRequest
>> (
>> > https://github.com/apache/kafka/pull/5821). Appreciated if you can
>> take a
>> > look.
>> >
>> > Btw, I also have the vote thread for this KIP:
>> >
>> 

Build failed in Jenkins: kafka-trunk-jdk8 #3195

2018-11-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7612: Fix javac warnings and enable warnings as errors (#5900)

[ismael] KAFKA-7605; Retry async commit failures in integration test cases to 
fix

--
[...truncated 2.41 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #93

2018-11-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7612: Fix javac warnings and enable warnings as errors (#5900)

[ismael] KAFKA-7605; Retry async commit failures in integration test cases to 
fix

--
[...truncated 2.25 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest >