Apache Kafka Authentication

2018-09-19 Thread Rasheed Siddiqui
Dear Team,

 

I want to know the detail document and discussion regarding the Kafka
Authentication.

 

We are building the Consumer on .Net Platform. We have difficulty in
communication with Producer as we have developed Unsecure Consumer.

So Please suggest to resolve this issue.

Thanks in Advance!!!

 

 

 

Thanks & Regards,

 


Description: Description: cid:image002.png@01D330A4.350DE830

Rasheed Siddiqui 

Sr.Technical Analyst 

M: 8655567060  E:   rash...@ccentric.co  

  www.ccentric.co 

Description: Description: cid:image004.png@01D330A4.350DE830

Years of 

Customer 

Excellence

 

 



Re: Kafka Contributor access

2018-09-19 Thread Srinivas Reddy
Thank you

-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}

On Thu 20 Sep, 2018, 01:57 Guozhang Wang,  wrote:

> Hello Srinivas,
>
> I've added you to the contributor list and assigned the ticket to you.
> Thanks for your interest contributing to Kafka!!
>
> Guozhang
>
> On Wed, Sep 19, 2018 at 6:08 AM, Srinivas Reddy <
> srinivas96all...@gmail.com>
> wrote:
>
> > My Apache JIRA id is : mrsrinivas
> >
> >
> >
> > --
> > Srinivas Reddy
> >
> > http://mrsrinivas.com/
> >
> >
> > (Sent via gmail web)
> >
> >
> > On Wed, 19 Sep 2018 at 18:23, Srinivas Reddy  >
> > wrote:
> >
> > > Hi,
> > >
> > > I would like to work on KAFKA-7418 as it is not assigned to anyone.
> > >
> > > "Assign to me" option is not available to me. Can someone help me with
> > > that ?
> > >
> > > Thank you in advance.
> > >
> > > -
> > > Srinivas
> > >
> > > - Typed on tiny keys. pls ignore typos.{mobile app}
> > >
> >
>
>
>
> --
> -- Guozhang
>


Jenkins build is back to normal : kafka-0.11.0-jdk7 #401

2018-09-19 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-362: Dynamic Session Window Support

2018-09-19 Thread Lei Chen
Thanks Matthias. That makes sense.

You're right that symmetric merge is necessary to ensure consistency. On
the other hand, I kinda feel it defeats the purpose of dynamic gap, which
is to update the gap from old value to new value. The symmetric merge
always honor the larger gap in both direction, rather than honor the gap
carried by record with larger timestamp. I wasn't able to find any semantic
definitions w.r.t this particular aspect online, but spent some time
looking into other streaming engines like Apache Flink.

Apache Flink defines the window differently, that uses (start time, start
time + gap).

so our previous example (10, 10), (19,5),(15,3) in Flink's case will be:
[10,20]
[19,24] => merged to [10,24]
[15,18] => merged to [10,24]

while example (15,3)(19,5)(10,10) will be
[15,18]
[19,24] => no merge
[10,20] => merged to [10,24]

however, since it only records gap in future direction, not past, a late
record might not trigger any merge where in symmetric merge it would.
(7,2),(10, 10), (19,5),(15,3)
[7,9]
[10,20]
[19,24] => merged to [10,24]
[15,18] => merged to [10,24]
so at the end
two windows [7,9][10,24] are there.

As you can see, in Flink, the gap semantic is more toward to the way that,
a gap carried by one record only affects how this record merges with future
records. e.g. a later event (T2, G2) will only be merged with (T1, G1) is
T2 is less than T1+G1, but not when T1 is less than T2 - G2. Let's call
this "forward-merge" way of handling this. I just went thought some source
code and if my understanding is incorrect about Flink's implementation,
please correct me.

On the other hand, if we want to do symmetric merge in Kafka Streams, we
can change the window definition to [start time - gap, start time + gap].
This way the example (7,2),(10, 10), (19,5),(15,3) will be
[5,9]
[0,20] => merged to [0,20]
[14,24] => merged to [0,24]
[12,18] => merged to [0,24]

 (19,5),(15,3)(7,2),(10, 10) will generate same result
[14,24]
[12,18] => merged to [12,24]
[5,9] => no merge
[0,20] => merged to [0,24]

Note that symmetric-merge would require us to change the way how Kafka
Steams fetch windows now, instead of fetching range from timestamp-gap to
timestamp+gap, we will need to fetch all windows that are not expired yet.
On the other hand, I'm not sure how this will impact the current logic of
how a window is considered as closed, because the window doesn't carry end
timestamp anymore, but end timestamp + gap.

So do you guys think forward-merge approach used by Flink makes more sense
in Kafka Streams, or symmetric-merge makes more sense? Both of them seems
to me can give deterministic result.

BTW I'll add the use case into original KIP.

Lei


On Tue, Sep 11, 2018 at 5:45 PM Matthias J. Sax 
wrote:

> Thanks for explaining your understanding. And thanks for providing more
> details about the use-case. Maybe you can add this to the KIP?
>
>
> First one general comment. I guess that my and Guozhangs understanding
> about gap/close/gracePeriod is the same as yours -- we might not have
> use the term precisely correct in previous email.
>
>
> To you semantics of gap in detail:
>
> > I thought when (15,3) is received, kafka streams look up for neighbor
> > record/window that is within the gap
> > of [15-3, 15+3], and merge if any. Previous record (10, 10) created its
> own
> > window [10, 10], which is
> > out of the gap, so nothing will be found and no merge occurs. Hence we
> have
> > two windows now in session store,
> > [10, 10] and [15, 15] respectively.
>
> If you have record (10,10), we currently create a window of size
> [10,10]. When record (15,3) arrives, your observation that the gap 3 is
> too small to be merged into [10,10] window -- however, merging is a
> symmetric operation and the existing window of [10,10] has a gap of 10
> defined: thus, 15 is close enough to fall into the gap, and (15,3) is
> merged into the existing window resulting in window [10,15].
>
> If we don't respect the gap both ways, we end up with inconsistencies if
> data is out-of-order. For example, if we use the same input record
> (10,10) and (15,3) from above, and it happens that (15,3) is processed
> first, when processing out-of-order record (10,10) we would want to
> merge both into a single window, too?
>
> Does this make sense?
>
> Now the question remains, if two records with different gap parameter
> are merged, which gap should we apply for processing/merging future
> records into the window? It seems, that we should use the gap parameter
> from the record with this larges timestamp. In the example above (15,3).
> We would use gap 3 after merging independent of the order of processing.
>
>
> > Also another thing worth mentioning is that, the session window object
> > created in current kafka streams
> > implementation doesn't have gap info, it has start and end, which is the
> > earliest and latest event timestamp
> > in that window interval, i.e for (10,10), the session window gets created
> > is 

Build failed in Jenkins: kafka-trunk-jdk10 #506

2018-09-19 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove ignored tests that hang, added new versions for EOS 
tests

[wangguoz] MINOR: increase number of unique keys for Streams EOS system test

--
[...truncated 2.91 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-09-19 Thread Lucas Wang
Hi Jun,

Thanks a lot for the detailed explanation.
I've restored the wiki to a previous version that does not require config
changes,
and keeps the current behavior with the proposed changes turned off by
default.
I'd appreciate it if you can review it again.

Thanks!
Lucas

On Tue, Sep 18, 2018 at 1:48 PM Jun Rao  wrote:

> Hi, Lucas,
>
> When upgrading to a minor release, I think the expectation is that a user
> wouldn't need to make any config changes, other than the usual
> inter.broker.protocol. If we require other config changes during an
> upgrade, then it's probably better to do that in a major release.
>
> Regarding your proposal, I think removing host/advertised_host in favor of
> listeners:advertised_listeners seems useful regardless of this KIP.
> However, that can probably wait until a major release.
>
> As for the controller listener, I am not sure if one has to set it. To make
> a cluster healthy, one sort of have to make sure that the request queue is
> never full and no request will be sitting in the request queue for long. If
> one does that, setting the controller listener may not be necessary. On the
> flip side, even if one sets the controller listener, but the request queue
> and the request time for the data part are still high, the cluster may
> still not be healthy. Given that we have already started the 2.1 release
> planning, perhaps we can start with not requiring the controller listener.
> If this is indeed something that everyone wants to set, we can make it a
> required config in a major release.
>
> Thanks,
>
> Jun
>
> On Tue, Sep 11, 2018 at 3:46 PM, Lucas Wang  wrote:
>
> > @Jun Rao 
> >
> > I made the recent config changes after thinking about the default
> behavior
> > for adopting this KIP.
> > I think there are basically two options:
> > 1. By default, the behavior proposed in this KIP is turned off, and
> > operators can turn it
> > on by adding the "controller.listener.name" config and entries in the
> > "listeners" and "advertised.listeners" list.
> > If no "controller.listener.name" is added, it'll be the *same as* the "
> > inter.broker.listener.name",
> > and the proposed feature is effectively turned off.
> > This has been the assumption in the KIP writeup before.
> >
> > 2. By default, the behavior proposed in this KIP is turned on, and
> > operators are forced to
> > recognize the proposed change if their "listeners" config is set (this is
> > most likely in production environments),
> > by allocating a new port for controller connections, and adding a new
> > endpoint to the "listeners" config.
> > For cases where "listeners" is not set explicitly,
> > there needs to be a default value for it that includes the controller
> > listener name,
> > e.g. "PLAINTEXT://:9092,CONTROLLER://:9091"
> >
> > I chose to go with option 2 since as author of this KIP,
> > I naturally think in the long run, it's worth the effort to adopt this
> > feature,
> > in order to prevent issues under circumstances listed in the motivation
> > section.
> >
> > 100, following the argument above, I want to enforce the separation
> > between controller
> > and data plane requests. Hence the "controller.listener.name" should
> > never be the same
> > as the "inter.broker.listener.name", which is intended for data plane
> > requests.
> >
> > 101, the default value for "listeners" will be
> > "PLAINTEXT://:9092,CONTROLLER://:9091",
> > making values of "host", and "port" not being used under any config
> > settings.
> > And the default value for "advertised.listeners" will be derived from
> > "listeners",
> > making the values of "advertised.host", and "advertised.port" not being
> > used any more.
> >
> > 102, for upgrading, a single broker that has "listeners" and/or
> > "advertised.listeners" set,
> > must add a new endpoint for the CONTROLLER listener name, or end up using
> > the default listeners "PLAINTEXT://:9092,CONTROLLER://:9091".
> > During rolling upgrade, in cases of  +  or
> >   + 
> > we still need to fall back to the current behavior. However after the
> > rolling upgrade is done,
> > it is guaranteed that the controller plane and data plane are separated,
> > given
> > the "controller.listener.name" must be different from "
> > inter.broker.listener.name".
> >
> > @Ismael Juma 
> > Thanks for pointing that out. I did not know that.
> > However my question is if the argument above makes sense, and my code
> > change
> > causes the configs "host", "port", "advertised.host", "advertised.port"
> to
> > be not used under any circumstance,
> > then it's no different from removing them.
> > Anyway if there is still a concern about removing them, is there a new
> > major new version
> > now or in the future where I can remove them?
> >
> > Thanks!
> > Lucas
> >
> > On Mon, Sep 10, 2018 at 1:30 PM Ismael Juma  wrote:
> >
> >> To be clear, we can only remove configs in major new versions.
> Otherwise,
> >> we can only deprecate them.
> >>
> >> Ismael
> >>
> >> On Mon, Sep 10, 2018 at 

Jenkins build is back to normal : kafka-trunk-jdk10 #505

2018-09-19 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6594) 第二次启动kafka是报错00000000000000000000.timeindex: 另一个程序正在使用此文件,进程无法访问。

2018-09-19 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6594.
--
Resolution: Not A Problem

> 第二次启动kafka是报错.timeindex: 另一个程序正在使用此文件,进程无法访问。
> -
>
> Key: KAFKA-6594
> URL: https://issues.apache.org/jira/browse/KAFKA-6594
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 1.0.0
> Environment: 环境:win10-1607 X64
> kafka:1.0.0(kafka_2.12-1.0.0)
> zookeeper:3.5.2
>Reporter: 徐兴强
>Priority: Major
>  Labels: starter, windows
> Attachments: kafka报错.png
>
>
>  
> 当我第一次运行kafka时,没有任何问题,但是当我关闭kafka(Ctrl+C)后,在第二次启动时,报错,提示.timeindex:
>  另一个程序正在使用此文件,进程无法访问。



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-347: Enable batching in FindCoordinatorRequest

2018-09-19 Thread Guozhang Wang
Hello Yishun,

I looked further into the change needed on NetworkClient, comparing them
with the changes on AdminClient / ConsumerNetworkClient, and I think it is
indeed a quite large change needed on the latter two classes if we do not
want to touch on NetworkClient.

Colin's argument against having the changes on the NetworkClient is
primarily that the NetworkClient should be ideally only be responsible for
sending request out and receiving responses, and the current implementation
already leaked a lot of the request-type related logic which makes it quite
complicated, so further adding the conversion logic on it will make things
worse. While I agree with the argument, I also feel that the changes on the
higher-level classes (AdminClient and ConsumerNetworkClient) is more
intrusive than we thought originally, which makes this optimization on the
admin client less "cost-effective".

If we want to just make this optimization on AdminClient I'm still fine
with it (in Consumer since we are only going to ask for the coordinator of
a single consumer always), or if you want to postpone this KIP that's also
okay. Personally I felt guilty to let you dive into such a big scoped task
without careful thinking beforehand than suggesting you to start with a
much smaller task to work on.

In case you want to find a smaller JIRA:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20in%20(%22newbie%2B%2B%22%2C%20newbie)%20ORDER%20BY%20assignee%20DESC%2C%20priority%20DESC


Guozhang


On Mon, Sep 17, 2018 at 4:50 PM, Yishun Guan  wrote:

> @Guozhang Wang What do you think?
> On Fri, Sep 14, 2018 at 2:39 PM Yishun Guan  wrote:
> >
> > Hi All,
> >
> > After looking into AdminClient.java and ConsumerClient.java, following
> > the original idea, I think some type specific codes are unavoidable
> > (we can have a enum class that contain a list of batch-enabled APIs).
> > As the compatibility codes that breaks down the batch, we need to
> > either map one Builder to multiple Builders, or map one request to
> > multiple requests. (I am not an expert, so I would love other's output
> > on this.) This will be an extra conditional check before building or
> > sending out a request.
> >
> > From my observation, now a batching optimization for request is only
> > needed in KafkaAdminClient (In other words, we want to replace the
> > for-loop with a batch request). That limited the scope of the
> > optimization, maybe this optimization might seem a little trivial
> > compare to the incompatible risk or inconsistency within codes that we
> > might face?
> >
> > If we are not comfortable with making it "ugly and dirty" (or I just
> > couldn't enough to come up with a balanced solution) within
> > AdminNetworkClient.java and ConsumerNetworkClient.java, we should
> > revisit this: https://github.com/apache/kafka/pull/5353 or postpone
> > this improvement?
> >
> > Thanks,
> > Yishun
> > On Thu, Sep 6, 2018 at 5:22 PM Yishun Guan  wrote:
> > >
> > > Hi Collin and Guozhang,
> > >
> > > I see. But even if the fall-back logic goes into AdminClient and
> ConsumerClient, it still have to be somehow type specific right?
> > > So either way, there will be api-specific process code somewhere?
> > >
> > > Thanks,
> > > Yishun
> > >
> > >
> > > On Tue, Sep 4, 2018 at 5:46 PM Colin McCabe  wrote:
> > > >
> > > > Hi Yishun,
> > > >
> > > > I agree with Guozhang.  NetworkClient is the wrong place to put
> things which are specific to a particular message type.  NetworkClient
> should not have to care what the type of the message is that it is sending.
> > > >
> > > > Adding type-specific handling is much more "ugly and dirty" than
> adding some compatibility code to AdminClient and ConsumerClient.  It is
> true that there is some code duplication, but I think it will be minimal in
> this case.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Tue, Sep 4, 2018, at 13:28, Guozhang Wang wrote:
> > > > > Hello Yishun,
> > > > >
> > > > > I reviewed the latest wiki page, and noticed that the special
> handling
> > > > > logic needs to be in the NetworkClient.
> > > > >
> > > > > Comparing it with another alternative way, i.e. we add the
> fall-back logic
> > > > > in the AdminClient, as well as in the ConsumerClient to capture the
> > > > > UnsupportedException and fallback, because the two of them are
> possibly
> > > > > sending FindCoordinatorRequest (though for consumers today we do
> not expect
> > > > > it to send for more than one coordinator); personally I think the
> current
> > > > > approach is better, but I'd like to hear other people's opinion as
> well
> > > > > (cc'ed Colin, who implemented the AdminClient).
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Mon, Sep 3, 2018 at 11:57 AM, Yishun Guan 
> wrote:
> > > > >
> > > > > > Hi Guozhang,
> > > > > >
> > > > > > Yes, I totally agree with you. Like I said, I think it is an
> overkill
> > > 

Build failed in Jenkins: kafka-trunk-jdk10 #504

2018-09-19 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
error: Could not read c93a9ff0cabaae0d16e0050690cf6594e8cf780d
error: Could not read b95a7edde0df10b1c567094f1fa3e5fbe3e6cd27
error: Could not read 7299e1836ba2ff9485c1256410c569e35379
remote: Counting objects: 10808, done.
remote: Compressing objects:   6% (1/15)   remote: Compressing objects: 
 13% (2/15)   remote: Compressing objects:  20% (3/15)   
remote: Compressing objects:  26% (4/15)   remote: Compressing objects: 
 33% (5/15)   remote: Compressing objects:  40% (6/15)   
remote: Compressing objects:  46% (7/15)   remote: Compressing objects: 
 53% (8/15)   remote: Compressing objects:  60% (9/15)   
remote: Compressing objects:  66% (10/15)   remote: Compressing 
objects:  73% (11/15)   remote: Compressing objects:  80% (12/15)   
remote: Compressing objects:  86% (13/15)   remote: Compressing 
objects:  93% (14/15)   remote: Compressing objects: 100% (15/15)   
remote: Compressing objects: 100% (15/15), done.
Receiving objects:   0% (1/10808)   Receiving objects:   1% (109/10808)   
Receiving objects:   2% (217/10808)   Receiving objects:   3% (325/10808)   
Receiving objects:   4% (433/10808)   Receiving objects:   5% (541/10808)   
Receiving objects:   6% (649/10808)   Receiving objects:   7% (757/10808)   
Receiving objects:   8% (865/10808)   Receiving objects:   9% (973/10808)   
Receiving objects:  10% (1081/10808)   Receiving objects:  11% (1189/10808)   
Receiving objects:  12% (1297/10808)   Receiving objects:  13% (1406/10808)   
Receiving objects:  14% (1514/10808)   Receiving objects:  15% (1622/10808)   
Receiving objects:  16% (1730/10808)   Receiving objects:  17% (1838/10808)   
Receiving objects:  18% (1946/10808)   Receiving objects:  19% (2054/10808)   
Receiving objects:  20% (2162/10808)   Receiving objects:  21% (2270/10808)   
Receiving objects:  22% (2378/10808)   Receiving objects:  23% (2486/10808)   
Receiving objects:  24% (2594/10808)   Receiving objects:  25% (2702/10808)   
Receiving objects:  26% (2811/10808)   Receiving objects:  27% (2919/10808)   
Receiving objects:  28% (3027/10808)   Receiving objects:  29% (3135/10808)   
Receiving objects:  30% (3243/10808)   Receiving objects:  31% (3351/10808)   
Receiving objects:  32% (3459/10808)   Receiving objects:  33% (3567/10808)   
Receiving objects:  34% (3675/10808)   Receiving objects:  35% (3783/10808)   
Receiving objects:  36% (3891/10808)   Receiving objects:  37% (3999/10808)   
Receiving objects:  38% (4108/10808)   Receiving objects:  39% (4216/10808)   
Receiving objects:  40% (4324/10808)   Receiving objects:  41% (4432/10808)   
Receiving objects:  42% (4540/10808)   Receiving objects:  43% (4648/10808)   
Receiving objects:  44% (4756/10808)   Receiving objects:  45% (4864/10808)   
Receiving objects:  46% (4972/10808)   Receiving objects:  

Re: Request for contributor permissions

2018-09-19 Thread Matthias J. Sax
Added wiki permission to user

doniel0614

(There is two "Daniel Huang" -- hope I picked the correct ID).


Also added you as contributor to Jira.


-Matthias


On 9/18/18 9:09 PM, DANIEL HUANG wrote:
> as title
> 
> JIRA ID: doniel0...@gmail.com
> Cwiki ID: DANIEL HUANG
> 
> Thanks!
> 



signature.asc
Description: OpenPGP digital signature


Re: ***UNCHECKED*** "Create KIP" permissions

2018-09-19 Thread Matthias J. Sax
Done.

On 9/19/18 4:59 AM, vasily.sulats...@cfm.fr wrote:
> Hi,
> 
> I'd like to create a KIP, but lack necessary permissions. Can please 
> someone, who controls the permissions, grant me KIP creation rights? My 
> wiki ID is "vasily.sulatskov".
> 



signature.asc
Description: OpenPGP digital signature


Build failed in Jenkins: kafka-trunk-jdk10 #503

2018-09-19 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git; returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2002)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1966)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:929)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:903)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:855)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H27
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
at com.sun.proxy.$Proxy118.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:876)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Re: Kafka Contributor access

2018-09-19 Thread Guozhang Wang
Hello Srinivas,

I've added you to the contributor list and assigned the ticket to you.
Thanks for your interest contributing to Kafka!!

Guozhang

On Wed, Sep 19, 2018 at 6:08 AM, Srinivas Reddy 
wrote:

> My Apache JIRA id is : mrsrinivas
>
>
>
> --
> Srinivas Reddy
>
> http://mrsrinivas.com/
>
>
> (Sent via gmail web)
>
>
> On Wed, 19 Sep 2018 at 18:23, Srinivas Reddy 
> wrote:
>
> > Hi,
> >
> > I would like to work on KAFKA-7418 as it is not assigned to anyone.
> >
> > "Assign to me" option is not available to me. Can someone help me with
> > that ?
> >
> > Thank you in advance.
> >
> > -
> > Srinivas
> >
> > - Typed on tiny keys. pls ignore typos.{mobile app}
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-370: Remove Orphan Partitions

2018-09-19 Thread xiongqi wu
Any comments?

Xiongqi (Wesley) Wu


On Mon, Sep 10, 2018 at 3:04 PM xiongqi wu  wrote:

> Here is the implementation for the KIP 370.
>
>
> https://github.com/xiowu0/kafka/commit/f1bd3085639f41a7af02567550a8e3018cfac3e9
>
>
> The purpose is to do one time cleanup (after a configured delay) of orphan
> partitions when a broker starts up.
>
>
> Xiongqi (Wesley) Wu
>
>
> On Wed, Sep 5, 2018 at 10:51 AM xiongqi wu  wrote:
>
>>
>> This KIP enables broker to remove orphan partitions automatically.
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-370%3A+Remove+Orphan+Partitions
>>
>>
>> Xiongqi (Wesley) Wu
>>
>


Re: [VOTE] KIP-354 Time-based log compaction policy

2018-09-19 Thread xiongqi wu
Any other votes or comments?

Xiongqi (Wesley) Wu


On Tue, Sep 11, 2018 at 11:45 AM xiongqi wu  wrote:

> Yes, more votes and code review.
>
> Xiongqi (Wesley) Wu
>
>
> On Mon, Sep 10, 2018 at 11:37 PM Brett Rann 
> wrote:
>
>> +1 (non binding) from on 0 then, and on the KIP.
>>
>> Where do we go from here? More votes?
>>
>> On Tue, Sep 11, 2018 at 5:34 AM Colin McCabe  wrote:
>>
>> > On Mon, Sep 10, 2018, at 11:44, xiongqi wu wrote:
>> > > Thank you for comments. I will use '0' for now.
>> > >
>> > > If we create topics through admin client in the future, we might
>> perform
>> > > some useful checks. (but the assumption is all brokers in the same
>> > cluster
>> > > have the same default configurations value, otherwise,it might still
>> be
>> > > tricky to do such cross validation check.)
>> >
>> > This isn't something that we might do in the future-- this is something
>> we
>> > are doing now. We already have Create Topic policies which are enforced
>> by
>> > the broker. Check KIP-108 and KIP-170 for details. This is one of the
>> > motivations for getting rid of direct ZK access-- making sure that these
>> > policies are applied.
>> >
>> > I agree that having different configurations on different brokers can be
>> > confusing and frustrating . That's why more configurations are being
>> made
>> > dynamic using KIP-226. Dynamic configurations are stored centrally in
>> ZK,
>> > so they are the same on all brokers (modulo propagation delays). In any
>> > case, this is a general issue, not specific to "create topics".
>> >
>> > cheers,
>> > Colin
>> >
>> >
>> > >
>> > >
>> > > Xiongqi (Wesley) Wu
>> > >
>> > >
>> > > On Mon, Sep 10, 2018 at 11:15 AM Colin McCabe 
>> > wrote:
>> > >
>> > > > I don't have a strong opinion. But I think we should probably be
>> > > > consistent with how segment.ms works, and just use 0.
>> > > >
>> > > > best,
>> > > > Colin
>> > > >
>> > > >
>> > > > On Wed, Sep 5, 2018, at 21:19, Brett Rann wrote:
>> > > > > OK thanks for that clarification. I see why you're uncomfortable
>> > with 0
>> > > > now.
>> > > > >
>> > > > > I'm not really fussed. I just prefer consistency in configuration
>> > > > options.
>> > > > >
>> > > > > Personally I lean towards treating 0 and 1 similarly in that
>> > scenario,
>> > > > > because it favours the person thinking about setting the
>> > configurations,
>> > > > > and a person doesn't care about a 1ms edge case especially when
>> the
>> > > > context
>> > > > > is the true minimum is tied to the log cleaner cadence.
>> > > > >
>> > > > > Introducing 0 to mean "disabled" because there is some uniquness
>> in
>> > > > > segment.ms not being able to be set to 0, reduces configuration
>> > > > consistency
>> > > > > in favour of capturing a MS gap in an edge case that nobody would
>> > ever
>> > > > > notice. For someone to understand why everywhere else -1 is used
>> to
>> > > > > disable, but here 0 is used, they would need to learn about
>> > segment.ms
>> > > > > having a 1ms minimum and then after learning would think "who
>> cares
>> > about
>> > > > > 1ms?" in this context. I would anyway :)
>> > > > >
>> > > > > my 2c anyway. Will again defer to majority. Curious which way
>> Colin
>> > falls
>> > > > > now.
>> > > > >
>> > > > > Don't want to spend more time on this though, It's well into
>> > > > bikeshedding
>> > > > > territory now. :)
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Thu, Sep 6, 2018 at 1:31 PM xiongqi wu 
>> > wrote:
>> > > > >
>> > > > > > I want to honor the minimum value of segment.ms (which is 1ms)
>> to
>> > > > force
>> > > > > > roll an active segment.
>> > > > > > So if we set "max.compaction.lag.ms" any value > 0, the
>> minimum of
>> > > > > > max.compaction.lag.ms and segment.ms will be used to seal an
>> > active
>> > > > > > segment.
>> > > > > >
>> > > > > > If we set max.compaction.lag.ms to 0, the current
>> implementation
>> > will
>> > > > > > treat it as disabled.
>> > > > > >
>> > > > > > It is a little bit weird to treat max.compaction.lag=0 the same
>> as
>> > > > > > max.compaction.lag=1.
>> > > > > >
>> > > > > > There might be a reason why we set the minimum of segment.ms
>> to 1,
>> > > > and I
>> > > > > > don't want to break this assumption.
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > Xiongqi (Wesley) Wu
>> > > > > >
>> > > > > >
>> > > > > > On Wed, Sep 5, 2018 at 7:54 PM Brett Rann
>> > 
>> > > > > > wrote:
>> > > > > >
>> > > > > > > You're rolling a new segment if the condition is met right? So
>> > I'm
>> > > > > > > struggling to understand the relevance of segment.ms here.
>> > Maybe an
>> > > > > > > example
>> > > > > > > would help my understanding:
>> > > > > > >
>> > > > > > > segment.ms=999
>> > > > > > > *min.cleanable.dirty.ratio=1*
>> > > > > > > max.compaction.lag.ms=1
>> > > > > > >
>> > > > > > > When a duplicate message comes in, after 1ms the topic should
>> be
>> > > > eligible
>> > > > > > > for compaction when the log compaction thread gets around to
>> 

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Ron Dagostino
Hi Tyler.  This KIP is written such that the sever (broker) specifies a
session lifetime to the client, and then the client will re-authenticate at
a time consistent with that with whatever credentials it has when the
re-authentication kicks off.  You could specify a low max session lifetime
on the broker, but then you have to make sure the client refreshes its
credentials at that rate (and there are refresh-related configs for
kerberos to helkp you do that, such as
sasl.kerberos.ticket.renew.window.factor).  But whether this would solve
your problem or not I don't know.  It certainly won't allow you to react
when the scenario occurs, but it if your session lifetime and credential
refresh window are short enough you would end up reacting relatively soon
thereafter -- but again, my knowledge of Kerberos and what is exactly going
on in your situation is limited/practically zero.  I'm in the process of
getting the PR into shape, and hopefully it will be ready in the next week
or so -- you could of course try it out at that time and see.

Ron

On Wed, Sep 19, 2018 at 1:26 PM Tyler Monahan  wrote:

> Hello,
>
> I have a rather odd issue that I think this KIP might fix but wanted to
> check. I have a kafka setup using SASL/Kerberos and when a broker dies in
> aws a new one is created using the same name/id. The Kerberos credentials
> however are different on the new instances and existing
> brokers/consumers/producers continue to try using the stored credentials
> for the old instance on the new instance which fails until everything is
> restarted to clear out stored credentials. My understanding is this KIP
> would make it so Re-Authenticate will clear out bad stored credentials. I
> am not sure if the re-authentication process would kick of when something
> fails with bad credential errors though.
>
> Tyler Monahan
>


Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Tyler Monahan
Hello,

I have a rather odd issue that I think this KIP might fix but wanted to
check. I have a kafka setup using SASL/Kerberos and when a broker dies in
aws a new one is created using the same name/id. The Kerberos credentials
however are different on the new instances and existing
brokers/consumers/producers continue to try using the stored credentials
for the old instance on the new instance which fails until everything is
restarted to clear out stored credentials. My understanding is this KIP
would make it so Re-Authenticate will clear out bad stored credentials. I
am not sure if the re-authentication process would kick of when something
fails with bad credential errors though.

Tyler Monahan


Build failed in Jenkins: kafka-trunk-jdk10 #502

2018-09-19 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git; returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2002)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1966)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:929)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:903)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:855)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H27
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
at com.sun.proxy.$Proxy118.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:876)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Contributor access

2018-09-19 Thread Srinivas Reddy
Hi,

I would like to work on KAFKA-7418 as it is not assigned to anyone.

"Assign to me" option is not available to me. Can someone help me with that
?

Thank you in advance.

-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}


Re: Kafka Contributor access

2018-09-19 Thread Srinivas Reddy
My Apache JIRA id is : mrsrinivas



--
Srinivas Reddy

http://mrsrinivas.com/


(Sent via gmail web)


On Wed, 19 Sep 2018 at 18:23, Srinivas Reddy 
wrote:

> Hi,
>
> I would like to work on KAFKA-7418 as it is not assigned to anyone.
>
> "Assign to me" option is not available to me. Can someone help me with
> that ?
>
> Thank you in advance.
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Harsha
KIP looks good. +1 (binding)

Thanks,
Harsha

On Wed, Sep 19, 2018, at 7:44 AM, Rajini Sivaram wrote:
> Hi Ron,
> 
> Thanks for the KIP!
> 
> +1 (binding)
> 
> On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin 
> wrote:
> 
> > +1 (non binding)
> >
> > > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com wrote:
> > >
> > >
> > >
> > > On 2018/09/18 14:59:09, Ron Dagostino  wrote:
> > >> Hi everyone.  I would like to start the vote for KIP-368:
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> > >>
> > >> This KIP proposes adding the ability for SASL clients (and brokers when
> > a
> > >> SASL mechanism is the inter-broker protocol) to re-authenticate their
> > >> connections to brokers and for brokers to close connections that
> > continue
> > >> to use expired sessions.
> > >>
> > >> Ron
> > >>
> > >
> > > +1 (non binding)
> >
> >


***UNCHECKED*** Re: [VOTE] KIP-361: Add Consumer Configuration to Disable Auto Topic Creation

2018-09-19 Thread Jason Gustafson
A bit late here, but I'm +1 as well. Thanks for the KIP.

On Mon, Sep 17, 2018 at 8:20 AM, Dhruvil Shah  wrote:

> Thank you for the votes and discussion, everyone. The KIP has passed with 3
> binding votes (Ismael, Gwen, Matthias) and 5 non-binding votes (Brandon,
> Bill, Manikumar, Colin, Mickael).
>
> - Dhruvil
>
> On Mon, Sep 17, 2018 at 1:59 AM Mickael Maison 
> wrote:
>
> > +1 (non-binding)
> > Thanks for the KIP!
> > On Sun, Sep 16, 2018 at 7:40 PM Matthias J. Sax 
> > wrote:
> > >
> > > +1 (binding)
> > >
> > > -Matthias
> > >
> > > On 9/14/18 4:57 PM, Ismael Juma wrote:
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Sep 14, 2018 at 4:56 PM Dhruvil Shah 
> > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> I would like to start a vote on KIP-361.
> > > >>
> > > >> Link to the KIP:
> > > >>
> > > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 361%3A+Add+Consumer+Configuration+to+Disable+Auto+Topic+Creation
> > > >>
> > > >> Thanks,
> > > >> Dhruvil
> > > >>
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7399) FunctionConversions in Streams-Scala should be private

2018-09-19 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7399.
--
Resolution: Fixed

> FunctionConversions in Streams-Scala should be private
> --
>
> Key: KAFKA-7399
> URL: https://issues.apache.org/jira/browse/KAFKA-7399
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: Joan Goyeau
>Priority: Minor
> Fix For: 2.1.0
>
>
> FunctionConversions defines several implicit conversions for internal use, 
> but the class was made public accidentally.
> We should deprecate it and replace it with an equivalent private class.
>  
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-366%3A+Make+FunctionConversions+private



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-19 Thread Jason Gustafson
+1 Thanks for the KIP

On Fri, Sep 14, 2018 at 10:24 PM, Colin McCabe  wrote:

> +1 (non-binding).
>
> Thanks, Chia-Ping.
>
> best,
>
>
> On Fri, Sep 14, 2018, at 21:01, Chia-Ping Tsai wrote:
> > hi Colin
> >
> > > I like the idea of having a close(Duration) call.  I would rather keep
> the existing close(long, TimeUnit) call to avoid breaking compatibility,
> however.  Does the KIP specify keeping the old overload?  I can't seem to
> access the wiki now.
> >
> > As Ismael said. All existing close(long, TimeUnit) aren't removed but
> > all of them are marked as "deprecated".
> >
> > --
> > Chia-Ping
> >
> > On 2018/09/14 21:06:13, Colin McCabe  wrote:
> > > Hi Chia-Ping,
> > >
> > > I like the idea of having a close(Duration) call.  I would rather keep
> the existing close(long, TimeUnit) call to avoid breaking compatibility,
> however.  Does the KIP specify keeping the old overload?  I can't seem to
> access the wiki now.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Sat, Sep 8, 2018, at 11:27, Chia-Ping Tsai wrote:
> > > > Hi All,
> > > >
> > > > I'd like to put KIP-367 to the vote.
> > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=89070496
> > > >
> > > > --
> > > > Chia-Ping
> > >
>


***UNCHECKED*** RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-19 Thread Skrzypek, Jonathan
I'm assuming this needs KIP-235 to be merged.
Unfortunately I've tripped over some merge issues with git and struggled to fix.
Hopefully this is fixed but any help appreciated : 
https://github.com/apache/kafka/pull/4485

Jonathan Skrzypek



-Original Message-
From: Eno Thereska [mailto:eno.there...@gmail.com]
Sent: 19 September 2018 11:01
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP 
addresses

+1 (non-binding).

Thanks
Eno

On Wed, Sep 19, 2018 at 10:09 AM, Rajini Sivaram 
wrote:

> Hi Edo,
>
> Thanks for the KIP!
>
> +1 (binding)
>
> On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar  wrote:
>
> > Hi All,
> >
> > I'd like to start the vote on KIP-302:
> >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=7563p3e2zaQw0AB1wrFVgyagb2IE5rTZOYPxLxfZlX4=nNmJlu1rR_QFAPdxGlafmDu9_r6eaCbPOM0NM1EHo-E=BT04yLI3iq0Sgaxa_AZcswG9jeO7NeiIcI5fe4Nm-ts=gZ6szA9kizQmSH7SlRG0xAUMVcmwzQLK8L1FqtlBd4k=
> > 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
> >
> > We'd love to get this in 2.1.0
> > Kip freeze is just a few days away ... please cast your votes  :-):-)
> >
> > Thanks!!
> > Edo
> >
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>



Your Personal Data: We may collect and process information about you that may 
be subject to data protection laws. For more information about how we use and 
disclose your personal data, how we protect your information, our legal basis 
to use your information, your rights and who you can contact, please refer to: 
www.gs.com/privacy-notices


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Rajini Sivaram
Hi Ron,

Thanks for the KIP!

+1 (binding)

On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin 
wrote:

> +1 (non binding)
>
> > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com wrote:
> >
> >
> >
> > On 2018/09/18 14:59:09, Ron Dagostino  wrote:
> >> Hi everyone.  I would like to start the vote for KIP-368:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> >>
> >> This KIP proposes adding the ability for SASL clients (and brokers when
> a
> >> SASL mechanism is the inter-broker protocol) to re-authenticate their
> >> connections to brokers and for brokers to close connections that
> continue
> >> to use expired sessions.
> >>
> >> Ron
> >>
> >
> > +1 (non binding)
>
>


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-09-19 Thread Viktor Somogyi-Vass
Congrats Dong!

On Sun, Sep 16, 2018 at 2:44 PM Matt Farmer  wrote:

> Congrats Dong!
>
> On Sat, Sep 15, 2018 at 4:40 PM Bill Bejeck  wrote:
>
> > Congrats!
> >
> > On Sat, Sep 15, 2018 at 1:27 PM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Congratulations!!
> > >
> > > El sáb., 15 sept. 2018 a las 15:18, Dongjin Lee ()
> > > escribió:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > On Sat, Sep 15, 2018 at 3:00 PM Colin McCabe 
> > wrote:
> > > >
> > > > > Congratulations, Dong Lin!
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Wed, Aug 22, 2018, at 05:26, Satish Duggana wrote:
> > > > > > Congrats Dong Lin!
> > > > > >
> > > > > > On Wed, Aug 22, 2018 at 10:08 AM, Abhimanyu Nagrath <
> > > > > > abhimanyunagr...@gmail.com> wrote:
> > > > > >
> > > > > > > Congratulations, Dong!
> > > > > > >
> > > > > > > On Wed, Aug 22, 2018 at 6:20 AM Dhruvil Shah <
> > dhru...@confluent.io
> > > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Congratulations, Dong!
> > > > > > > >
> > > > > > > > On Tue, Aug 21, 2018 at 4:38 PM Jason Gustafson <
> > > > ja...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Congrats!
> > > > > > > > >
> > > > > > > > > On Tue, Aug 21, 2018 at 10:03 AM, Ray Chiang <
> > > rchi...@apache.org
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Congrats Dong!
> > > > > > > > > >
> > > > > > > > > > -Ray
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On 8/21/18 9:33 AM, Becket Qin wrote:
> > > > > > > > > >
> > > > > > > > > >> Congrats, Dong!
> > > > > > > > > >>
> > > > > > > > > >> On Aug 21, 2018, at 11:03 PM, Eno Thereska <
> > > > > eno.there...@gmail.com>
> > > > > > > > > >>> wrote:
> > > > > > > > > >>>
> > > > > > > > > >>> Congrats Dong!
> > > > > > > > > >>>
> > > > > > > > > >>> Eno
> > > > > > > > > >>>
> > > > > > > > > >>> On Tue, Aug 21, 2018 at 7:05 AM, Ted Yu <
> > > yuzhih...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > > >>>
> > > > > > > > > >>> Congratulation Dong!
> > > > > > > > > 
> > > > > > > > >  On Tue, Aug 21, 2018 at 1:59 AM Viktor Somogyi-Vass <
> > > > > > > > >  viktorsomo...@gmail.com>
> > > > > > > > >  wrote:
> > > > > > > > > 
> > > > > > > > >  Congrats Dong! :)
> > > > > > > > > >
> > > > > > > > > > On Tue, Aug 21, 2018 at 10:09 AM James Cheng <
> > > > > > > wushuja...@gmail.com
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > >  wrote:
> > > > > > > > > 
> > > > > > > > > > Congrats Dong!
> > > > > > > > > >>
> > > > > > > > > >> -James
> > > > > > > > > >>
> > > > > > > > > >> On Aug 20, 2018, at 3:54 AM, Ismael Juma <
> > > > ism...@juma.me.uk
> > > > > >
> > > > > > > > wrote:
> > > > > > > > > >>>
> > > > > > > > > >>> Hi everyone,
> > > > > > > > > >>>
> > > > > > > > > >>> Dong Lin became a committer in March 2018. Since
> > then,
> > > he
> > > > > has
> > > > > > > > > >>>
> > > > > > > > > >> remained
> > > > > > > > > 
> > > > > > > > > > active in the community and contributed a number of
> > > > patches,
> > > > > > > > reviewed
> > > > > > > > > >>> several pull requests and participated in numerous
> > KIP
> > > > > > > > > discussions. I
> > > > > > > > > >>>
> > > > > > > > > >> am
> > > > > > > > > >
> > > > > > > > > >> happy to announce that Dong is now a member of the
> > > > > > > > > >>> Apache Kafka PM
> > > > > > > > > >>>
> > > > > > > > > >>> Congratulation Dong! Looking forward to your future
> > > > > > > > contributions.
> > > > > > > > > >>>
> > > > > > > > > >>> Ismael, on behalf of the Apache Kafka PMC
> > > > > > > > > >>>
> > > > > > > > > >>
> > > > > > > > > >>
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > *Dongjin Lee*
> > > >
> > > > *A hitchhiker in the mathematical world.*
> > > >
> > > > *github:  github.com/dongjinleekr
> > > > linkedin:
> > > kr.linkedin.com/in/dongjinleekr
> > > > slideshare:
> > > > www.slideshare.net/dongjinleekr
> > > > *
> > > >
> > >
> >
>


Re: [DISCUSS]: KIP-339: Create a new ModifyConfigs API

2018-09-19 Thread Viktor Somogyi-Vass
I think so, I'm +1 on this.

On Sat, Sep 15, 2018 at 8:14 AM Colin McCabe  wrote:

> On Wed, Aug 15, 2018, at 05:04, Viktor Somogyi wrote:
> > Hi,
> >
> > To weigh-in, I agree with Colin on the API naming, overloads shouldn't
> > change behavior. I think all of the Java APIs I've used so far followed
> > this principle and I think we shouldn't diverge.
> >
> > Also I think I have an entry about this incremental thing in KIP-248. It
> > died off a bit at voting (I guess 2.0 came quick) but I was about to
> revive
> > and restructure it a bit. If you remember it would have done something
> > similar. Back then we discussed an "incremental_update" flag would have
> > been sufficient to keep backward compatibility with the protocol. Since
> > here you designed a new protocol I think I'll remove this bit from my KIP
> > and also align the other parts/namings to yours so we'll have a more
> > unified interface on this front.
> >
> > At last, one minor comment: is throttling a part of your protocol
> similarly
> > to alterConfigs?
>
> It should cover the same ground as alterConfigs, basically.
>
> Does it make sense to start a VOTE thread on this?
>
> best,
> Colin
>
> >
> > Viktor
> >
> >
> > On Fri, Jul 20, 2018 at 8:05 PM Colin McCabe  wrote:
> >
> > > I updated the KIP.
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+IncrementalAlterConfigs+API
> > >
> > > Updates:
> > > * Use "incrementalAlterConfigs" rather than "modifyConfigs," for
> > > consistency with the other "alter" APIs.
> > > * Implement Magnus' idea of supporting "append" and "subtract" on
> > > configuration keys that contain lists.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, Jul 16, 2018, at 14:12, Colin McCabe wrote:
> > > > Hi Magnus,
> > > >
> > > > Thanks for taking a look.
> > > >
> > > > On Mon, Jul 16, 2018, at 11:43, Magnus Edenhill wrote:
> > > > > Thanks for driving this KIP, Colin.
> > > > >
> > > > > I agree with Dong that a new similar modifyConfigs API (and
> protocol
> > > API)
> > > > > is confusing and that
> > > > > we should try to extend the current alterConfigs interface to
> support
> > > the
> > > > > incremental mode instead,
> > > > > deprecating the non-incremental mode in the process.
> > > >
> > > > In the longer term, I think that the non-incremental mode should
> > > > definitely go away, and not be an option at all.  That's why I don't
> > > > think of this KIP as "adding more options  to AlterConfigs" but as
> > > > getting rid of a broken API.  I've described a lot of reasons why
> non-
> > > > incremental mode is broken.  I've also described why the brokenness
> is
> > > > subtle and an easy trap for newbies to fall into.  Hopefully everyone
> > > > agrees that getting rid of non-incremental mode completely should be
> the
> > > > eventual goal.
> > > >
> > > > I do not think that having a different name for modifyConfigs is
> > > > confusing.  "Deleting all the configs and then setting some
> designated
> > > > ones" is a very different operation from "modifying some
> > > > configurations".  Giving the two operations different names expresses
> > > > the fact  that they really are very different.  Would it be less
> > > > confusing if the new function were called alterConfigsIncremental
> rather
> > > > than modifyConfigs?
> > > >
> > > > I think it's important to have a new name for the new function.  If
> the
> > > > names are the same, how can we even explain to users which API they
> > > > should or should not use?  "Use the three argument overload, or the
> two
> > > > argument overload where the Options class is not the final parameter"
> > > > That is not user-friendly.
> > > >
> > > > You could say that some of the overloads would be deprecated.
> However,
> > > > my experience as a Hadoop developer is that most users simply don't
> care
> > > > about deprecation warnings.  They will use autocomplete in their IDEs
> > > > and use whatever function seems to have the parameters they need.
> > > > Hadoop and Kafka themselves use plenty of deprecated APIs.  But
> somehow
> > > > we expect that our users have much more respect for @deprecated than
> we
> > > > ourselves do.
> > > >
> > > > I would further argue that function overloads in Java are intended to
> > > > provide additional parameters, not to fundamentally change the
> semantics
> > > > of a function.  If you have two functions int addTwoNumbers(int a,
> int
> > > > b) and int addTwoNumbers(int a, int b, boolean verbose), they should
> > > > both add together two numbers.  And a user should be able to expect
> that
> > > > the plain old addTwoNumbers is equivalent to either
> > > > addTwoNumbers(verbose=true) or addTwoNumbers(verbose=false), not a
> > > > totally different operation.
> > > >
> > > > Every time programmers violate this contract, it inevitably leads to
> > > > misunderstanding.  One example is how in HDFS there are multiple
> > > > function overloads for renaming a file.  Depending on 

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Ron Dagostino
Thanks, Rajini -- I updated the KIP to fix this.

Ron

On Wed, Sep 19, 2018 at 4:54 AM Rajini Sivaram 
wrote:

> I should have said `security configs` instead of `channel configs`.
>
> The KIP says:
>
>- The configuration option this KIP proposes to add to enable
>server-side expired-connection-kill is '*connections.max.reauth.ms
>*' (not prefixed with listener prefix
>or SASL mechanism name – this is a single value for the cluster)
>- The '*connections.max.reauth.ms *'
>configuration option will not be dynamically changeable; restarts will
> be
>required if the value is to be changed.  However, if a new listener is
>dynamically added, the value could be set for that listener at that
> time.
>
> Those statements are contradictory. Perhaps the first one should say `it
> may be optionally prefixed with the listener name`?
>
> On Tue, Sep 18, 2018 at 3:55 PM, Ron Dagostino  wrote:
>
> > HI Rajini.  The KIP is updated as summarized below, and I will start a
> vote
> > immediately.
> >
> > << > Ok, agreed.  I called it expired-connections-killed-total
> >
> > << > << > Ok, agreed.  I kept existing metrics unchanged but added an additional
> tag
> > to the V0 metrics so they are separate.
> >
> > << > <<<(rate/total with success/failure). Perhaps just success/total is
> > sufficient?
> > Ok, agreed, just kept the successful total.
> >
> > << or
> > << > Ok, agreed, the config is now cluster-wide.
> >
> > << > << > Not sure what this is referring to.  We don't have channel configs here,
> > right?
> >
> > << > << > << > << > Yes, I was planning on that optimization; agreed, I removed it from the
> > list
> >
> > << > << > << > for
> > << > << I
> > << implemented
> > at
> > << > Ok, agreed
> >
> >  Thanks again for all the feedback and discussion.
> >
> > Ron
> >
> > On Tue, Sep 18, 2018 at 6:43 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron,
> > >
> > > Thanks for the updates. The KIP looks good. A few comments and minor
> > points
> > > below, but feel free to start vote to try and get it into 2.1.0. More
> > > community feedback will be really useful.
> > >
> > > 1) It may be useful to have a metric of expired connections killed by
> the
> > > broker. There could be a client implementation that doesn't support
> > > re-authentications, but happens to use the latest version of
> > > SaslAuthenticateRequest. Or cases where re-authentication didn't happen
> > on
> > > time.
> > >
> > > 2) For `successful-v0-authentication-{rate,total}`, we probably want
> > > version as a tag rather in the name. Not sure if we need four of these
> > > (rate/total with success/failure). Perhaps just success/total is
> > > sufficient?
> > >
> > > 3) For the session lifetime config, we don't need to require a listener
> > or
> > > mechanism prefix. In most cases, we would expect a single config on the
> > > broker-side. For all channel configs, we allow an optional listener
> > prefix,
> > > so we should do the same here.
> > >
> > > 4) The KIP says connections are terminated on requests not related to
> > > re-authentication (ApiVersionsRequest, SaslHandshakeRequest, and
> > > SaslAuthenticateRequest). We can skip for ApiVersionsRequest for
> > > re-authentication, so that doesn't need to be in the list.
> > >
> > > 5) The KIP says that the new config will not be dynamically updatable.
> We
> > > have a very limited set of configs that are dynamically updatable for
> an
> > > existing listener. And we don't want to add this config to the list
> since
> > > we don't expect this value to change frequently. But we allow new
> > listeners
> > > to be added dynamically and all configs for the listener can be added
> > > dynamically (with the listener prefix). I think we want to allow that
> for
> > > this config (i.e. add a new OAuth listener with re-authentication
> > enabled).
> > > We should mention this in the KIP, though in terms of implementation, I
> > > would leave that for a separate JIRA (it doesn't need to be implemented
> > at
> > > the same time).
> > >
> > >
> > >
> > > On Tue, Sep 18, 2018 at 3:06 AM, Ron Dagostino 
> > wrote:
> > >
> > > > HI again, Rajini.  Would we ever want the max session time to be
> > > different
> > > > across different SASL mechanisms?  I'm wondering, now that we are
> > > > supporting all SASL mechanisms via this KIP, if we still need to
> prefix
> > > > this config with the "[listener].[mechanism]." prefix.  I've kept the
> > > > prefix in the KIP for now, but it would be easier to just set it once
> > for
> > > > all mechanisms, and I don't see that as being a problem.  Let me know
> > > what
> > > > you think.
> > > >
> > > > Ron
> > > >
> > > > On Mon, Sep 17, 2018 at 9:51 PM Ron Dagostino 
> > wrote:
> > > >
> > > > > Hi Rajini.  The KIP is updated.  Aside from a once-over to make
> sure
> > it
> > > > is
> > > > > all accurate, I think we need to confirm the metrics.  The decision
> > to
> 

[VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-09-19 Thread Manikumar
Hi All,

I would like to start voting on KIP-371, which adds a configuration option
for building custom SSL principal names.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-371%3A+Add+a+configuration+to+build+custom+SSL+principal+name

Discussion Thread:
https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1ed54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E

Thanks,
Manikumar


***UNCHECKED*** "Create KIP" permissions

2018-09-19 Thread Vasily . Sulatskov
Hi,

I'd like to create a KIP, but lack necessary permissions. Can please 
someone, who controls the permissions, grant me KIP creation rights? My 
wiki ID is "vasily.sulatskov".

-- 
Best regards,
Vasily Sulatskov


[DISCUSS] KIP-373: Add '--help' option to all available Kafka CLI commands

2018-09-19 Thread Attila Sasvári
Hi all,

I have just created a KIP to add '--help' option to all available Kafka CLI
commands:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-373%3A+Add+%27--help%27+option+to+all+available+Kafka+CLI+commands

Tracking JIRA: https://issues.apache.org/jira/browse/KAFKA-7418

Regards,
- Attila


Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-19 Thread Eno Thereska
+1 (non-binding).

Thanks
Eno

On Wed, Sep 19, 2018 at 10:09 AM, Rajini Sivaram 
wrote:

> Hi Edo,
>
> Thanks for the KIP!
>
> +1 (binding)
>
> On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar  wrote:
>
> > Hi All,
> >
> > I'd like to start the vote on KIP-302:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
> >
> > We'd love to get this in 2.1.0
> > Kip freeze is just a few days away ... please cast your votes  :-):-)
> >
> > Thanks!!
> > Edo
> >
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>


Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-19 Thread Manikumar
Thanks for the reviews.

Makes sense. I don't see any downsides. Updated the KIP, to go with Option
2.
I will be starting the vote thread soon.

Thanks,


On Wed, Sep 19, 2018 at 2:17 PM Rajini Sivaram 
wrote:

> Hi Manikumar,
>
> Unless there is a downside to option 2 (e.g will it impose character limits
> in the DN), it may be better to go with option 2 for consistency and
> flexibility.
>
> On Wed, Sep 19, 2018 at 5:11 AM, Harsha  wrote:
>
> > Thanks. I am also leaning towards option 2, as it will help the
> > consistency of expressing such mapping between sasl and ssl.
> > -Harsha
> >
> > On Tue, Sep 18, 2018, at 8:27 PM, Manikumar wrote:
> > > Hi Harsha,
> > >
> > > Thanks for the review. Yes, As mentioned on the motivation section,
> this
> > is
> > > to simply extracting fields from the certificates
> > > for the common use cases. Yes, we are not supporting extracting
> > > SubjectAltName using this KIP.
> > >
> > > Thanks,
> > >
> > >
> > > On Wed, Sep 19, 2018 at 8:29 AM Harsha  wrote:
> > >
> > > > Hi Manikumar,
> > > > I am interested to know the reason for exposing this config,
> > given
> > > > a user has access to PrincipalBuilder interface to build their
> > > > interpretation of an identity from the X509 certificates. Is this to
> > > > simplify extraction of identity? and also there are other use cases
> > where
> > > > user's will extract SubjectAltName to construct the identity I guess
> > thats
> > > > not going to supported by this method.
> > > >
> > > > Thanks,
> > > > Harsha
> > > >
> > > > On Tue, Sep 18, 2018, at 8:25 AM, Manikumar wrote:
> > > > > Hi Rajini,
> > > > >
> > > > > I don't have strong reasons for rejecting Option 2. I just felt
> > Option 1
> > > > is
> > > > > sufficient for
> > > > > the common use-cases (extracting single field, like CN etc..).
> > > > >
> > > > > We are open to go with Option 2, for more flexible mapping
> mechanism.
> > > > > Let us know, your preference.
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > > On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Manikumar,
> > > > > >
> > > > > > It wasn't entirely clear to me why Option 2 was rejected.
> > > > > >
> > > > > > On Tue, Sep 18, 2018 at 7:47 AM, Manikumar <
> > manikumar.re...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > We would like to go with Option 1, which adds a new
> configuration
> > > > > > parameter
> > > > > > > pair of the form:
> > > > > > > ssl.principal.mapping.pattern, ssl.principal.mapping.value.
> This
> > will
> > > > > > > fulfill the requirement for most of the common use cases.
> > > > > > >
> > > > > > > We would like to include the KIP in the upcoming release. If
> > there no
> > > > > > > concerns, would like to start vote on this KIP.
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah <
> > ps...@hortonworks.com
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Definitely a helpful change. +1 for Option 2.
> > > > > > > >
> > > > > > > > On 9/14/18, 10:52 AM, "Manikumar" <
> manikumar.re...@gmail.com>
> > > > wrote:
> > > > > > > >
> > > > > > > > Hi Eno,
> > > > > > > >
> > > > > > > > Thanks for the review.
> > > > > > > >
> > > > > > > > Most often users want to extract one of the field (eg.
> > CN). CN
> > > > is
> > > > > > the
> > > > > > > > commonly used field.
> > > > > > > > For this simple change, users need to build and maintain
> > the
> > > > custom
> > > > > > > > principal builder class
> > > > > > > > and also package and deploy to the all brokers. Having
> > > > configurable
> > > > > > > > rules
> > > > > > > > will be useful.
> > > > > > > >
> > > > > > > > Proposed mapping rules works on string representation of
> > the
> > > > X.500
> > > > > > > > distinguished name(RFC2253 format) [1].
> > > > > > > > Mapping rules can use the attribute types keywords
> defined
> > in
> > > > RFC
> > > > > > > 2253
> > > > > > > > (CN,
> > > > > > > > L, ST, O, OU, C, STREET, DC, UID).
> > > > > > > >
> > > > > > > > Any additional/custom attribute types are emitted as
> OIDs.
> > To
> > > > emit
> > > > > > > > additional attribute type keys,
> > > > > > > > we need to have OID -> attribute type keyword String
> > > > mapping.[2]
> > > > > > > >
> > > > > > > > For example, String representation of
> > X500Principal("CN=Duke,
> > > > > > > > OU=JavaSoft,
> > > > > > > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > > > > > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > > > > > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > > > > > > 160d7465737440746573742e636f6d"
> > > > > > > >
> > > > > > > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > > > > > > "emailAddress"),
> > > > > > > > the string will be
> > > > > > > > 

Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-19 Thread Rajini Sivaram
Hi Edo,

Thanks for the KIP!

+1 (binding)

On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar  wrote:

> Hi All,
>
> I'd like to start the vote on KIP-302:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
>
> We'd love to get this in 2.1.0
> Kip freeze is just a few days away ... please cast your votes  :-):-)
>
> Thanks!!
> Edo
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-19 Thread Rajini Sivaram
I should have said `security configs` instead of `channel configs`.

The KIP says:

   - The configuration option this KIP proposes to add to enable
   server-side expired-connection-kill is '*connections.max.reauth.ms
   *' (not prefixed with listener prefix
   or SASL mechanism name – this is a single value for the cluster)
   - The '*connections.max.reauth.ms *'
   configuration option will not be dynamically changeable; restarts will be
   required if the value is to be changed.  However, if a new listener is
   dynamically added, the value could be set for that listener at that time.

Those statements are contradictory. Perhaps the first one should say `it
may be optionally prefixed with the listener name`?

On Tue, Sep 18, 2018 at 3:55 PM, Ron Dagostino  wrote:

> HI Rajini.  The KIP is updated as summarized below, and I will start a vote
> immediately.
>
> << Ok, agreed.  I called it expired-connections-killed-total
>
> << << Ok, agreed.  I kept existing metrics unchanged but added an additional tag
> to the V0 metrics so they are separate.
>
> << <<<(rate/total with success/failure). Perhaps just success/total is
> sufficient?
> Ok, agreed, just kept the successful total.
>
> << << Ok, agreed, the config is now cluster-wide.
>
> << << Not sure what this is referring to.  We don't have channel configs here,
> right?
>
> << << << << Yes, I was planning on that optimization; agreed, I removed it from the
> list
>
> << << << for
> << << << at
> << Ok, agreed
>
>  Thanks again for all the feedback and discussion.
>
> Ron
>
> On Tue, Sep 18, 2018 at 6:43 AM Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > Thanks for the updates. The KIP looks good. A few comments and minor
> points
> > below, but feel free to start vote to try and get it into 2.1.0. More
> > community feedback will be really useful.
> >
> > 1) It may be useful to have a metric of expired connections killed by the
> > broker. There could be a client implementation that doesn't support
> > re-authentications, but happens to use the latest version of
> > SaslAuthenticateRequest. Or cases where re-authentication didn't happen
> on
> > time.
> >
> > 2) For `successful-v0-authentication-{rate,total}`, we probably want
> > version as a tag rather in the name. Not sure if we need four of these
> > (rate/total with success/failure). Perhaps just success/total is
> > sufficient?
> >
> > 3) For the session lifetime config, we don't need to require a listener
> or
> > mechanism prefix. In most cases, we would expect a single config on the
> > broker-side. For all channel configs, we allow an optional listener
> prefix,
> > so we should do the same here.
> >
> > 4) The KIP says connections are terminated on requests not related to
> > re-authentication (ApiVersionsRequest, SaslHandshakeRequest, and
> > SaslAuthenticateRequest). We can skip for ApiVersionsRequest for
> > re-authentication, so that doesn't need to be in the list.
> >
> > 5) The KIP says that the new config will not be dynamically updatable. We
> > have a very limited set of configs that are dynamically updatable for an
> > existing listener. And we don't want to add this config to the list since
> > we don't expect this value to change frequently. But we allow new
> listeners
> > to be added dynamically and all configs for the listener can be added
> > dynamically (with the listener prefix). I think we want to allow that for
> > this config (i.e. add a new OAuth listener with re-authentication
> enabled).
> > We should mention this in the KIP, though in terms of implementation, I
> > would leave that for a separate JIRA (it doesn't need to be implemented
> at
> > the same time).
> >
> >
> >
> > On Tue, Sep 18, 2018 at 3:06 AM, Ron Dagostino 
> wrote:
> >
> > > HI again, Rajini.  Would we ever want the max session time to be
> > different
> > > across different SASL mechanisms?  I'm wondering, now that we are
> > > supporting all SASL mechanisms via this KIP, if we still need to prefix
> > > this config with the "[listener].[mechanism]." prefix.  I've kept the
> > > prefix in the KIP for now, but it would be easier to just set it once
> for
> > > all mechanisms, and I don't see that as being a problem.  Let me know
> > what
> > > you think.
> > >
> > > Ron
> > >
> > > On Mon, Sep 17, 2018 at 9:51 PM Ron Dagostino 
> wrote:
> > >
> > > > Hi Rajini.  The KIP is updated.  Aside from a once-over to make sure
> it
> > > is
> > > > all accurate, I think we need to confirm the metrics.  The decision
> to
> > > not
> > > > reject authentications that use tokens with too-long a lifetime
> allowed
> > > the
> > > > metrics to be simpler.  I decided that in addition to tracking these
> > > > metrics on the broker:
> > > >
> > > > failed-reauthentication-{rate,total} and
> > > > successful-reauthentication-{rate,total}
> > > >
> > > > we simply need one more set of broker metrics to track the subset of
> > > > clients clients that 

***UNCHECKED*** Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-19 Thread Rajini Sivaram
Hi Manikumar,

Unless there is a downside to option 2 (e.g will it impose character limits
in the DN), it may be better to go with option 2 for consistency and
flexibility.

On Wed, Sep 19, 2018 at 5:11 AM, Harsha  wrote:

> Thanks. I am also leaning towards option 2, as it will help the
> consistency of expressing such mapping between sasl and ssl.
> -Harsha
>
> On Tue, Sep 18, 2018, at 8:27 PM, Manikumar wrote:
> > Hi Harsha,
> >
> > Thanks for the review. Yes, As mentioned on the motivation section, this
> is
> > to simply extracting fields from the certificates
> > for the common use cases. Yes, we are not supporting extracting
> > SubjectAltName using this KIP.
> >
> > Thanks,
> >
> >
> > On Wed, Sep 19, 2018 at 8:29 AM Harsha  wrote:
> >
> > > Hi Manikumar,
> > > I am interested to know the reason for exposing this config,
> given
> > > a user has access to PrincipalBuilder interface to build their
> > > interpretation of an identity from the X509 certificates. Is this to
> > > simplify extraction of identity? and also there are other use cases
> where
> > > user's will extract SubjectAltName to construct the identity I guess
> thats
> > > not going to supported by this method.
> > >
> > > Thanks,
> > > Harsha
> > >
> > > On Tue, Sep 18, 2018, at 8:25 AM, Manikumar wrote:
> > > > Hi Rajini,
> > > >
> > > > I don't have strong reasons for rejecting Option 2. I just felt
> Option 1
> > > is
> > > > sufficient for
> > > > the common use-cases (extracting single field, like CN etc..).
> > > >
> > > > We are open to go with Option 2, for more flexible mapping mechanism.
> > > > Let us know, your preference.
> > > >
> > > > Thanks,
> > > >
> > > >
> > > > On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Manikumar,
> > > > >
> > > > > It wasn't entirely clear to me why Option 2 was rejected.
> > > > >
> > > > > On Tue, Sep 18, 2018 at 7:47 AM, Manikumar <
> manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > We would like to go with Option 1, which adds a new configuration
> > > > > parameter
> > > > > > pair of the form:
> > > > > > ssl.principal.mapping.pattern, ssl.principal.mapping.value. This
> will
> > > > > > fulfill the requirement for most of the common use cases.
> > > > > >
> > > > > > We would like to include the KIP in the upcoming release. If
> there no
> > > > > > concerns, would like to start vote on this KIP.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah <
> ps...@hortonworks.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Definitely a helpful change. +1 for Option 2.
> > > > > > >
> > > > > > > On 9/14/18, 10:52 AM, "Manikumar" 
> > > wrote:
> > > > > > >
> > > > > > > Hi Eno,
> > > > > > >
> > > > > > > Thanks for the review.
> > > > > > >
> > > > > > > Most often users want to extract one of the field (eg.
> CN). CN
> > > is
> > > > > the
> > > > > > > commonly used field.
> > > > > > > For this simple change, users need to build and maintain
> the
> > > custom
> > > > > > > principal builder class
> > > > > > > and also package and deploy to the all brokers. Having
> > > configurable
> > > > > > > rules
> > > > > > > will be useful.
> > > > > > >
> > > > > > > Proposed mapping rules works on string representation of
> the
> > > X.500
> > > > > > > distinguished name(RFC2253 format) [1].
> > > > > > > Mapping rules can use the attribute types keywords defined
> in
> > > RFC
> > > > > > 2253
> > > > > > > (CN,
> > > > > > > L, ST, O, OU, C, STREET, DC, UID).
> > > > > > >
> > > > > > > Any additional/custom attribute types are emitted as OIDs.
> To
> > > emit
> > > > > > > additional attribute type keys,
> > > > > > > we need to have OID -> attribute type keyword String
> > > mapping.[2]
> > > > > > >
> > > > > > > For example, String representation of
> X500Principal("CN=Duke,
> > > > > > > OU=JavaSoft,
> > > > > > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > > > > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > > > > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > > > > > 160d7465737440746573742e636f6d"
> > > > > > >
> > > > > > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > > > > > "emailAddress"),
> > > > > > > the string will be
> > > > > > > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > > > > > > t...@test.com"
> > > > > > >
> > > > > > > Since we are not passing this mapping, we can not extarct
> using
> > > > > > > additional
> > > > > > > attribute type keyword string.
> > > > > > > If the user want to extract additional attribute keys, we
> need
> > > to
> > > > > > write
> > > > > > > custom principal builder class.
> > > > > > >
> > > > > > > Hope the above helps. Update the KIP.
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > > > > > >

[jira] [Created] (KAFKA-7423) Kafka compact topic data missing.

2018-09-19 Thread Hanlin Liu (JIRA)
Hanlin Liu created KAFKA-7423:
-

 Summary: Kafka compact topic data missing.
 Key: KAFKA-7423
 URL: https://issues.apache.org/jira/browse/KAFKA-7423
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.2.0
Reporter: Hanlin Liu


We are using a single node kafka and schema registry with version 0.10.2.
We are experiencing data lost for _schemas topic which is a compact topic used 
by schema registry for storage.
 
There are only 4 messages in the topic and we can see a lot of (50+) subjects 
from schema registry rest service \{schema registry}:8081/subjects.
The topic is configured to compact cleanup policy:
Topic:_schemas PartitionCount:1 ReplicationFactor:1 
Configs:cleanup.policy=compact
Topic: _schemas Partition: 0 Leader: 1001 Replicas: 1001 Isr: 1001
 
The broker retention hours is set to 120 hours and the first message in the 
topic was written 3 days ago.
 
We can see the data from the schema registry rest service, we believe schema 
registry didn't delete the data.
So we are suspecting kafka broker retentioned the data for some reason. 
 
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


***UNCHECKED*** [jira] [Created] (KAFKA-7421) Deadlock in Kafka Connect

2018-09-19 Thread JIRA
Maciej Bryński created KAFKA-7421:
-

 Summary: Deadlock in Kafka Connect
 Key: KAFKA-7421
 URL: https://issues.apache.org/jira/browse/KAFKA-7421
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.0.0
Reporter: Maciej Bryński


I'm getting this deadlock on half of Kafka Connect runs.
Thread 1:
{code}
"pool-22-thread-2@4748" prio=5 tid=0x4d nid=NA waiting for monitor entry
  java.lang.Thread.State: BLOCKED
 waiting for pool-22-thread-1@4747 to release lock on <0x1423> (a 
org.apache.kafka.connect.runtime.isolation.PluginClassLoader)
  at 
org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:91)
  at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:367)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at 
org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:715)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:295)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.(ConnectorConfig.java:200)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.(ConnectorConfig.java:194)
  at 
org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:233)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:916)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:111)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:932)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:928)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}

Thread 2:
{code}
"pool-22-thread-1@4747" prio=5 tid=0x4c nid=NA waiting for monitor entry
  java.lang.Thread.State: BLOCKED
 blocks pool-22-thread-2@4748
 waiting for pool-22-thread-2@4748 to release lock on <0x1421> (a 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:406)
  at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:358)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
  - locked <0x1424> (a java.lang.Object)
  at 
org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
  - locked <0x1423> (a 
org.apache.kafka.connect.runtime.isolation.PluginClassLoader)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at 
io.debezium.transforms.ByLogicalTableRouter.(ByLogicalTableRouter.java:57)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at 
org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:715)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:295)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.(ConnectorConfig.java:200)
  at 
org.apache.kafka.connect.runtime.ConnectorConfig.(ConnectorConfig.java:194)
  at 
org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:233)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:916)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:111)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:932)
  at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:928)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}

I'm using official Confluent Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7422) Huge memory consumption (leak?) by Kafka Producer

2018-09-19 Thread Shantanu Deshmukh (JIRA)
Shantanu Deshmukh created KAFKA-7422:


 Summary: Huge memory consumption (leak?) by Kafka Producer
 Key: KAFKA-7422
 URL: https://issues.apache.org/jira/browse/KAFKA-7422
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.1.0
 Environment: RedHat Linux 7.3
Reporter: Shantanu Deshmukh
 Attachments: heapdump.20180918.111423.4587876.0002.phd

Hello,
 
We have a 3 broker Kafka 0.10.1.0 deployment in production. There are some 
applications which have Kafka Producers embedded in them which send application 
logs to a topic. This topic has 10 partitions with replication factor of 3.
 
We are observing that memory usage on some of these application servers keep 
shooting through the roof intermittently. After taking heapdump we found out 
that top suspects were:
*-*

*org.apache.kafka.common.network.Selector -*

occupies *352,519,104 (24.96%)* bytes. The memory is accumulated in one 
instance of *"byte[]"* loaded by *""*.
*org.apache.kafka.common.network.KafkaChannel -*
occupies *352,527,424 (24.96%)* bytes. The memory is accumulated in one 
instance of *"byte[]"* loaded by *""*

*-*

Both of these were holding about 352MB of space. 3 such instances, so they were 
consuming about 1.2GB of memory.

Now regarding usage of producers. Not a huge amount of logs are being sent to 
Kafka cluster. It is about 200 msgs/sec. Only one producer object is being used 
throughout application. Async send function is used.

What could be the cause of such huge memory usage? Is this some sort of memory 
leak in this specific Kafka version?

Here's producer config being used at the present.

{{}}
{{kafka.bootstrap.servers=x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092}}
{{ kafka.acks=0}}
{{ kafka.key.serializer=org.apache.kafka.common.serialization.StringSerializer}}
{{ 
kafka.value.serializer=org.apache.kafka.common.serialization.StringSerializer}}
{{[kafka.max.block.ms|http://kafka.max.block.ms/]=1000}}
{{[kafka.request.timeout.ms|http://kafka.request.timeout.ms/]=1000}}
{{ kafka.max.in.flight.requests.per.connection=1}}
{{ kafka.retries=0}}
{{ kafka.compression.type=gzip}}
{{ kafka.security.protocol=SSL}}
{{ kafka.ssl.truststore.location=/data/kafka/kafka-server-truststore.jks}}
{{ kafka.ssl.truststore.password=XX}}
{{ logger.level=INFO }}{{<<}}
Attaching heapdump for your perusal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)