[jira] [Commented] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read

2018-12-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725712#comment-16725712
 ] 

点儿郎当 commented on KAFKA-7757:
-

Thank you for your reply. Could we use a TCP/IP proxy to relieve the broker's I/O 
pressure? By the way, is the I/O you monitor exposed via JMX?
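On the JMX question: independent of Kafka's own metrics, the JVM itself exposes its open file-descriptor count through the platform MBean, so a graph like the attached kafka-allocated-file-handles.png can be reproduced by polling it. A minimal sketch (the class name and structure are illustrative, not from this thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    // Returns the JVM's current open file-descriptor count,
    // or -1 on platforms where the Unix MXBean is not available.
    public static long openFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            return ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds());
    }
}
```

The same attribute (`OpenFileDescriptorCount` on `java.lang:type=OperatingSystem`) can be read remotely over JMX, e.g. from jconsole.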

> Too many open files after java.io.IOException: Connection to n was 
> disconnected before the response was read
> 
>
> Key: KAFKA-7757
> URL: https://issues.apache.org/jira/browse/KAFKA-7757
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Pedro Gontijo
>Priority: Major
> Attachments: kafka-allocated-file-handles.png, server.properties, 
> td1.txt, td2.txt, td3.txt
>
>
We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers).
After a while (hours), 2 brokers start to throw:
> {code:java}
> java.io.IOException: Connection to NN was disconnected before the response 
> was read
> at 
> org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97)
> at 
> kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97)
> at 
> kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190)
> at 
> kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129)
> at scala.Option.foreach(Option.scala:257)
> at 
> kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> {code}
File descriptors start to pile up, and if I do not restart the broker it throws "Too many 
open files" and crashes.  
> {code:java}
> ERROR Error while accepting connection (kafka.network.Acceptor)
> java.io.IOException: Too many open files in system
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
> at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
> at kafka.network.Acceptor.accept(SocketServer.scala:460)
> at kafka.network.Acceptor.run(SocketServer.scala:403)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
>  After some hours the issue happens again... It has happened with all 
> brokers, so it is not something specific to an instance.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7742) DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API.

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7742.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6037
[https://github.com/apache/kafka/pull/6037]

> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using removeToken(String tokenId) API.
> 
>
> Key: KAFKA-7742
> URL: https://issues.apache.org/jira/browse/KAFKA-7742
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
> Fix For: 2.2.0
>
>
> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using `removeToken(String tokenId)`[1] API.
> 1) 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java#L84





[jira] [Commented] (KAFKA-7742) DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API.

2018-12-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725749#comment-16725749
 ] 

ASF GitHub Bot commented on KAFKA-7742:
---

omkreddy closed pull request #6037: KAFKA-7742: Fixed removing hmac entry for a 
token being removed from DelegationTokenCache
URL: https://github.com/apache/kafka/pull/6037
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java b/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java
index a74781f0e13..9cc913f5750 100644
--- a/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java
+++ b/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java
@@ -32,10 +32,15 @@
 public class DelegationTokenCache {
 
     private CredentialCache credentialCache = new CredentialCache();
+
     //Cache to hold all the tokens
     private Map<String, TokenInformation> tokenCache = new ConcurrentHashMap<>();
+
     //Cache to hold hmac->tokenId mapping. This is required for renew, expire requests
-    private Map<String, String> hmacIDCache = new ConcurrentHashMap<>();
+    private Map<String, String> hmacTokenIdCache = new ConcurrentHashMap<>();
+
+    //Cache to hold tokenId->hmac mapping. This is required for removing entry from hmacTokenIdCache using tokenId.
+    private Map<String, String> tokenIdHmacCache = new ConcurrentHashMap<>();
 
     public DelegationTokenCache(Collection<String> scramMechanisms) {
         //Create caches for scramMechanisms
@@ -60,17 +65,21 @@ public void updateCache(DelegationToken token, Map scra
         //Update Scram Credentials
         updateCredentials(tokenId, scramCredentialMap);
         //Update hmac-id cache
-        hmacIDCache.put(hmac, tokenId);
+        hmacTokenIdCache.put(hmac, tokenId);
+        tokenIdHmacCache.put(tokenId, hmac);
     }
 
-
     public void removeCache(String tokenId) {
         removeToken(tokenId);
-        updateCredentials(tokenId, new HashMap<String, ScramCredential>());
+        updateCredentials(tokenId, new HashMap<>());
+    }
+
+    public String tokenIdForHmac(String base64hmac) {
+        return hmacTokenIdCache.get(base64hmac);
     }
 
     public TokenInformation tokenForHmac(String base64hmac) {
-        String tokenId = hmacIDCache.get(base64hmac);
+        String tokenId = hmacTokenIdCache.get(base64hmac);
         return tokenId == null ? null : tokenCache.get(tokenId);
     }
 
@@ -81,7 +90,10 @@ public TokenInformation addToken(String tokenId, TokenInformation tokenInfo) {
     public void removeToken(String tokenId) {
         TokenInformation tokenInfo = tokenCache.remove(tokenId);
         if (tokenInfo != null) {
-            hmacIDCache.remove(tokenInfo.tokenId());
+            String hmac = tokenIdHmacCache.remove(tokenInfo.tokenId());
+            if (hmac != null) {
+                hmacTokenIdCache.remove(hmac);
+            }
         }
     }
 
diff --git a/core/src/test/scala/unit/kafka/security/token/delegation/DelegationTokenManagerTest.scala b/core/src/test/scala/unit/kafka/security/token/delegation/DelegationTokenManagerTest.scala
index b8d4376c54a..ed82f5ed446 100644
--- a/core/src/test/scala/unit/kafka/security/token/delegation/DelegationTokenManagerTest.scala
+++ b/core/src/test/scala/unit/kafka/security/token/delegation/DelegationTokenManagerTest.scala
@@ -19,7 +19,7 @@ package kafka.security.token.delegation
 
 import java.net.InetAddress
 import java.nio.ByteBuffer
-import java.util.Properties
+import java.util.{Base64, Properties}
 
 import kafka.network.RequestChannel.Session
 import kafka.security.auth.Acl.WildCardHost
@@ -189,6 +189,30 @@ class DelegationTokenManagerTest extends ZooKeeperTestHarness {
     assertEquals(time.milliseconds, expiryTimeStamp)
   }
 
+  @Test
+  def testRemoveTokenHmac(): Unit = {
+    val config = KafkaConfig.fromProps(props)
+    val tokenManager = createDelegationTokenManager(config, tokenCache, time, zkClient)
+    tokenManager.startup
+
+    tokenManager.createToken(owner, renewer, -1, createTokenResultCallBack)
+    val issueTime = time.milliseconds
+    val tokenId = createTokenResult.tokenId
+    val password = DelegationTokenManager.createHmac(tokenId, masterKey)
+    assertEquals(CreateTokenResult(issueTime, issueTime + renewTimeMsDefault, issueTime + maxLifeTimeMsDefault, tokenId, password, Errors.NONE), createTokenResult)
+
+    // expire the token immediately
+    tokenManager.expireToken(owner, ByteBuffer.wrap(password), -1, renewResponseCallback)
+
+    val encodedHmac = Base64.getEncoder.encodeToString(passw
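The essence of the fix above - maintaining a reverse tokenId->hmac map alongside the hmac->tokenId map, so that the hmac entry can be cleared given only a tokenId - can be sketched with plain collections (class and method names here are illustrative, not Kafka's):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two-map index: without the reverse map, removing by tokenId alone
// cannot find (and thus leaks) the matching hmac->tokenId entry.
public class TwoWayTokenIndex {
    private final Map<String, String> hmacToTokenId = new ConcurrentHashMap<>();
    private final Map<String, String> tokenIdToHmac = new ConcurrentHashMap<>();

    public void put(String hmac, String tokenId) {
        hmacToTokenId.put(hmac, tokenId);
        tokenIdToHmac.put(tokenId, hmac);
    }

    public void removeByTokenId(String tokenId) {
        String hmac = tokenIdToHmac.remove(tokenId);
        if (hmac != null) {
            hmacToTokenId.remove(hmac); // the entry that used to leak
        }
    }

    public String tokenIdForHmac(String hmac) {
        return hmacToTokenId.get(hmac);
    }

    public static void main(String[] args) {
        TwoWayTokenIndex idx = new TwoWayTokenIndex();
        idx.put("hmac-1", "token-1");
        idx.removeByTokenId("token-1");
        System.out.println(idx.tokenIdForHmac("hmac-1")); // entry is gone
    }
}
```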

[jira] [Created] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread JIRA
Matthias Weßendorf created KAFKA-7762:
-

 Summary: KafkaConsumer uses old API in the javadocs
 Key: KAFKA-7762
 URL: https://issues.apache.org/jira/browse/KAFKA-7762
 Project: Kafka
  Issue Type: Improvement
Reporter: Matthias Weßendorf


The `poll(ms)` API is deprecated; hence the javadoc should not use it.





[jira] [Commented] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725771#comment-16725771
 ] 

ASF GitHub Bot commented on KAFKA-7762:
---

matzew opened a new pull request #6052: KAFKA-7762 KafkaConsumer uses old API 
in the javadocs
URL: https://github.com/apache/kafka/pull/6052
 
 
   Minor fix: I noticed the JavaDoc uses the _deprecated_ `poll`, replacing 
that...
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Priority: Major
>
> The `poll(ms)` API is deprecated; hence the javadoc should not use it.





[jira] [Created] (KAFKA-7763) KafkaProducer with transactionId endless waits when network is disconnection

2018-12-20 Thread weasker (JIRA)
weasker created KAFKA-7763:
--

 Summary: KafkaProducer with transactionId endless waits when 
network is disconnection
 Key: KAFKA-7763
 URL: https://issues.apache.org/jira/browse/KAFKA-7763
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 2.1.0
Reporter: weasker


When the client is disconnected from the bootstrap server, a KafkaProducer with a 
transactionId waits endlessly on commitTransaction. The problem is the same as in 
the issue below:

https://issues.apache.org/jira/browse/KAFKA-6446

The issue can be reproduced as follows:

1. producer.initTransactions();

2. producer.beginTransaction();

3. producer.send(record1); // set the breakpoint here

Key step: stop at the breakpoint in step 3, then disconnect the network manually; 
after 10-20 seconds restore the network and continue the program by removing the 
breakpoint.

4. producer.send(record2);

5. producer.commitTransaction(); // waits endlessly

 

I found that version 2.1.0 modified the initTransactions method, but the 

commitTransaction and abortTransaction methods, I think, have the same problem 
that initTransactions had...
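For completeness, the producer configuration the repro steps assume might look like this (values are illustrative; `transactional.id` is what enables the transactional API):

```properties
# illustrative repro configuration, not taken from the report
bootstrap.servers=localhost:9092
transactional.id=repro-tx-1
# idempotence is implied by the transactional API
enable.idempotence=true
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```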





[jira] [Updated] (KAFKA-7763) KafkaProducer with transactionId endless waits when network is disconnection for 10-20s

2018-12-20 Thread weasker (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weasker updated KAFKA-7763:
---
Summary: KafkaProducer with transactionId endless waits when network is 
disconnection for 10-20s  (was: KafkaProducer with transactionId endless waits 
when network is disconnection)

> KafkaProducer with transactionId endless waits when network is disconnection 
> for 10-20s
> ---
>
> Key: KAFKA-7763
> URL: https://issues.apache.org/jira/browse/KAFKA-7763
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 2.1.0
>Reporter: weasker
>Priority: Blocker
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> When the client is disconnected from the bootstrap server, a KafkaProducer with 
> a transactionId waits endlessly on commitTransaction. The problem is the same 
> as in the issue below:
> https://issues.apache.org/jira/browse/KAFKA-6446
> The issue can be reproduced as follows:
> 1. producer.initTransactions();
> 2. producer.beginTransaction();
> 3. producer.send(record1); // set the breakpoint here
> Key step: stop at the breakpoint in step 3, then disconnect the network 
> manually; after 10-20 seconds restore the network and continue the program by 
> removing the breakpoint.
> 4. producer.send(record2);
> 5. producer.commitTransaction(); // waits endlessly
>  
> I found that version 2.1.0 modified the initTransactions method, but the 
> commitTransaction and abortTransaction methods, I think, have the same problem 
> that initTransactions had...





[jira] [Commented] (KAFKA-7753) ValueTransformerWithKey should not require producing one value every message

2018-12-20 Thread Mateusz Owczarek (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725902#comment-16725902
 ] 

Mateusz Owczarek commented on KAFKA-7753:
-

Actually it would be fine, but because I misread the ValueTransformerWithKey docs, 
it does not work for me. Apparently I cannot forward any messages while 
punctuating in this type of transformer. If I don't want to change the keys but 
want to forward them frequently, not only while transforming them, I need to use 
Transformer and risk repartitioning. Why is that?
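The "at most one message per window" bookkeeping described in this issue can be sketched with plain collections (simplified tumbling windows; this is not the Streams API or its state stores):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: allow at most one emitted record per (key, window) pair,
// mimicking the state-store dedup described above. Window assignment
// is simplified to fixed-size tumbling windows by timestamp division.
public class PerWindowLimiter {
    private final long windowSizeMs;
    private final Map<String, Long> lastEmittedWindow = new HashMap<>();

    public PerWindowLimiter(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
    }

    /** Returns true if a record with this key/timestamp may be forwarded. */
    public boolean tryEmit(String key, long timestampMs) {
        long window = timestampMs / windowSizeMs;
        Long last = lastEmittedWindow.get(key);
        if (last != null && last == window) {
            return false; // already emitted one record in this window
        }
        lastEmittedWindow.put(key, window);
        return true;
    }

    public static void main(String[] args) {
        PerWindowLimiter limiter = new PerWindowLimiter(10);
        System.out.println(limiter.tryEmit("a", 1L)); // first record in window
        System.out.println(limiter.tryEmit("a", 5L)); // same window, suppressed
    }
}
```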

> ValueTransformerWithKey should not require producing one value every message
> 
>
> Key: KAFKA-7753
> URL: https://issues.apache.org/jira/browse/KAFKA-7753
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Mateusz Owczarek
>Priority: Minor
>
> Hi, speaking about: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/ValueTransformerWithKey.java]
> I have quite a simple case - I want to implement a Transformer which I will 
> use later in my DSL-api-defined topology. These are my requirements:
> - no repartition topic should be created, since I do not change the keys
> - I can't forward a message for every message transformed; I have an internal 
> state store which I'm using to ensure at most 1 message per window is sent.
> The basic Transformer gives me the possibility not to send messages down the 
> stream (null internal handler), but it forces repartitioning, which I think is 
> unnecessary in my case. WDYT?





[jira] [Commented] (KAFKA-7753) ValueTransformerWithKey should not require producing one value every message

2018-12-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725921#comment-16725921
 ] 

Matthias J. Sax commented on KAFKA-7753:


If you use punctuations and call `context.forward(key, value)`, we cannot 
guarantee that the key is not modified. That's why we need to restrict the 
usage of punctuations for this case.

> ValueTransformerWithKey should not require producing one value every message
> 
>
> Key: KAFKA-7753
> URL: https://issues.apache.org/jira/browse/KAFKA-7753
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Mateusz Owczarek
>Priority: Minor
>
> Hi, speaking about: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/ValueTransformerWithKey.java]
> I have quite a simple case - I want to implement a Transformer which I will 
> use later in my DSL-api-defined topology. These are my requirements:
> - no repartition topic should be created, since I do not change the keys
> - I can't forward a message for every message transformed; I have an internal 
> state store which I'm using to ensure at most 1 message per window is sent.
> The basic Transformer gives me the possibility not to send messages down the 
> stream (null internal handler), but it forces repartitioning, which I think is 
> unnecessary in my case. WDYT?





[jira] [Resolved] (KAFKA-7753) ValueTransformerWithKey should not require producing one value every message

2018-12-20 Thread Mateusz Owczarek (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mateusz Owczarek resolved KAFKA-7753.
-
Resolution: Feedback Received

> ValueTransformerWithKey should not require producing one value every message
> 
>
> Key: KAFKA-7753
> URL: https://issues.apache.org/jira/browse/KAFKA-7753
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Mateusz Owczarek
>Priority: Minor
>
> Hi, speaking about: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/ValueTransformerWithKey.java]
> I have quite a simple case - I want to implement a Transformer which I will 
> use later in my DSL-api-defined topology. These are my requirements:
> - no repartition topic should be created, since I do not change the keys
> - I can't forward a message for every message transformed; I have an internal 
> state store which I'm using to ensure at most 1 message per window is sent.
> The basic Transformer gives me the possibility not to send messages down the 
> stream (null internal handler), but it forces repartitioning, which I think is 
> unnecessary in my case. WDYT?





[jira] [Commented] (KAFKA-7753) ValueTransformerWithKey should not require producing one value every message

2018-12-20 Thread Mateusz Owczarek (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725955#comment-16725955
 ] 

Mateusz Owczarek commented on KAFKA-7753:
-

I see, thanks for the quick answer, closing the issue ;)

> ValueTransformerWithKey should not require producing one value every message
> 
>
> Key: KAFKA-7753
> URL: https://issues.apache.org/jira/browse/KAFKA-7753
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Mateusz Owczarek
>Priority: Minor
>
> Hi, speaking about: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/ValueTransformerWithKey.java]
> I have quite a simple case - I want to implement a Transformer which I will 
> use later in my DSL-api-defined topology. These are my requirements:
> - no repartition topic should be created, since I do not change the keys
> - I can't forward a message for every message transformed; I have an internal 
> state store which I'm using to ensure at most 1 message per window is sent.
> The basic Transformer gives me the possibility not to send messages down the 
> stream (null internal handler), but it forces repartitioning, which I think is 
> unnecessary in my case. WDYT?





[jira] [Resolved] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7762.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6052
[https://github.com/apache/kafka/pull/6052]

> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated; hence the javadoc should not use it.





[jira] [Commented] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726038#comment-16726038
 ] 

ASF GitHub Bot commented on KAFKA-7762:
---

omkreddy closed pull request #6052: KAFKA-7762 KafkaConsumer uses old API in 
the javadocs
URL: https://github.com/apache/kafka/pull/6052
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
index 5c673a58c10..7a5485b810a 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
@@ -210,7 +210,7 @@
  *     KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
  *     consumer.subscribe(Arrays.asList("foo", "bar"));
  *     while (true) {
- *         ConsumerRecords<String, String> records = consumer.poll(100);
+ *         ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
  *         for (ConsumerRecord<String, String> record : records)
  *             System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
  *     }
@@ -249,7 +249,7 @@
  *     final int minBatchSize = 200;
  *     List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
  *     while (true) {
- *         ConsumerRecords<String, String> records = consumer.poll(100);
+ *         ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
  *         for (ConsumerRecord<String, String> record : records) {
  *             buffer.add(record);
  *         }
@@ -288,7 +288,7 @@
  * 
  * try {
  *     while(running) {
- *         ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
+ *         ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(Long.MAX_VALUE));
  *         for (TopicPartition partition : records.partitions()) {
  *             List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
  *             for (ConsumerRecord<String, String> record : partitionRecords) {

 




> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated; hence the javadoc should not use it.





[jira] [Assigned] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7762:


Assignee: Matthias Weßendorf

> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Assignee: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated; hence the javadoc should not use it.





[jira] [Commented] (KAFKA-7758) When Naming a Repartition Topic with Aggregations Reuse Repartition Graph Node for Multiple Operations

2018-12-20 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726039#comment-16726039
 ] 

John Roesler commented on KAFKA-7758:
-

+1 from me. I guess there's no compatibility concern here, since the above code 
is currently impossible.

> When Naming a Repartition Topic with Aggregations Reuse Repartition Graph 
> Node for Multiple Operations
> --
>
> Key: KAFKA-7758
> URL: https://issues.apache.org/jira/browse/KAFKA-7758
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.2.0
>
>
> When performing aggregations that require repartitioning and the repartition 
> topic name is specified, and using the resulting {{KGroupedStream}} for 
> multiple operations i.e.
>  
> {code:java}
> final KGroupedStream<String, String> kGroupedStream = builder.<String, 
> String>stream("topic").selectKey((k, v) -> k).groupByKey(Grouped.as("grouping"));
> kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(10L))).count();
> kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(30L))).count();
> {code}
> If optimizations aren't enabled, Streams will attempt to build two 
> repartition topics of the same name resulting in a failure creating the 
> topology.  
>  
> However, we have enough information to re-use the existing repartition node 
> via graph nodes used for building the intermediate representation of the 
> topology. This ticket will make the 
> behavior of reusing a {{KGroupedStream}} consistent regardless if 
> optimizations are turned on or not.





[jira] [Commented] (KAFKA-7749) confluent does not provide option to set consumer properties at connector level

2018-12-20 Thread Paul Davidson (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726149#comment-16726149
 ] 

Paul Davidson commented on KAFKA-7749:
--

I would also like to see this in the context of Source Connectors, where it 
would be useful for the connector to override producer properties - ideally at 
the task level.  For example, in Mirus 
([https://github.com/salesforce/mirus]) 
this would allow each Source Connector to be directed at a different 
destination cluster without setting up a separate set of workers for each 
destination.  It would also allow the connector to tune the producer properties 
for each destination cluster (e.g. by tuning the linger time and batch size 
depending on whether the destination cluster is local or remote).
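If such overrides existed, a per-connector configuration might carry prefixed keys along these lines (the `consumer.override.`/`producer.override.` prefixes are an assumption for illustration here, not a feature this thread establishes):

```properties
# hypothetical per-connector override keys (sketch only)
consumer.override.max.poll.records=500
producer.override.linger.ms=100
producer.override.batch.size=262144
```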

> confluent does not provide option to set consumer properties at connector 
> level
> ---
>
> Key: KAFKA-7749
> URL: https://issues.apache.org/jira/browse/KAFKA-7749
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Manjeet Duhan
>Priority: Major
>
> _We want to increase consumer.max.poll.record to improve performance, but this 
> value can only be set in the worker properties, which apply to all connectors 
> in a given cluster._
>  
> _Operative situation: We have one project which communicates with 
> Elasticsearch, and we set consumer.max.poll.record=500 after multiple 
> performance tests; this worked fine for a year._
>  _Then one more project was onboarded in the same cluster which required 
> consumer.max.poll.record=5000 based on its performance tests. This 
> configuration was moved to production._
>  _Admetric started failing, as it was taking more than 5 minutes to process 
> 5000 polled records, and it started throwing a CommitFailedException - a 
> vicious cycle, as it will process the same data over and over again._
>  
> _We can control the above if we start a consumer using plain Java, but this 
> control was not available at the individual consumer level in the Confluent 
> connector._
> _I have overridden Kafka code to accept connector properties which are applied 
> to a single connector, while the others keep using the default properties. 
> These changes have been running in production for more than 5 months._
> _Some of the properties which were useful for us:_
> max.poll.records
> max.poll.interval.ms
> request.timeout.ms
> key.deserializer
> value.deserializer
> heartbeat.interval.ms
> session.timeout.ms
> auto.offset.reset
> connections.max.idle.ms
> enable.auto.commit
>  
> auto.commit.interval.ms
>  





[jira] [Commented] (KAFKA-7758) When Naming a Repartition Topic with Aggregations Reuse Repartition Graph Node for Multiple Operations

2018-12-20 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726242#comment-16726242
 ] 

Bill Bejeck commented on KAFKA-7758:


If optimizations are enabled, then the above code does work; one of the 
repartition topics is removed before the topology is built. It's when 
optimizations aren't used that the error occurs.

> When Naming a Repartition Topic with Aggregations Reuse Repartition Graph 
> Node for Multiple Operations
> --
>
> Key: KAFKA-7758
> URL: https://issues.apache.org/jira/browse/KAFKA-7758
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.2.0
>
>
> When performing aggregations that require repartitioning and the repartition 
> topic name is specified, and using the resulting {{KGroupedStream}} for 
> multiple operations i.e.
>  
> {code:java}
> final KGroupedStream<String, String> kGroupedStream = builder.<String, 
> String>stream("topic").selectKey((k, v) -> k).groupByKey(Grouped.as("grouping"));
> kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(10L))).count();
> kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(30L))).count();
> {code}
> If optimizations aren't enabled, Streams will attempt to build two 
> repartition topics of the same name resulting in a failure creating the 
> topology.  
>  
> However, we have enough information to re-use the existing repartition node 
> via graph nodes used for building the intermediate representation of the 
> topology. This ticket will make the 
> behavior of reusing a {{KGroupedStream}} consistent regardless if 
> optimizations are turned on or not.





[jira] [Updated] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread Oleksandr Diachenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated KAFKA-7759:
---
Description: 
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        [resource listing elided; it includes "forward" query parameters of type xs:boolean]
    </resources>
</application>
{code}
It was never documented, so in order to remove unintended behaviour it should 
be disabled.
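The merged fix (visible in the PR diff later in this thread) sets Jersey's {{ServerProperties.WADL_FEATURE_DISABLE}}. The resulting OPTIONS behaviour — a plain-text list of allowed methods instead of a WADL document — can be sketched with a stdlib-only toy server (the class, path, and handler below are illustrative, not Connect's actual code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OptionsNoWadlDemo {
    // Starts a toy endpoint mimicking the fixed behaviour: 200 OK with a
    // text/plain body listing the allowed methods, no WADL XML. For brevity
    // the handler answers every request the same way.
    public static String optionsContentType() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/connectors", exchange -> {
            byte[] body = "HEAD, POST, GET, OPTIONS".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Allow", "HEAD,POST,GET,OPTIONS");
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/connectors");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("OPTIONS");
            String contentType = conn.getHeaderField("Content-Type");
            conn.disconnect();
            return contentType;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(optionsContentType());
    }
}
```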

  was:
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so it should be disabled.


> Disable WADL output on OPTIONS method in Connect REST
> -
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so in order to remove unintended behaviour it should 
> be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread Oleksandr Diachenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated KAFKA-7759:
---
Description: 
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so in order to remove unintended behaviour, WADL 
output should be disabled.

  was:
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so in order to remove unintended behaviour it should 
be disabled.


> Disable WADL output on OPTIONS method in Connect REST
> -
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so in order to remove unintended behaviour, WADL 
> output should be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread Oleksandr Diachenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated KAFKA-7759:
---
Description: 
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so in order to remove unintended behavior, WADL output 
should be disabled.

  was:
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so in order to remove unintended behaviour, WADL 
output should be disabled.


> Disable WADL output on OPTIONS method in Connect REST
> -
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so in order to remove unintended behavior, WADL 
> output should be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726271#comment-16726271
 ] 

ASF GitHub Bot commented on KAFKA-7759:
---

hachikuji closed pull request #6051: KAFKA-7759: Disable WADL output on OPTIONS 
method in Connect REST.
URL: https://github.com/apache/kafka/pull/6051
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index 15386430bc5..c0d83f29103 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
@@ -45,6 +45,7 @@
 import org.eclipse.jetty.servlets.CrossOriginFilter;
 import org.eclipse.jetty.util.ssl.SslContextFactory;
 import org.glassfish.jersey.server.ResourceConfig;
+import org.glassfish.jersey.server.ServerProperties;
 import org.glassfish.jersey.servlet.ServletContainer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -171,6 +172,7 @@ public void start(Herder herder) {
 resourceConfig.register(new ConnectorPluginsResource(herder));
 
 resourceConfig.register(ConnectExceptionMapper.class);
+resourceConfig.property(ServerProperties.WADL_FEATURE_DISABLE, true);
 
 registerRestExtensions(herder, resourceConfig);
 
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/RestServerTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/RestServerTest.java
index c66ce36d8b0..a0fb685aff1 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/RestServerTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/RestServerTest.java
@@ -25,7 +25,6 @@
 import org.apache.kafka.connect.util.Callback;
 import org.easymock.Capture;
 import org.easymock.EasyMock;
-import org.easymock.IAnswer;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Test;
@@ -41,11 +40,11 @@
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
-
 import javax.ws.rs.client.Client;
 import javax.ws.rs.client.ClientBuilder;
 import javax.ws.rs.client.Invocation;
 import javax.ws.rs.client.WebTarget;
+import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
 
 import static org.junit.Assert.assertEquals;
@@ -158,6 +157,33 @@ public void testAdvertisedUri() {
 Assert.assertEquals("http://my-hostname:8080/";, 
server.advertisedUrl().toString());
 }
 
+@Test
+public void testOptionsDoesNotIncludeWadlOutput() {
+Map<String, String> configMap = new HashMap<>(baseWorkerProps());
+DistributedConfig workerConfig = new DistributedConfig(configMap);
+
+EasyMock.expect(herder.plugins()).andStubReturn(plugins);
+EasyMock.expect(plugins.newPlugins(Collections.emptyList(),
+workerConfig,
+ConnectRestExtension.class))
+.andStubReturn(Collections.emptyList());
+PowerMock.replayAll();
+
+server = new RestServer(workerConfig);
+server.start(herder);
+
+Response response = request("/connectors")
+.accept(MediaType.WILDCARD)
+.options();
+Assert.assertEquals(MediaType.TEXT_PLAIN_TYPE, 
response.getMediaType());
+Assert.assertArrayEquals(
+response.getAllowedMethods().toArray(new String[0]),
+response.readEntity(String.class).split(", ")
+);
+
+PowerMock.verifyAll();
+}
+
 public void checkCORSRequest(String corsDomain, String origin, String 
expectedHeader, String method) {
 // To be able to set the Origin, we need to toggle this flag
 
@@ -175,12 +201,9 @@ public void checkCORSRequest(String corsDomain, String 
origin, String expectedHe
 
 final Capture<Callback<Collection<String>>> connectorsCallback = 
EasyMock.newCapture();
 herder.connectors(EasyMock.capture(connectorsCallback));
-PowerMock.expectLastCall().andAnswer(new IAnswer<Object>() {
-@Override
-public Object answer() throws Throwable {
-connectorsCallback.getValue().onCompletion(null, 
Arrays.asList("a", "b"));
-return null;
-}
+PowerMock.expectLastCall().andAnswer(() -> {
+connectorsCallback.getValue().onCompletion(null, 
Arrays.asList("a", "b"));
+return null;
 });
 
 PowerMock.replayAll();


 


This is an automated message from the Apache Git Service.

[jira] [Updated] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread Oleksandr Diachenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated KAFKA-7759:
---
Description: 
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code:java}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented, so it should be disabled.

  was:
Currently, Connect REST API exposes WADL output on OPTIONS method:
{code}
curl -i -X OPTIONS http://localhost:8083/connectors
HTTP/1.1 200 OK
Date: Fri, 07 Dec 2018 22:51:53 GMT
Content-Type: application/vnd.sun.wadl+xml
Allow: HEAD,POST,GET,OPTIONS
Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
Content-Length: 1331
Server: Jetty(9.4.12.v20180830)


<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
    <grammars>
        <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
    </grammars>
    <resources base="http://localhost:8083/">
        <!-- resource/method elements stripped by the mail archive; surviving fragments
             show query parameters such as name="forward" style="query" type="xs:boolean"
             (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
    </resources>
</application>
{code}
It was never documented and poses potential security vulnerability, so it 
should be disabled.


> Disable WADL output on OPTIONS method in Connect REST
> -
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so it should be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7759) Disable WADL output in Connect REST API

2018-12-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-7759:
---
Summary: Disable WADL output in Connect REST API  (was: Disable WADL output 
on OPTIONS method in Connect REST)

> Disable WADL output in Connect REST API
> ---
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so in order to remove unintended behavior, WADL 
> output should be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST

2018-12-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7759.

   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.1

> Disable WADL output on OPTIONS method in Connect REST
> -
>
> Key: KAFKA-7759
> URL: https://issues.apache.org/jira/browse/KAFKA-7759
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> Currently, Connect REST API exposes WADL output on OPTIONS method:
> {code:java}
> curl -i -X OPTIONS http://localhost:8083/connectors
> HTTP/1.1 200 OK
> Date: Fri, 07 Dec 2018 22:51:53 GMT
> Content-Type: application/vnd.sun.wadl+xml
> Allow: HEAD,POST,GET,OPTIONS
> Last-Modified: Fri, 07 Dec 2018 14:51:53 PST
> Content-Length: 1331
> Server: Jetty(9.4.12.v20180830)
> 
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/>
>     <grammars>
>         <include href="http://localhost:8083/application.wadl/xsd0.xsd"/>
>     </grammars>
>     <resources base="http://localhost:8083/">
>         <!-- resource/method elements stripped by the mail archive; surviving fragments
>              show query parameters such as name="forward" style="query" type="xs:boolean"
>              (xmlns:xs="http://www.w3.org/2001/XMLSchema") -->
>     </resources>
> </application>
> {code}
> It was never documented, so in order to remove unintended behavior, WADL 
> output should be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7716) Unprocessed messages when Broker fails

2018-12-20 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726341#comment-16726341
 ] 

Guozhang Wang commented on KAFKA-7716:
--

[~Finbarr] Thanks for the PR, I've read through the SO thread as well. The 
first thing I'd ask is also about the replication factor: if your internal 
topics are not configured with replication factor >= 3 then when a broker fails 
it is still possible that you'll lose your acked messages, even with EOS turned 
on.
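The replication-factor point above translates directly into Streams configuration; a minimal sketch (values are illustrative, not a recommendation for any particular cluster):

```properties
# Internal (changelog/repartition) topics are created with this factor, so a
# value >= 3 lets acked records survive a single broker failure.
replication.factor=3
# EOS alone does not protect against data loss on under-replicated topics.
processing.guarantee=exactly_once
```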

> Unprocessed messages when Broker fails
> --
>
> Key: KAFKA-7716
> URL: https://issues.apache.org/jira/browse/KAFKA-7716
> Project: Kafka
>  Issue Type: Bug
>  Components: core, streams
>Affects Versions: 1.0.0, 2.0.1
>Reporter: Finbarr Naughton
>Priority: Major
>
> This occurs when running on Kubernetes on bare metal.
> A Streams application with a single topology listening to two input topics A 
> and B. A is read as a GlobalKTable, B as a KStream. The topology joins the 
> stream to the GKTable and writes an updated message to topic A. The 
> application is configured to use exactly_once processing.
> There are three worker nodes. Kafka brokers are deployed as a statefulset on 
> the three nodes using the helm chart from here 
> -[https://github.com/helm/charts/tree/master/incubator/kafka] 
> The application has three instances spread across the three nodes.
> During a test, topic A is pre-populated with 50k messages over 5 minutes. 
> Then 50k messages with the same key-set are sent to topic B over 5 minutes. 
> The expected behaviour is that Topic A will contain 50k updated messages 
> afterwards. While all brokers are available this is the case, even when one 
> of the application pods is deleted.
> When a broker fails, however, a few expected updated messages fail to appear 
> on topic A despite their existence on topic B.
>  
> More complete description here - 
> [https://stackoverflow.com/questions/53557247/some-events-unprocessed-by-kafka-streams-application-on-kubernetes-on-bare-metal]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7764) Authentication exceptions during consumer metadata updates may not get propagated

2018-12-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7764:
--

 Summary: Authentication exceptions during consumer metadata 
updates may not get propagated
 Key: KAFKA-7764
 URL: https://issues.apache.org/jira/browse/KAFKA-7764
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


The consumer should propagate authentication errors to the user. We handle the 
common case in ConsumerNetworkClient when the exception occurs in response to 
an explicitly provided request. However, we are missing the logic to propagate 
exceptions during metadata updates, which are handled internally by 
NetworkClient. This logic exists in ConsumerNetworkClient.awaitMetadataUpdate, 
but metadata updates can occur outside of this path. Probably we just need to 
move that logic into ConsumerNetworkClient.poll() so that errors are always 
checked.
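The pattern described — surfacing an asynchronously recorded error on every poll rather than only on one code path — can be sketched with stdlib types (all names below are illustrative, not Kafka's actual internals):

```java
import java.util.concurrent.atomic.AtomicReference;

public class PollErrorPropagation {
    // An async network layer records a fatal error here, e.g. an
    // authentication failure discovered during a metadata update.
    private final AtomicReference<RuntimeException> pendingException = new AtomicReference<>();

    public void recordError(RuntimeException e) {
        pendingException.compareAndSet(null, e);
    }

    // Every poll() first checks for a stored error, so the caller sees it
    // regardless of which internal path produced it.
    public void poll() {
        RuntimeException e = pendingException.getAndSet(null);
        if (e != null) {
            throw e;
        }
        // ... normal request/response processing would go here ...
    }
}
```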



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7764) Authentication exceptions during consumer metadata updates may not get propagated

2018-12-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-7764:
---
Component/s: consumer

> Authentication exceptions during consumer metadata updates may not get 
> propagated
> -
>
> Key: KAFKA-7764
> URL: https://issues.apache.org/jira/browse/KAFKA-7764
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Major
>
> The consumer should propagate authentication errors to the user. We handle 
> the common case in ConsumerNetworkClient when the exception occurs in 
> response to an explicitly provided request. However, we are missing the logic 
> to propagate exceptions during metadata updates, which are handled internally 
> by NetworkClient. This logic exists in 
> ConsumerNetworkClient.awaitMetadataUpdate, but metadata updates can occur 
> outside of this path. Probably we just need to move that logic into 
> ConsumerNetworkClient.poll() so that errors are always checked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)