[jira] [Resolved] (KAFKA-2124) gradlew is not working on a fresh checkout

2018-12-13 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2124.

Resolution: Duplicate
  Assignee: Grant Henke

> gradlew is not working on a fresh checkout
> --
>
> Key: KAFKA-2124
> URL: https://issues.apache.org/jira/browse/KAFKA-2124
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Jakob Homan
>Assignee: Grant Henke
>Priority: Major
>
> For a fresh checkout, the gradlew script is not working:
> {noformat}heimdallr 15:54 $ asfclone kafka
> Cloning into 'kafka'...
> remote: Counting objects: 25676, done.
> remote: Compressing objects: 100% (36/36), done.
> remote: Total 25676 (delta 5), reused 0 (delta 0), pack-reused 25627
> Receiving objects: 100% (25676/25676), 19.58 MiB | 4.29 MiB/s, done.
> Resolving deltas: 100% (13852/13852), done.
> Checking connectivity... done.
> /tmp/kafka /tmp
> /tmp
> ✔ /tmp
> heimdallr 15:54 $ cd kafka
> ✔ /tmp/kafka [trunk|✔]
> heimdallr 15:54 $ ./gradlew tasks
> Error: Could not find or load main class 
> org.gradle.wrapper.GradleWrapperMain{noformat}
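
For reference, the usual fix (documented in the project README) is to bootstrap 
the wrapper with a locally installed Gradle before invoking gradlew:
{noformat}
cd kafka
gradle          # generates gradle/wrapper/gradle-wrapper.jar
./gradlew tasks
{noformat}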



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-08-26 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-4906.

Resolution: Won't Fix

> Support 0.9 brokers with a newer Producer or Consumer version
> -
>
> Key: KAFKA-4906
> URL: https://issues.apache.org/jira/browse/KAFKA-4906
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
> brokers if a new feature supported by a newer wire protocol was not 
> used/required. 
> We currently support brokers as old as 0.10.0.0 because that's when the 
> ApiVersionsRequest/Response was added to the broker (KAFKA-3307).
> However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
> wire, making it possible to support another major broker version set by 
> assuming that any disconnect resulting from an ApiVersionsRequest is from a 
> 0.9 broker and defaulting to legacy protocol versions. 
> Supporting 0.9 with newer clients can drastically simplify upgrades, allow 
> for libraries and frameworks to easily support a wider set of environments, 
> and let developers take advantage of client side improvements without 
> requiring cluster upgrades first. 
> Below is a list of the wire protocol versions by release for reference: 
> {noformat}
> 0.10.x
>   Produce(0): 0 to 2
>   Fetch(1): 0 to 2 
>   Offsets(2): 0
>   Metadata(3): 0 to 1
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): 0
>   ApiVersions(18): 0
> 0.9.x:
>   Produce(0): 0 to 1 (no response timestamp from v2)
>   Fetch(1): 0 to 1 (no response timestamp from v2)
>   Offsets(2): 0
>   Metadata(3): 0 (no cluster id or rack info from v1)
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> 0.8.2.x:
>   Produce(0): 0 (no quotas from v1)
>   Fetch(1): 0 (no quotas from v1)
>   Offsets(2): 0
>   Metadata(3): 0
>   OffsetCommit(8): 0 to 1 (no global retention time from v2)
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): UNSUPPORTED
>   Heartbeat(12): UNSUPPORTED
>   LeaveGroup(13): UNSUPPORTED
>   SyncGroup(14): UNSUPPORTED
>   DescribeGroups(15): UNSUPPORTED
>   ListGroups(16): UNSUPPORTED
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> {noformat}
> Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
> ApiVersionsRequest and fail over to legacy protocol versions unless we handle 
> that scenario specifically in this patch. The workaround would be to reduce 
> request.timeout.ms if needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-03-15 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927449#comment-15927449
 ] 

Grant Henke commented on KAFKA-4906:


[~ijuma] Following up here with a summary of some of our out-of-band chats. 

In smoke testing of a WIP patch it appeared I was able to send messages from a 
trunk client to a 0.9 broker and receive them from a trunk consumer. We were a 
bit confused by this, since the message format had changed and should not be 
parsable. I think that because I was using uncompressed messages and a regular 
topic, the messages could pass through without the format really being parsed 
or validated. 

However, that is likely not the case for a compacted topic or a compressed 
message set. More testing would be needed to be sure. 

Regardless, the safest approach would likely be to ensure the message format 
matches the Produce request version (Produce v1 = Message Format 0, and 
Produce v2 = Message Format 1). I will investigate how large a change that 
requires before posting anything further. 
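Roughly the mapping I mean, as a sketch (illustrative code, not from a patch):
{code}
// Sketch only: choose the message format "magic" byte from the negotiated
// Produce request version, per the mapping above.
def magicForProduceVersion(produceVersion: Short): Byte = produceVersion match {
  case 0 | 1 => 0  // Message Format 0 (no timestamps)
  case _     => 1  // Message Format 1 (adds timestamps)
}
{code}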

> Support 0.9 brokers with a newer Producer or Consumer version
> -
>
> Key: KAFKA-4906
> URL: https://issues.apache.org/jira/browse/KAFKA-4906
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
> brokers if a new feature supported by a newer wire protocol was not 
> used/required. 
> We currently support brokers as old as 0.10.0.0 because that's when the 
> ApiVersionsRequest/Response was added to the broker (KAFKA-3307).
> However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
> wire, making it possible to support another major broker version set by 
> assuming that any disconnect resulting from an ApiVersionsRequest is from a 
> 0.9 broker and defaulting to legacy protocol versions. 
> Supporting 0.9 with newer clients can drastically simplify upgrades, allow 
> for libraries and frameworks to easily support a wider set of environments, 
> and let developers take advantage of client side improvements without 
> requiring cluster upgrades first. 
> Below is a list of the wire protocol versions by release for reference: 
> {noformat}
> 0.10.x
>   Produce(0): 0 to 2
>   Fetch(1): 0 to 2 
>   Offsets(2): 0
>   Metadata(3): 0 to 1
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): 0
>   ApiVersions(18): 0
> 0.9.x:
>   Produce(0): 0 to 1 (no response timestamp from v2)
>   Fetch(1): 0 to 1 (no response timestamp from v2)
>   Offsets(2): 0
>   Metadata(3): 0 (no cluster id or rack info from v1)
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> 0.8.2.x:
>   Produce(0): 0 (no quotas from v1)
>   Fetch(1): 0 (no quotas from v1)
>   Offsets(2): 0
>   Metadata(3): 0
>   OffsetCommit(8): 0 to 1 (no global retention time from v2)
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): UNSUPPORTED
>   Heartbeat(12): UNSUPPORTED
>   LeaveGroup(13): UNSUPPORTED
>   SyncGroup(14): UNSUPPORTED
>   DescribeGroups(15): UNSUPPORTED
>   ListGroups(16): UNSUPPORTED
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> {noformat}
> Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
> ApiVersionsRequest and fail over to legacy protocol versions unless we handle 
> that scenario specifically in this patch. The workaround would be to reduce 
> request.timeout.ms if needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-03-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-4906:
---
Fix Version/s: (was: 0.10.2.1)

> Support 0.9 brokers with a newer Producer or Consumer version
> -
>
> Key: KAFKA-4906
> URL: https://issues.apache.org/jira/browse/KAFKA-4906
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
> brokers if a new feature supported by a newer wire protocol was not 
> used/required. 
> We currently support brokers as old as 0.10.0.0 because that's when the 
> ApiVersionsRequest/Response was added to the broker (KAFKA-3307).
> However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
> wire, making it possible to support another major broker version set by 
> assuming that any disconnect resulting from an ApiVersionsRequest is from a 
> 0.9 broker and defaulting to legacy protocol versions. 
> Supporting 0.9 with newer clients can drastically simplify upgrades, allow 
> for libraries and frameworks to easily support a wider set of environments, 
> and let developers take advantage of client side improvements without 
> requiring cluster upgrades first. 
> Below is a list of the wire protocol versions by release for reference: 
> {noformat}
> 0.10.x
>   Produce(0): 0 to 2
>   Fetch(1): 0 to 2 
>   Offsets(2): 0
>   Metadata(3): 0 to 1
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): 0
>   ApiVersions(18): 0
> 0.9.x:
>   Produce(0): 0 to 1 (no response timestamp from v2)
>   Fetch(1): 0 to 1 (no response timestamp from v2)
>   Offsets(2): 0
>   Metadata(3): 0 (no cluster id or rack info from v1)
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> 0.8.2.x:
>   Produce(0): 0 (no quotas from v1)
>   Fetch(1): 0 (no quotas from v1)
>   Offsets(2): 0
>   Metadata(3): 0
>   OffsetCommit(8): 0 to 1 (no global retention time from v2)
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): UNSUPPORTED
>   Heartbeat(12): UNSUPPORTED
>   LeaveGroup(13): UNSUPPORTED
>   SyncGroup(14): UNSUPPORTED
>   DescribeGroups(15): UNSUPPORTED
>   ListGroups(16): UNSUPPORTED
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> {noformat}
> Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
> ApiVersionsRequest and fail over to legacy protocol versions unless we handle 
> that scenario specifically in this patch. The workaround would be to reduce 
> request.timeout.ms if needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-03-15 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4906:
--

 Summary: Support 0.9 brokers with a newer Producer or Consumer 
version
 Key: KAFKA-4906
 URL: https://issues.apache.org/jira/browse/KAFKA-4906
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.10.2.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.10.2.1


KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
brokers if a new feature supported by a newer wire protocol was not 
used/required. 

We currently support brokers as old as 0.10.0.0 because that's when the 
ApiVersionsRequest/Response was added to the broker (KAFKA-3307).

However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
wire, making it possible to support another major broker version set by 
assuming that any disconnect resulting from an ApiVersionsRequest is from a 0.9 
broker and defaulting to legacy protocol versions. 

Supporting 0.9 with newer clients can drastically simplify upgrades, allow for 
libraries and frameworks to easily support a wider set of environments, and let 
developers take advantage of client side improvements without requiring cluster 
upgrades first. 

Below is a list of the wire protocol versions by release for reference: 
{noformat}
0.10.x
Produce(0): 0 to 2
Fetch(1): 0 to 2 
Offsets(2): 0
Metadata(3): 0 to 1
OffsetCommit(8): 0 to 2
OffsetFetch(9): 0 to 1
GroupCoordinator(10): 0
JoinGroup(11): 0
Heartbeat(12): 0
LeaveGroup(13): 0
SyncGroup(14): 0
DescribeGroups(15): 0
ListGroups(16): 0
SaslHandshake(17): 0
ApiVersions(18): 0

0.9.x:
Produce(0): 0 to 1 (no response timestamp from v2)
Fetch(1): 0 to 1 (no response timestamp from v2)
Offsets(2): 0
Metadata(3): 0 (no cluster id or rack info from v1)
OffsetCommit(8): 0 to 2
OffsetFetch(9): 0 to 1
GroupCoordinator(10): 0
JoinGroup(11): 0
Heartbeat(12): 0
LeaveGroup(13): 0
SyncGroup(14): 0
DescribeGroups(15): 0
ListGroups(16): 0
SaslHandshake(17): UNSUPPORTED
ApiVersions(18): UNSUPPORTED

0.8.2.x:
Produce(0): 0 (no quotas from v1)
Fetch(1): 0 (no quotas from v1)
Offsets(2): 0
Metadata(3): 0
OffsetCommit(8): 0 to 1 (no global retention time from v2)
OffsetFetch(9): 0 to 1
GroupCoordinator(10): 0
JoinGroup(11): UNSUPPORTED
Heartbeat(12): UNSUPPORTED
LeaveGroup(13): UNSUPPORTED
SyncGroup(14): UNSUPPORTED
DescribeGroups(15): UNSUPPORTED
ListGroups(16): UNSUPPORTED
SaslHandshake(17): UNSUPPORTED
ApiVersions(18): UNSUPPORTED
{noformat}

Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
ApiVersionsRequest and fail over to legacy protocol versions unless we handle 
that scenario specifically in this patch. The workaround would be to reduce 
request.timeout.ms if needed.
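
For reference, that workaround is a client-side setting; a minimal sketch, 
assuming the default timeout leaves too long a window before failing over:
{noformat}
# producer/consumer config sketch: shrink the window in which a hung
# ApiVersionsRequest (see KAFKA-3088) can stall the legacy-version failover
request.timeout.ms=5000
{noformat}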



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2017-02-22 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879124#comment-15879124
 ] 

Grant Henke commented on KAFKA-2729:


I am curious if everyone on this Jira is actually seeing the reported issue. I 
have had multiple cases where someone presented me with an environment they 
thought was experiencing this issue. After researching the environment and 
logs, to date it has always been something else. 

The main culprits so far have been:
* Long GC pauses causing zookeeper sessions to time out
* A slow or poorly configured zookeeper
* Bad network configuration

All of the above resulted in a soft, recurring failure of brokers. That churn 
often caused additional load, perpetuating the issue. 

If you are seeing this issue, do you see the following pattern repeating in 
the logs?
{noformat}
INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (Disconnected)
...
INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (Expired)
INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, 
session 0x153ab38abdbd360 has expired, closing socket connection
...
INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
INFO kafka.server.KafkaHealthcheck: re-registering broker info in ZK for broker 
32
INFO kafka.utils.ZKCheckedEphemeral: Creating /brokers/ids/32 (is it secure? 
false)
INFO kafka.utils.ZKCheckedEphemeral: Result of znode creation is: OK
{noformat}

If so, something is causing communication with zookeeper to take too long and 
the broker is unregistering itself. This will cause ISRs to shrink and expand 
over and over again.

I don't think this will solve everyone's issue here, but hopefully it will 
help solve some.
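
If GC pauses are the culprit, besides fixing the GC itself, a common 
mitigation (an assumption about typical deployments, not a guaranteed fix) is 
to give sessions more headroom in server.properties:
{noformat}
# default is 6000 ms; raise it while the GC or network issue is addressed
zookeeper.session.timeout.ms=30000
{noformat}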



> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue, and on whether it is possibly related to 
> https://issues.apache.org/jira/browse/KAFKA-1382 , though we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4754) Correctly parse '=' characters in command line overrides

2017-02-17 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872139#comment-15872139
 ] 

Grant Henke commented on KAFKA-4754:


{quote}
This could expose the password to anyone who is able to run ps on the system, 
or look at the bash history. So I'm not sure that we should be concerned about 
the println
{quote}

I think it's worth addressing; just because one thing is wrong and a security 
hole, that doesn't mean we shouldn't close or fix others. If security were all 
or nothing, we would be left with nothing. Application logs are often passed 
around, aggregated, and collected. Access to a machine to run ps or look at 
the bash history is a much lower concern than that.

> Correctly parse '=' characters in command line overrides
> 
>
> Key: KAFKA-4754
> URL: https://issues.apache.org/jira/browse/KAFKA-4754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> When starting Kafka with an override parameter via "--override 
> my.parameter=myvalue", a value containing an '=' character fails and exits 
> with "Invalid command line properties:..".
> Passwords often contain an '=' character, so it's important to support such 
> values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4754) Correctly parse '=' characters in command line overrides

2017-02-17 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872127#comment-15872127
 ] 

Grant Henke commented on KAFKA-4754:


{quote}
Hmm. It is not a good practice to pass passwords through the command line. 
{quote}

I agree, but my usage is not via the command line. It's actually used 
internally by the application, to improve security. This functionality 
supports a workaround, since there was pushback on the feature proposed in 
KAFKA-2629: I generate the password and pass it via a call to 
kafka.Kafka.main(args: Array[String]).
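
Roughly the usage I described, as a sketch (the property name and helper are 
illustrative, not my actual code):
{code}
// Sketch only: start the broker in-process, passing a generated secret as
// an override; the value may legitimately contain '='.
val password = generatePassword()  // hypothetical helper
kafka.Kafka.main(Array(
  "/etc/kafka/server.properties",
  "--override", s"ssl.keystore.password=$password"
))
{code}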



> Correctly parse '=' characters in command line overrides
> 
>
> Key: KAFKA-4754
> URL: https://issues.apache.org/jira/browse/KAFKA-4754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> When starting Kafka with an override parameter via "--override 
> my.parameter=myvalue", a value containing an '=' character fails and exits 
> with "Invalid command line properties:..".
> Passwords often contain an '=' character, so it's important to support such 
> values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4754) Correctly parse '=' characters in command line overrides

2017-02-09 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-4754:
---
Status: Patch Available  (was: Open)

> Correctly parse '=' characters in command line overrides
> 
>
> Key: KAFKA-4754
> URL: https://issues.apache.org/jira/browse/KAFKA-4754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> When starting Kafka with an override parameter via "--override 
> my.parameter=myvalue", a value containing an '=' character fails and exits 
> with "Invalid command line properties:..".
> Passwords often contain an '=' character, so it's important to support such 
> values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4754) Correctly parse '=' characters in command line overrides

2017-02-09 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860597#comment-15860597
 ] 

Grant Henke commented on KAFKA-4754:


It's worth noting that it was also possible to echo passwords on any error in 
this code path, via CommandLineUtils.parseKeyValueArgs: 
{noformat}
System.err.println("Invalid command line properties: " + args.mkString(" "))
{noformat}
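
A hedged sketch of the kind of fix implied here (not the committed change): 
report which properties were malformed without echoing their values:
{code}
// Sketch only: print the keys of the malformed properties, never the
// values, so secrets cannot leak into stderr or aggregated logs.
def reportInvalid(args: Array[String]): Unit = {
  val keys = args.map(_.takeWhile(_ != '='))
  System.err.println("Invalid command line properties for keys: " + keys.mkString(" "))
}
{code}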

> Correctly parse '=' characters in command line overrides
> 
>
> Key: KAFKA-4754
> URL: https://issues.apache.org/jira/browse/KAFKA-4754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> When starting Kafka with an override parameter via "--override 
> my.parameter=myvalue", a value containing an '=' character fails and exits 
> with "Invalid command line properties:..".
> Passwords often contain an '=' character, so it's important to support such 
> values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4754) Correctly parse '=' characters in command line overrides

2017-02-09 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4754:
--

 Summary: Correctly parse '=' characters in command line overrides
 Key: KAFKA-4754
 URL: https://issues.apache.org/jira/browse/KAFKA-4754
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


When starting Kafka with an override parameter via "--override 
my.parameter=myvalue", a value containing an '=' character fails and exits 
with "Invalid command line properties:..".

Passwords often contain an '=' character, so it's important to support such 
values. 
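
A minimal sketch of the parsing behavior this asks for (illustrative code, not 
the actual patch): split each override on the first '=' only, so the rest of 
the value survives intact:
{code}
// Sketch only: key is everything before the first '=', value is the rest.
def parseOverride(arg: String): (String, String) = {
  val idx = arg.indexOf('=')
  require(idx > 0, s"Invalid command line property: $arg")
  (arg.substring(0, idx), arg.substring(idx + 1))
}

// parseOverride("ssl.keystore.password=abc=123") == ("ssl.keystore.password", "abc=123")
{code}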



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4746) Offsets can be committed for the offsets topic

2017-02-08 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858709#comment-15858709
 ] 

Grant Henke commented on KAFKA-4746:


I just mean that often, when working with a compacted topic, you read from the 
start of the topic every time your process restarts, to see or rebuild "the 
current state". 

But you are right, that is a bit of an overstatement. There are likely cases 
where a process commits an offset to try to resume where it left off, well 
aware that the offset could have been cleaned since it was last committed. As 
I understand it, before KIP-58/KAFKA-1981 it was a race condition against the 
log cleaner whether the committed offset was still valid. Committing the 
offset also does nothing to ensure you didn't miss an offset that was cleaned 
while your application was not processing. 

KIP-58/KAFKA-1981 fixed that by ensuring some time passes before cleaning, via 
min.compaction.lag.ms/min.compaction.lag.bytes/min.compaction.lag.messages

> Offsets can be committed for the offsets topic
> --
>
> Key: KAFKA-4746
> URL: https://issues.apache.org/jira/browse/KAFKA-4746
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>
> Though this is likely rare, and I don't suspect too many people would try to 
> do this, we should prevent users from committing offsets for the offsets 
> topic into the offsets topic. This would essentially create an infinite loop 
> in any consumer consuming from that topic. Committing offsets for a 
> compacted topic likely doesn't make sense anyway. 
> Here is a quick failing test I wrote to see if this guard exists:
> {code:title=OffsetCommitTest.scala|borderStyle=solid}
>   @Test
>   def testOffsetTopicOffsetCommit() {
>     val topic1 = "__consumer_offsets"
>     // Commit an offset for the offsets topic itself
>     val expectedReplicaAssignment = Map(0 -> List(1))
>     val commitRequest = OffsetCommitRequest(
>       groupId = group,
>       requestInfo = immutable.Map(TopicAndPartition(topic1, 0) -> OffsetAndMetadata(offset = 42L)),
>       versionId = 2
>     )
>     val commitResponse = simpleConsumer.commitOffsets(commitRequest)
>     assertEquals(Errors.INVALID_TOPIC_EXCEPTION.code,
>       commitResponse.commitStatus.get(TopicAndPartition(topic1, 0)).get)
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4746) Offsets can be committed for the offsets topic

2017-02-08 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4746:
--

 Summary: Offsets can be committed for the offsets topic
 Key: KAFKA-4746
 URL: https://issues.apache.org/jira/browse/KAFKA-4746
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
Reporter: Grant Henke


Though this is likely rare, and I don't suspect too many people would try to 
do this, we should prevent users from committing offsets for the offsets topic 
into the offsets topic. This would essentially create an infinite loop in any 
consumer consuming from that topic. Committing offsets for a compacted topic 
likely doesn't make sense anyway. 

Here is a quick failing test I wrote to see if this guard exists:

{code:title=OffsetCommitTest.scala|borderStyle=solid}
  @Test
  def testOffsetTopicOffsetCommit() {
    val topic1 = "__consumer_offsets"

    // Commit an offset for the offsets topic itself
    val expectedReplicaAssignment = Map(0 -> List(1))
    val commitRequest = OffsetCommitRequest(
      groupId = group,
      requestInfo = immutable.Map(TopicAndPartition(topic1, 0) -> OffsetAndMetadata(offset = 42L)),
      versionId = 2
    )
    val commitResponse = simpleConsumer.commitOffsets(commitRequest)

    assertEquals(Errors.INVALID_TOPIC_EXCEPTION.code,
      commitResponse.commitStatus.get(TopicAndPartition(topic1, 0)).get)
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4525) Kafka should not require SSL trust store password

2016-12-12 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4525:
--

 Summary: Kafka should not require SSL trust store password
 Key: KAFKA-4525
 URL: https://issues.apache.org/jira/browse/KAFKA-4525
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


When configuring SSL for Kafka; If the truststore password is not set, Kafka 
fails to start with:
{noformat}
org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust 
store password is not specified.

at 
org.apache.kafka.common.security.ssl.SslFactory.createTruststore(SslFactory.java:195)
at 
org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:115)
{noformat}

The truststore password is not required for read operations. When reading the 
truststore, the password is used as an integrity check, but it is not 
required. 

The risk of not providing a password is that someone could add a certificate 
to the store that you do not want to trust. The store should be protected 
first by OS permissions; the password is an additional protection.

Though the risk of trusting only OS permissions is one many may not want to 
take, it's not a decision that Kafka should enforce or require. 
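
For illustration, the JSSE API itself permits a password-less read; a minimal 
sketch (assumed helper, not Kafka's SslFactory code):
{code}
// Sketch only: a null password skips the integrity check but still loads
// the certificates, which is all a truststore read needs.
import java.io.FileInputStream
import java.security.KeyStore

def loadTruststore(path: String, password: Option[String]): KeyStore = {
  val ks = KeyStore.getInstance(KeyStore.getDefaultType)
  val in = new FileInputStream(path)
  try ks.load(in, password.map(_.toCharArray).orNull)  // null is accepted for reads
  finally in.close()
  ks
}
{code}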



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2552) Certain admin commands such as partition assignment fail on large clusters

2016-10-11 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2552.

Resolution: Duplicate

> Certain admin commands such as partition assignment fail on large clusters
> --
>
> Key: KAFKA-2552
> URL: https://issues.apache.org/jira/browse/KAFKA-2552
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Abhishek Nigam
>Assignee: Abhishek Nigam
>
> This happens because the json generated is greater than 1 MB and exceeds the 
> default data limit of zookeeper nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4203) Java producer default max message size does not align with broker default

2016-09-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4203:
--

 Summary: Java producer default max message size does not align 
with broker default
 Key: KAFKA-4203
 URL: https://issues.apache.org/jira/browse/KAFKA-4203
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Grant Henke
Assignee: Grant Henke
Priority: Critical


The Java producer sets max.request.size = 1048576 (the base-2 version of 1 MB 
(MiB)).

The broker sets max.message.bytes = 1000012 (the base-10 value of 1 MB + 12 
bytes for overhead).

This means that by default the producer can try to produce messages larger 
than the broker will accept, resulting in RecordTooLargeExceptions.

There were no similar issues in the old producer, because it sets 
max.message.size = 1000000 (the base-10 value of 1 MB).

I propose we increase the broker default for max.message.bytes to 1048588 (the 
base-2 value of 1 MB (MiB) + 12 bytes for overhead) so that any message 
produced with default configs from either producer does not result in a 
RecordTooLargeException.
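
For clarity, the arithmetic behind the proposal (using the 12-byte overhead 
cited above):
{noformat}
producer max.request.size (default) = 2^20       = 1,048,576 bytes
broker max.message.bytes (default)  = 10^6 + 12  = 1,000,012 bytes
proposed broker default             = 2^20 + 12  = 1,048,588 bytes
{noformat}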



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4157) Transient system test failure in replica_verification_test.test_replica_lags

2016-09-13 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-4157:
--

 Summary: Transient system test failure in 
replica_verification_test.test_replica_lags
 Key: KAFKA-4157
 URL: https://issues.apache.org/jira/browse/KAFKA-4157
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke


The replica_verification_test.test_replica_lags test runs a background thread 
via replica_verification_tool that populates a dict with the max lag for each 
"topic,partition" key. Because that map is populated in a separate thread, 
there is a race condition between populating a key and querying it via 
replica_verification_tool.get_lag_for_partition. This results in a KeyError 
like the one below: 
{noformat}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ducktape/tests/runner.py", line 106, 
in run_all_tests
data = self.run_single_test()
  File "/usr/lib/python2.7/site-packages/ducktape/tests/runner.py", line 162, 
in run_single_test
return self.current_test_context.function(self.current_test)
  File "/root/kafka/tests/kafkatest/tests/tools/replica_verification_test.py", 
line 82, in test_replica_lags
err_msg="Timed out waiting to reach zero replica lags.")
  File "/usr/lib/python2.7/site-packages/ducktape/utils/util.py", line 31, in 
wait_until
if condition():
  File "/root/kafka/tests/kafkatest/tests/tools/replica_verification_test.py", 
line 81, in <lambda>
wait_until(lambda: self.replica_verifier.get_lag_for_partition(TOPIC, 0) == 
0, timeout_sec=10,
  File "/root/kafka/tests/kafkatest/services/replica_verification_tool.py", 
line 66, in get_lag_for_partition
lag = self.partition_lag[topic_partition]
KeyError: 'topic-replica-verification,0'
{noformat}

Instead of an error, None should be returned when no key is found. 
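
A minimal sketch of that change in the ducktape service (assuming the dict 
shown in the traceback):
{noformat}
# replica_verification_tool.py sketch: dict.get returns None on a missing key
lag = self.partition_lag.get(topic_partition)
{noformat}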



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4157) Transient system test failure in replica_verification_test.test_replica_lags

2016-09-13 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-4157:
---
Status: Patch Available  (was: Open)

> Transient system test failure in replica_verification_test.test_replica_lags
> 
>
> Key: KAFKA-4157
> URL: https://issues.apache.org/jira/browse/KAFKA-4157
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> The replica_verification_test.test_replica_lags test runs a background thread 
> via replica_verification_tool that populates a dict with the max lag for each 
> "topic,partition" key. Because that map is populated in a separate thread, 
> there is a race condition between populating a key and querying it via 
> replica_verification_tool.get_lag_for_partition. This results in a KeyError 
> like the one below: 
> {noformat}
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ducktape/tests/runner.py", line 106, 
> in run_all_tests
> data = self.run_single_test()
>   File "/usr/lib/python2.7/site-packages/ducktape/tests/runner.py", line 162, 
> in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/root/kafka/tests/kafkatest/tests/tools/replica_verification_test.py", line 
> 82, in test_replica_lags
> err_msg="Timed out waiting to reach zero replica lags.")
>   File "/usr/lib/python2.7/site-packages/ducktape/utils/util.py", line 31, in 
> wait_until
> if condition():
>   File 
> "/root/kafka/tests/kafkatest/tests/tools/replica_verification_test.py", line 
> 81, in <lambda>
> wait_until(lambda: self.replica_verifier.get_lag_for_partition(TOPIC, 0) 
> == 0, timeout_sec=10,
>   File "/root/kafka/tests/kafkatest/services/replica_verification_tool.py", 
> line 66, in get_lag_for_partition
> lag = self.partition_lag[topic_partition]
> KeyError: 'topic-replica-verification,0'
> {noformat}
> Instead of an error, None should be returned when no key is found. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-14 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-4032:
---
Status: Patch Available  (was: Open)

> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.
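
A hedged sketch of the handling described (names from the description; the 
actual handler wiring is assumed, not quoted from a patch):
{code}
// Sketch only: map exceptions from auto-creation to a protocol error code
// instead of letting them escape to the request processor.
import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
import org.apache.kafka.common.errors.InvalidReplicationFactorException
import org.apache.kafka.common.protocol.Errors

def tryAutoCreate(zkUtils: ZkUtils, topic: String, partitions: Int,
                  replicationFactor: Int): Errors = {
  try {
    AdminUtils.createTopic(zkUtils, topic, partitions, replicationFactor)
    Errors.LEADER_NOT_AVAILABLE  // created, but no leader elected yet
  } catch {
    case _: InvalidReplicationFactorException => Errors.INVALID_REPLICATION_FACTOR
  }
}
{code}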



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4038) Transient failure in DeleteTopicsRequestTest.testErrorDeleteTopicRequests

2016-08-14 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-4038:
---
Status: Patch Available  (was: Open)

> Transient failure in DeleteTopicsRequestTest.testErrorDeleteTopicRequests
> -
>
> Key: KAFKA-4038
> URL: https://issues.apache.org/jira/browse/KAFKA-4038
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> {code}
> java.lang.AssertionError: The response error should match 
> Expected :REQUEST_TIMED_OUT
> Actual   :NONE
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> kafka.server.DeleteTopicsRequestTest$$anonfun$validateErrorDeleteTopicRequests$1.apply(DeleteTopicsRequestTest.scala:89)
>   at 
> kafka.server.DeleteTopicsRequestTest$$anonfun$validateErrorDeleteTopicRequests$1.apply(DeleteTopicsRequestTest.scala:88)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>   at 
> kafka.server.DeleteTopicsRequestTest.validateErrorDeleteTopicRequests(DeleteTopicsRequestTest.scala:88)
>   at 
> kafka.server.DeleteTopicsRequestTest.testErrorDeleteTopicRequests(DeleteTopicsRequestTest.scala:76)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4038) Transient failure in DeleteTopicsRequestTest.testErrorDeleteTopicRequests

2016-08-14 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-4038:
--

Assignee: Grant Henke

> Transient failure in DeleteTopicsRequestTest.testErrorDeleteTopicRequests
> -
>
> Key: KAFKA-4038
> URL: https://issues.apache.org/jira/browse/KAFKA-4038
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> {code}
> java.lang.AssertionError: The response error should match 
> Expected :REQUEST_TIMED_OUT
> Actual   :NONE
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> kafka.server.DeleteTopicsRequestTest$$anonfun$validateErrorDeleteTopicRequests$1.apply(DeleteTopicsRequestTest.scala:89)
>   at 
> kafka.server.DeleteTopicsRequestTest$$anonfun$validateErrorDeleteTopicRequests$1.apply(DeleteTopicsRequestTest.scala:88)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>   at 
> kafka.server.DeleteTopicsRequestTest.validateErrorDeleteTopicRequests(DeleteTopicsRequestTest.scala:88)
>   at 
> kafka.server.DeleteTopicsRequestTest.testErrorDeleteTopicRequests(DeleteTopicsRequestTest.scala:76)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3959) __consumer_offsets wrong number of replicas at startup

2016-08-12 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15419471#comment-15419471
 ] 

Grant Henke commented on KAFKA-3959:


I would like to present an alternative option. This problem exists for any 
topic created using default.replication.factor > 1 as well. That prevents 
using 2 or 3 as the configuration default, because we want to support 
single-node clusters without changing the defaults. 

Instead of preventing topics from being created with a low replication factor 
(unless min.isr is set), it would be really nice if we tracked a "target 
replication factor" in the topic metadata. This is an improvement over 
inferring the target replication factor from the actual replicas, as is done 
today, and can actually result in a more accurate under-replicated count. 

This change would also help support any ability to automatically maintain the 
desired replication factor as nodes are started, stopped, etc. Some related 
KIPs for that are:
* [KIP-73 Replication 
Quotas|https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas]
* [KIP-46: Self Healing 
Kafka|https://cwiki.apache.org/confluence/display/KAFKA/KIP-46%3A+Self+Healing+Kafka]

Would that be a viable option?

> __consumer_offsets wrong number of replicas at startup
> --
>
> Key: KAFKA-3959
> URL: https://issues.apache.org/jira/browse/KAFKA-3959
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, offset manager, replication
>Affects Versions: 0.9.0.1, 0.10.0.0
> Environment: Brokers of 3 kafka nodes running Red Hat Enterprise 
> Linux Server release 7.2 (Maipo)
>Reporter: Alban Hurtaud
>
> When creating a stack of 3 kafka brokers, the consumer starts faster than 
> the kafka nodes, and when it tries to read a topic, only one kafka node is 
> available.
> So the __consumer_offsets topic is created with a replication factor of 1 
> (instead of the configured 3):
> offsets.topic.replication.factor=3
> default.replication.factor=3
> min.insync.replicas=2
> Then the other kafka nodes come up and we get exceptions, because the 
> replica count for __consumer_offsets is 1 and min insync is 2.
> What I missed is: why is __consumer_offsets created with replication 1 (when 
> 1 broker is running) whereas in server.properties it is set to 3?
> To reproduce: 
> - Prepare 3 kafka nodes with the 3 lines above added to server.properties.
> - Run one kafka node,
> - Run one consumer (the __consumer_offsets topic is created with replicas = 1)
> - Run 2 more kafka nodes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-12 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15419350#comment-15419350
 ] 

Grant Henke commented on KAFKA-4032:


I will make a patch for this shortly.

> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-12 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-4032:
--

Assignee: Grant Henke

> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3934) Start scripts enable GC by default with no way to disable

2016-08-09 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3934:
---
Summary: Start scripts enable GC by default with no way to disable  (was: 
kafka-server-start.sh enables GC by default with no way to disable)

> Start scripts enable GC by default with no way to disable
> -
>
> Key: KAFKA-3934
> URL: https://issues.apache.org/jira/browse/KAFKA-3934
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> In KAFKA-1127 the following line was added to kafka-server-start.sh:
> {noformat}
> EXTRA_ARGS="-name kafkaServer -loggc"
> {noformat}
> This prevents gc logging from being disabled without some unusual environment 
> variable workarounds. 
> I suggest EXTRA_ARGS is made overridable like below: 
> {noformat}
> if [ "x$EXTRA_ARGS" = "x" ]; then
> export EXTRA_ARGS="-name kafkaServer -loggc"
> fi
> {noformat}
> *Note:* I am also not sure I understand why the existing code uses the "x" 
> prefix idiom when checking the variable instead of the following:
> {noformat}
> export EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}
> {noformat}
> This lets the variable be overridden to "" without taking the default. 
> *Workaround:* As a workaround the user should be able to set 
> $KAFKA_GC_LOG_OPTS to fit their needs, since kafka-run-class.sh effectively 
> ignores the -loggc parameter when that variable is set: 
> {noformat}
> -loggc)
>   if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
> GC_LOG_ENABLED="true"
>   fi
>   shift
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2016-07-18 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2507:
---
Fix Version/s: 0.11.0.0

> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3934) kafka-server-start.sh enables GC by default with no way to disable

2016-07-18 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3934:
---
Status: Patch Available  (was: Open)

> kafka-server-start.sh enables GC by default with no way to disable
> --
>
> Key: KAFKA-3934
> URL: https://issues.apache.org/jira/browse/KAFKA-3934
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> In KAFKA-1127 the following line was added to kafka-server-start.sh:
> {noformat}
> EXTRA_ARGS="-name kafkaServer -loggc"
> {noformat}
> This prevents gc logging from being disabled without some unusual environment 
> variable workarounds. 
> I suggest EXTRA_ARGS is made overridable like below: 
> {noformat}
> if [ "x$EXTRA_ARGS" = "x" ]; then
> export EXTRA_ARGS="-name kafkaServer -loggc"
> fi
> {noformat}
> *Note:* I am also not sure I understand why the existing code uses the "x" 
> prefix idiom when checking the variable instead of the following:
> {noformat}
> export EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}
> {noformat}
> This lets the variable be overridden to "" without taking the default. 
> *Workaround:* As a workaround the user should be able to set 
> $KAFKA_GC_LOG_OPTS to fit their needs, since kafka-run-class.sh effectively 
> ignores the -loggc parameter when that variable is set: 
> {noformat}
> -loggc)
>   if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
> GC_LOG_ENABLED="true"
>   fi
>   shift
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2946) DeleteTopic - protocol and server side implementation

2016-07-12 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2946:
---
Status: Patch Available  (was: In Progress)

> DeleteTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2946
> URL: https://issues.apache.org/jira/browse/KAFKA-2946
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3934) kafka-server-start.sh enables GC by default with no way to disable

2016-07-07 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3934:
--

 Summary: kafka-server-start.sh enables GC by default with no way 
to disable
 Key: KAFKA-3934
 URL: https://issues.apache.org/jira/browse/KAFKA-3934
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Grant Henke
Assignee: Grant Henke


In KAFKA-1127 the following line was added to kafka-server-start.sh:

{noformat}
EXTRA_ARGS="-name kafkaServer -loggc"
{noformat}

This prevents gc logging from being disabled without some unusual environment 
variable workarounds. 

I suggest EXTRA_ARGS is made overridable like below: 

{noformat}
if [ "x$EXTRA_ARGS" = "x" ]; then
export EXTRA_ARGS="-name kafkaServer -loggc"
fi
{noformat}

*Note:* I am also not sure I understand why the existing code uses the "x" 
prefix idiom when checking the variable instead of the following:

{noformat}
export EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}
{noformat}

This lets the variable be overridden to "" without taking the default. 

*Workaround:* As a workaround the user should be able to set $KAFKA_GC_LOG_OPTS 
to fit their needs, since kafka-run-class.sh effectively ignores the -loggc 
parameter when that variable is set: 

{noformat}
-loggc)
  if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
GC_LOG_ENABLED="true"
  fi
  shift
{noformat}
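
For example (my assumption about the intent, not a tested recipe), a user who 
wants GC logging fully off could set the variable to a harmless value so that 
-loggc never enables the defaults:
{noformat}
export KAFKA_GC_LOG_OPTS=" "
bin/kafka-server-start.sh config/server.properties
{noformat}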



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3818) Change Mirror Maker default assignment strategy to round robin

2016-06-15 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332524#comment-15332524
 ] 

Grant Henke commented on KAFKA-3818:


A few older threads mention that it's possible to get clumping (due to the 
hash in the RoundRobinAssignor). Does that problem still exist? Is that 
something we should fix before changing the default?

This thread discusses it recently: 
http://search-hadoop.com/m/uyzND135BcA1lXiM=Re+DISCUSS+KIP+49+Fair+Partition+Assignment+Strategy
{quote}
 - WRT roundrobin we later realized a significant flaw in the way we lay
   out partitions: we originally wanted to randomize the partition layout to
   reduce the likelihood of most partitions of the same topic from ending up
   on a given consumer which is important if you have a few very large topics.
   Unfortunately we used hashCode - which does a splendid job of clumping
   partitions from the same topic together :( We can probably just "fix" that
   in the new consumer's roundrobin assignor.
{quote}

And this older jira looks to describe the issue [~jjkoshy] is referring to: 
KAFKA-2019
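
A toy illustration of that clumping (made-up topic names, not assignor code):
{code}
// Sorting by hashCode: strings that differ only in the trailing partition
// number hash to nearby values, so partitions of one topic stay clustered.
val partitions = for (t <- Seq("big-topic-a", "big-topic-b"); p <- 0 to 3) yield s"$t-$p"
println(partitions.sortBy(_.hashCode))  // same-topic entries end up adjacent
{code}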

[~jjkoshy] do you have any thoughts?


> Change Mirror Maker default assignment strategy to round robin
> --
>
> Key: KAFKA-3818
> URL: https://issues.apache.org/jira/browse/KAFKA-3818
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> It might make more sense to use round robin assignment by default for MM 
> since it gives a better balance between the instances, in particular when the 
> number of MM instances exceeds the typical number of partitions per topic. 
> There doesn't seem to be any need to keep range assignment since 
> copartitioning is not an issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3691) Confusing logging during metadata update timeout

2016-06-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3691:
---
Fix Version/s: 0.10.0.1

> Confusing logging during metadata update timeout
> 
>
> Key: KAFKA-3691
> URL: https://issues.apache.org/jira/browse/KAFKA-3691
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> When the KafkaProducer calls waitOnMetadata it loops, decrementing the 
> remainingWaitMs until it either receives the requested metadata or runs out 
> of time. Inside the loop, Metadata.awaitUpdate is called with the value in 
> remainingWaitMs, and inside Metadata.awaitUpdate a timeout exception can be 
> thrown using the remainingWaitMs, which results in messages like:
> {noformat}
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 3 ms.
> {noformat}
> Perhaps we should catch the exception and log the real maxWaitMs, or change 
> the wording to make the exception clearer. 
> Note: I still need to investigate further to be sure exactly when this 
> happens, but wanted to log the jira to make sure this is not forgotten. 
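
A hedged sketch of the first suggestion (names taken from the description, not 
from a patch):
{code}
// Sketch only: rethrow with the caller's full wait budget instead of the
// remaining slice that happened to expire on the last loop iteration
// (TimeoutException here is org.apache.kafka.common.errors.TimeoutException).
try {
  metadata.awaitUpdate(version, remainingWaitMs)
} catch {
  case _: TimeoutException =>
    throw new TimeoutException(s"Failed to update metadata after $maxWaitMs ms.")
}
{code}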



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3691) Confusing logging during metadata update timeout

2016-06-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3691:
---
Fix Version/s: 0.10.1.0

> Confusing logging during metadata update timeout
> 
>
> Key: KAFKA-3691
> URL: https://issues.apache.org/jira/browse/KAFKA-3691
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> When the KafkaProducer calls waitOnMetadata it loops, decrementing the 
> remainingWaitMs until it either receives the requested metadata or runs out 
> of time. Inside the loop, Metadata.awaitUpdate is called with the value in 
> remainingWaitMs, and inside Metadata.awaitUpdate a timeout exception can be 
> thrown using the remainingWaitMs, which results in messages like:
> {noformat}
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 3 ms.
> {noformat}
> Perhaps we should catch the exception and log the real maxWaitMs, or change 
> the wording to make the exception clearer. 
> Note: I still need to investigate further to be sure exactly when this 
> happens, but wanted to log the jira to make sure this is not forgotten. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3691) Confusing logging during metadata update timeout

2016-06-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3691:
---
Affects Version/s: 0.10.0.0

> Confusing logging during metadata update timeout
> 
>
> Key: KAFKA-3691
> URL: https://issues.apache.org/jira/browse/KAFKA-3691
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> When the KafkaProducer calls waitOnMetadata it loops, decrementing the 
> remainingWaitMs until it either receives the requested metadata or runs out 
> of time. Inside the loop, Metadata.awaitUpdate is called with the value in 
> remainingWaitMs, and inside Metadata.awaitUpdate a timeout exception can be 
> thrown using the remainingWaitMs, which results in messages like:
> {noformat}
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 3 ms.
> {noformat}
> Perhaps we should catch the exception and log the real maxWaitMs, or change 
> the wording to make the exception clearer. 
> Note: I still need to investigate further to be sure exactly when this 
> happens, but wanted to log the jira to make sure this is not forgotten. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3691) Confusing logging during metadata update timeout

2016-06-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3691:
---
Status: Patch Available  (was: Open)

> Confusing logging during metadata update timeout
> 
>
> Key: KAFKA-3691
> URL: https://issues.apache.org/jira/browse/KAFKA-3691
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> When the KafkaProducer calls waitOnMetadata it will loop, decrementing the 
> remainingWaitMs until it either receives the requested metadata or runs out of 
> time. Inside the loop Metadata.awaitUpdate is called with the value in 
> remainingWaitMs. Inside Metadata.awaitUpdate a timeout exception could be 
> thrown using the remainingWaitMs which results in messages like:
> {noformat}
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 3 ms.
> {noformat}
> Perhaps we should catch the exception and log the real maxWaitMs or change 
> the language to make the exception more clear. 
> Note: I still need to investigate further to be sure exactly when this 
> happens, but wanted to log the jira to make sure this is not forgotten. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-06-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3789:
---
Status: Patch Available  (was: Open)

> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where the MAGIC HEADER was parsed 
> incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> messages from being appended to the log when they were written using the 
> snappy C bindings (in clients like librdkafka or ruby-kafka) and read using 
> snappy-java in the broker.
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to reproduce the issue reliably with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-06-03 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3789:
--

 Summary: Upgrade Snappy to fix snappy decompression errors
 Key: KAFKA-3789
 URL: https://issues.apache.org/jira/browse/KAFKA-3789
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke
Priority: Critical
 Fix For: 0.10.0.1


snappy-java recently fixed a bug where the MAGIC HEADER was parsed incorrectly: 
https://github.com/xerial/snappy-java/issues/142

This issue caused "unknown broker exceptions" in the clients and prevented 
messages from being appended to the log when they were written using the snappy 
C bindings (in clients like librdkafka or ruby-kafka) and read using 
snappy-java in the broker.

The related librdkafka issue is here: 
https://github.com/edenhill/librdkafka/issues/645

I am able to reproduce the issue reliably with librdkafka in 0.10, and after 
upgrading snappy-java to 1.1.2.6 the issue is resolved. 
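
For reference, the fix amounts to bumping the snappy-java coordinate in the 
build to the following (assuming the dependency is declared directly in the 
Gradle build; the exact prior version is not confirmed here):
{noformat}
org.xerial.snappy:snappy-java:1.1.2.6
{noformat}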



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3764) Error processing append operation on partition

2016-06-02 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3764:
--

Assignee: Grant Henke

> Error processing append operation on partition
> --
>
> Key: KAFKA-3764
> URL: https://issues.apache.org/jira/browse/KAFKA-3764
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Martin Nowak
>Assignee: Grant Henke
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0 I'm getting plenty of `Error 
> processing append operation on partition` errors. This happens with 
> ruby-kafka as the producer and snappy compression enabled.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error 
> processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException: 
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
> at 
> kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
> at kafka.log.Log.liftedTree1$1(Log.scala:339)
> at kafka.log.Log.append(Log.scala:338)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
> at 
> org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
> at 
> org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3764) Error processing append operation on partition

2016-06-02 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312916#comment-15312916
 ] 

Grant Henke commented on KAFKA-3764:


It looks like this is likely caused by 
https://github.com/xerial/snappy-java/issues/142 and is fixed in [snappy-java 
1.1.2.6|http://search.maven.org/#artifactdetails%7Corg.xerial.snappy%7Csnappy-java%7C1.1.2.6%7Cbundle].
 This has also been identified as the cause of 
https://github.com/edenhill/librdkafka/issues/645

I can upgrade to snappy-java 1.1.2.6, test with librdkafka and send a PR.

> Error processing append operation on partition
> --
>
> Key: KAFKA-3764
> URL: https://issues.apache.org/jira/browse/KAFKA-3764
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Martin Nowak
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0 I'm getting plenty of `Error 
> processing append operation on partition` errors. This happens with 
> ruby-kafka as the producer and snappy compression enabled.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error 
> processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException: 
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
> at 
> kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
> at kafka.log.Log.liftedTree1$1(Log.scala:339)
> at kafka.log.Log.append(Log.scala:338)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
> at 
> org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
> at 
> org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3717:
---
Summary: Support building aggregate javadoc for all project modules  (was: 
On 0.10.0 branch, building javadoc results in very small subset of expected 
javadocs)

> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3717) On 0.10.0 branch, building javadoc results in very small subset of expected javadocs

2016-05-16 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285897#comment-15285897
 ] 

Grant Henke commented on KAFKA-3717:


Currently the build places a javadoc directory and jar in each module's build 
directory. This means you need to manually grab or merge all of them. I 
confirmed with [~gwenshap] that this was good enough for the 0.10 release. 

Going forward it would be nice to aggregate the docs output for all 
sub-modules. This is related to the work tracked by KAFKA-3405.  

Since manually collecting the javadocs works for now, I will update the title 
to track aggregating javadocs. 

> On 0.10.0 branch, building javadoc results in very small subset of expected 
> javadocs
> 
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3717) On 0.10.0 branch, building javadoc results in very small subset of expected javadocs

2016-05-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3717:
--

Assignee: Grant Henke

> On 0.10.0 branch, building javadoc results in very small subset of expected 
> javadocs
> 
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3685) Auto-generate ZooKeeper data structure wiki

2016-05-11 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280421#comment-15280421
 ] 

Grant Henke commented on KAFKA-3685:


This would likely take some rework of the ZkUtils class, but that rework could 
prove useful for more than documentation. I can take a look at this when I get 
some time; otherwise feel free to take it, [~vahid], and I am happy to help 
out. 

> Auto-generate ZooKeeper data structure wiki
> ---
>
> Key: KAFKA-3685
> URL: https://issues.apache.org/jira/browse/KAFKA-3685
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Priority: Minor
>
> The ZooKeeper data structure wiki page is located at 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper.
>  This should be auto-generated and versioned according to the various 
> releases. A similar auto-generation has previously been done for the protocol 
> documentation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3692) Wildcards in External CLASSPATH may cause it not be included in the CLASSPATH

2016-05-11 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280295#comment-15280295
 ] 

Grant Henke commented on KAFKA-3692:


Linked KAFKA-1508 since this patch would likely fix that issue too. 

> Wildcards in External CLASSPATH may cause it not be included in the CLASSPATH
> -
>
> Key: KAFKA-3692
> URL: https://issues.apache.org/jira/browse/KAFKA-3692
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, we don't use double quotes when using CLASSPATH in 
> kafka-run-class.sh. This could potentially cause issues, as spaces in the 
> external CLASSPATH may result in the CLASSPATH being incorrectly interpreted. 
> As we perform a check on whether CLASSPATH is provided to determine the 
> initial value of CLASSPATH, not using double quotes may cause the external 
> CLASSPATH not to be included in the final CLASSPATH. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3396) Unauthorized topics are returned to the user

2016-05-11 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280293#comment-15280293
 ] 

Grant Henke commented on KAFKA-3396:


[~ecomar] Thanks for working on a patch! Feel free to assign this jira to 
yourself and send a PR. I am sure the exact rules may require some discussion. 

Can you help me understand why we don't want to return 
UNKNOWN_TOPIC_OR_PARTITION when topic auto-creation is on but the user has no 
CREATE permission on the Cluster? 

FYI: We are slowly working on moving auto-creation to the client side 
(KAFKA-2410), but that requires some of the 
[KIP-4|https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations]
 work first. I am hoping to get that work done shortly after the 0.10 release. 
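
As a rough illustration of the behavior this jira proposes, here is a toy model 
in Java (this is not the broker's actual authorizer API; the class, method, and 
set names are invented for the sketch):
{code}
import java.util.Set;

// Toy model: a topic the principal cannot DESCRIBE is reported as unknown,
// exactly as if it did not exist, rather than as unauthorized.
public class TopicVisibility {
    enum Error { NONE, UNKNOWN_TOPIC_OR_PARTITION }

    static Error errorFor(String topic, Set<String> describableTopics,
                          Set<String> existingTopics) {
        if (!existingTopics.contains(topic) || !describableTopics.contains(topic))
            return Error.UNKNOWN_TOPIC_OR_PARTITION; // hide existence from the user
        return Error.NONE;
    }
}
{code}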

> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>Reporter: Grant Henke
>
> Kafka's clients and protocol expose unauthorized topics to the end user. 
> This is often considered a security hole. To some, the topic name is 
> considered sensitive information. Those who do not consider the name 
> sensitive still consider it information that helps a user try to circumvent 
> security. Instead, if a user does not have access to the topic, the servers 
> should act as if the topic does not exist. 
> To solve this some of the changes could include:
>   - The broker should not return a TOPIC_AUTHORIZATION(29) error for 
> requests (metadata, produce, fetch, etc) that include a topic that the user 
> does not have DESCRIBE access to.
>   - A user should not receive a TopicAuthorizationException when they do 
> not have DESCRIBE access to a topic or the cluster.
>  - The client should not maintain and expose a list of unauthorized 
> topics in org.apache.kafka.common.Cluster. 
> Other changes may be required that are not listed here. Further analysis is 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3691) Confusing logging during metadata update timeout

2016-05-10 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3691:
--

 Summary: Confusing logging during metadata update timeout
 Key: KAFKA-3691
 URL: https://issues.apache.org/jira/browse/KAFKA-3691
 Project: Kafka
  Issue Type: Bug
Reporter: Grant Henke
Assignee: Grant Henke


When the KafkaProducer calls waitOnMetadata it will loop, decrementing the 
remainingWaitMs until it either receives the requested metadata or runs out of 
time. Inside the loop Metadata.awaitUpdate is called with the value in 
remainingWaitMs. Inside Metadata.awaitUpdate a timeout exception could be 
thrown using the remainingWaitMs which results in messages like:
{noformat}
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 3 ms.
{noformat}

Perhaps we should catch the exception and log the real maxWaitMs or change the 
language to make the exception more clear. 

Note: I still need to investigate further to be sure exactly when this happens, 
but wanted to log the jira to make sure this is not forgotten. 
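
For illustration, a minimal sketch of the loop described above (the real 
producer code differs; the Metadata interface here is a stand-in invented for 
the sketch):
{code}
// Stand-in for the producer's metadata handle (invented for this sketch).
interface Metadata {
    boolean hasTopicFor(String topic);
    void awaitUpdate(long timeoutMs); // throws TimeoutException using timeoutMs
}

// Simplified model of waitOnMetadata: awaitUpdate raises its timeout with the
// *remaining* wait, which can be tiny, instead of the configured maxWaitMs --
// hence confusing messages like "Failed to update metadata after 3 ms."
static void waitOnMetadata(Metadata metadata, String topic, long maxWaitMs) {
    long begin = System.currentTimeMillis();
    long remainingWaitMs = maxWaitMs;
    while (!metadata.hasTopicFor(topic)) {
        metadata.awaitUpdate(remainingWaitMs);
        long elapsed = System.currentTimeMillis() - begin;
        if (elapsed >= maxWaitMs)
            throw new org.apache.kafka.common.errors.TimeoutException(
                "Failed to update metadata after " + maxWaitMs + " ms.");
        remainingWaitMs = maxWaitMs - elapsed;
    }
}
{code}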




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3649) Add capability to query broker process for configuration properties

2016-05-04 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270996#comment-15270996
 ] 

Grant Henke commented on KAFKA-3649:


[~liquanpei] I see you assigned yourself to this. I think this should be 
addressed after KIP-4 adds the Describe/Alter config requests to the broker. I 
still need to add the proposed protocol format to the wiki, but see here for 
details: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-PublicInterfaces

> Add capability to query broker process for configuration properties
> ---
>
> Key: KAFKA-3649
> URL: https://issues.apache.org/jira/browse/KAFKA-3649
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, config, core
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: David Tucker
>Assignee: Liquan Pei
>
> Developing an API by which running brokers could be queried for the various 
> configuration settings is an important feature for managing the Kafka cluster.
> Long term, the API could be enhanced to allow updates for those properties 
> that can be changed at run time ... but this involves a more thorough 
> evaluation of configuration properties (which ones can be modified in a 
> running broker and which require a restart of individual nodes or the entire 
> cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3644) Use Boolean protocol type for StopReplicaRequest delete_partitions

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3644:
---
Status: Patch Available  (was: Open)

> Use Boolean protocol type for StopReplicaRequest delete_partitions
> --
>
> Key: KAFKA-3644
> URL: https://issues.apache.org/jira/browse/KAFKA-3644
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Recently the boolean protocol type was added. The StopReplicaRequest 
> delete_partitions field already utilized an int8 to represent the boolean, 
> so this compatible change is mostly for cleanup and documentation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3644) Use Boolean protocol type for StopReplicaRequest delete_partitions

2016-04-29 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3644:
--

 Summary: Use Boolean protocol type for StopReplicaRequest 
delete_partitions
 Key: KAFKA-3644
 URL: https://issues.apache.org/jira/browse/KAFKA-3644
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.10.0.0


Recently the boolean protocol type was added. The StopReplicaRequest 
delete_partitions field already utilized an int8 to represent the boolean, so 
this compatible change is mostly for cleanup and documentation. 
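
As a toy demonstration of why the change is wire-compatible (plain Java, not 
the protocol classes themselves): a boolean serializes to a single byte 
carrying 0 or 1, exactly like the int8 it replaces.
{code}
import java.nio.ByteBuffer;

public class BooleanFieldDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(1);
        boolean deletePartitions = true;
        buf.put((byte) (deletePartitions ? 1 : 0)); // BOOLEAN writes one byte
        buf.flip();
        System.out.println("decoded: " + (buf.get() != 0)); // INT8 readers see 0/1
    }
}
{code}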



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Fix Version/s: 0.10.0.0

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Status: Patch Available  (was: Open)

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Affects Version/s: 0.10.0.0

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1880:
---
Description: Recent discussions around compatibility show how important 
compatibility is to users. Kafka should leverage a tool to find, report, and 
avoid incompatibility issues in public methods.  (was: Recent discussions 
around compatibility show how important compatibility is to users. [Java API 
Compliance 
Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is a 
tool for checking backward binary and source-level compatibility of a Java 
library API. Kafka can leverage the tool to find and fix existing 
incompatibility issues and avoid new issues from getting into the product.)

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport-only-incompatible.html, 
> compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. Kafka should leverage a tool to find, report, and avoid 
> incompatibility issues in public methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1880:
---
Attachment: compatibilityReport-only-incompatible.html

Adding a more complete sample report that only includes breaking changes. 

I think we need to define more tightly what's "public" and what's not, and what 
should be serializable. We have a decent number of serialization breaks in 
common.

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport-only-incompatible.html, 
> compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263372#comment-15263372
 ] 

Grant Henke commented on KAFKA-1880:


I actually did not use the compatibility checker named in the current 
description (Java API Compliance Checker). Instead I chose to use 
[japicmp|https://siom79.github.io/japicmp/]. I will update the description when 
things are more concrete. 

I evaluated the options with the following criteria:

1. Able to be plugged into our existing build process (that means it needed to 
be a Java dependency I can resolve and use from Gradle)
2. Able to detect source, binary, serialization and annotation incompatibilities
3. Able to filter the checked classes by package (since we don't already use 
annotations everywhere)
4. Able to provide a clear and concise report/overview
5. Bonus: Works with Scala too. (I still need to test this)

Here are some explanations for tools I considered but didn't choose (mostly 
taken from https://siom79.github.io/japicmp/) 
- *Java API Compliance Checker*: A Perl script. This approach cannot compare 
annotations and you need to have Perl installed. It only filters by annotation.
- *Clirr*: Tracking of API changes is implemented only partially, and tracking 
of annotations is not supported. Development stopped around 2005.
- *JDiff*: A Javadoc doclet that generates an HTML report of all API changes. 
The source code for both versions has to be available, and the differences are 
not classified as binary incompatible or not. Comparison of annotations is not 
supported.
- *revapi*: An API analysis and change tracking tool that was started about the 
same time as japicmp. It ships with a Maven plugin and an Ant task, but the 
Maven plugin currently (version 0.4.1) only reports changes on the command line.




> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1880:
---
Attachment: compatibilityReport.html

Attaching a sample HTML compatibility report for all public changes in the 
clients module between the 0.9 branch and trunk.

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263239#comment-15263239
 ] 

Grant Henke commented on KAFKA-1880:


I have started hacking together a solution for this that is integrated into the 
build and can check compatibility across different git branches or commits. 

Locally I have a very rough implementation that works and has already 
identified KAFKA-3641. I will post a WIP pull request once it's cleaned up a 
bit. 

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-1880:
--

Assignee: Grant Henke  (was: Ashish K Singh)

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-28 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3641:
--

 Summary: Fix RecordMetadata constructor backward compatibility 
 Key: KAFKA-3641
 URL: https://issues.apache.org/jira/browse/KAFKA-3641
 Project: Kafka
  Issue Type: Bug
Reporter: Grant Henke
Assignee: Grant Henke
Priority: Blocker


The old RecordMetadata constructor from 0.9.0 should be added back and 
deprecated in order to maintain backward compatibility.

{noformat}
public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
relativeOffset)
{noformat}
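
One way the compatibility shim could look, as a hedged sketch (the delegation 
target and the placeholder values for the 0.10 fields are assumptions, not the 
actual patch):
{code}
// Sketch: restore the 0.9.0 constructor, deprecated, delegating to the
// expanded 0.10 constructor with "unknown" placeholders for the new fields.
@Deprecated
public RecordMetadata(TopicPartition topicPartition, long baseOffset,
                      long relativeOffset) {
    this(topicPartition, baseOffset, relativeOffset,
         Record.NO_TIMESTAMP, // timestamp unknown
         -1,                  // checksum unknown
         -1, -1);             // serialized key/value sizes unknown
}
{code}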



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3638) Using kafka-console-producer with invalid broker list leads to unexpected behavior

2016-04-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3638.

Resolution: Duplicate
  Assignee: Grant Henke

> Using kafka-console-producer with invalid broker list leads to unexpected 
> behavior
> ---
>
> Key: KAFKA-3638
> URL: https://issues.apache.org/jira/browse/KAFKA-3638
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jarek Jarcec Cecho
>Assignee: Grant Henke
>
> I messed up the port when running the console producer on my test cluster, 
> and instead of a {{ConnectionRefused}} exception or something like that, I 
> didn't get any immediate error. After I tried to produce some messages I got 
> nothing, and after a minute or so got an {{ERROR}} about updating metadata:
> {code}
> [root@centos6 ~]# kafka-console-producer --broker-list localhost:666 --topic 
> source
> asfasdf
> [2016-04-28 14:28:01,950] ERROR Error when sending message to topic source 
> with key: null, value: 7 bytes with error: Failed to update metadata after 
> 6 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> {code}
> Would it perhaps make sense to throw a more accurate exception in this case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3630) Consider auto closing outdated pull requests

2016-04-27 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3630:
---
Description: 
Currently we don't have access to close pull requests and the list of open pull 
requests is growing. We are nearing 200 open pull requests and many are 
outdated. 

I am not sure if this is possible but I think a potential improvement would be 
to have a Jenkins job that runs periodically to:
1. Find all pull requests that have had no activity for 15 days
2. Comment on the pull requests that they were auto closed and should be 
re-opened if there is still interest
3. Close the pull requests

I don't think closing the outdated pull requests will hurt project progress in 
any way because:
- Jira still tracks the feature or fix
- The pull requests likely need a rebase or feedback needs to be addressed
- The notification will encourage the pull request owner and reviewers to 
follow up

As of today the breakdown of older pull requests is:
- [Older than 15 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-04-12+]:
 153
- [Older than 30 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-03-28]:
 107
- [Older than 60 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-02-28+]:
 73
- [Older than 90 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-01-28+]:
 52

This jira is mainly to track discussion and ideas around this challenge. Please 
feel free to propose an alternate solution. 
 

  was:
Currently we don't have access to close pull requests and the list of open pull 
requests is growing. We are nearing 200 open pull requests and many are 
outdated. 

I am not sure if this is possible but I think a potential improvement would be 
to have a Jenkins job that runs periodically to:
1. Find all pull requests that have had no activity for 15 days
2. Comment on the pull requests that they were auto closed and should be 
re-opened if there is still interest
3. Close the pull requests

I don't think closing the outdated pull requests will hurt project progress in 
any way because:
- Jira still tracks the feature or fix
- The pull requests likely need a rebase or feedback needs to be addressed
- The notification will encourage the pull request owner and reviewers to 
follow up

As of today the breakdown of older pull requests is:
- [Older than 15 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-04-12+]:
 153
- [Older than 30 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-03-28]:
 107
- [Older than 90 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-01-28+]:
 52

This jira is mainly to track discussion and ideas around this challenge. Please 
feel free to propose an alternate solution. 
 


> Consider auto closing outdated pull requests
> 
>
> Key: KAFKA-3630
> URL: https://issues.apache.org/jira/browse/KAFKA-3630
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>
> Currently we don't have access to close pull requests and the list of open 
> pull requests is growing. We are nearing 200 open pull requests and many are 
> outdated. 
> I am not sure if this is possible but I think a potential improvement would 
> be to have a Jenkins job that runs periodically to:
> 1. Find all pull requests that have had no activity for 15 days
> 2. Comment on the pull requests that they were auto closed and should be 
> re-opened if there is still interest
> 3. Close the pull requests
> I don't think closing the outdated pull requests will hurt project progress 
> in any way because:
> - Jira still tracks the feature or fix
> - The pull requests likely need a rebase or feedback needs to be addressed
> - The notification will encourage the pull request owner and reviewers to 
> follow up
> As of today the breakdown of older pull requests is:
> - [Older than 15 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-04-12+]:
>  153
> - [Older than 30 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-03-28]:
>  107
> - [Older than 60 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-02-28+]:
>  73
> - [Older than 90 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-01-28+]:
>  52
> This jira is mainly to track discussion and ideas around this challenge. 
> Please feel free to propose an alternate solution. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3630) Consider auto closing outdated pull requests

2016-04-27 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260555#comment-15260555
 ] 

Grant Henke commented on KAFKA-3630:


[~ijuma] I agree a bit longer might be better. I started short for example's 
sake. I am leaning towards 30 days but don't feel too strongly about it.  

> Consider auto closing outdated pull requests
> 
>
> Key: KAFKA-3630
> URL: https://issues.apache.org/jira/browse/KAFKA-3630
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>
> Currently we don't have access to close pull requests and the list of open 
> pull requests is growing. We are nearing 200 open pull requests and many are 
> outdated. 
> I am not sure if this is possible but I think a potential improvement would 
> be to have a Jenkins job that runs periodically to:
> 1. Find all pull requests that have had no activity for 15 days
> 2. Comment on the pull requests that they were auto closed and should be 
> re-opened if there is still interest
> 3. Close the pull requests
> I don't think closing the outdated pull requests will hurt project progress 
> in any way because:
> - Jira still tracks the feature or fix
> - The pull requests likely need a rebase or feedback needs to be addressed
> - The notification will encourage the pull request owner and reviewers to 
> follow up
> As of today the breakdown of older pull requests is:
> - [Older than 15 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-04-12+]:
>  153
> - [Older than 30 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-03-28]:
>  107
> - [Older than 90 
> days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-01-28+]:
>  52
> This jira is mainly to track discussion and ideas around this challenge. 
> Please feel free to propose an alternate solution. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3630) Consider auto closing outdated pull requests

2016-04-27 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3630:
--

 Summary: Consider auto closing outdated pull requests
 Key: KAFKA-3630
 URL: https://issues.apache.org/jira/browse/KAFKA-3630
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke


Currently we don't have access to close pull requests and the list of open pull 
requests is growing. We are nearing 200 open pull requests and many are 
outdated. 

I am not sure if this is possible but I think a potential improvement would be 
to have a Jenkins job that runs periodically to:
1. Find all pull requests that have had no activity for 15 days
2. Comment on the pull requests that they were auto closed and should be 
re-opened if there is still interest
3. Close the pull requests

I don't think closing the outdated pull requests will hurt project progress in 
any way because:
- Jira still tracks the feature or fix
- The pull requests likely need a rebase or feedback needs to be addressed
- The notification will encourage the pull request owner and reviewers to 
follow up

As of today the breakdown of older pull requests is:
- [Older than 15 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-04-12+]:
 153
- [Older than 30 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-03-28]:
 107
- [Older than 90 
days|https://github.com/apache/kafka/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+updated%3A%3C%3D2016-01-28+]:
 52

This jira is mainly to track discussion and ideas around this challenge. Please 
feel free to propose an alternate solution. 
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-24 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15255747#comment-15255747
 ] 

Grant Henke commented on KAFKA-3615:


This is only true when running from source, right? Not from an actual release 
where all the jars are under /lib.

> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3610) Improve TimeoutException message when a RecordBatch expires

2016-04-22 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254205#comment-15254205
 ] 

Grant Henke commented on KAFKA-3610:


Thinking further, perhaps documentation is a better place for those 
recommendations. This is a question I have gotten many times, so I am looking 
for the best way to relay the information. 

> Improve TimeoutException message when a RecordBatch expires
> ---
>
> Key: KAFKA-3610
> URL: https://issues.apache.org/jira/browse/KAFKA-3610
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently when a batch expires in _RecordBatch.maybeExpire_ a Timeout 
> exception is thrown with the message "Batch Expired". Providing some 
> explanation and advice on configuration options to avoid or handle the 
> exception would help users. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3610) Improve TimeoutException message when a RecordBatch expires

2016-04-22 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15254190#comment-15254190
 ] 

Grant Henke commented on KAFKA-3610:


[~ijuma] You are right that it is improved in trunk, since recordCount, topic, 
and partition are now included. But I was thinking we could give some guidance 
on configuration changes to help avoid the exception. 

> Improve TimeoutException message when a RecordBatch expires
> ---
>
> Key: KAFKA-3610
> URL: https://issues.apache.org/jira/browse/KAFKA-3610
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently when a batch expires in _RecordBatch.maybeExpire_ a Timeout 
> exception is thrown with the message "Batch Expired". Providing some 
> explanation and advice on configuration options to avoid or handle the 
> exception would help users. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3610) Improve TimeoutException message when a RecordBatch expires

2016-04-22 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3610:
--

 Summary: Improve TimeoutException message when a RecordBatch 
expires
 Key: KAFKA-3610
 URL: https://issues.apache.org/jira/browse/KAFKA-3610
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


Currently when a batch expires in _RecordBatch.maybeExpire_ a Timeout exception 
is thrown with the message "Batch Expired". Providing some explanation and 
advice on configuration options to avoid or handle the exception would help 
users. 
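
As one possible shape for the improvement (the wording and the recordCount, 
topicAndPartition, and elapsedMs variables are illustrative only, not the 
actual producer internals):
{code}
// Illustrative replacement for the bare "Batch Expired" message: include
// context plus a configuration hint.
throw new TimeoutException(String.format(
    "Expired %d record(s) for %s: %d ms has passed since batch creation. " +
    "Consider increasing request.timeout.ms or adjusting batching settings " +
    "such as linger.ms and batch.size.",
    recordCount, topicAndPartition, elapsedMs));
{code}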



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3604) Improve error messages when null is used with a non-nullable Type

2016-04-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3604:
--

 Summary: Improve error messages when null is used with a 
non-nullable Type
 Key: KAFKA-3604
 URL: https://issues.apache.org/jira/browse/KAFKA-3604
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke


Currently, when a null is passed to a non-nullable type, an unclear message is 
provided in the exception. We should indicate that the issue was caused by a 
null value. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3603) Define HashCode and Equals methods for Schema, Field and Type

2016-04-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3603:
--

 Summary: Define HashCode and Equals methods for Schema, Field and 
Type
 Key: KAFKA-3603
 URL: https://issues.apache.org/jira/browse/KAFKA-3603
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


We should consider implementing HashCode and Equals methods for Schema, Field 
and Type.
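
As a generic illustration of the pattern being proposed (the name and type 
members here are invented for the sketch, not the actual Field internals):
{code}
import java.util.Objects;

// Toy Field class showing the standard equals/hashCode contract.
public final class Field {
    private final String name;
    private final String type;

    public Field(String name, String type) {
        this.name = name;
        this.type = type;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Field)) return false;
        Field other = (Field) o;
        return name.equals(other.name) && type.equals(other.type);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, type); // consistent with equals
    }
}
{code}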



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3358) Only request metadata updates once we have topics or a pattern subscription

2016-04-15 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15243135#comment-15243135
 ] 

Grant Henke commented on KAFKA-3358:


[~hachikuji] I think it would be a valuable patch separate from the KIP-4 
changes too. 

> Only request metadata updates once we have topics or a pattern subscription
> ---
>
> Key: KAFKA-3358
> URL: https://issues.apache.org/jira/browse/KAFKA-3358
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> The current code requests a metadata update for _all_ topics, which can 
> cause major load issues in large clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3563) Maintain MessageAndMetadata constructor compatibility

2016-04-15 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3563:
---
Status: Patch Available  (was: Open)

> Maintain MessageAndMetadata constructor compatibility 
> --
>
> Key: KAFKA-3563
> URL: https://issues.apache.org/jira/browse/KAFKA-3563
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> The MessageAndMetadata constructor was changed to include timestamp 
> information as a part of KIP-32. Though the constructor may not appear in 
> typical client usage, it may be used in unit tests or some advanced usage. We 
> should maintain compatibility if possible. 
> One example where the constructor is used is Apache Spark: 
> https://github.com/apache/spark/blob/master/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala#L223-L225
> The old constructor was:
> {code}
> MessageAndMetadata[K, V](topic: String,
>partition: Int,
>private val rawMessage: Message,
>offset: Long,
>keyDecoder: Decoder[K], valueDecoder: Decoder[V])
> {code}
> And after KIP-32 it is now:
> {code}
> MessageAndMetadata[K, V](topic: String,
>partition: Int,
>private val rawMessage: Message,
>offset: Long,
>timestamp: Long = Message.NoTimestamp,
>timestampType: TimestampType = TimestampType.CREATE_TIME,
>keyDecoder: Decoder[K], valueDecoder: Decoder[V])
> {code}
> Even though _timestamp_ and _timestampType_ have defaults, if _keyDecoder_ 
> and _valueDecoder_ were not passed by name, then the new constructor is not 
> backwards compatible. 
> We can fix compatibility by moving the _timestamp_ and _timestampType_ 
> parameters to the end of the constructor, or by providing a new constructor 
> without _timestamp_ and _timestampType_ that matches the old constructor. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3563) Maintain MessageAndMetadata constructor compatibility

2016-04-15 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3563:
--

 Summary: Maintain MessageAndMetadata constructor compatibility 
 Key: KAFKA-3563
 URL: https://issues.apache.org/jira/browse/KAFKA-3563
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.10.0.0


The MessageAndMetadata constructor was changed to include timestamp information 
as a part of KIP-32. Though the constructor may not appear in typical client 
usage, it may be used in unit tests or some advanced usage. We should maintain 
compatibility if possible. 

One example where the constructor is used is Apache Spark: 
https://github.com/apache/spark/blob/master/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala#L223-L225

The old constructor was:
{code}
MessageAndMetadata[K, V](topic: String,
   partition: Int,
   private val rawMessage: Message,
   offset: Long,
   keyDecoder: Decoder[K], valueDecoder: Decoder[V])
{code}

And after KIP-32 it is now:
{code}
MessageAndMetadata[K, V](topic: String,
   partition: Int,
   private val rawMessage: Message,
   offset: Long,
   timestamp: Long = Message.NoTimestamp,
   timestampType: TimestampType = TimestampType.CREATE_TIME,
   keyDecoder: Decoder[K], valueDecoder: Decoder[V])
{code}

Even though _timestamp_ and _timestampType_ have defaults, if _keyDecoder_ and 
_valueDecoder_ were not passed as named arguments, then the new constructor is 
not backwards compatible. 

We can fix compatibility by moving the _timestamp_ and _timestampType_ 
parameters to the end of the constructor, or by providing a new constructor 
without _timestamp_ and _timestampType_ that matches the old constructor. 
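
The second option can be sketched in a few lines of Scala. This is a hedged 
illustration, not the actual patch: a simplified shadow of the class with an 
auxiliary constructor that restores the pre-KIP-32 positional signature.

{code}
import kafka.message.Message
import kafka.serializer.Decoder
import org.apache.kafka.common.record.TimestampType

case class MessageAndMetadata[K, V](topic: String,
   partition: Int,
   private val rawMessage: Message,
   offset: Long,
   timestamp: Long = Message.NoTimestamp,
   timestampType: TimestampType = TimestampType.CREATE_TIME,
   keyDecoder: Decoder[K], valueDecoder: Decoder[V]) {

  // Auxiliary constructor matching the old positional signature, so callers
  // that pass keyDecoder/valueDecoder positionally keep compiling.
  def this(topic: String, partition: Int, rawMessage: Message, offset: Long,
           keyDecoder: Decoder[K], valueDecoder: Decoder[V]) =
    this(topic, partition, rawMessage, offset, Message.NoTimestamp,
         TimestampType.CREATE_TIME, keyDecoder, valueDecoder)
}
{code}

Callers using the old six-argument form then resolve to the auxiliary 
constructor, while new code can still supply the timestamp fields.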



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3549) Close consumers instantiated in consumer tests

2016-04-12 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15237807#comment-15237807
 ] 

Grant Henke commented on KAFKA-3549:


[~ijuma] This patch tries to clean up a lot of them, but I am sure there are 
some leaks remaining. It would be nice if there were a generic way to 
automatically detect any closeable resource that is still open.

> Close consumers instantiated in consumer tests
> --
>
> Key: KAFKA-3549
> URL: https://issues.apache.org/jira/browse/KAFKA-3549
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Close consumers instantiated in consumer tests. Since these consumers often 
> use the default group.id of "", they could cause transient failures like 
> those seen in KAFKA-3117 and KAFKA-2933. I have not been able to prove that 
> this change will fix those failures, but closing the consumers is a good 
> practice regardless.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3549) Close consumers instantiated in consumer tests

2016-04-12 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3549:
---
Status: Patch Available  (was: Open)

> Close consumers instantiated in consumer tests
> --
>
> Key: KAFKA-3549
> URL: https://issues.apache.org/jira/browse/KAFKA-3549
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Close consumers instantiated in consumer tests. Since these consumers often 
> use the default group.id of "", they could cause transient failures like 
> those seen in KAFKA-3117 and KAFKA-2933. I have not been able to prove that 
> this change will fix those failures, but closing the consumers is a good 
> practice regardless.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3550) Broker does not honor MetadataRequest api version; always returns v0 MetadataResponse

2016-04-12 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15237800#comment-15237800
 ] 

Grant Henke commented on KAFKA-3550:


Note that KAFKA-2512 has an older pull request open from [~becket_qin] to 
validate the apiVersion of all requests here: 
https://github.com/apache/kafka/pull/200
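
A hedged sketch of the kind of check that patch adds (not the code in PR 200; 
the supportedRange lookup is an assumption standing in for the real protocol 
metadata):

{code}
object ApiVersionCheck {
  // Reject a request whose header carries an api version outside the
  // supported range for its api key, before the body is parsed.
  def validateApiVersion(apiKey: Short, apiVersion: Short,
                         supportedRange: Short => (Short, Short)): Unit = {
    val (oldest, latest) = supportedRange(apiKey)
    if (apiVersion < oldest || apiVersion > latest)
      throw new IllegalArgumentException(
        s"Unsupported version $apiVersion for api key $apiKey " +
        s"(supported: $oldest to $latest)")
  }
}
{code}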

> Broker does not honor MetadataRequest api version; always returns v0 
> MetadataResponse
> -
>
> Key: KAFKA-3550
> URL: https://issues.apache.org/jira/browse/KAFKA-3550
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0, 0.8.1.1, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Grant Henke
>
> To reproduce:
> Send a MetadataRequest (api key 3) with incorrect api version (e.g., 1234).
> The expected behavior is for the broker to reject the request as unrecognized.
> Broker (incorrectly) responds with MetadataResponse v0.
> The problem here is that any request for a "new" MetadataRequest (i.e., 
> KIP-4) sent to an old broker will generate an incorrect response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3550) Broker does not honor MetadataRequest api version; always returns v0 MetadataResponse

2016-04-12 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15237794#comment-15237794
 ] 

Grant Henke commented on KAFKA-3550:


I will look into this and provide a detailed summary. 

> Broker does not honor MetadataRequest api version; always returns v0 
> MetadataResponse
> -
>
> Key: KAFKA-3550
> URL: https://issues.apache.org/jira/browse/KAFKA-3550
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0, 0.8.1.1, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Grant Henke
>
> To reproduce:
> Send a MetadataRequest (api key 3) with incorrect api version (e.g., 1234).
> The expected behavior is for the broker to reject the request as unrecognized.
> Broker (incorrectly) responds with MetadataResponse v0.
> The problem here is that any request for a "new" MetadataRequest (i.e., 
> KIP-4) sent to an old broker will generate an incorrect response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3550) Broker does not honor MetadataRequest api version; always returns v0 MetadataResponse

2016-04-12 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3550:
--

Assignee: Grant Henke

> Broker does not honor MetadataRequest api version; always returns v0 
> MetadataResponse
> -
>
> Key: KAFKA-3550
> URL: https://issues.apache.org/jira/browse/KAFKA-3550
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0, 0.8.1.1, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Grant Henke
>
> To reproduce:
> Send a MetadataRequest (api key 3) with incorrect api version (e.g., 1234).
> The expected behavior is for the broker to reject the request as unrecognized.
> Broker (incorrectly) responds with MetadataResponse v0.
> The problem here is that any request for a "new" MetadataRequest (i.e., 
> KIP-4) sent to an old broker will generate an incorrect response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3549) Close consumers instantiated in consumer tests

2016-04-12 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3549:
--

 Summary: Close consumers instantiated in consumer tests
 Key: KAFKA-3549
 URL: https://issues.apache.org/jira/browse/KAFKA-3549
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


Close consumers instantiated in consumer tests. Since these consumers often use 
the default group.id of "", they could cause transient failures like those seen 
in KAFKA-3117 and KAFKA-2933. I have not been able to prove that this change 
will fix those failures, but closing the consumers is a good practice 
regardless.
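
A minimal sketch of one way to make such leaks structurally impossible (not 
the actual test utilities; ConsumerCleanup and its method names are 
illustrative):

{code}
import java.util.Properties
import scala.collection.mutable
import org.apache.kafka.clients.consumer.KafkaConsumer

// Mixin that tracks every consumer a test creates and closes them all in
// teardown, so a forgotten close() cannot leak across tests.
trait ConsumerCleanup {
  private val consumers = mutable.Buffer.empty[KafkaConsumer[_, _]]

  def createConsumer[K, V](props: Properties): KafkaConsumer[K, V] = {
    val consumer = new KafkaConsumer[K, V](props)
    consumers += consumer
    consumer
  }

  // Call this from the test's tearDown/@After method.
  def closeAllConsumers(): Unit = consumers.foreach(_.close())
}
{code}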



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-12 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3117:
---
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java7 64bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Neha Narkhede
>  Labels: newbie, test
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3509) Provide an Authorizer interface using the Java client enumerator classes

2016-04-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3509:
---
Description: 
Provide an Authorizer interface using the new Java classes used by the ACL 
requests/responses added as a part of KAFKA-3266. Deprecate the old one to 
encourage transition.

This may require a small KIP.

  was:Provide an Authorizer interface using the new Java classes used by the 
ACL requests/responses added as a part of KAFKA-3266. Deprecate the old one to 
encourage transition.


> Provide an Authorizer interface using the Java client enumerator classes
> 
>
> Key: KAFKA-3509
> URL: https://issues.apache.org/jira/browse/KAFKA-3509
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Provide an Authorizer interface using the new Java classes used by the ACL 
> requests/responses added as a part of KAFKA-3266. Deprecate the old one to 
> encourage transition.
> This may require a small KIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3509) Provide an Authorizer interface using the Java client enumerator classes

2016-04-04 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3509:
--

 Summary: Provide an Authorizer interface using the Java client 
enumerator classes
 Key: KAFKA-3509
 URL: https://issues.apache.org/jira/browse/KAFKA-3509
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


Provide an Authorizer interface using the new Java classes used by the ACL 
requests/responses added as a part of KAFKA-3266. Deprecate the old one to 
encourage transition.
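
For a rough idea of the shape such an interface could take, here is a hedged 
sketch; every type below is an invented placeholder for the Java classes from 
KAFKA-3266, since the final API would be settled in the KIP:

{code}
object AuthorizerSketch {
  sealed trait Operation
  case object Read extends Operation
  case object Write extends Operation

  case class Resource(resourceType: String, name: String)
  case class Session(principal: String, host: String)
  case class Acl(principal: String, host: String, operation: Operation)

  trait Authorizer {
    def authorize(session: Session, operation: Operation, resource: Resource): Boolean
    def addAcls(acls: Set[Acl], resource: Resource): Unit
    def removeAcls(acls: Set[Acl], resource: Resource): Boolean
    def getAcls(resource: Resource): Set[Acl]
    def close(): Unit
  }
}
{code}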



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3507) Define standard exceptions for the Authorizer interface

2016-04-04 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3507:
--

 Summary: Define standard exceptions for the Authorizer interface
 Key: KAFKA-3507
 URL: https://issues.apache.org/jira/browse/KAFKA-3507
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


The Authorizer does not define any standard exceptions that can be used by an 
implementer. This means that any exception thrown on the broker, as a part of 
KAFKA-3266, can only be passed back to the client as an UnknownException(-1), 
making error handling difficult. A set of standard exceptions covering the most 
foreseeable failure cases should be defined as a part of the interface and used 
in the default SimpleAclAuthorizer. 

This work will require a small KIP.
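
A hedged sketch of what a standard exception set might look like; the class 
names are illustrative, not the ones a KIP would actually define:

{code}
// Base type so implementers can throw one family of exceptions and the
// broker can map each subtype to a specific error code instead of -1.
class AuthorizerException(message: String, cause: Throwable = null)
  extends RuntimeException(message, cause)

// The principal is known but not permitted to perform the operation.
class AuthorizationDeniedException(message: String) extends AuthorizerException(message)

// The ACL store backing the authorizer is temporarily unreachable.
class AclStoreUnavailableException(message: String) extends AuthorizerException(message)

// The resource referenced by an ACL operation does not exist.
class ResourceNotFoundException(message: String) extends AuthorizerException(message)
{code}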



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3483) Restructure ducktape tests to simplify running subsets of tests

2016-03-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3483:
---
Status: Patch Available  (was: Open)

> Restructure ducktape tests to simplify running subsets of tests
> ---
>
> Key: KAFKA-3483
> URL: https://issues.apache.org/jira/browse/KAFKA-3483
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Provides a convenient way of running ducktape tests for a single component 
> (core, connect, streams, etc.). It also separates tests from benchmarks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3483) Restructure ducktape tests to simplify running subsets of tests

2016-03-29 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3483:
--

 Summary: Restructure ducktape tests to simplify running subsets of 
tests
 Key: KAFKA-3483
 URL: https://issues.apache.org/jira/browse/KAFKA-3483
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


Provides a convenient way of running ducktape tests for a single component 
(core, connect, streams, etc.). It also separates tests from benchmarks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3475) Introduce our own `MiniKdc`

2016-03-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214270#comment-15214270
 ] 

Grant Henke commented on KAFKA-3475:


I have not looked at the code in detail yet; I am just posing the upfront 
question of whether, and why, we want to maintain this ourselves. 

> Introduce our own `MiniKdc`
> ---
>
> Key: KAFKA-3475
> URL: https://issues.apache.org/jira/browse/KAFKA-3475
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We rely on `hadoop-minikdc` for SASL unit and system tests. This library 
> contains a single `MiniKdc` class that depends on Apache Directory Server for 
> most of the functionality. Even so, there are a couple of bugs (KAFKA-3453,  
> KAFKA-2866) that would be easy to fix if we had our own `MiniKdc`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3475) Introduce our own `MiniKdc`

2016-03-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214260#comment-15214260
 ] 

Grant Henke commented on KAFKA-3475:


Are these bugs not easily fixed in the existing MiniKdc? It would be great to 
contribute the fixes back so everyone using MiniKdc can benefit. Even though 
the code is small, I am not sure it's worth owning, especially if it's 
basically a copy/paste. If it's really custom-tailored to Kafka usage, it 
might make sense.

> Introduce our own `MiniKdc`
> ---
>
> Key: KAFKA-3475
> URL: https://issues.apache.org/jira/browse/KAFKA-3475
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We rely on `hadoop-minikdc` for SASL unit and system tests. This library 
> contains a single `MiniKdc` class that depends on Apache Directory Server for 
> most of the functionality. Even so, there are a couple of bugs (KAFKA-3453,  
> KAFKA-2866) that would be easy to fix if we had our own `MiniKdc`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3472) Allow MirrorMaker to copy selected partitions and choose target topic name

2016-03-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214208#comment-15214208
 ] 

Grant Henke commented on KAFKA-3472:


I think the goal of sampling topic data could be achieved without adding 
another parameter to MirrorMaker, and without depending on having many 
partitions. I linked KAFKA-2670, which discusses adding a sampling rate to 
MirrorMaker. That discussion mentions using an interceptor to implement custom 
sampling; a rough sketch of the idea is below. Would that work for your use 
case?
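
A hedged sketch of that idea, assuming the KIP-42 consumer interceptor API and 
an illustrative 10% sampling rate (the class name and rate are not from any 
actual patch):

{code}
import java.util.{Map => JMap}
import scala.collection.JavaConverters._
import scala.util.Random
import org.apache.kafka.clients.consumer.{ConsumerInterceptor, ConsumerRecords, OffsetAndMetadata}
import org.apache.kafka.common.TopicPartition

// Drops ~90% of records on the consumer side before MirrorMaker re-produces
// them; enable via the consumer's interceptor.classes property.
class SamplingInterceptor extends ConsumerInterceptor[Array[Byte], Array[Byte]] {

  override def onConsume(records: ConsumerRecords[Array[Byte], Array[Byte]]):
      ConsumerRecords[Array[Byte], Array[Byte]] = {
    val sampled = records.partitions.asScala.map { tp =>
      tp -> records.records(tp).asScala.filter(_ => Random.nextDouble() < 0.1).asJava
    }.toMap.asJava
    new ConsumerRecords(sampled)
  }

  // Offsets are still committed for everything consumed, including records
  // the sampler dropped.
  override def onCommit(offsets: JMap[TopicPartition, OffsetAndMetadata]): Unit = ()
  override def close(): Unit = ()
  override def configure(configs: JMap[String, _]): Unit = ()
}
{code}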

> Allow MirrorMaker to copy selected partitions and choose target topic name
> --
>
> Key: KAFKA-3472
> URL: https://issues.apache.org/jira/browse/KAFKA-3472
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.1
>Reporter: Hang Sun
>Priority: Minor
>  Labels: mirror-maker
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> It would be nice if MirrorMaker could be used to copy only a few partitions, 
> instead of all of them, to a different topic. My use case is to sample a 
> small portion of production traffic in the pre-production environment for 
> testing. The pre-production environment is usually smaller and cannot handle 
> the full load from production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2016-03-25 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15212084#comment-15212084
 ] 

Grant Henke commented on KAFKA-2998:


[~hachikuji] I opened KAFKA-3468 before finding this jira. I linked it for now, 
but it may in fact be a duplicate. Feel free to mark it as a duplicate if your 
patch handles that scenario too. 

> New Consumer should not retry indefinitely if no broker is available
> 
>
> Key: KAFKA-2998
> URL: https://issues.apache.org/jira/browse/KAFKA-2998
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Florian Hussonnois
>Assignee: Jason Gustafson
>Priority: Minor
>
> If no broker from bootstrap.servers is available, the consumer retries 
> indefinitely with debug log messages:
>  
> DEBUG 17:16:13 Give up sending metadata request since no node is available
> DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
> DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.
> At least, an ERROR message should be logged after a number of retries.
> In addition, maybe the consumer should fail in such a case? This behavior 
> could be controlled by a configuration property?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3468) Java clients never fail when bootstrap uses incorrect port

2016-03-25 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3468:
--

 Summary: Java clients never fail when bootstrap uses incorrect port
 Key: KAFKA-3468
 URL: https://issues.apache.org/jira/browse/KAFKA-3468
 Project: Kafka
  Issue Type: Bug
Reporter: Grant Henke


When a client bootstrap uses a valid host but an incorrect port, the Java 
clients never fail; instead they loop forever trying to fetch metadata. A 
sample debug log from the consumer:
{noformat}
[2016-03-25 11:37:39,880] DEBUG Initialize connection to node -1 for sending 
metadata request (org.apache.kafka.clients.NetworkClient:629)
[2016-03-25 11:37:39,880] DEBUG Initiating connection to node -1 at 
localhost:. (org.apache.kafka.clients.NetworkClient:492)
[2016-03-25 11:37:39,881] DEBUG Node -1 disconnected. 
(org.apache.kafka.clients.NetworkClient:459)
[2016-03-25 11:37:39,881] DEBUG Give up sending metadata request since no node 
is available (org.apache.kafka.clients.NetworkClient:614)
[2016-03-25 11:37:39,983] DEBUG Initialize connection to node -1 for sending 
metadata request (org.apache.kafka.clients.NetworkClient:629)
[2016-03-25 11:37:39,983] DEBUG Initiating connection to node -1 at 
localhost:. (org.apache.kafka.clients.NetworkClient:492)
[2016-03-25 11:37:39,984] DEBUG Node -1 disconnected. 
(org.apache.kafka.clients.NetworkClient:459)
[2016-03-25 11:37:39,985] DEBUG Give up sending metadata request since no node 
is available (org.apache.kafka.clients.NetworkClient:614)
[2016-03-25 11:37:40,090] DEBUG Initialize connection to node -1 for sending 
metadata request (org.apache.kafka.clients.NetworkClient:629)
[2016-03-25 11:37:40,090] DEBUG Initiating connection to node -1 at 
localhost:. (org.apache.kafka.clients.NetworkClient:492)
[2016-03-25 11:37:40,091] DEBUG Node -1 disconnected. 
(org.apache.kafka.clients.NetworkClient:459)
[2016-03-25 11:37:40,091] DEBUG Give up sending metadata request since no node 
is available (org.apache.kafka.clients.NetworkClient:614)
{noformat}
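
A minimal repro sketch of the behavior (the topic name and port 9999 are 
arbitrary; the port is assumed to have no listener):

{code}
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object BadPortRepro extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9999") // valid host, wrong port
  props.put("group.id", "repro")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")

  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
  consumer.subscribe(Collections.singletonList("test"))
  while (true)
    consumer.poll(1000) // never throws; the client retries metadata forever
}
{code}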



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3460) Remove old 0.7 KafkaMigrationTool

2016-03-24 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3460:
---
Status: Patch Available  (was: Open)

> Remove old 0.7 KafkaMigrationTool
> -
>
> Key: KAFKA-3460
> URL: https://issues.apache.org/jira/browse/KAFKA-3460
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Unless we are supporting direct upgrades from 0.7 to 0.10, the 
> KafkaMigrationTool should be removed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3460) Remove old 0.7 KafkaMigrationTool

2016-03-24 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3460:
--

 Summary: Remove old 0.7 KafkaMigrationTool
 Key: KAFKA-3460
 URL: https://issues.apache.org/jira/browse/KAFKA-3460
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.10.0.0


Unless we are supporting direct upgrades from 0.7 to 0.10, the 
KafkaMigrationTool should be removed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3441.

Resolution: Fixed

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3451:
---
Attachment: Jacoco-html.zip
scoverage.zip

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable. This may 
> not be perfect, and enhancements should be tracked under the KAFKA-1722 
> umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3451:
---
Status: Patch Available  (was: Open)

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable. This may 
> not be perfect, and enhancements should be tracked under the KAFKA-1722 
> umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15208786#comment-15208786
 ] 

Grant Henke commented on KAFKA-3451:


Attached sample report output.

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable. This may 
> not be perfect, and enhancements should be tracked under the KAFKA-1722 
> umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3451:
--

 Summary: Add basic HTML coverage report generation to gradle
 Key: KAFKA-3451
 URL: https://issues.apache.org/jira/browse/KAFKA-3451
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke


Having some basic ability to report and view coverage is valuable. This may 
not be perfect, and enhancements should be tracked under the KAFKA-1722 
umbrella, but it's a start. 

This will use Jacoco to report on the Java projects and Scoverage to report on 
the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3367) Delete topic dont delete the complete log from kafka

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3367.

Resolution: Not A Problem

> Delete topic dont delete the complete log from kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Deleting a topic just marks the topic as deleted, but the data still remains 
> in the logs. How can we delete the topic completely without manually deleting 
> the logs from Kafka and ZooKeeper?
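
For context, the usual cause of this symptom (a general fact about Kafka of 
this era, not stated in the ticket): deletion only completes when every broker 
has it enabled in server.properties, and the default is false:

{noformat}
# Without this, topics stay "marked for deletion" and logs are never removed
delete.topic.enable=true
{noformat}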



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3417) Invalid characters in config properties not being validated?

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3417:
--

Assignee: Grant Henke

> Invalid characters in config properties not being validated?
> 
>
> Key: KAFKA-3417
> URL: https://issues.apache.org/jira/browse/KAFKA-3417
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.1
>Reporter: Byron Ruth
>Assignee: Grant Henke
>Priority: Minor
>
> I ran into an error using a {{client.id}} with invalid characters (per the 
> [config 
> validator|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/common/Config.scala#L25-L35]).
>  I was able to get that exact error using the {{kafka-console-consumer}} 
> script, presumably because I supplied a consumer properties file and it 
> validated prior to hitting the server. However, when I use a client library 
> (sarama for Go in this case), an error in the metrics subsystem is thrown 
> [here|https://github.com/apache/kafka/blob/977ebbe9bafb6c1a6e1be69620f745712118fe80/clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java#L380].
> The stacktrace is:
> {code:title=stack.java}
> [2016-03-17 17:43:47,342] ERROR [KafkaApi-0] error when handling request 
> Name: FetchRequest; Version: 0; CorrelationId: 2; ClientId: foo:bar; 
> ReplicaId: -1; MaxWait: 250 ms; MinBytes: 1 bytes; RequestInfo: [foo,0] -> 
> PartitionFetchInfo(0,32768) (kafka.server.KafkaApis)
> org.apache.kafka.common.KafkaException: Error creating mbean attribute for 
> metricName :MetricName [name=throttle-time, group=Fetch, description=Tracking 
> average throttle-time per client, tags={client-id=foo:bar}]
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:113)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> ...
> {code}
> Assuming the cause is related to the invalid characters, shouldn't the 
> {{clientId}} be validated when the request header is decoded, prior to being 
> used?
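
A hedged sketch of the kind of check the linked Config.scala validator 
performs (the legal-character pattern here is assumed from that link, not 
copied): running it server side when the request header is decoded would 
reject "foo:bar" with a clear error instead of failing later inside the 
metrics subsystem.

{code}
import scala.util.matching.Regex

object ClientIdValidator {
  private val legal: Regex = "[a-zA-Z0-9._-]*".r

  def validate(clientId: String): Unit = clientId match {
    case legal() => // valid, nothing to do
    case _ => throw new IllegalArgumentException(
      s"Invalid clientId '$clientId': only alphanumerics, '.', '_' and '-' are allowed")
  }
}
{code}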



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3426) Improve protocol type errors when invalid sizes are received

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3426:
---
Affects Version/s: 0.9.0.0

> Improve protocol type errors when invalid sizes are received
> 
>
> Key: KAFKA-3426
> URL: https://issues.apache.org/jira/browse/KAFKA-3426
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We currently don't perform much validation on the size value read by the 
> protocol types. This means that we end up throwing exceptions like 
> `BufferUnderflowException`, `NegativeArraySizeException`, etc. `Schema.read` 
> catches these exceptions and adds some useful information like:
> {code}
> throw new SchemaException("Error reading field '" + fields[i].name +
>   "': " +
>   (e.getMessage() == null ? 
> e.getClass().getName() : e.getMessage()));
> {code}
> We could do even better by throwing a `SchemaException` with a more user 
> friendly message.
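
A hedged sketch of the friendlier validation suggested above (standalone, not 
the actual protocol code): check the decoded size against the bytes actually 
remaining before allocating, so the failure is a SchemaException with a clear 
message rather than a NegativeArraySizeException or BufferUnderflowException.

{code}
import java.nio.ByteBuffer

object SizeValidatingReader {
  class SchemaException(msg: String) extends RuntimeException(msg)

  def readBytes(buffer: ByteBuffer): Array[Byte] = {
    val size = buffer.getInt()
    if (size < 0)
      throw new SchemaException(s"Bytes field cannot have negative size $size")
    if (size > buffer.remaining())
      throw new SchemaException(
        s"Bytes field size $size exceeds the ${buffer.remaining()} bytes remaining")
    val bytes = new Array[Byte](size)
    buffer.get(bytes)
    bytes
  }
}
{code}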



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3301:
---
Status: Patch Available  (was: Open)

> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3301:
--

Assignee: Grant Henke

> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

