[jira] [Created] (KAFKA-2868) retry.backoff.ms ignored by Producer

2015-11-20 Thread Nicolas Bordier (JIRA)
Nicolas Bordier created KAFKA-2868:
--

 Summary: retry.backoff.ms ignored by Producer
 Key: KAFKA-2868
 URL: https://issues.apache.org/jira/browse/KAFKA-2868
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
 Environment: Host : Ubuntu 14.04 LTS x86_64 (3.16.0-53-generic)
Container LXC : Centos 6
Reporter: Nicolas Bordier
Assignee: Jun Rao


In our test, the producer config parameter is defined as:
retry.backoff.ms = 10000

During a Kafka partition reassignment, the producer logs NOT_LEADER_FOR_PARTITION 
errors not every 10 s (as defined by retry.backoff.ms) but very quickly (at 
intervals of 100 ms or less):

[2015-11-17 11:53:30,002] WARN Got error produce response with correlation id 
23080 on topic-partition raw-9, retrying (99 attempts left). Error: 
NOT_LEADER_FOR_PARTITION 
(org.apache.kafka.clients.producer.internals.Sender:257)
[2015-11-17 11:53:30,037] WARN Got error produce response with correlation id 
23081 on topic-partition raw-9, retrying (98 attempts left). Error: 
NOT_LEADER_FOR_PARTITION 
(org.apache.kafka.clients.producer.internals.Sender:257)
[2015-11-17 11:53:30,260] WARN Got error produce response with correlation id 
23083 on topic-partition raw-9, retrying (97 attempts left). Error: 
NOT_LEADER_FOR_PARTITION 
(org.apache.kafka.clients.producer.internals.Sender:257)

In our test, the retry.backoff.ms parameter is ignored by the producer.

We need this parameter to be honored so that retries are not exhausted (i.e. 
"retrying (0 attempts left)") during a long partition reassignment.

Thanks,




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018125#comment-15018125
 ] 

Stevo Slavic commented on KAFKA-2687:
-

It seems ConsumerMetadataRequest/ConsumerMetadataResponse were renamed in some 
commit, but the documentation wasn't fully updated; there are still references 
in https://github.com/apache/kafka/blob/0.9.0/docs/implementation.html

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup
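
As an illustration of that last point, here is a hedged sketch of how a tool might 
dispatch on the protocol type; the ConsumerProtocol method names are my assumption 
about the 0.9 client internals, not part of the proposal itself:

{code}
import java.nio.ByteBuffer;
import org.apache.kafka.clients.consumer.internals.ConsumerProtocol;
import org.apache.kafka.clients.consumer.internals.PartitionAssignor;

static void describeMember(String protocolType, ByteBuffer metadata, ByteBuffer assignment) {
    if ("consumer".equals(protocolType)) {
        // Known protocol type: decode topic subscriptions and partition assignments.
        PartitionAssignor.Subscription subscription = ConsumerProtocol.deserializeSubscription(metadata);
        PartitionAssignor.Assignment parsed = ConsumerProtocol.deserializeAssignment(assignment);
        System.out.println("topics=" + subscription.topics() + ", partitions=" + parsed.partitions());
    } else {
        // Unknown protocol type: the member bytes stay opaque to the tool.
        System.out.println("opaque member metadata: " + metadata.remaining() + " bytes");
    }
}
{code}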



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2869:
--

 Summary: host used by Authorizer should be IP address not 
hostname/IP
 Key: KAFKA-2869
 URL: https://issues.apache.org/jira/browse/KAFKA-2869
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Critical
 Fix For: 0.9.0.0


Reported by Parth:

"One issue that our ranger team has found is that session.host returns 
InetAddress.toString() which is of the format hostname/IP. This means all the 
acls that wants to specify hosts will have to specify host in hostNaem/IP 
format. We can either make session.host return ip or host name or change the 
authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."

Jun suggested we use IP instead of hostname/IP.
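
To illustrate the formatting difference with plain Java (a self-contained example; 
the printed hostname depends on the local resolver):

import java.net.InetAddress;

public class InetAddressFormatDemo {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("localhost");
        // toString() yields "hostname/literal-IP", which is what session.host exposes today
        System.out.println(addr);                  // e.g. localhost/127.0.0.1
        // getHostAddress() yields the IP-only form suggested above
        System.out.println(addr.getHostAddress()); // e.g. 127.0.0.1
    }
}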



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2869; Host used by Authorizer should be ...

2015-11-20 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/567

KAFKA-2869; Host used by Authorizer should be IP address not hostname/IP



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2869-host-used-by-authorizer-should-be-ip

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/567.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #567


commit 50a798fab77521cb1c9536d98f9faa0050cd9139
Author: Ismael Juma 
Date:   2015-11-20T15:07:02Z

Host used by Authorizer should be IP address not hostname/IP




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2869:
---
Status: Patch Available  (was: In Progress)

> host used by Authorizer should be IP address not hostname/IP
> 
>
> Key: KAFKA-2869
> URL: https://issues.apache.org/jira/browse/KAFKA-2869
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> Reported by Parth:
> "One issue that our ranger team has found is that session.host returns 
> InetAddress.toString() which is of the format hostname/IP. This means all the 
> acls that wants to specify hosts will have to specify host in hostNaem/IP 
> format. We can either make session.host return ip or host name or change the 
> authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."
> Jun suggested we use IP instead of hostname/IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018132#comment-15018132
 ] 

Ismael Juma commented on KAFKA-2687:


[~sslavic], would you like to submit a PR? If not, that's OK, we'll take care 
of it.

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2869 started by Ismael Juma.
--
> host used by Authorizer should be IP address not hostname/IP
> 
>
> Key: KAFKA-2869
> URL: https://issues.apache.org/jira/browse/KAFKA-2869
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> Reported by Parth:
> "One issue that our ranger team has found is that session.host returns 
> InetAddress.toString() which is of the format hostname/IP. This means all the 
> acls that wants to specify hosts will have to specify host in hostNaem/IP 
> format. We can either make session.host return ip or host name or change the 
> authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."
> Jun suggested we use IP instead of hostname/IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018130#comment-15018130
 ] 

ASF GitHub Bot commented on KAFKA-2869:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/567

KAFKA-2869; Host used by Authorizer should be IP address not hostname/IP



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2869-host-used-by-authorizer-should-be-ip

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/567.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #567


commit 50a798fab77521cb1c9536d98f9faa0050cd9139
Author: Ismael Juma 
Date:   2015-11-20T15:07:02Z

Host used by Authorizer should be IP address not hostname/IP




> host used by Authorizer should be IP address not hostname/IP
> 
>
> Key: KAFKA-2869
> URL: https://issues.apache.org/jira/browse/KAFKA-2869
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> Reported by Parth:
> "One issue that our ranger team has found is that session.host returns 
> InetAddress.toString() which is of the format hostname/IP. This means all the 
> acls that wants to specify hosts will have to specify host in hostNaem/IP 
> format. We can either make session.host return ip or host name or change the 
> authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."
> Jun suggested we use IP instead of hostname/IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018157#comment-15018157
 ] 

Stevo Slavic commented on KAFKA-2687:
-

So much has changed that I'm still trying to grasp it all; a PR from me would 
delay things too much IMO, so please go ahead.

I noticed that ZKStringSerializer is now a private object, and none of the 
ZkUtils constructors expose the same features available in ZkClient, such as 
configuring the operation retry timeout. As a result, one cannot construct a 
ZkUtils instance that uses ZKStringSerializer together with a ZkClient 
configured with an operation retry timeout.

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2863:
--

Assignee: Ismael Juma  (was: Parth Brahmbhatt)

> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.
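
To illustrate the kind of cleanup intended here, a hedged sketch of a non-trivial 
authorizer that owns a thread pool; the actual Authorizer interface is elided, 
with AutoCloseable standing in for the proposed shutdown hook:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AuditingAuthorizer implements AutoCloseable {
    // Hypothetical background pool used to write audit events asynchronously.
    private final ExecutorService auditPool = Executors.newSingleThreadExecutor();

    public boolean authorize(String principal, String operation, String resource) {
        boolean allowed = true; // real ACL evaluation elided
        auditPool.submit(() -> System.out.printf("%s %s %s -> %b%n",
                principal, operation, resource, allowed));
        return allowed;
    }

    @Override
    public void close() {
        // Without a lifecycle hook like this, the pool would leak on broker shutdown.
        auditPool.shutdown();
    }
}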



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018165#comment-15018165
 ] 

ASF GitHub Bot commented on KAFKA-2863:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/568

KAFKA-2863; Add a `close()` method to `Authorizer`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2863-authorizer-close

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/568.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #568


commit 0102faec11c21882f3d7fc76d60a592c6f024a3c
Author: Ismael Juma 
Date:   2015-11-20T15:35:50Z

Add a `close()` method to `Authorizer`




> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2863; Add a `close()` method to `Authori...

2015-11-20 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/568

KAFKA-2863; Add a `close()` method to `Authorizer`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2863-authorizer-close

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/568.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #568


commit 0102faec11c21882f3d7fc76d60a592c6f024a3c
Author: Ismael Juma 
Date:   2015-11-20T15:35:50Z

Add a `close()` method to `Authorizer`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2863:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

[~parth.brahmbhatt], I hope you don't mind that I created a PR for this. Your 
review would be appreciated. [~jjkoshy], is this what you had in mind?

> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-11-20 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2870:
---

 Summary: Support configuring operationRetryTimeout of underlying 
ZkClient through ZkUtils constructor
 Key: KAFKA-2870
 URL: https://issues.apache.org/jira/browse/KAFKA-2870
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Stevo Slavic
Priority: Minor


Currently (Kafka 0.9.0.0 RC3) it's not possible to both configure the underlying 
{{ZkClient}}'s {{operationRetryTimeout}} and use Kafka's {{ZKStringSerializer}} 
in a {{ZkUtils}} instance.

Please support configuring {{operationRetryTimeout}} via another 
{{ZkUtils.apply}} factory method.
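
For illustration, the underlying {{ZkClient}} does expose this setting directly, 
but only if one supplies one's own serializer, since {{ZKStringSerializer}} is 
private. A hedged sketch (the four-argument constructor is my reading of the 
zkclient API, and all values are placeholders):

{code}
import java.nio.charset.StandardCharsets;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import org.I0Itec.zkclient.serialize.ZkSerializer;

ZkSerializer serializer = new ZkSerializer() { // stand-in for the now-private ZKStringSerializer
    public byte[] serialize(Object data) {
        return data == null ? null : data.toString().getBytes(StandardCharsets.UTF_8);
    }
    public Object deserialize(byte[] bytes) {
        return bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);
    }
};

ZkClient zkClient = new ZkClient(
    new ZkConnection("localhost:2181", 6000), // connect string, session timeout
    6000,                                     // connection timeout in ms
    serializer,
    30000L);                                  // operationRetryTimeout in ms
{code}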



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2869; Host used by Authorizer should be ...

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/567


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018250#comment-15018250
 ] 

ASF GitHub Bot commented on KAFKA-2869:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/567


> host used by Authorizer should be IP address not hostname/IP
> 
>
> Key: KAFKA-2869
> URL: https://issues.apache.org/jira/browse/KAFKA-2869
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> Reported by Parth:
> "One issue that our ranger team has found is that session.host returns 
> InetAddress.toString() which is of the format hostname/IP. This means all the 
> acls that wants to specify hosts will have to specify host in hostNaem/IP 
> format. We can either make session.host return ip or host name or change the 
> authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."
> Jun suggested we use IP instead of hostname/IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2869) host used by Authorizer should be IP address not hostname/IP

2015-11-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2869:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 567
[https://github.com/apache/kafka/pull/567]

> host used by Authorizer should be IP address not hostname/IP
> 
>
> Key: KAFKA-2869
> URL: https://issues.apache.org/jira/browse/KAFKA-2869
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> Reported by Parth:
> "One issue that our ranger team has found is that session.host returns 
> InetAddress.toString() which is of the format hostname/IP. This means all the 
> acls that wants to specify hosts will have to specify host in hostNaem/IP 
> format. We can either make session.host return ip or host name or change the 
> authorizer to evaluate against all 3 “host/ip”, “host”, “IP”."
> Jun suggested we use IP instead of hostname/IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Potential KafkaStreaming Bug

2015-11-20 Thread Bill Bejeck
Hi All,

I'm starting to experiment with the lower-level Processor Client API found
on the KIP-28 wiki.

When starting the KafkaStreaming instance I get the following exception:

Exception in thread "main" java.util.NoSuchElementException: id: SINK
    at org.apache.kafka.streams.processor.internals.QuickUnion.root(QuickUnion.java:40)
    at org.apache.kafka.streams.processor.TopologyBuilder.makeNodeGroups(TopologyBuilder.java:387)
    at org.apache.kafka.streams.processor.TopologyBuilder.topicGroups(TopologyBuilder.java:339)
    at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:139)
    at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:120)
    at org.apache.kafka.streams.KafkaStreaming.<init>(KafkaStreaming.java:110)
    at bbejeck.ProcessorDriver.main(ProcessorDriver.java:35)

The TopologyBuilder is being built like so:

topologyBuilder.addSource("SOURCE", new StringDeserializer(), new StringDeserializer(), "src-topic")
    .addProcessor("PROCESS", new GenericProcessorClient(replaceVowels), "SOURCE")
    .addSink("SINK", "dest-topic", new StringSerializer(), new StringSerializer(), "PROCESS");

It looks to me like the cause of the error is that in the TopologyBuilder.addSink 
method the sink node is never connected with its parents.

When I added the following two lines to the addSink method, the exception went 
away.

  nodeGrouper.add(name);                // register the sink node in the grouper
  nodeGrouper.unite(name, parentNames); // connect the sink to its parent nodes

Is this a bug, or am I doing something incorrectly?

Thanks,
Bill


[jira] [Updated] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2863:
---
Fix Version/s: (was: 0.9.0.1)
   0.9.0.0

> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka_0.9.0_jdk7 #35

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2860: better handling of auto commit errors

[junrao] KAFKA-2869; Host used by Authorizer should be IP address not 
hostname/IP

--
[...truncated 773 lines...]
kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionValid PASSED

kafka.server.KafkaConfigTest > testSpecificProperties PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testDuplicateListeners PASSED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testBadListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerDefaults PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testFromPropsInvalid PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.SimpleFetchTest > testReadFromLog PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> Process 'Gradle Test Executor 2' finished with non-zero exit value 1

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 
org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfT

[GitHub] kafka pull request: KAFKA-2867: Fix missing WorkerSourceTask synch...

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/566


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2867) Missing synchronization and improperly handled InterruptException in WorkerSourceTask

2015-11-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2867:
---
   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 566
[https://github.com/apache/kafka/pull/566]

> Missing synchronization and improperly handled InterruptException in 
> WorkerSourceTask
> -
>
> Key: KAFKA-2867
> URL: https://issues.apache.org/jira/browse/KAFKA-2867
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> In WorkerSourceTask, finishSuccessfulFlush() is not synchronized. In one case 
> (if the flush didn't even have to be started), this is ok because we are 
> already in a synchronized block. However, the other case is outside the 
> synchronized block.
> The result of this was transient failures of the system test for clean 
> bouncing copycat nodes. The bug doesn't cause exceptions because 
> finishSuccessfulFlush() only does a swap of two maps and sets a flag to 
> false. However, because of the swapping of the two maps that maintain 
> outstanding messages, we could by chance also be starting to send a message. 
> If the message accidentally gets added to the backlog queue, then the 
> flushing flag is toggled, we can "lose" that message temporarily into the 
> backlog queue. Then we'll get a callback that will log an error because it 
> can't find a record of the acked message (which, if it ever appears, should 
> be considered a critical issue since it shouldn't be possible), and then on 
> the next commit, it'll be swapped *back into place*. On the subsequent 
> commit, the flush will never be able to complete because the message will be 
> in the outstanding list, but will already have been acked. This, in turn, 
> makes it impossible to commit offsets, and results in duplicate messages even 
> under clean bounces where we should be able to get exactly once delivery 
> assuming no network delays or other issues.
> As a result of seeing this error, it became apparent that handling of 
> WorkerSourceTaskThreads that do not complete quickly enough was not working 
> properly. The ShutdownableThread should get interrupted if it does not 
> complete quickly enough, but logs like this would happen:
> {quote}
> [2015-11-18 01:02:13,897] INFO Stopping task verifiable-source-0 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:13,897] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:13,897] DEBUG WorkerSourceTask{id=verifiable-source-0} 
> Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:17,901] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,897] INFO Forcing shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:18,897] ERROR Graceful stop of task 
> WorkerSourceTask{id=verifiable-source-0} failed. 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:18,897] ERROR Failed to flush 
> WorkerSourceTask{id=verifiable-source-0}, timed out while waiting for 
> producer to flush outstanding messages 
> (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:18,898] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,898] INFO Finished stopping tasks in preparation for 
> rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> {quote}
> Actions in the background thread performing the commit continue to occur 
> after it is supposedly interrupted. This is because InterruptedExceptions 
> during the flush were being ignored (some time ago they were not even 
> possible). Instead, any interruption by the main thread trying to shut down 
> the thread in preparation for a rebalance should be handled by failing the 
> commit operation and returning so the thread can exit cleanly.
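
To make the race concrete, here is a generic, hedged sketch of the map-swap 
pattern described above (names are illustrative, not the actual WorkerSourceTask 
code):

import java.util.HashMap;
import java.util.Map;

class FlushTracker<R> {
    private Map<R, Long> outstanding = new HashMap<>(); // sent, not yet acked
    private Map<R, Long> backlog = new HashMap<>();     // sent while a flush is in progress
    private boolean flushing = false;

    // Send callbacks consult 'flushing' to decide which map a record belongs in,
    // so the swap and the flag reset must be atomic with respect to them.
    synchronized void recordSent(R record, long offset) {
        (flushing ? backlog : outstanding).put(record, offset);
    }

    // If this method were not synchronized, a concurrent recordSent() could observe
    // the maps mid-swap and file a record in the wrong map, which is exactly the
    // temporary "loss" into the backlog described in the issue.
    synchronized void finishSuccessfulFlush() {
        Map<R, Long> tmp = outstanding;
        outstanding = backlog;
        backlog = tmp;
        flushing = false;
    }
}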



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2867) Missing synchronization and improperly handled InterruptException in WorkerSourceTask

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018434#comment-15018434
 ] 

ASF GitHub Bot commented on KAFKA-2867:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/566


> Missing synchronization and improperly handled InterruptException in 
> WorkerSourceTask
> -
>
> Key: KAFKA-2867
> URL: https://issues.apache.org/jira/browse/KAFKA-2867
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> In WorkerSourceTask, finishSuccessfulFlush() is not synchronized. In one case 
> (if the flush didn't even have to be started), this is ok because we are 
> already in a synchronized block. However, the other case is outside the 
> synchronized block.
> The result of this was transient failures of the system test for clean 
> bouncing copycat nodes. The bug doesn't cause exceptions because 
> finishSuccessfulFlush() only does a swap of two maps and sets a flag to 
> false. However, because of the swapping of the two maps that maintain 
> outstanding messages, we could by chance also be starting to send a message. 
> If the message accidentally gets added to the backlog queue, then the 
> flushing flag is toggled, we can "lose" that message temporarily into the 
> backlog queue. Then we'll get a callback that will log an error because it 
> can't find a record of the acked message (which, if it ever appears, should 
> be considered a critical issue since it shouldn't be possible), and then on 
> the next commit, it'll be swapped *back into place*. On the subsequent 
> commit, the flush will never be able to complete because the message will be 
> in the outstanding list, but will already have been acked. This, in turn, 
> makes it impossible to commit offsets, and results in duplicate messages even 
> under clean bounces where we should be able to get exactly once delivery 
> assuming no network delays or other issues.
> As a result of seeing this error, it became apparent that handling of 
> WorkerSourceTaskThreads that do not complete quickly enough was not working 
> properly. The ShutdownableThread should get interrupted if it does not 
> complete quickly enough, but logs like this would happen:
> {quote}
> [2015-11-18 01:02:13,897] INFO Stopping task verifiable-source-0 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:13,897] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:13,897] DEBUG WorkerSourceTask{id=verifiable-source-0} 
> Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:17,901] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,897] INFO Forcing shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:18,897] ERROR Graceful stop of task 
> WorkerSourceTask{id=verifiable-source-0} failed. 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:18,897] ERROR Failed to flush 
> WorkerSourceTask{id=verifiable-source-0}, timed out while waiting for 
> producer to flush outstanding messages 
> (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:18,898] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,898] INFO Finished stopping tasks in preparation for 
> rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> {quote}
> Actions in the background thread performing the commit continue to occur 
> after it is supposedly interrupted. This is because InterruptedExceptions 
> during the flush were being ignored (some time ago they were not even 
> possible). Instead, any interruption by the main thread trying to shut down 
> the thread in preparation for a rebalance should be handled by failing the 
> commit operation and returning so the thread can exit cleanly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #842

2015-11-20 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2863:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 568
[https://github.com/apache/kafka/pull/568]

> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2863; Add a `close()` method to `Authori...

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/568


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018494#comment-15018494
 ] 

ASF GitHub Bot commented on KAFKA-2863:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/568


> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Log at INFO level in Benchmark tests

2015-11-20 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/569

MINOR: Log at INFO level in Benchmark tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka minor-reduce-benchmark-logs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/569.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #569


commit 2b202aba9b57bd5bb4009b8341094362fbbc2e2f
Author: Geoff Anderson 
Date:   2015-11-20T19:05:27Z

Log at INFO level in Benchmark tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #172

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2869; Host used by Authorizer should be IP address not 
hostname/IP

[junrao] KAFKA-2867: Fix missing WorkerSourceTask synchronization and handling 
of

--
[...truncated 4462 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PAS

[jira] [Created] (KAFKA-2871) Newly replicated brokers don't expire log segments properly

2015-11-20 Thread Evan Huus (JIRA)
Evan Huus created KAFKA-2871:


 Summary: Newly replicated brokers don't expire log segments 
properly
 Key: KAFKA-2871
 URL: https://issues.apache.org/jira/browse/KAFKA-2871
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 0.8.2.1
Reporter: Evan Huus
Assignee: Neha Narkhede
Priority: Minor


We recently brought up a few brokers to replace some existing nodes, and used 
the provided script to reassign partitions from the retired nodes to the new 
ones, one at a time.

A little while after the fact, we noticed extreme disk usage on the new nodes. 
We tracked this down to the fact that the replicated segments are all timestamped 
from the moment of replication rather than carrying whatever timestamp was set on 
the original node. Since this is the timestamp the log roller uses, it takes a 
full week (the rollover time) before any data is purged from the new brokers.

In the short term, what is the safest workaround? Can we just `rm` these old 
segments, or should we adjust the filesystem metadata so that Kafka removes them 
itself?

In the longer term, the partition mover should set timestamps appropriately on 
the segments it moves.
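
For what it's worth, "setting timestamps appropriately" when copying a segment 
file could look like the following generic java.nio sketch (paths are 
hypothetical, and this is not what the partition mover currently does):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;

public class CopySegmentPreservingMtime {
    public static void main(String[] args) throws IOException {
        Path src = Paths.get("/old-broker/data/raw-9/00000000000000000000.log");
        Path dst = Paths.get("/new-broker/data/raw-9/00000000000000000000.log");
        FileTime originalMtime = Files.getLastModifiedTime(src);
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        // Restore the source segment's age so time-based retention still applies.
        Files.setLastModifiedTime(dst, originalMtime);
    }
}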



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Log at INFO level in Benchmark tests

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/569


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka_0.9.0_jdk7 #36

2015-11-20 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018863#comment-15018863
 ] 

Jun Rao commented on KAFKA-2687:


[~sslavic], thanks for reporting the typo. I fixed that in the 0.9.0 docs.

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #173

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2863; Add a `close()` method to `Authorizer`

[cshapi] MINOR: Log at INFO level in Benchmark tests

--
[...truncated 3312 lines...]

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testCantAllocateMoreMemoryThanWeHave PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > testSimple PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testDelayedAllocation PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testRetryBackoff PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testNextReadyCheckDelay PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testStressfulSituation PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testFlush 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testFull 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAbortIncompleteBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testExpiredBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testLinger 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testPartialDrain PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAppendLarge PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testSerializerClose PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource PASSED

org.apache.kafka.clients.producer.RecordSendTest > testTimeout PASSED

org.apache.kafka.clients.producer.RecordSendTest > testError PASSED

org.apache.kafka.clients.producer.RecordSendTest > testBlocking PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse PASSED

org.apache.kafka.clients.NetworkClientTest > testClose PASSED

org.apache.kafka.clients.NetworkClientTest > testLeastLoadedNode PASSED

org.apache.kafka.clients.NetworkClientTest > testRequestTimeout PASSED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithStaticNodes PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister PASSED

org.apache.kafka.clients.MetadataTest > testFailedUpdate PASSED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
PASSED

org.apache.kafka.clients.MetadataTest > testMetadata PASSED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testIntegerSerializer 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.requests.RequestResponseTest > fetchResponseVersionTest 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
produceResponseVersionTest PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.metrics.M

[GitHub] kafka pull request: KAFKA-2825, KAFKA-2852: Controller failover te...

2015-11-20 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/570

KAFKA-2825, KAFKA-2852: Controller failover tests added to ducktape 
replication tests and fix to temp dir

I closed the original pull request that contained previous comments by Geoff 
(which are already addressed here), because I got into a bad rebase situation. 
So I created a new branch, cherry-picked my commits, and merged with Ben's 
changes that fix the MiniKDC tests to run on VirtualBox. That change conflicted 
with my changes, where I was copying MiniKDC files with the new scp method and 
the temp file was created inside that method. To merge Ben's changes, I added 
two optional parameters to scp(), 'pattern' and 'subst', to optionally 
substitute strings while scp'ing files, which is needed for the krb5.conf file.
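
As a hedged sketch of what the new parameters do (the real helper is a Python method in the ducktape test utilities; this Java stand-in, its class name, and the krb5.conf example values are all illustrative assumptions):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch only: copy a config file, optionally rewriting every match of
// 'pattern' to 'subst' on the way -- the same idea as the new
// scp(..., pattern, subst) parameters rewriting krb5.conf for a target node.
public class CopyWithSubstitution {

    public static void copy(Path src, Path dst, String pattern, String subst) throws IOException {
        String content = new String(Files.readAllBytes(src), StandardCharsets.UTF_8);
        if (pattern != null)
            content = content.replaceAll(pattern, subst);
        Files.write(dst, content.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical example: point the copied krb5.conf at a different KDC host.
        copy(Paths.get("krb5.conf"), Paths.get("krb5.conf.node1"),
             "kdc = .*", "kdc = worker1.example.com:88");
    }
}
{code}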

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-2825

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/570.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #570


commit 44f7a6ea9fbfc11e6eebc44b5bfab36431469522
Author: Anna Povzner 
Date:   2015-11-11T22:43:49Z

Extended existing ducktape replication tests to include controller failover

commit 447e330c840e9a556aff2d05d54f873aee8641a5
Author: Anna Povzner 
Date:   2015-11-12T04:12:43Z

Using random dir under /temp for local kdc files to avoid conflicts.

commit a364a3f289277579f17973829a3d51a97749875c
Author: Anna Povzner 
Date:   2015-11-12T21:40:32Z

Fixed usage of random dir under local /tmp for miniKdc files

commit 5b2c048d1c0922b8d36286d96c90bae53c01a671
Author: Anna Povzner 
Date:   2015-11-12T21:48:47Z

Fix to make sure that temp dir for mini kdc files is removed after test 
finishes.

commit 18c8670e18cfdb641efe14df3c2479ba7e340e0d
Author: Anna Povzner 
Date:   2015-11-17T19:17:05Z

KAFKA-2825 Moved query zookeeper method from KafkaService to 
ZookeeperService

commit d70d1eb17fa1ee54dadd52d55ba3e89a81b37c0d
Author: Anna Povzner 
Date:   2015-11-17T21:54:04Z

KAFKA-2851 Added scp method to remote_account utils to scp between two 
remote nodes through unique local temp file

commit 34830cd67e00f7b28d1afcffe9653fc543bca9d0
Author: Anna Povzner 
Date:   2015-11-17T21:59:02Z

KAFKA-2851: minor fix to format string in utils.remote_account.scp method

commit e5a28e3e2718e057138043a7d6cbc9d42c17d84e
Author: Anna Povzner 
Date:   2015-11-18T02:56:57Z

KAFKA-2851: clean up temp file even if scp fails

commit bed5a2f1f75462d857fc44382322544e6adc2bb2
Author: Anna Povzner 
Date:   2015-11-18T18:17:09Z

KAFKA-2825 Using only PLAINTEXT and SASL_SSL security protocols for 
controller failover tests

commit 84d21b6ac1324f7ac2bbc0f908d7a218e95501d5
Author: Anna Povzner 
Date:   2015-11-20T21:49:49Z

Merged with Ben's changes to make MiniKDC tests run on VirtualBox

commit f0d630907f4eaf14e7274e9075022d949e5b2752
Author: Anna Povzner 
Date:   2015-11-20T21:53:43Z

Very minor changes: typo in output string and some white spaces.






[GitHub] kafka pull request: KAFKA-2825, KAFKA-2851: Extended existing duck...

2015-11-20 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/518




[jira] [Commented] (KAFKA-2825) Add controller failover to existing replication tests

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018891#comment-15018891
 ] 

ASF GitHub Bot commented on KAFKA-2825:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/570





> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce





[jira] [Commented] (KAFKA-2825) Add controller failover to existing replication tests

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018892#comment-15018892
 ] 

ASF GitHub Bot commented on KAFKA-2825:
---

Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/518


> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce





Build failed in Jenkins: kafka_0.9.0_jdk7 #38

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] trivial change to 0.9.0 docs to fix incorrect ssl.key.password

--
[...truncated 231 lines...]
:core:jar UP-TO-DATE
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:log4j-appender:compileJava
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:jar
:tools:compileJava
:tools:processResources UP-TO-DATE
:tools:classes
:clients:javadoc
:clients:javadocJar
:clients:srcJar
:clients:testJar
:clients:signArchives SKIPPED
:tools:copyDependantLibs
:tools:jar
:connect:api:compileJavaNote: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:file:compileJava
:connect:file:processResources UP-TO-DATE
:connect:file:classes
:connect:file:copyDependantLibs
:connect:file:jar
:connect:json:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:connect:runtime:compileJavaNote: Some input files use unchecked or unsafe 
operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:runtime:processResources UP-TO-DATE
:connect:runtime:classes
:connect:runtime:copyDependantLibs
:connect:runtime:jar
:jarAll
:test_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:classes UP-TO-DATE
:kafka_0.9.0_jdk7:clients:determineCommitId UP-TO-DATE
:kafka_0.9.0_jdk7:clients:createVersionFile
:kafka_0.9.0_jdk7:clients:jar UP-TO-DATE
:kafka_0.9.0_jdk7:clients:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processTestResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:testClasses UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileScala UP-TO-DATE
:kafka_0.9.0_jdk7:core:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:core:classes UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestScala
:73:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
 ^
:250:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.setProperty(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, 
"true")
 ^
:462:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
producerProps.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
metadataFetchTimeout.toString)
 ^
:101:
 constructor Capture in class Capture is deprecated: see corresponding Javadoc 
for more information.
val entityArgument = new Capture[String]()
 ^
:102:
 constructor Capture in class Capture is deprecated: see corresponding Javadoc 
for more information.
val propertiesArgument = new Capture[Properties]()
 ^
there were 4 feature warning(s); re-run with -feature for details
6 warnings found
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 677501 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

*

[GitHub] kafka pull request: KAFKA-2718: Avoid reusing temporary directorie...

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/399




[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15019071#comment-15019071
 ] 

ASF GitHub Bot commented on KAFKA-2718:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/399


> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}
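
The direction of the fix is straightforward; a minimal sketch of the pattern, using a hypothetical helper rather than the actual TestUtils change, is to hand every test a freshly created directory instead of a fixed, reusable path:

{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch only: allocate a unique scratch directory per test instead of
// reusing a fixed one, so state from a previous run can never leak in.
public class TempDirs {

    public static File freshDir(String prefix) throws IOException {
        File dir = Files.createTempDirectory(prefix).toFile();
        dir.deleteOnExit(); // best-effort cleanup; tests should also delete explicitly
        return dir;
    }

    public static void main(String[] args) throws IOException {
        File zkDir = freshDir("zookeeper-");
        File logDir = freshDir("kafka-logs-");
        System.out.println(zkDir + " " + logDir); // two distinct, never-reused paths
    }
}
{code}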





[jira] [Resolved] (KAFKA-2853) Reuse of temporary server data.dir could lead to transient failures.

2015-11-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2853.
--
Resolution: Duplicate

> Reuse of temporary server data.dir could lead to transient failures.
> 
>
> Key: KAFKA-2853
> URL: https://issues.apache.org/jira/browse/KAFKA-2853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: newbiee
>
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1388/
> {code}
> kafka.common.InconsistentBrokerIdException: Configured brokerId 2 doesn't 
> match stored brokerId 0 in meta.properties
>   at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:630)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:174)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:63)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:63)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:63)
>   at 
> kafka.api.BaseConsumerTest.kafka$api$IntegrationTestHarness$$super$setUp(BaseConsumerTest.scala:36)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:56)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:62)
>   at 
> kafka.api.SaslSslConsumerTest.kafka$api$SaslTestHarness$$super$setUp(SaslSslConsumerTest.scala:19)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:38)
>   at kafka.api.SaslSslConsumerTest.setUp(SaslSslConsumerTest.scala:19)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.

[jira] [Updated] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-11-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2718:
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}





[GitHub] kafka pull request: KAFKA-2812: improve consumer integration tests

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/500




[jira] [Commented] (KAFKA-2812) Enhance new consumer integration test coverage

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15019085#comment-15019085
 ] 

ASF GitHub Bot commented on KAFKA-2812:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/500


> Enhance new consumer integration test coverage
> --
>
> Key: KAFKA-2812
> URL: https://issues.apache.org/jira/browse/KAFKA-2812
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are still some test cases that we didn't get to in KAFKA-2274 
> (including hard broker and client failures) as well as additional validation 
> that can be added to existing test cases.





[jira] [Resolved] (KAFKA-2812) Enhance new consumer integration test coverage

2015-11-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2812.
--
Resolution: Fixed

> Enhance new consumer integration test coverage
> --
>
> Key: KAFKA-2812
> URL: https://issues.apache.org/jira/browse/KAFKA-2812
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are still some test cases that we didn't get to in KAFKA-2274 
> (including hard broker and client failures) as well as additional validation 
> that can be added to existing test cases.





Build failed in Jenkins: kafka-trunk-jdk8 #174

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] trivial change to 0.9.0 docs to fix outdated ConsumerMetadataRequest

[junrao] trivial change to 0.9.0 docs to fix incorrect ssl.key.password

--
[...truncated 2796 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchange

[GitHub] kafka pull request: KAFKA-2862: Fix MirrorMaker's message.handler....

2015-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/561




[jira] [Commented] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler.args

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15019126#comment-15019126
 ] 

ASF GitHub Bot commented on KAFKA-2862:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/561


> Incorrect help description for MirrorMaker's message.handler.args
> -
>
> Key: KAFKA-2862
> URL: https://issues.apache.org/jira/browse/KAFKA-2862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> Help description for MirrorMaker's message.handler.args is not correct. Fix 
> it.





[jira] [Resolved] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler.args

2015-11-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2862.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 561
[https://github.com/apache/kafka/pull/561]

> Incorrect help description for MirrorMaker's message.handler.args
> -
>
> Key: KAFKA-2862
> URL: https://issues.apache.org/jira/browse/KAFKA-2862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> Help description for MirrorMaker's message.handler.args is not correct. Fix 
> it.





Build failed in Jenkins: kafka_0.9.0_jdk7 #39

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[guozhang] KAFKA-2718: Avoid reusing temporary directories in core unit tests

[guozhang] KAFKA-2812: improve consumer integration tests

[junrao] trivial change: revert incorrect change to ssl.key.password

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision fc7243c2af4b2b4affcdf14d80826986fe6d4ce8 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fc7243c2af4b2b4affcdf14d80826986fe6d4ce8
 > git rev-list 6aeaa1eb5ceeb7603e9d5339a96f0fa268f12fa0 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson1286420746659493887.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 11.882 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson5629905522670843625.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 651053 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiPr

Re: [VOTE] 0.9.0.0 Candidate 3

2015-11-20 Thread Jun Rao
KAFKA-2859 is now fixed. Will roll out RC4.

Thanks,

Jun

On Thu, Nov 19, 2015 at 12:52 PM, Ewen Cheslack-Postava wrote:

> FYI, I found a blocker for Kafka Connect
> https://issues.apache.org/jira/browse/KAFKA-2859. The fix is already in,
> but we'll need to do one more round of RC.
>
> -Ewen
>
> On Thu, Nov 19, 2015 at 11:45 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > +1 (non-binding)
> >
> > We integrated this (source rather than binary) into our build yesterday and
> > it has been running for a day in our test clusters with light load
> > throughout and occasional heavy load. We are running on IBM JRE with SSL
> > clients.
> >
> > On Thu, Nov 19, 2015 at 6:55 PM, Guozhang Wang wrote:
> >
> > > +1 (binding).
> > >
> > > Verified quick start, console clients and topic/consumer tools.
> > >
> > > Guozhang
> > >
> > > On Thu, Nov 19, 2015 at 10:45 AM, Ismael Juma wrote:
> > >
> > > > +1 (non-binding).
> > > >
> > > > Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
> > > > quick start on source artifact and Scala 2.11 binary artifact.
> > > >
> > > > Ismael
> > > >
> > > > On Wed, Nov 18, 2015 at 5:57 AM, Jun Rao  wrote:
> > > >
> > > > > This is the third candidate for release of Apache Kafka 0.9.0.0. This is a
> > > > > major release that includes (1) authentication (through SSL and SASL) and
> > > > > authorization, (2) a new java consumer, (3) a Kafka connect framework for
> > > > > data ingestion and egression, and (4) quotas. Since this is a major
> > > > > release, we will give people a bit more time for trying this out.
> > > > >
> > > > > Release Notes for the 0.9.0.0 release
> > > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Friday, Nov. 20, 10pm PT
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > > > and sha2 (SHA256) checksum.
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/
> > > > >
> > > > > * Maven artifacts to be voted upon prior to release:
> > > > > https://repository.apache.org/content/groups/staging/
> > > > >
> > > > > * scala-doc
> > > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/scaladoc/
> > > > >
> > > > > * java-doc
> > > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/javadoc/
> > > > >
> > > > > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> > > > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b8872868168f7b1172c08879f4b699db1ccb5ab7
> > > > >
> > > > > * Documentation
> > > > > http://kafka.apache.org/090/documentation.html
> > > > >
> > > > > /***
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> >
> > --
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
>
>
>
> --
> Thanks,
> Ewen
>


Re: Potential KafkaStreaming Bug

2015-11-20 Thread Guozhang Wang
Bill,

Thanks for pointing it out! I think this is a real bug. Do you want to file a
PR on GitHub with the fix? You can find the instructions here:

https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

Guozhang

On Fri, Nov 20, 2015 at 8:32 AM, Bill Bejeck  wrote:

> Hi All,
>
> I'm starting to experiment with the lower-level Processor Client API found
> on the KIP-28 wiki.
>
> When starting the KafkaStream I get the following Exception:
>
> Exception in thread "main" java.util.NoSuchElementException: id: SINK
> at org.apache.kafka.streams.processor.internals.QuickUnion.root(QuickUnion.java:40)
> at org.apache.kafka.streams.processor.TopologyBuilder.makeNodeGroups(TopologyBuilder.java:387)
> at org.apache.kafka.streams.processor.TopologyBuilder.topicGroups(TopologyBuilder.java:339)
> at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:139)
> at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:120)
> at org.apache.kafka.streams.KafkaStreaming.<init>(KafkaStreaming.java:110)
> at bbejeck.ProcessorDriver.main(ProcessorDriver.java:35)
>
> The TopologyBuilder is being built like so:
> topologyBuilder.addSource("SOURCE", new StringDeserializer(),
>         new StringDeserializer(), "src-topic")
>     .addProcessor("PROCESS", new GenericProcessorClient(replaceVowels), "SOURCE")
>     .addSink("SINK", "dest-topic", new StringSerializer(),
>         new StringSerializer(), "PROCESS");
>
> It looks to me like the cause of the error is that in the TopologyBuilder.addSink
> method the sink is never connected with its parent.
>
> When I added the following two lines to the addSink method, the Exception
> goes away.
>
>   nodeGrouper.add(name);
>   nodeGrouper.unite(name, parentNames);
>
> Is this a bug or am I doing something incorrect?
>
> Thanks,
> Bill
>



-- 
-- Guozhang
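
To make the failure mode concrete, below is a minimal, self-contained union-find sketch. It is a simplified assumption modeled on the internal QuickUnion class named in the stack trace, not Kafka's actual code: root() throws NoSuchElementException for any id that was never add()'d, which is exactly what happens to "SINK" until addSink also registers and unites the sink node as suggested above.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Simplified union-find sketch: a node must be add()'d before root() can find it.
class QuickUnion {
    private final Map<String, String> ids = new HashMap<>();

    void add(String id) {
        ids.put(id, id);
    }

    String root(String id) {
        String current = id;
        String parent = ids.get(current);
        if (parent == null)
            throw new NoSuchElementException("id: " + id);
        while (!parent.equals(current)) { // walk up until we hit a self-loop
            current = parent;
            parent = ids.get(current);
        }
        return current;
    }

    void unite(String id, String... parentNames) {
        for (String parent : parentNames)
            ids.put(root(parent), root(id)); // attach each parent's tree to this node's root
    }
}

public class QuickUnionDemo {
    public static void main(String[] args) {
        QuickUnion grouper = new QuickUnion();
        grouper.add("SOURCE");
        grouper.add("PROCESS");
        grouper.unite("PROCESS", "SOURCE");
        // Without the next two lines, grouper.root("SINK") throws
        // java.util.NoSuchElementException: id: SINK -- the error reported above.
        grouper.add("SINK");
        grouper.unite("SINK", "PROCESS");
        System.out.println(grouper.root("SINK")); // all three nodes now share one root
    }
}
{code}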


Build failed in Jenkins: kafka-trunk-jdk8 #175

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[guozhang] KAFKA-2718: Avoid reusing temporary directories in core unit tests

[guozhang] KAFKA-2812: improve consumer integration tests

[guozhang] KAFKA-2862: Fix MirrorMaker's message.handler.args description

--
[...truncated 4425 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSin

Build failed in Jenkins: kafka-trunk-jdk8 #176

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[junrao] trivial change: revert incorrect change to ssl.key.password

--
[...truncated 89 lines...]
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/tools/ProducerPerformance.scala:40:
 @deprecated now takes two arguments; see the scaladoc.
@deprecated
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/network/BlockingChannel.scala:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
14 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:393:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:274:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp =

0.9.0.0 RC4

2015-11-20 Thread Jun Rao
This is the fourth candidate for release of Apache Kafka 0.9.0.0. This is a
major release that includes (1) authentication (through SSL and SASL) and
authorization, (2) a new java consumer, (3) a Kafka connect framework for
data ingestion and egression, and (4) quotas. Since this is a major
release, we will give people a bit more time for trying this out.

Release Notes for the 0.9.0.0 release
https://people.apache.org/~junrao/kafka-0.9.0.0-candidate4/RELEASE_NOTES.html

*** Please download, test and vote by Monday, Nov. 23, 6pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS in addition to the md5, sha1
and sha2 (SHA256) checksum.

* Release artifacts to be voted upon (source and binary):
https://people.apache.org/~junrao/kafka-0.9.0.0-candidate4/

* Maven artifacts to be voted upon prior to release:
https://repository.apache.org/content/groups/staging/

* scala-doc
https://people.apache.org/~junrao/kafka-0.9.0.0-candidate4/scaladoc/

* java-doc
https://people.apache.org/~junrao/kafka-0.9.0.0-candidate4/javadoc/

* The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=132943b0f83831132cd46ac961cf6f1c00132565

* Documentation
http://kafka.apache.org/090/documentation.html

/***

Thanks,

Jun


Build failed in Jenkins: kafka-trunk-jdk7 #846

2015-11-20 Thread Apache Jenkins Server
See 

Changes:

[guozhang] KAFKA-2718: Avoid reusing temporary directories in core unit tests

[guozhang] KAFKA-2812: improve consumer integration tests

[guozhang] KAFKA-2862: Fix MirrorMaker's message.handler.args description

--
[...truncated 2762 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordina

Re: Potential KafkaStreaming Bug

2015-11-20 Thread Bill Bejeck
Ok great.  I'll get to it this weekend.

Thanks,
Bill

On Fri, Nov 20, 2015 at 8:04 PM, Guozhang Wang  wrote:

> Bill,
>
> Thanks for pointing it out! I think this is a real bug. Do you want to file a
> PR on GitHub with the fix? You can find the instructions here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
>
> Guozhang
>
> On Fri, Nov 20, 2015 at 8:32 AM, Bill Bejeck  wrote:
>
> > Hi All,
> >
> > I'm starting to experiment with the lower-level Processor Client API
> found
> > on the KIP-28 wiki.
> >
> > When starting the KafkaStream I get the following Exception:
> >
> > Exception in thread "main" java.util.NoSuchElementException: id: SINK
> >     at org.apache.kafka.streams.processor.internals.QuickUnion.root(QuickUnion.java:40)
> >     at org.apache.kafka.streams.processor.TopologyBuilder.makeNodeGroups(TopologyBuilder.java:387)
> >     at org.apache.kafka.streams.processor.TopologyBuilder.topicGroups(TopologyBuilder.java:339)
> >     at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:139)
> >     at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:120)
> >     at org.apache.kafka.streams.KafkaStreaming.<init>(KafkaStreaming.java:110)
> >     at bbejeck.ProcessorDriver.main(ProcessorDriver.java:35)
> >
> > The TopologyBuilder is being built like so:
> > topologyBuilder.addSource("SOURCE", new StringDeserializer(),
> >             new StringDeserializer(), "src-topic")
> >     .addProcessor("PROCESS", new GenericProcessorClient(replaceVowels), "SOURCE")
> >     .addSink("SINK", "dest-topic", new StringSerializer(),
> >             new StringSerializer(), "PROCESS");
> >
> > It looks to me like the cause of the error is that in the
> > TopologyBuilder.addSink method the sink is never connected with its parent.
> >
> > When I added the following two lines to the addSink method, the Exception
> > goes away.
> >
> >   nodeGrouper.add(name);
> >   nodeGrouper.unite(name, parentNames);
> >
> > Is this a bug or am I doing something incorrect?
> >
> > Thanks,
> > Bill
> >
>
>
>
> --
> -- Guozhang
>
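
To make the failure mode above concrete, here is a minimal union-find sketch standing in for org.apache.kafka.streams.processor.internals.QuickUnion: root() throws NoSuchElementException for any id that was never add()ed, which is exactly the "id: SINK" error in the stack trace. Only the add/unite calls mirror the two fix lines quoted in the thread; the class name, fields, and demo main are illustrative assumptions, not the actual Kafka code.

{code}
// Plain HashMap-based union-find over node names, as a stand-in for QuickUnion.
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

public class NodeGrouperSketch {
    private final Map<String, String> parent = new HashMap<>();

    public void add(String id) {
        parent.put(id, id);                    // a new node starts as its own root
    }

    public String root(String id) {
        String cur = id;
        if (!parent.containsKey(cur))
            throw new NoSuchElementException("id: " + cur); // the "id: SINK" failure
        while (!cur.equals(parent.get(cur)))
            cur = parent.get(cur);             // follow parent links up to the root
        return cur;
    }

    public void unite(String id, String... others) {
        for (String other : others)
            parent.put(root(other), root(id)); // merge the two groups
    }

    public static void main(String[] args) {
        NodeGrouperSketch grouper = new NodeGrouperSketch();
        grouper.add("SOURCE");                 // addSource registers its node
        grouper.add("PROCESS");                // addProcessor registers its node
        grouper.unite("PROCESS", "SOURCE");
        // Without the fix, addSink skips both calls below, so a later
        // root("SINK") during makeNodeGroups() throws "id: SINK":
        grouper.add("SINK");                   // nodeGrouper.add(name);
        grouper.unite("SINK", "PROCESS");      // nodeGrouper.unite(name, parentNames);
        System.out.println(grouper.root("SINK")); // prints SINK: all three nodes share one root
    }
}
{code}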


[jira] [Created] (KAFKA-2872) Error starting KafkaStream caused by sink not being connected to parent source/processor nodes

2015-11-20 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-2872:
--

 Summary: Error starting KafkaStream caused by sink not being 
connected to parent source/processor nodes
 Key: KAFKA-2872
 URL: https://issues.apache.org/jira/browse/KAFKA-2872
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Affects Versions: 0.9.0.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck


When starting the KafkaStream I get the following Exception:

Exception in thread "main" java.util.NoSuchElementException: id: SINK
    at org.apache.kafka.streams.processor.internals.QuickUnion.root(QuickUnion.java:40)
    at org.apache.kafka.streams.processor.TopologyBuilder.makeNodeGroups(TopologyBuilder.java:387)
    at org.apache.kafka.streams.processor.TopologyBuilder.topicGroups(TopologyBuilder.java:339)
    at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:139)
    at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:120)
    at org.apache.kafka.streams.KafkaStreaming.<init>(KafkaStreaming.java:110)
    at bbejeck.ProcessorDriver.main(ProcessorDriver.java:35)

The TopologyBuilder is being built like so:
topologyBuilder.addSource("SOURCE", new StringDeserializer(),
            new StringDeserializer(), "src-topic")
    .addProcessor("PROCESS", new GenericProcessorClient(replaceVowels), "SOURCE")
    .addSink("SINK", "dest-topic", new StringSerializer(),
            new StringSerializer(), "PROCESS");

It looks to me like the cause of the error is that in the TopologyBuilder.addSink
method the sink is never connected with its parent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
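
The fix proposed on the mailing-list thread registers the sink with the node grouper and unites it with its parents inside addSink, as addSource and addProcessor presumably already do. Below is a compact sketch of that shape, reusing the NodeGrouperSketch union-find from the earlier note (the real grouper is QuickUnion; every name here apart from addSink and the two grouper calls is an assumption, and the real TopologyBuilder does far more bookkeeping):

{code}
// Simplified builder showing where the two fix lines belong.
// Depends on NodeGrouperSketch (add/unite/root) from the earlier sketch.
public class TopologyBuilderSketch {
    private final NodeGrouperSketch nodeGrouper = new NodeGrouperSketch();

    public TopologyBuilderSketch addSource(String name) {
        nodeGrouper.add(name);                    // sources have no parents
        return this;
    }

    public TopologyBuilderSketch addProcessor(String name, String... parentNames) {
        nodeGrouper.add(name);
        nodeGrouper.unite(name, parentNames);     // connect to upstream nodes
        return this;
    }

    public TopologyBuilderSketch addSink(String name, String topic, String... parentNames) {
        // The reported bug: these two lines were missing, so the sink stayed
        // unknown to the grouper and a later root("SINK") failed.
        nodeGrouper.add(name);
        nodeGrouper.unite(name, parentNames);
        return this;
    }

    public String groupOf(String name) {
        return nodeGrouper.root(name);            // stand-in for makeNodeGroups()
    }
}
{code}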


Jenkins build is back to normal : kafka-trunk-jdk7 #847

2015-11-20 Thread Apache Jenkins Server
See 



[jira] [Work started] (KAFKA-2872) Error starting KafkaStream caused by sink not being connected to parent source/processor nodes

2015-11-20 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2872 started by Bill Bejeck.
--
> Error starting KafkaStream caused by sink not being connected to parent 
> source/processor nodes
> --
>
> Key: KAFKA-2872
> URL: https://issues.apache.org/jira/browse/KAFKA-2872
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>
> When starting the KafkaStream I get the following Exception:
> Exception in thread "main" java.util.NoSuchElementException: id: SINK
>     at org.apache.kafka.streams.processor.internals.QuickUnion.root(QuickUnion.java:40)
>     at org.apache.kafka.streams.processor.TopologyBuilder.makeNodeGroups(TopologyBuilder.java:387)
>     at org.apache.kafka.streams.processor.TopologyBuilder.topicGroups(TopologyBuilder.java:339)
>     at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:139)
>     at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:120)
>     at org.apache.kafka.streams.KafkaStreaming.<init>(KafkaStreaming.java:110)
>     at bbejeck.ProcessorDriver.main(ProcessorDriver.java:35)
> The TopologyBuilder is being built like so:
> topologyBuilder.addSource("SOURCE", new StringDeserializer(),
>             new StringDeserializer(), "src-topic")
>     .addProcessor("PROCESS", new GenericProcessorClient(replaceVowels), "SOURCE")
>     .addSink("SINK", "dest-topic", new StringSerializer(),
>             new StringSerializer(), "PROCESS");
> It looks to me like the cause of the error is that in the TopologyBuilder.addSink
> method the sink is never connected with its parent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2361: unit test failure in ProducerFailu...

2015-11-20 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/558


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15020299#comment-15020299
 ] 

ASF GitHub Bot commented on KAFKA-2361:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/558


> Unit Test BUILD FAILED
> --
>
> Key: KAFKA-2361
> URL: https://issues.apache.org/jira/browse/KAFKA-2361
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>Assignee: jin xing
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 290 tests completed, 2 failed:
> kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown FAILED
>     org.scalatest.junit.JUnitTestFailedError: Expected NotEnoughReplicasException
>     when producing to topic with fewer brokers than min.insync.replicas
>         at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
>         at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
>         at org.scalatest.Assertions$class.fail(Assertions.scala:711)
>         at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
>         at kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:355)
> kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
>     org.scalatest.junit.JUnitTestFailedError: Expected KafkaServer setup to
>     fail with connection exception but caught a different exception.
>         at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
>         at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
>         at org.scalatest.Assertions$class.fail(Assertions.scala:711)
>         at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
>         at kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
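
For context on the first failure above: with acks=all the broker rejects produce requests whenever the partition's in-sync replica set is smaller than the topic's min.insync.replicas, surfacing NotEnoughReplicasException to the client. A hedged, self-contained illustration of that contract follows (this is not the actual test code; the bootstrap address and topic name are placeholders, and the topic is assumed to have min.insync.replicas greater than the number of live brokers):

{code}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.NotEnoughReplicasException;

public class MinIsrExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("acks", "all");   // require the full min.insync.replicas set to ack
        props.put("retries", "0");  // surface the error instead of retrying
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Assumes "min-isr-topic" was created with min.insync.replicas
            // larger than the number of brokers currently alive.
            producer.send(new ProducerRecord<>("min-isr-topic", "k", "v")).get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof NotEnoughReplicasException)
                System.out.println("Got expected NotEnoughReplicasException");
            else
                throw e; // any other failure is unexpected here
        }
    }
}
{code}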


[GitHub] kafka pull request: KAFKA-2361: unit test failure in ProducerFailu...

2015-11-20 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/571

KAFKA-2361: unit test failure in ProducerFailureHandlingTest.testNotE…

Same issue as KAFKA-1999, so I want to fix it:
https://issues.apache.org/jira/browse/KAFKA-1999

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/571.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #571


commit da1d0d88cd4fd8819b2bbc1068a4473062a041e9
Author: jinxing 
Date:   2015-11-19T03:42:05Z

KAFKA-2361: unit test failure in 
ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15020303#comment-15020303
 ] 

ASF GitHub Bot commented on KAFKA-2361:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/571

KAFKA-2361: unit test failure in ProducerFailureHandlingTest.testNotE…

Same issue as KAFKA-1999, so I want to fix it:
https://issues.apache.org/jira/browse/KAFKA-1999

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/571.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #571


commit da1d0d88cd4fd8819b2bbc1068a4473062a041e9
Author: jinxing 
Date:   2015-11-19T03:42:05Z

KAFKA-2361: unit test failure in 
ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown




> Unit Test BUILD FAILED
> --
>
> Key: KAFKA-2361
> URL: https://issues.apache.org/jira/browse/KAFKA-2361
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>Assignee: jin xing
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 290 tests completed, 2 failed:
> kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown FAILED
>     org.scalatest.junit.JUnitTestFailedError: Expected NotEnoughReplicasException
>     when producing to topic with fewer brokers than min.insync.replicas
>         at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
>         at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
>         at org.scalatest.Assertions$class.fail(Assertions.scala:711)
>         at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
>         at kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:355)
> kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
>     org.scalatest.junit.JUnitTestFailedError: Expected KafkaServer setup to
>     fail with connection exception but caught a different exception.
>         at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
>         at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
>         at org.scalatest.Assertions$class.fail(Assertions.scala:711)
>         at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
>         at kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)