[jira] [Updated] (KAFKA-1706) Adding a byte bounded blocking queue to util.

2014-10-26 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1706:

Attachment: KAFKA-1706_2014-10-26_23:50:07.patch

> Adding a byte bounded blocking queue to util.
> -
>
> Key: KAFKA-1706
> URL: https://issues.apache.org/jira/browse/KAFKA-1706
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1706.patch, KAFKA-1706_2014-10-15_09:26:26.patch, 
> KAFKA-1706_2014-10-15_09:28:01.patch, KAFKA-1706_2014-10-26_23:47:31.patch, 
> KAFKA-1706_2014-10-26_23:50:07.patch
>
>
> We saw many out-of-memory issues in Mirror Maker. To improve memory 
> management we want to introduce a ByteBoundedBlockingQueue that has limits on 
> both the number of messages and the number of bytes in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1706) Adding a byte bounded blocking queue to util.

2014-10-26 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184893#comment-14184893
 ] 

Jiangjie Qin commented on KAFKA-1706:
-

Updated reviewboard https://reviews.apache.org/r/26755/diff/
 against branch origin/trunk






Re: Review Request 26755: Patch for KAFKA-1706

2014-10-26 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26755/
---

(Updated Oct. 27, 2014, 6:50 a.m.)


Review request for kafka.


Bugs: KAFKA-1706
https://issues.apache.org/jira/browse/KAFKA-1706


Repository: kafka


Description (updated)
---

changed arguments name


correct typo.


Incorporated Joel's comments. Also fixed negative queue size problem.


Diffs (updated)
-

  core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/26755/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1706) Adding a byte bounded blocking queue to util.

2014-10-26 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184891#comment-14184891
 ] 

Jiangjie Qin commented on KAFKA-1706:
-

Updated reviewboard https://reviews.apache.org/r/26755/diff/
 against branch origin/trunk

> Adding a byte bounded blocking queue to util.
> -
>
> Key: KAFKA-1706
> URL: https://issues.apache.org/jira/browse/KAFKA-1706
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1706.patch, KAFKA-1706_2014-10-15_09:26:26.patch, 
> KAFKA-1706_2014-10-15_09:28:01.patch, KAFKA-1706_2014-10-26_23:47:31.patch
>
>
> We saw many out-of-memory issues in Mirror Maker. To improve memory 
> management we want to introduce a ByteBoundedBlockingQueue that has limits on 
> both the number of messages and the number of bytes in it.





Re: Review Request 26755: Patch for KAFKA-1706

2014-10-26 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26755/
---

(Updated Oct. 27, 2014, 6:47 a.m.)


Review request for kafka.


Bugs: KAFKA-1706
https://issues.apache.org/jira/browse/KAFKA-1706


Repository: kafka


Description
---

changed arguments name


correct typo.


Diffs (updated)
-

  core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/26755/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Updated] (KAFKA-1706) Adding a byte bounded blocking queue to util.

2014-10-26 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1706:

Attachment: KAFKA-1706_2014-10-26_23:47:31.patch






Re: Review Request 26755: Patch for KAFKA-1706

2014-10-26 Thread Jiangjie Qin


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> >

Joel, thanks a lot for the review! Some comments on your comments.


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala, line 18
> > 
> >
> > I'm wondering if this is specific and nuanced enough to make it 
> > entirely private to MirrorMaker.scala
> > 
> > OR
> > 
> > if you think it is useful as a generic utility consider putting in 
> > org.apache.kafka.clients.common.utils

My initial thinking is that this could provide better control over memory 
management in broader cases beyond mirror maker, such as the consumer-side 
data chunk queue, and maybe also the controller message queue.


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala, line 36
> > 
> >
> > This can be paraphrased to be simpler:
> > 
> > "An element can be enqueued provided the current size (in number of 
> > elements) is within the configured capacity and the current size in bytes 
> > of the queue is within the configured byte capacity. i.e., the element may 
> > be enqueued even if adding it causes the queue's size in bytes to exceed 
> > the byte capacity."
> > 
> > Ok, so while I was thinking through the above: is there any benefit to 
> > having a count-based capacity when you have a byte-based capacity? i.e., 
> > why not have byte-capacity only?

I think there are three cases where a count-based queue size could help:
1. Because the size function is provided by the user, if a message has a small 
payload but big overhead elsewhere, the byte-based size might not work well.
2. The count limit can be used to bound the number of messages buffered in the 
middle, i.e. the failure boundary. For instance, if one of the mirror maker 
instances bounces after buffering too many messages, all of them have to be 
consumed again.
3. Cases where the byte limit is only meant to protect against running out of 
memory, but users don't expect the queue to consume that much memory all the 
time. (I'm not sure whether that is a valid use case though...)
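To make the dual-limit semantics concrete, here is a minimal, hypothetical 
Scala sketch (not the actual KAFKA-1706 patch) of a queue bounded by both 
element count and total bytes. An element is accepted as long as the current 
byte size is still under the byte capacity, so the byte capacity can be 
exceeded by at most one element:

```scala
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.atomic.AtomicInteger

// Illustrative only: a real implementation would also need blocking put/take
// with proper signaling; this sketch shows just the accounting.
class BoundedByBytesQueue[E](maxElements: Int, maxBytes: Int, sizeOf: E => Int) {
  private val queue = new LinkedBlockingQueue[E](maxElements)
  private val bytes = new AtomicInteger(0)
  private val lock = new Object

  def offer(e: E): Boolean = lock.synchronized {
    if (bytes.get() >= maxBytes) false      // byte capacity already reached
    else {
      val accepted = queue.offer(e)         // enforces the element-count limit
      if (accepted) bytes.addAndGet(sizeOf(e))
      accepted
    }
  }

  def poll(): Option[E] = {
    val e = Option(queue.poll())
    e.foreach(x => bytes.addAndGet(-sizeOf(x)))
    e
  }

  def byteSize: Int = bytes.get()
}
```

With maxBytes = 10 and string length as the size function, offering "hello" 
then "world" succeeds (the second offer starts at 5 bytes, under the limit), 
while a third offer is rejected until something is polled.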


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala, line 82
> > 
> >
> > One significant caveat to this approach (and in the timed variant 
> > above) is that if a single large element needs to be enqueued it could 
> > potentially block a number of smaller elements from being enqueued. This 
> > may be okay in the case of mirror maker though but would make it less 
> > useful as a generic utility.

I'm not sure why a big put would block small ones... It is possible for a 
super big item put into the queue to push the queue past the byte limit by a 
lot. In that case, all puts are blocked until a bunch of small messages are 
taken out of the queue. But that seems to be the purpose of having a byte 
limit on the queue.


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala, line 100
> > 
> >
> > Can you clarify what this means?

I was trying to say that the poll method does not contend for a lock with 
offer. I saw a similar description in some queue's javadoc; it is probably 
better to remove it...


> On Oct. 25, 2014, 7:52 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala, line 109
> > 
> >
> > getAndDecrement(sizeFunction.get(e))

It seems getAndDecrement() does not take an argument and always decrements by 
1.
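For reference, the difference can be illustrated on 
java.util.concurrent.atomic.AtomicInteger (the variable name below is just for 
illustration):

```scala
import java.util.concurrent.atomic.AtomicInteger

val currentByteSize = new AtomicInteger(100)

// getAndDecrement() takes no argument: it always subtracts exactly 1 and
// returns the previous value.
val before = currentByteSize.getAndDecrement() // counter is now 99

// Subtracting an element's size in one atomic step uses addAndGet(-n)
// (or getAndAdd(-n) if the previous value is needed).
val after = currentByteSize.addAndGet(-24)     // 99 - 24 = 75
```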


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26755/#review58497
---


On Oct. 15, 2014, 4:28 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26755/
> ---
> 
> (Updated Oct. 15, 2014, 4:28 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1706
> https://issues.apache.org/jira/browse/KAFKA-1706
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> changed arguments name
> 
> 
> correct typo.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/26755/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
>

Re: Review Request 26885: Patch for KAFKA-1642

2014-10-26 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26885/#review58575
---

Ship it!


LGTM, with one minor comment below.


clients/src/main/java/org/apache/kafka/clients/NetworkClient.java


The comment "When connecting or connected, this handles slow/stalled 
connections" here is a bit misleading: after checking the code I realized 
connectionDelay is only used to determine the delay, in millis, before we 
re-check connectivity for a node that is not connected; hence, if the node is 
connected again while we are determining its delay, we just set it to MAX.

Instead of making this general to the KafkaClient interface, shall we just 
add it to the code block at line 155?


- Guozhang Wang


On Oct. 23, 2014, 11:19 p.m., Ewen Cheslack-Postava wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/26885/
> ---
> 
> (Updated Oct. 23, 2014, 11:19 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1642
> https://issues.apache.org/jira/browse/KAFKA-1642
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixes two issues with the computation of ready nodes and poll timeouts in
> Sender/RecordAccumulator:
> 
> 1. The timeout was computed incorrectly because it took into account all 
> nodes,
> even if they had data to send such that their timeout would be 0. However, 
> nodes
> were then filtered based on whether it was possible to send (i.e. their
> connection was still good) which could result in nothing to send and a 0
> timeout, resulting in busy looping. Instead, the timeout needs to be computed
> only using data that cannot be immediately sent, i.e. where the timeout will 
> be
> greater than 0. This timeout is only used if, after filtering by whether
> connections are ready for sending, there is no data to be sent. Other events 
> can
> wake the thread up earlier, e.g. a client reconnects and becomes ready again.
> 
> 2. One of the conditions indicating whether data is sendable is whether a
> timeout has expired -- either the linger time or the retry backoff. This
> condition wasn't accounting for both cases properly, always using the linger
> time. This means the retry backoff was probably not being respected.
> 
> KAFKA-1642 Compute correct poll timeout when all nodes have sendable data but 
> none can send data because they are in a connection backoff period.
> 
> 
> Addressing Jun's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> d304660f29246e9600efe3ddb28cfcc2b074bed3 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 29658d4a15f112dc0af5ce517eaab93e6f00134b 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> eea270abb16f40c9f3b47c4ea96be412fb4fdc8b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  c5d470011d334318d5ee801021aadd0c000974a6 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 8ebe7ed82c9384b71ce0cc3ddbef2c2325363ab9 
>   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
> aae8d4a1e98279470587d397cc779a9baf6fee6c 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  0762b35abba0551f23047348c5893bb8c9acff14 
> 
> Diff: https://reviews.apache.org/r/26885/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ewen Cheslack-Postava
> 
>
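The timeout logic described in the quoted fix can be sketched roughly as 
follows (the NodeState model and names are hypothetical simplifications, not 
the actual Sender/RecordAccumulator code): the poll timeout is derived only 
from deadlines still in the future, so a node that is due but unreachable 
cannot force a zero timeout and a busy loop.

```scala
// Simplified, hypothetical model of the fix: a node whose data is due
// (readyAt <= now) but whose connection is unusable must not contribute a
// zero timeout.
final case class NodeState(readyAt: Long, connected: Boolean)

def pollTimeout(nodes: Seq[NodeState], now: Long): Long = {
  val canSendNow = nodes.exists(n => n.readyAt <= now && n.connected)
  if (canSendNow) 0L // there is something to send right away
  else {
    // Only deadlines still in the future drive the timeout; other events
    // (e.g. a reconnect) can wake the sender earlier.
    val pending = nodes.map(_.readyAt - now).filter(_ > 0)
    if (pending.isEmpty) Long.MaxValue else pending.min
  }
}
```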



[VOTE RESULT] 0.8.2-beta Release Candidate 1

2014-10-26 Thread Joe Stein
+1 binding = 3 votes
+1 non-binding = 4 votes
-1 = 0 votes
0 = 0 votes

The vote passes.

I will release artifacts to Maven Central now and update the dist svn and
download site. Once everything has mirrored (in the morning) I will send the
announcement.

Thanks to everyone who contributed to the work in this release!

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Sun, Oct 26, 2014 at 9:43 AM, Joe Stein  wrote:

> +1 (binding) verified maven (doc, sources, test, core for 2.9.1/2, 2.10,
> 2.11), binaries from source, source verification and implementation tests
> passing from maven and the binaries
> https://github.com/stealthly/scala-kafka/tree/0.8.2-beta
>
> I will close the vote ~ 3pm PT if you haven't voted please do so before
> then. If all goes well I will push to maven central, update download docs
> and send the announce tomorrow morning. I don't think we should switch over
> github main branch until 0.8.2 is final.
>
> /***
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop 
> /
>
> On Sat, Oct 25, 2014 at 3:28 AM, Manikumar Reddy 
> wrote:
>
>> +1 (non-binding). Verified the source and binary releases.
>>
>> Regards,
>> Manikumar
>>
>
>


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2014-10-26 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184691#comment-14184691
 ] 

Guozhang Wang commented on KAFKA-1501:
--

It turns out that both choosePort and the socket server need to set 
SO_REUSEADDR to true. I have run the unit tests 10 times with the v2 patch and 
found no exceptions (on my old PC the failure rate used to be 30%+).
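As an illustrative sketch of that fix (assumed shape, not the actual patch): 
SO_REUSEADDR must be enabled while the socket is still unbound, i.e. before 
bind(), so that a port left in TIME_WAIT by a previous test run can be bound 
again.

```scala
import java.net.{InetSocketAddress, ServerSocket}

// Hypothetical helper: create the socket unbound, enable reuse, then bind.
def openReusableServerSocket(port: Int): ServerSocket = {
  val socket = new ServerSocket()          // created unbound on purpose
  socket.setReuseAddress(true)             // has no effect if set after bind()
  socket.bind(new InetSocketAddress(port)) // port 0 picks any free port
  socket
}
```

Both the port-choosing test helper and the server's acceptor socket need this; 
otherwise the port picked first can appear "already in use" when it is bound 
the second time.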

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501.patch, KAFKA-1501.patch
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)





[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2014-10-26 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184689#comment-14184689
 ] 

Guozhang Wang commented on KAFKA-1501:
--

Created reviewboard https://reviews.apache.org/r/27214/diff/
 against branch origin/trunk






[jira] [Updated] (KAFKA-1501) transient unit tests failures due to port already in use

2014-10-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1501:
-
Attachment: KAFKA-1501.patch






Review Request 27214: Fix KAFKA-1501 by enabling reuse address

2014-10-26 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27214/
---

Review request for kafka.


Bugs: KAFKA-1501
https://issues.apache.org/jira/browse/KAFKA-1501


Repository: kafka


Description
---

unit tests 10/10


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
cee76b323e5f3e4c783749ac9e78e1ef02897e3b 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
dd3640f47b26a4ff1515c2dc7e964550ac35b7b2 

Diff: https://reviews.apache.org/r/27214/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1683) Implement a "session" concept in the socket server

2014-10-26 Thread Arvind Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184682#comment-14184682
 ] 

Arvind Mani commented on KAFKA-1683:


Hi [~joestein],

Regarding revocation: are you talking about revoking the access of an 
individual client (Kafka consumer, client, or mirror maker), revoking a 
server's (Kafka broker's) identity, or revoking the issuer of client/server 
identities (a KDC or CA)?

Revoking client access is not hard, provided the ACL list maintained by the 
authorizer is fresh and ACLs are checked per request. If we revoke a server's 
identity, then given the circumstances, this calls for taking the compromised 
server offline and pointing clients at another server.

Revocation of the issuer is tricky, and the implementation may depend on the 
authentication mechanism (e.g., PKI vs. Kerberos).

Broadly, there are several options:
1) Periodically disconnect, forcing re-authentication.
2) Periodically check revocation, e.g., OCSP for certs, though note that this 
is typically done only at the time of a full TLS handshake.
3) Renegotiate within the existing session. This is tricky to implement 
correctly; TLS supports renegotiation, but it is often subject to bugs.

In addition, there will be changes to configuration on the client and server, 
e.g., a new trust store or KDC.

If you are talking about dealing with issuer revocation, then it might be 
worth opening separate tickets for PKI and for Kerberos.

- Arvind
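For context, the session concept this ticket proposes could look something 
like the minimal sketch below; the type, field, and method names are 
hypothetical, not taken from the actual patch:

```scala
// Hypothetical sketch: per-connection state captured once at authentication
// time and attached to every request, so the API layer can authorize without
// re-deriving anything from the socket.
final case class Session(principal: String, encrypted: Boolean)

final case class Request(session: Session, payload: Array[Byte])

def handle(request: Request): String = {
  // The API layer can now authorize using the authenticated principal.
  if (request.session.principal == "anonymous")
    throw new SecurityException("request is not authenticated")
  s"handled ${request.payload.length} bytes for ${request.session.principal}"
}
```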

> Implement a "session" concept in the socket server
> --
>
> Key: KAFKA-1683
> URL: https://issues.apache.org/jira/browse/KAFKA-1683
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0
>Reporter: Jay Kreps
>Assignee: Gwen Shapira
> Attachments: KAFKA-1683.patch, KAFKA-1683.patch
>
>
> To implement authentication we need a way to keep track of some things 
> between requests. The initial use for this would be remembering the 
> authenticated user/principal info, but likely more uses would come up (for 
> example we will also need to remember whether and which encryption or 
> integrity measures are in place on the socket so we can wrap and unwrap 
> writes and reads).
> I was thinking we could just add a Session object that might have a user 
> field. The session object would need to get added to RequestChannel.Request 
> so it is passed down to the API layer with each request.





Re: [VOTE] 0.8.2-beta Release Candidate 1

2014-10-26 Thread Joe Stein
+1 (binding) verified maven (doc, sources, test, core for 2.9.1/2, 2.10,
2.11), binaries from source, source verification and implementation tests
passing from maven and the binaries
https://github.com/stealthly/scala-kafka/tree/0.8.2-beta

I will close the vote at ~3pm PT; if you haven't voted, please do so before
then. If all goes well I will push to Maven Central, update the download docs,
and send the announcement tomorrow morning. I don't think we should switch the
main GitHub branch over until 0.8.2 is final.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Sat, Oct 25, 2014 at 3:28 AM, Manikumar Reddy 
wrote:

> +1 (non-binding). Verified the source and binary releases.
>
> Regards,
> Manikumar
>


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-26 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184422#comment-14184422
 ] 

Manikumar Reddy commented on KAFKA-1499:


Updated reviewboard https://reviews.apache.org/r/24704/diff/
 against branch origin/trunk

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
> KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
> KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
> KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
> KAFKA-1499_2014-09-25_11:05:57.patch, KAFKA-1499_2014-10-27_13:13:55.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs, i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec, i.e., all messages (for all
> topics) on the broker will be compressed with this codec. We could have
> per-topic overrides as well.





[jira] [Updated] (KAFKA-1499) Broker-side compression configuration

2014-10-26 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1499:
---
Attachment: KAFKA-1499_2014-10-27_13:13:55.patch






Re: Review Request 24704: Patch for KAFKA-1499

2014-10-26 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/
---

(Updated Oct. 26, 2014, 7:44 a.m.)


Review request for kafka.


Bugs: KAFKA-1499
https://issues.apache.org/jira/browse/KAFKA-1499


Repository: kafka


Description (updated)
---

Implemented new broker-side compression logic. Added a config property, 
compression.type, at the broker level with a per-topic override. This config 
takes the following options: uncompressed, gzip, snappy, lz4, producer.
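Assuming the config landed as described above, an illustrative broker 
configuration might look like the following (property name taken from the 
description; treat the exact values as unverified against the final patch):

```properties
# server.properties: broker-wide default codec; message sets are
# recompressed with gzip on the broker.
compression.type=gzip

# The special value "producer" would instead retain whatever codec the
# producer used, avoiding recompression:
# compression.type=producer
```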


Diffs (updated)
-

  core/src/main/scala/kafka/log/Log.scala 
157d67369baabd2206a2356b2aa421e848adab17 
  core/src/main/scala/kafka/log/LogConfig.scala 
e48922a97727dd0b98f3ae630ebb0af3bef2373d 
  core/src/main/scala/kafka/message/BrokerCompression.scala PRE-CREATION 
  core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
788c7864bc881b935975ab4a4e877b690e65f1f1 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6e26c5436feb4629d17f199011f3ebb674aa767f 
  core/src/main/scala/kafka/server/KafkaServer.scala 
4de812374e8fb1fed834d2be3f9655f55b511a74 
  core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
4e45d965bc423192ac704883ee75e9727006f89b 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
2377abe4933e065d037828a214c3a87e1773a8ef 

Diff: https://reviews.apache.org/r/24704/diff/


Testing
---


Thanks,

Manikumar Reddy O