[jira] [Commented] (KAFKA-2466) ConsoleConsumer throws ConcurrentModificationException on termination

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710732#comment-14710732
 ] 

Ewen Cheslack-Postava commented on KAFKA-2466:
--

This isn't a behavior regression like those in KAFKA-2467, but it is closely 
related. Since the patches required some similar code (certain exceptions need 
to be handled more gracefully), I incorporated the changes needed to fix this 
issue into the patch for KAFKA-2467.

> ConsoleConsumer throws ConcurrentModificationException on termination
> -
>
> Key: KAFKA-2466
> URL: https://issues.apache.org/jira/browse/KAFKA-2466
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> ConsoleConsumer throws ConcurrentModificationException on termination.
> ST:
> {code}
> Exception in thread "Thread-1" java.util.ConcurrentModificationException: 
> KafkaConsumer is not safe for multi-threaded access
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1169)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1087)
>   at kafka.consumer.NewShinyConsumer.close(BaseConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:74)
> {code}
> Other thread which constantly tries to consume is
> {code}
> "main" prio=10 tid=0x7f3aa800c000 nid=0x1314 runnable [0x7f3aae37d000]
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
>   - locked <0xdd1df130> (a sun.nio.ch.Util$2)
>   - locked <0xdd1df120> (a java.util.Collections$UnmodifiableSet)
>   - locked <0xdd0af720> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
>   at org.apache.kafka.common.network.Selector.select(Selector.java:440)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:263)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:274)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:182)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:779)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:730)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:43)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:87)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:54)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:39)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
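The trace above shows the shutdown hook calling close() while the main thread is inside poll(); since the new consumer rejects concurrent access, one pattern is for the hook to only signal shutdown and let the polling thread close the consumer itself (the real client also offers wakeup() to interrupt a blocked poll). A minimal sketch of that pattern — the Consumer class below is a stand-in for illustration, not the real KafkaConsumer API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class SingleThreadClose {
    // Stand-in for KafkaConsumer: rejects multi-threaded access, roughly
    // like the real client's acquire()/release() guard.
    static class Consumer {
        private final AtomicBoolean inUse = new AtomicBoolean(false);
        void poll() { acquire(); release(); }
        void close() { acquire(); System.out.println("closed"); release(); }
        private void acquire() {
            if (!inUse.compareAndSet(false, true))
                throw new java.util.ConcurrentModificationException(
                        "Consumer is not safe for multi-threaded access");
        }
        private void release() { inUse.set(false); }
    }

    public static void main(String[] args) throws InterruptedException {
        Consumer consumer = new Consumer();
        AtomicBoolean shuttingDown = new AtomicBoolean(false);
        CountDownLatch done = new CountDownLatch(1);

        Thread pollLoop = new Thread(() -> {
            while (!shuttingDown.get())
                consumer.poll();
            consumer.close();   // close() runs on the polling thread only
            done.countDown();
        });
        pollLoop.start();

        // What a shutdown hook should do: signal and wait, never call
        // close() from its own thread.
        shuttingDown.set(true);
        done.await();
    }
}
```

If the hook instead called consumer.close() directly, the acquire() guard would throw the same ConcurrentModificationException seen in the report.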



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2468:
-

 Summary: SIGINT during Kafka server startup can leave server 
deadlocked
 Key: KAFKA-2468
 URL: https://issues.apache.org/jira/browse/KAFKA-2468
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh


KafkaServer, on receiving a SIGINT, will try to shut down, and if this happens 
while the server is starting up, it will get into a deadlock.

Thread dump after deadlock
{code}
2015-08-24 22:03:52
Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):

"Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
[0x00011ad3a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Shutdown.exit(Shutdown.java:212)
- waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
java.lang.Shutdown)
at java.lang.Runtime.exit(Runtime.java:109)
at java.lang.System.exit(System.java:962)
at 
kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
at kafka.Kafka$$anon$1.run(Kafka.scala:65)

"SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
Object.wait() [0x00011aa31000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
at java.lang.Thread.join(Thread.java:1281)
- locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
at java.lang.Thread.join(Thread.java:1355)
at 
java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
at 
java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
at java.lang.Shutdown.runHooks(Shutdown.java:123)
at java.lang.Shutdown.sequence(Shutdown.java:167)
at java.lang.Shutdown.exit(Shutdown.java:212)
- locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
at java.lang.Terminator$1.handle(Terminator.java:52)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:745)

"RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
[0x000119fe8000]
   java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native Method)
at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
at java.net.ServerSocket.implAccept(ServerSocket.java:530)
at java.net.ServerSocket.accept(ServerSocket.java:498)
at 
sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
at java.lang.Thread.run(Thread.java:745)

"Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread1" daemon prio=5 tid=0x7fc08c82b000 nid=0x5303 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread0" daemon prio=5 tid=0x7fc08c82a000 nid=0x5103 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=5 tid=0x7fc08c829800 nid=0x4f03 runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

"Surrogate Locker Thread (Concurrent GC)" daemon prio=5 tid=0x7fc08d002000 
nid=0x400b waiting on condition [0x]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=5 tid=0x7fc08d012800 nid=0x3b03 in Object.wait() 
[0x000117ee6000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)

"Reference Handler" daemon prio=5 tid=0x7fc08c803000 nid=0x3903 in 
Object.wait() [0x000117de3000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0007bae050f0> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- locked <0x0007bae050f0> (a java.lang.ref.Reference$Lock)

"main" prio=5 tid=0x7fc08d000800 nid=0x1303 waiting for monitor 

[GitHub] kafka pull request: KAFKA-2468: SIGINT during Kafka server startup...

2015-08-25 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/167

KAFKA-2468: SIGINT during Kafka server startup can leave server deadlocked

As we already handle exceptions or invalid states in Kafka server by shutting 
it down, there is no reason to use exit() rather than halt() in the shutdown 
path itself.
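The cycle in the thread dump can be reproduced without Kafka: the signal handler runs Shutdown.exit(), which takes the Shutdown class lock and then joins each registered hook, while the hook's own shutdown path calls System.exit() and blocks on that same lock. A sketch simulating the cycle with a plain monitor (the class and variable names are stand-ins for the JDK internals):

```java
public class ShutdownHookDeadlock {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the java.lang.Shutdown class lock.
        final Object shutdownClassLock = new Object();

        // Stand-in for the registered shutdown hook whose shutdown path
        // ends in System.exit(), which must take the Shutdown lock again.
        Thread hook = new Thread(() -> {
            synchronized (shutdownClassLock) { }
        });
        hook.setDaemon(true);

        // Stand-in for the SIGINT handler: Shutdown.exit() takes the class
        // lock, then runHooks() joins every registered hook.
        Thread sigintHandler = new Thread(() -> {
            synchronized (shutdownClassLock) {
                hook.start();
                try {
                    hook.join();
                } catch (InterruptedException ignored) { }
            }
        });
        sigintHandler.setDaemon(true);
        sigintHandler.start();

        // Wait until the hook is stuck on the monitor the handler holds.
        for (int i = 0; i < 200 && hook.getState() != Thread.State.BLOCKED; i++)
            Thread.sleep(10);
        System.out.println("hook state: " + hook.getState());
        // Runtime.getRuntime().halt() in the hook never re-enters
        // Shutdown.exit(), so the cycle cannot form.
    }
}
```

This is why halt() is safe where exit() is not: halt() terminates the JVM immediately without running (or waiting on) shutdown hooks.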

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2468

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/167.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #167


commit 6b5954fb2747f9c94289ad4a75873131b7f55d0a
Author: asingh 
Date:   2015-08-25T07:34:42Z

KAFKA-2468: SIGINT during Kafka server startup can leave server deadlocked




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2469) System test console consumer logs should write all messages to debug logger

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2469:


 Summary: System test console consumer logs should write all 
messages to debug logger
 Key: KAFKA-2469
 URL: https://issues.apache.org/jira/browse/KAFKA-2469
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Ewen Cheslack-Postava
Assignee: Geoff Anderson
Priority: Minor


Currently every message read back from the ConsoleConsumer services is logged 
to the debug logger like this:

[DEBUG - 2015-08-25 07:21:12,466 - console_consumer - _worker - lineno:177]: 
consumed a message: 772367

This results in huge log files that are pretty difficult to work with (hundreds 
of megs), and isn't required for the tests. VerifiableProducer removed its 
logging for this same reason, preferring to save it to a log file that could 
optionally be collected if it was needed. The ConsoleConsumer service should do 
the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710800#comment-14710800
 ] 

ASF GitHub Bot commented on KAFKA-2468:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/167

KAFKA-2468: SIGINT during Kafka server startup can leave server deadlocked

As we already handle exceptions or invalid states in Kafka server by shutting 
it down, there is no reason to use exit() rather than halt() in the shutdown 
path itself.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2468

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/167.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #167


commit 6b5954fb2747f9c94289ad4a75873131b7f55d0a
Author: asingh 
Date:   2015-08-25T07:34:42Z

KAFKA-2468: SIGINT during Kafka server startup can leave server deadlocked




> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer, on receiving a SIGINT, will try to shut down, and if this happens 
> while the server is starting up, it will get into a deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" daemon prio=5 tid=0x7fc08c82b000 nid=0x5303 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" daemon prio=5 tid=0x7fc08c82a000 nid=0x5103 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" daemon prio=5 tid=0x7fc08c829800 nid=0x4f03 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "Surrogate Locker Thread (Concurrent GC)" daemon prio=5 
> tid=0x7fc08d002000 

[jira] [Commented] (KAFKA-2466) ConsoleConsumer throws ConcurrentModificationException on termination

2015-08-25 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710804#comment-14710804
 ] 

Ashish K Singh commented on KAFKA-2466:
---

OK, thanks for taking care of it.

> ConsoleConsumer throws ConcurrentModificationException on termination
> -
>
> Key: KAFKA-2466
> URL: https://issues.apache.org/jira/browse/KAFKA-2466
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> ConsoleConsumer throws ConcurrentModificationException on termination.
> ST:
> {code}
> Exception in thread "Thread-1" java.util.ConcurrentModificationException: 
> KafkaConsumer is not safe for multi-threaded access
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1169)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1087)
>   at kafka.consumer.NewShinyConsumer.close(BaseConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:74)
> {code}
> Other thread which constantly tries to consume is
> {code}
> "main" prio=10 tid=0x7f3aa800c000 nid=0x1314 runnable [0x7f3aae37d000]
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
>   - locked <0xdd1df130> (a sun.nio.ch.Util$2)
>   - locked <0xdd1df120> (a java.util.Collections$UnmodifiableSet)
>   - locked <0xdd0af720> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
>   at org.apache.kafka.common.network.Selector.select(Selector.java:440)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:263)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:274)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:182)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:779)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:730)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:43)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:87)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:54)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:39)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2467) ConsoleConsumer regressions

2015-08-25 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710825#comment-14710825
 ] 

Ismael Juma commented on KAFKA-2467:


[~ewencp], do I understand correctly that the system tests would have caught 
this regression? Shall we be running the system tests against PR branches 
automatically?

> ConsoleConsumer regressions
> ---
>
> Key: KAFKA-2467
> URL: https://issues.apache.org/jira/browse/KAFKA-2467
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It seems that the patch for KAFKA-2015 caused a few changes in the behavior 
> of the console consumer. I picked this up because it caused the new mirror 
> maker sanity system test to hang. We need a separate fix for ducktape to 
> address the lack of a timeout where it got stuck, but I'd also like to get 
> this fixed ASAP since it affects pretty much all system test efforts since 
> they commonly use console consumer to validate data produced to Kafka.
> I've tracked down a couple of changes so far:
> 1. The --consumer.config option handling was changed entirely. I think the 
> new approach was trying to parse it as key=value parameters, but it's 
> supposed to be a properties file *containing* key=value pairs.
> 2. A few different exceptions during message processing are not handled the 
> same way. The skipMessageOnErrorOpt is no longer being used at all (it's 
> parsed, but that option is never checked anymore). Also, exceptions during 
> iteration are not caught. After fixing the consumer.config issue, which was 
> keeping the consumer.timeout.ms setting from making it into the consumer 
> config, this also caused the process to hang. It killed the main thread, but 
> there must be another non-daemon thread still running (presumably the 
> consumer threads?)
> 3. The "consumed X messages" message changed from stderr to stdout.
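For point 1, the value of --consumer.config names a properties file on disk, so reading it is a java.util.Properties.load() call rather than key=value argument parsing. A self-contained sketch (the file name and keys here are just examples):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.util.Properties;

public class ConsumerConfigFile {
    public static void main(String[] args) throws IOException {
        // Write a properties *file* containing key=value lines, as a user
        // of --consumer.config would.
        File config = File.createTempFile("consumer", ".properties");
        try (PrintWriter out = new PrintWriter(config)) {
            out.println("consumer.timeout.ms=5000");
            out.println("group.id=console-consumer-test");
        }

        // Load the file; Properties.load() parses the key=value pairs.
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(config)) {
            props.load(in);
        }
        System.out.println(props.getProperty("consumer.timeout.ms"));
        config.delete();
    }
}
```

Treating the option value itself as a key=value string, as the regressed code did, silently drops settings like consumer.timeout.ms that the system tests depend on.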



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2467) ConsoleConsumer regressions

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710829#comment-14710829
 ] 

Ewen Cheslack-Postava commented on KAFKA-2467:
--

[~ijuma] Sort of. We still need refinements since the tests would have hung and 
had to be killed by Jenkins (we're missing a timeout in a pretty critical 
place).

As for running system tests on every PR, that would be nice but at the moment 
isn't practical because of how long they can take to run. It'd be great to get 
there eventually, but we need to continue refining the tests, ducktape (e.g., 
adding the parallel test runner), and even the jenkins config (reuse clusters 
to avoid the startup costs for each test run that we currently pay). For now, 
we're working on making it easy for people to run the system tests manually on 
specific PRs when they think it would be useful, which I think is a reasonable 
intermediate step before doing builds on every PR.

> ConsoleConsumer regressions
> ---
>
> Key: KAFKA-2467
> URL: https://issues.apache.org/jira/browse/KAFKA-2467
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It seems that the patch for KAFKA-2015 caused a few changes in the behavior 
> of the console consumer. I picked this up because it caused the new mirror 
> maker sanity system test to hang. We need a separate fix for ducktape to 
> address the lack of a timeout where it got stuck, but I'd also like to get 
> this fixed ASAP since it affects pretty much all system test efforts since 
> they commonly use console consumer to validate data produced to Kafka.
> I've tracked down a couple of changes so far:
> 1. The --consumer.config option handling was changed entirely. I think the 
> new approach was trying to parse it as key=value parameters, but it's 
> supposed to be a properties file *containing* key=value pairs.
> 2. A few different exceptions during message processing are not handled the 
> same way. The skipMessageOnErrorOpt is no longer being used at all (it's 
> parsed, but that option is never checked anymore). Also, exceptions during 
> iteration are not caught. After fixing the consumer.config issue, which was 
> keeping the consumer.timeout.ms setting from making it into the consumer 
> config, this also caused the process to hang. It killed the main thread, but 
> there must be another non-daemon thread still running (presumably the 
> consumer threads?)
> 3. The "consumed X messages" message changed from stderr to stdout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2467) ConsoleConsumer regressions

2015-08-25 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710836#comment-14710836
 ] 

Ismael Juma commented on KAFKA-2467:


Thanks for the explanation [~ewencp]. That sounds good. It would be good to 
update 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes 
with the instructions once ready. 

> ConsoleConsumer regressions
> ---
>
> Key: KAFKA-2467
> URL: https://issues.apache.org/jira/browse/KAFKA-2467
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It seems that the patch for KAFKA-2015 caused a few changes in the behavior 
> of the console consumer. I picked this up because it caused the new mirror 
> maker sanity system test to hang. We need a separate fix for ducktape to 
> address the lack of a timeout where it got stuck, but I'd also like to get 
> this fixed ASAP since it affects pretty much all system test efforts since 
> they commonly use console consumer to validate data produced to Kafka.
> I've tracked down a couple of changes so far:
> 1. The --consumer.config option handling was changed entirely. I think the 
> new approach was trying to parse it as key=value parameters, but it's 
> supposed to be a properties file *containing* key=value pairs.
> 2. A few different exceptions during message processing are not handled the 
> same way. The skipMessageOnErrorOpt is no longer being used at all (it's 
> parsed, but that option is never checked anymore). Also, exceptions during 
> iteration are not caught. After fixing the consumer.config issue, which was 
> keeping the consumer.timeout.ms setting from making it into the consumer 
> config, this also caused the process to hang. It killed the main thread, but 
> there must be another non-daemon thread still running (presumably the 
> consumer threads?)
> 3. The "consumed X messages" message changed from stderr to stdout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2467) ConsoleConsumer regressions

2015-08-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710860#comment-14710860
 ] 

Ben Stopford commented on KAFKA-2467:
-

Apologies for this regression. The KAFKA-2015 patch was adapted by me from a 
submitted patch. This caused some confusion and I should have been more 
questioning. Lesson learnt. 

Automated tests here would obviously be a boon too.   

> ConsoleConsumer regressions
> ---
>
> Key: KAFKA-2467
> URL: https://issues.apache.org/jira/browse/KAFKA-2467
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It seems that the patch for KAFKA-2015 caused a few changes in the behavior 
> of the console consumer. I picked this up because it caused the new mirror 
> maker sanity system test to hang. We need a separate fix for ducktape to 
> address the lack of a timeout where it got stuck, but I'd also like to get 
> this fixed ASAP since it affects pretty much all system test efforts since 
> they commonly use console consumer to validate data produced to Kafka.
> I've tracked down a couple of changes so far:
> 1. The --consumer.config option handling was changed entirely. I think the 
> new approach was trying to parse it as key=value parameters, but it's 
> supposed to be a properties file *containing* key=value pairs.
> 2. A few different exceptions during message processing are not handled the 
> same way. The skipMessageOnErrorOpt is no longer being used at all (it's 
> parsed, but that option is never checked anymore). Also, exceptions during 
> iteration are not caught. After fixing the consumer.config issue, which was 
> keeping the consumer.timeout.ms setting from making it into the consumer 
> config, this also caused the process to hang. It killed the main thread, but 
> there must be another non-daemon thread still running (presumably the 
> consumer threads?)
> 3. The "consumed X messages" message changed from stderr to stdout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2467) ConsoleConsumer regressions

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710873#comment-14710873
 ] 

Ewen Cheslack-Postava commented on KAFKA-2467:
--

No worries, I was just thankful that the patch had added some unit tests, since 
they helped track the problem down and fix it! This is one class of issues that 
is harder to catch in unit tests. Apparently the addition of some system tests 
that we're running regularly is paying off :)

> ConsoleConsumer regressions
> ---
>
> Key: KAFKA-2467
> URL: https://issues.apache.org/jira/browse/KAFKA-2467
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It seems that the patch for KAFKA-2015 caused a few changes in the behavior 
> of the console consumer. I picked this up because it caused the new mirror 
> maker sanity system test to hang. We need a separate fix for ducktape to 
> address the lack of a timeout where it got stuck, but I'd also like to get 
> this fixed ASAP since it affects pretty much all system test efforts since 
> they commonly use console consumer to validate data produced to Kafka.
> I've tracked down a couple of changes so far:
> 1. The --consumer.config option handling was changed entirely. I think the 
> new approach was trying to parse it as key=value parameters, but it's 
> supposed to be a properties file *containing* key=value pairs.
> 2. A few different exceptions during message processing are not handled the 
> same way. The skipMessageOnErrorOpt is no longer being used at all (it's 
> parsed, but that option is never checked anymore). Also, exceptions during 
> iteration are not caught. After fixing the consumer.config issue, which was 
> keeping the consumer.timeout.ms setting from making it into the consumer 
> config, this also caused the process to hang. It killed the main thread, but 
> there must be another non-daemon thread still running (presumably the 
> consumer threads?)
> 3. The "consumed X messages" message changed from stderr to stdout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1369 - snappy version update 1.1.x

2015-08-25 Thread thinker0
Github user thinker0 closed the pull request at:

https://github.com/apache/kafka/pull/31


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1369) snappy version update 1.1.x

2015-08-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710907#comment-14710907
 ] 

ASF GitHub Bot commented on KAFKA-1369:
---

Github user thinker0 closed the pull request at:

https://github.com/apache/kafka/pull/31


> snappy version update 1.1.x
> ---
>
> Key: KAFKA-1369
> URL: https://issues.apache.org/jira/browse/KAFKA-1369
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0, 0.8.1.1
> Environment: Red Hat Enterprise Linux Server release 5.8 (Tikanga)
> - x64 
>Reporter: thinker0
>Assignee: Roger Hoover
>Priority: Minor
> Fix For: 0.8.2.0
>
> Attachments: patch.diff
>
>
> https://github.com/xerial/snappy-java/issues/38 issue
> snappy version 1.1.x
> {code}
> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)
> at org.xerial.snappy.Snappy.(Snappy.java:48)
> at 
> org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:351)
> at org.xerial.snappy.SnappyInputStream.rawRead(SnappyInputStream.java:159)
> at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:142)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply$mcI$sp(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply(ByteBufferMessageSet.scala:68)
> at scala.collection.immutable.Stream$.continually(Stream.scala:1129)
> at 
> kafka.message.ByteBufferMessageSet$.decompress(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:178)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:191)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:145)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
> {code}
> {code}
> /tmp] ldd snappy-1.0.5-libsnappyjava.so
> ./snappy-1.0.5-libsnappyjava.so: /usr/lib64/libstdc++.so.6: version 
> `GLIBCXX_3.4.9' not found (required by ./snappy-1.0.5-libsnappyjava.so)
> ./snappy-1.0.5-libsnappyjava.so: /usr/lib64/libstdc++.so.6: version 
> `GLIBCXX_3.4.11' not found (required by ./snappy-1.0.5-libsnappyjava.so)
>   linux-vdso.so.1 =>  (0x7fff81dfc000)
>   libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x2b554b43)
>   libm.so.6 => /lib64/libm.so.6 (0x2b554b731000)
>   libc.so.6 => /lib64/libc.so.6 (0x2b554b9b4000)
>   libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2b554bd0c000)
>   /lib64/ld-linux-x86-64.so.2 (0x0033e2a0)
> {code}
> {code}
> /tmp] ldd snappy-1.1.1M1-be6ba593-9ac7-488e-953e-ba5fd9530ee1-libsnappyjava.so
> ldd: warning: you do not have execution permission for 
> `./snappy-1.1.1M1-be6ba593-9ac7-488e-953e-ba5fd9530ee1-libsnappyjava.so'
>   linux-vdso.so.1 =>  (0x7fff1c132000)
>   libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x2b9548319000)
>   libm.so.6 => /lib64/libm.so.6 (0x2b954861a000)
>   libc.so.6 => /lib64/libc.so.6 (0x2b954889d000)
>   libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2b9548bf5000)
>   /lib64/ld-linux-x86-64.so.2 (0x0033e2a0)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2385) zookeeper-shell does not work

2015-08-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711084#comment-14711084
 ] 

João Reis commented on KAFKA-2385:
--

Hi guys,

Maybe it is not related to this issue, but what is not working is the feature
of passing arguments to the shell, like this:

#> ./zookeeper-shell.sh localhost:2181 ls /kafka

On the previous Kafka version, 0.8.1.1, passing commands like 'ls' or 'get'
worked. On 0.8.2.1 it does not seem to work.



> zookeeper-shell does not work
> -
>
> Key: KAFKA-2385
> URL: https://issues.apache.org/jira/browse/KAFKA-2385
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Jiangjie Qin
>Assignee: Flavio Junqueira
> Fix For: 0.8.3
>
>
> The zookeeper shell shipped with Kafka does not work because the jline jar 
> is missing.
> [jqin@jqin-ld1 bin]$ ./zookeeper-shell.sh localhost:2181
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is disabled
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711100#comment-14711100
 ] 

Navjot commented on KAFKA-1690:
---

@Sriharsha Please find the java version below:
{{java version "1.7.0_79"}} 
{{Java(TM) SE Runtime Environment (build 1.7.0_79-b15)}}
{{Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)}}
I am working on a Windows 7 64-bit machine.

Also, I tried to use the package by following the steps mentioned at 
[https://github.com/harshach/kafka/blob/KAFKA-1690-V1/SSL-setup.md], 
but my server and producer failed with the following errors:

{{# Producer Error 
##}}
{{[2015-08-25 16:00:21,888] WARN Failed to send SSL Close}}
{{message(org.apache.kafka.common.network.SSLTransportLayer)}}
{{java.io.IOException: Invalid close state, will not send network data.}}
{{at 
org.apache.kafka.common.network.SSLTransportLayer.close(SSLTransportLayer.java:144)}}
{{at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:49)}}
{{at org.apache.kafka.common.network.Selector.close(Selector.java:458)}}
{{at org.apache.kafka.common.network.Selector.poll(Selector.java:327)}}
{{at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:127)}}
{{at java.lang.Thread.run(Thread.java:745)}}

{{ Server Error 
##}}

{{[2015-08-25 15:59:22,572] WARN Error when freeing index buffer 
(kafka.log.OffsetIndex)}}
{{java.lang.NullPointerException}}
{{at 
kafka.log.OffsetIndex.kafka$log$OffsetIndex$$forceUnmap(OffsetIndex.scala:301)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:283)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)}}
{{at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)}}
{{at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)}}
{{at kafka.log.Log.loadSegments(Log.scala:234)}}
{{at kafka.log.Log.(Log.scala:90)}}
{{at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun}}
{{$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)}}
{{at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)}}
{{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)}}
{{at java.util.concurrent.FutureTask.run(FutureTask.java:262)}}
{{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)}}
{{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)}}
{{at java.lang.Thread.run(Thread.java:745)}}

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711100#comment-14711100
 ] 

Navjot edited comment on KAFKA-1690 at 8/25/15 11:32 AM:
-

[~harsha_ch]Please find the java version below:
{{java version "1.7.0_79"}} 
{{Java(TM) SE Runtime Environment (build 1.7.0_79-b15)}}
{{Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)}}
I am working on a Windows 7 64-bit machine.

Also, I tried to use the package by following the steps mentioned at 
[https://github.com/harshach/kafka/blob/KAFKA-1690-V1/SSL-setup.md], 
but my server and producer failed with the following errors:

{{# Producer Error 
##}}
{{[2015-08-25 16:00:21,888] WARN Failed to send SSL Close}}
{{message(org.apache.kafka.common.network.SSLTransportLayer)}}
{{java.io.IOException: Invalid close state, will not send network data.}}
{{at 
org.apache.kafka.common.network.SSLTransportLayer.close(SSLTransportLayer.java:144)}}
{{at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:49)}}
{{at org.apache.kafka.common.network.Selector.close(Selector.java:458)}}
{{at org.apache.kafka.common.network.Selector.poll(Selector.java:327)}}
{{at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:127)}}
{{at java.lang.Thread.run(Thread.java:745)}}

{{ Server Error 
##}}

{{[2015-08-25 15:59:22,572] WARN Error when freeing index buffer 
(kafka.log.OffsetIndex)}}
{{java.lang.NullPointerException}}
{{at 
kafka.log.OffsetIndex.kafka$log$OffsetIndex$$forceUnmap(OffsetIndex.scala:301)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:283)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)}}
{{at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)}}
{{at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)}}
{{at kafka.log.Log.loadSegments(Log.scala:234)}}
{{at kafka.log.Log.(Log.scala:90)}}
{{at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun}}
{{$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)}}
{{at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)}}
{{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)}}
{{at java.util.concurrent.FutureTask.run(FutureTask.java:262)}}
{{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)}}
{{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)}}
{{at java.lang.Thread.run(Thread.java:745)}}


was (Author: navjotbhardwaj):
@Sriharsha Please find the java version below:
{{java version "1.7.0_79"}} 
{{Java(TM) SE Runtime Environment (build 1.7.0_79-b15)}}
{{Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)}}
I am working on Windows 7 64bit machine.

Also, I tried to use the package by following steps mentioned at 
[https://github.com/harshach/kafka/blob/KAFKA-1690-V1/SSL-setup.md] 
but my Server and Producer failed with following errors:

{{# Producer Error 
##}}
{{[2015-08-25 16:00:21,888] WARN Failed to send SSL Close}}
{{message(org.apache.kafka.common.network.SSLTransportLayer)}}
{{java.io.IOException: Invalid close state, will not send network data.}}
{{at 
org.apache.kafka.common.network.SSLTransportLayer.close(SSLTransportLayer.java:144)}}
{{at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:49)}}
{{at org.apache.kafka.common.network.Selector.close(Selector.java:458)}}
{{at org.apache.kafka.common.network.Selector.poll(Selector.java:327)}}
{{at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:127)}}
{{at java.lang.Thread.run(Thread.java:745)}}

{{ Server Error 
##}}

{{[2015-08-25 15:59:22,572] WARN Error when freeing index buffer 
(kafka.log.OffsetIndex)}}
{{java.lang.NullPointerException}}
{{at 
kafka.log.OffsetIndex.kafka$log$OffsetIndex$$forceUnmap(OffsetIndex.scala:301)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:283)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)}}
{{at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)}}
{{at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)}}
{{at kafka.log.Log.loadSegments(Log.scala:234)}}
{{at kafka.log.Log.(Log.scala:90)}}
{{at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun}}
{{$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)}}
{{at kafka.utils.Cor

kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Prabhjot Bharaj
Hi,

I have been trying to use kafka-producer-perf-test.sh to arrive at certain
benchmarks.
When I run it with --compression-codec values of 1, 2 and 3, I
notice increased throughput compared to NoCompressionCodec.

But when I checked ProducerPerformance.scala, I saw that
`producer.send` gets its data from the method `generateProducerData`,
and this data is just an empty array of bytes.

Now, as per my basic understanding of compression algorithms, a
byte sequence of zeros will compress down to a very small message,
which is why I thought I might be observing better throughput.

So, at line 247 of ProducerPerformance.scala, I made this minor code
change:



*val message = 
"qopwr11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
71Title 
10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
71Title 
10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
71Title 
10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-1-1-1-1-1-1-111-1-1-"message.getBytes().slice(0,msgSize)*


This makes sure that I have a big message, and I can slice that
message to the message size passed in the command-line options.


But the problem is that when I run the same test with
--compression-codec values of 1, 2 or 3, I still see ASCII data
(i.e. only uncompressed data).


I want to ask whether this is a bug. And, using
kafka-producer-perf-test.sh, how can I send my own compressed data?


Thanks,

Prabhjot
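The intuition in the message above, that an all-zero payload compresses to
almost nothing while other data does not, can be checked with a few lines.
This is an illustrative sketch using Python's zlib as a stand-in for the
compression codecs the perf tool actually uses; it is not code from the tool
itself:

```python
import os
import zlib

MSG_SIZE = 1024

# An all-zero payload, like the empty byte array from generateProducerData.
zeros = bytes(MSG_SIZE)
# A textual payload built from a repeated chunk, sliced to the message size.
text = (b"qopwr11591UPD113582260001AS1IL1-1N/A1Entertainment1" * 64)[:MSG_SIZE]
# Incompressible random bytes, for comparison.
rand = os.urandom(MSG_SIZE)

for name, payload in [("zeros", zeros), ("text", text), ("random", rand)]:
    ratio = len(zlib.compress(payload)) / len(payload)
    print(f"{name:6s} compresses to {ratio:.2%} of original size")
```

An all-zero buffer compresses to a tiny fraction of its size, so a perf run
with compression enabled effectively sends far less data per message than the
configured message size suggests.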


[jira] [Created] (KAFKA-2470) kafka-producer-perf-test.sh can't configure all to request-num-acks

2015-08-25 Thread Bo Wang (JIRA)
Bo Wang created KAFKA-2470:
--

 Summary: kafka-producer-perf-test.sh can't configure all to 
request-num-acks
 Key: KAFKA-2470
 URL: https://issues.apache.org/jira/browse/KAFKA-2470
 Project: Kafka
  Issue Type: Bug
  Components: clients, tools
Affects Versions: 0.8.2.1
 Environment: Linux
Reporter: Bo Wang


For the new producer API, kafka-producer-perf-test.sh can't take "all" as the 
value of request-num-acks:
bin]# ./kafka-producer-perf-test.sh --topic test --broker-list host:port 
--messages 100 --message-size 200 --new-producer --sync --batch-size 1
 --request-num-acks all
Exception in thread "main" joptsimple.OptionArgumentConversionException: Cannot 
convert argument 'all' of option ['request-num-acks'] to class java.lang.Integer
at 
joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:237)
at joptsimple.OptionSet.valuesOf(OptionSet.java:226)
at joptsimple.OptionSet.valueOf(OptionSet.java:170)
at 
kafka.tools.ProducerPerformance$ProducerPerfConfig.(ProducerPerformance.scala:146)
at kafka.tools.ProducerPerformance$.main(ProducerPerformance.scala:42)
at kafka.tools.ProducerPerformance.main(ProducerPerformance.scala)
Caused by: joptsimple.internal.ReflectionException: 
java.lang.NumberFormatException: For input string: "all"
at 
joptsimple.internal.Reflection.reflectionException(Reflection.java:136)
at joptsimple.internal.Reflection.invoke(Reflection.java:123)
at 
joptsimple.internal.MethodInvokingValueConverter.convert(MethodInvokingValueConverter.java:48)
at 
joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:234)
... 5 more
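The stack trace shows the root cause: the request-num-acks option is declared
with an Integer converter, so joptsimple rejects the string "all" before the
tool ever sees it. A fix would accept the option as a string and validate it
by hand. The sketch below shows the intended mapping (Python for illustration;
the tool itself is Scala, and the helper name parse_acks is made up), where
"all" is equivalent to acks = -1:

```python
def parse_acks(value: str) -> int:
    """Convert a --request-num-acks value to the int the producer expects.

    "all" and "-1" both mean: wait for the full ISR to acknowledge.
    """
    if value == "all":
        return -1
    acks = int(value)  # raises ValueError for anything non-numeric
    if acks < -1:
        raise ValueError(f"invalid acks value: {value!r}")
    return acks

print(parse_acks("all"))  # -1
print(parse_acks("1"))    # 1
```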



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Prabhjot Bharaj
Hi Erik,

I have focused my efforts on the producer side till now. Thanks for making me
aware that the consumer will decompress automatically.

I'll also consider your point on creating real-life messages.

But I still have one confusion:

Why would the current ProducerPerformance.scala compress an array of bytes
with all zeros?
That will give better throughput anyway, correct?

Regards,
Prabhjot

On Tue, Aug 25, 2015 at 7:05 PM, Helleren, Erik 
wrote:

> Hi Prabhjot,
> There are two important things to know about kafka compression:  First
> uncompression happens automatically in the consumer
> (https://cwiki.apache.org/confluence/display/KAFKA/Compression) so you
> should see ascii returned on the consumer side. The best way to see if
> compression has happened that I know of is to actually look at a packet
> capture.
>
> Second, the producer does not compress individual messages, but actually
> batches several sequential messages to the same topic and partition
> together and compresses that compound message.
> (
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Pro
> tocol#AGuideToTheKafkaProtocol-Compression) Thus, a fixed string will
> still see far better compression ratios than a 'typical' real-life
> message.
>
> Making a real-life-like message isn't easy, and depends heavily on your
> domain. But a general approach would be to generate messages by randomly
> selected words from a dictionary.  And having a dictionary around thousand
> large words means there is a reasonable chance of the same words appearing
> multiple times in the same message.  Also words can be nonsense like
> "asdfasdfasdfasdf", or large words in the language of your choice.  The
> goal is for each message to be unique, but still have similar chunks that
> a compression algorithm can detect and compress.
>
> -Erik
>
>
> On 8/25/15, 6:47 AM, "Prabhjot Bharaj"  wrote:
>
> >Hi,
> >
> >I have bene trying to use kafka-producer-perf-test.sh to arrive at certain
> >benchmarks.
> >When I try to run it with --compression-codec values of 1, 2 and 3, I
> >notice increased throughput compared to NoCompressionCodec
> >
> >But, When I checked the Producerperformance.scala, I saw that the the
> >`producer.send` is getting data from the method: `generateProducerData`.
> >But, this data is just an empty array of Bytes.
> >
> >Now, as per my basic understanding of compression algorithms, I think a
> >byte sequence of zeros will eventually result in a very small message,
> >because of which I thought I might be observing better throughput.
> >
> >So, in line: 247 of ProducerPerformance.scala, I did this minor code
> >change:-
> >
> >
> >
> >*val message =
> >"qopwr11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-
> >1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csm
> >il/bitrate=11subcategory
> >71Title
> >10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
> >1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
> >n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
> >ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
> >71Title
> >10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
> >1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
> >n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
> >ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
> >71Title
> >10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
> >1-1-1-1-1-1-111-1-1-"message.getBytes().slice(0,msgSize)*
> >
> >
> >This makes sure that I have a big message, and I can slice that
> >message to the message size passed in the command line options
> >
> >
> >But, the problem is that when I try running the same with
> >--compression-codec vlues of 1, 2 or 3, I still am seeing ASCII data
> >(i.e. uncompressed one only)
> >
> >
> >I want to ask whether this is a bug. And, using
> >kafka-producer-perf-test.sh, how can I send my own compressed data ?
> >
> >
> >Thanks,
> >
> >Prabhjot
>
>


-- 
-
"There are only 10 types of people in the world: Those who understand
binary, and those who don't"


[jira] [Created] (KAFKA-2471) Replicas Order and Leader out of sync

2015-08-25 Thread Manish Sharma (JIRA)
Manish Sharma created KAFKA-2471:


 Summary: Replicas Order and Leader out of sync
 Key: KAFKA-2471
 URL: https://issues.apache.org/jira/browse/KAFKA-2471
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Manish Sharma


Our 2 Kafka brokers (1 & 5) were rebooted due to their hypervisor going down,
and I think we encountered an issue similar to the one discussed in the thread
"Problem with node after restart no partitions?". The resulting JIRA was
closed without conclusions or recovery steps.

Our brokers 5 and 1 were also running ZooKeeper for our cluster (along with
broker 2);
we are running Kafka version 0.8.2.1.

After doing controlled restarts over all brokers a few times, our cluster
seems OK now.

But there are some topics that have replicas out of sync with their leaders.

Partition 2 below has leader 5, so the replicas order should be 5,1:
{code}
Topic:2015-01-12PartitionCount:3ReplicationFactor:2 Configs:
Topic: 2015-01-12   Partition: 0Leader: 4   Replicas: 4,3   
Isr: 3,4
Topic: 2015-01-12   Partition: 1Leader: 0   Replicas: 0,4   
Isr: 0,4
Topic: 2015-01-12   Partition: 2Leader: 5   Replicas: 1,5   
Isr: 5
{code}

I tried reassigning partition 2's replicas to broker 5 (the leader) and broker 0.
Now the partition reassignment has been stuck for more than a day.

%) /usr/local/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
kafka-trgt05:2182 --reassignment-json-file 2015-01-12_2.json --verify
Status of partition reassignment:
Reassignment of partition [2015-01-12,2] is still in progress

And in ZooKeeper, reassign_partitions is empty:

[zk: kafka-trgt05:2182(CONNECTED) 2] ls /admin/reassign_partitions
[]

This seems like a bug being triggered that leaves the cluster in an
unhealthy state.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Helleren, Erik
Hi Prabhjot,
There are two important things to know about Kafka compression. First,
decompression happens automatically in the consumer
(https://cwiki.apache.org/confluence/display/KAFKA/Compression), so you
should see ASCII returned on the consumer side. The best way I know of to
verify that compression has happened is to look at a packet capture.

Second, the producer does not compress individual messages; it actually
batches several sequential messages to the same topic and partition
together and compresses that compound message
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Compression).
Thus, a fixed string will still see far better compression ratios than a
'typical' real-life message.

Making a real-life-like message isn't easy, and depends heavily on your
domain. But a general approach would be to generate messages from randomly
selected words from a dictionary. A dictionary of around a thousand
large words means there is a reasonable chance of the same words appearing
multiple times in the same message. The words can be nonsense like
"asdfasdfasdfasdf", or large words in the language of your choice. The
goal is for each message to be unique, but still have similar chunks that
a compression algorithm can detect and compress.
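The dictionary approach described above can be sketched briefly. Python's
zlib stands in for the producer's codecs, and the word list is invented for
illustration:

```python
import random
import zlib

# Hypothetical dictionary: ~1000 distinct "words" of moderate size.
words = [f"word{i:04d}suffix" for i in range(1000)]

def make_message(size: int) -> bytes:
    """Build a unique-ish message from randomly chosen dictionary words."""
    msg = ""
    while len(msg) < size:
        msg += random.choice(words) + " "
    return msg.encode()[:size]

fixed = (b"qopwr11591UPD113582260001AS1IL1" * 40)[:1024]
dictionary = make_message(1024)

for name, payload in [("fixed string", fixed), ("dictionary", dictionary)]:
    ratio = len(zlib.compress(payload)) / len(payload)
    print(f"{name:13s} compresses to {ratio:.2%}")
```

A fixed repeated string still compresses far better than the dictionary-built
messages, which is the point above: unique messages with some shared chunks
give more realistic compression ratios.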

-Erik
  

On 8/25/15, 6:47 AM, "Prabhjot Bharaj"  wrote:

>Hi,
>
>I have bene trying to use kafka-producer-perf-test.sh to arrive at certain
>benchmarks.
>When I try to run it with --compression-codec values of 1, 2 and 3, I
>notice increased throughput compared to NoCompressionCodec
>
>But, When I checked the Producerperformance.scala, I saw that the the
>`producer.send` is getting data from the method: `generateProducerData`.
>But, this data is just an empty array of Bytes.
>
>Now, as per my basic understanding of compression algorithms, I think a
>byte sequence of zeros will eventually result in a very small message,
>because of which I thought I might be observing better throughput.
>
>So, in line: 247 of ProducerPerformance.scala, I did this minor code
>change:-
>
>
>
>*val message = 
>"qopwr11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-
>1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csm
>il/bitrate=11subcategory
>71Title 
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
>n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
>ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
>71Title 
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
>n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
>ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
>71Title 
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-"message.getBytes().slice(0,msgSize)*
>
>
>This makes sure that I have a big message, and I can slice that
>message to the message size passed in the command line options
>
>
>But, the problem is that when I try running the same with
>--compression-codec vlues of 1, 2 or 3, I still am seeing ASCII data
>(i.e. uncompressed one only)
>
>
>I want to ask whether this is a bug. And, using
>kafka-producer-perf-test.sh, how can I send my own compressed data ?
>
>
>Thanks,
>
>Prabhjot



[jira] [Comment Edited] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711100#comment-14711100
 ] 

Navjot edited comment on KAFKA-1690 at 8/25/15 2:15 PM:


[~harsha_ch]Please find the java version below:
{{java version "1.7.0_79"}} 
{{Java(TM) SE Runtime Environment (build 1.7.0_79-b15)}}
{{Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)}}
I am working on a Windows 7 64-bit machine.

Also, I tried to use the package by following the steps mentioned at 
[https://github.com/harshach/kafka/blob/KAFKA-1690-V1/SSL-setup.md], 
but my server and producer failed with the following errors:

{{# Producer Error 
##}}
{{[2015-08-25 16:00:21,888] WARN Failed to send SSL Close}}
{{message(org.apache.kafka.common.network.SSLTransportLayer)}}
{{java.io.IOException: Invalid close state, will not send network data.}}
{{at 
org.apache.kafka.common.network.SSLTransportLayer.close(SSLTransportLayer.java:144)}}
{{at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:49)}}
{{at org.apache.kafka.common.network.Selector.close(Selector.java:458)}}
{{at org.apache.kafka.common.network.Selector.poll(Selector.java:327)}}
{{at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:127)}}
{{at java.lang.Thread.run(Thread.java:745)}}

{{ Server Error 
##}}

{{[2015-08-25 15:59:22,572] WARN Error when freeing index buffer 
(kafka.log.OffsetIndex)}}
{{java.lang.NullPointerException}}
{{at 
kafka.log.OffsetIndex.kafka$log$OffsetIndex$$forceUnmap(OffsetIndex.scala:301)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:283)}}
{{at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)}}
{{at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)}}
{{at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)}}
{{at kafka.log.Log.loadSegments(Log.scala:234)}}
{{at kafka.log.Log.(Log.scala:90)}}
{{at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun}}
{{$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)}}
{{at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)}}
{{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)}}
{{at java.util.concurrent.FutureTask.run(FutureTask.java:262)}}
{{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)}}
{{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)}}
{{at java.lang.Thread.run(Thread.java:745)}}

Also, while running the producer I can see the following WARNs:

kafka-console-producer.bat --broker-list localhost:9093 --topic test1 
--new-producer --producer-property "security.protocol=SSL" --producer-property 
"ssl.truststore.location=client.truststore.jks" --producer-property 
"ssl.truststore.password=striker"
[2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.location = 
C:\Users\g01013268\Desktop\Certs\client.truststore.jks was supplied but isn't a 
known config. (org.apache.kafka.clients.producer.ProducerConfig) 
[2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
striker was supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig)
[2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig)


was (Author: navjotbhardwaj):
[~harsha_ch]Please find the java version below:
{{java version "1.7.0_79"}} 
{{Java(TM) SE Runtime Environment (build 1.7.0_79-b15)}}
{{Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)}}
I am working on Windows 7 64bit machine.

Also, I tried to use the package by following steps mentioned at 
[https://github.com/harshach/kafka/blob/KAFKA-1690-V1/SSL-setup.md] 
but my Server and Producer failed with following errors:

{{# Producer Error 
##}}
{{[2015-08-25 16:00:21,888] WARN Failed to send SSL Close}}
{{message(org.apache.kafka.common.network.SSLTransportLayer)}}
{{java.io.IOException: Invalid close state, will not send network data.}}
{{at 
org.apache.kafka.common.network.SSLTransportLayer.close(SSLTransportLayer.java:144)}}
{{at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:49)}}
{{at org.apache.kafka.common.network.Selector.close(Selector.java:458)}}
{{at org.apache.kafka.common.network.Selector.poll(Selector.java:327)}}
{{at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:221)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)}}
{{at org.apache.kafka.clients.producer.internals.Sender.run(S

Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Helleren, Erik
Prabhjot,
When no compression is being used, it should have only a tiny impact on 
performance. But when compression is enabled, it will make it as though the 
message payload is small and nearly constant, regardless of how large the 
configured message size is.

I think the answer is that this is room for improvement in the perf test, 
especially where compression is concerned. If you do implement an improvement, 
a patch might be helpful to the community. But something to consider is that 
throughput alone isn't the only important performance measure. Round-trip 
latency is also important.
Thanks,
-Erik


From: Prabhjot Bharaj <prabhbha...@gmail.com>
Date: Tuesday, August 25, 2015 at 8:41 AM
To: Erik Helleren <erik.helle...@cmegroup.com>
Cc: "us...@kafka.apache.org" <us...@kafka.apache.org>, 
"dev@kafka.apache.org" <dev@kafka.apache.org>
Subject: Re: kafka producer-perf-test.sh compression-codec not working

Hi Erik,

I have focused my efforts on the producer side till now. Thanks for making me 
aware that the consumer will decompress automatically.

I'll also consider your point on creating real-life messages.

But I still have one confusion:

Why would the current ProducerPerformance.scala compress an array of bytes with 
all zeros?
That will give better throughput anyway, correct?

Regards,
Prabhjot

On Tue, Aug 25, 2015 at 7:05 PM, Helleren, Erik 
<erik.helle...@cmegroup.com> wrote:
Hi Prabhjot,
There are two important things to know about kafka compression:  First
uncompression happens automatically in the consumer
(https://cwiki.apache.org/confluence/display/KAFKA/Compression) so you
should see ascii returned on the consumer side. The best way to see if
compression has happened that I know of is to actually look at a packet
capture.

Second, the producer does not compress individual messages, but actually
batches several sequential messages to the same topic and partition
together and compresses that compound message.
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Pro
tocol#AGuideToTheKafkaProtocol-Compression) Thus, a fixed string will
still see far better compression ratios than a 'typical' real-life
message.

Making a real-life-like message isn't easy, and depends heavily on your
domain. But a general approach would be to generate messages from randomly
selected words in a dictionary.  Having a dictionary of around a thousand
large words means there is a reasonable chance of the same word appearing
multiple times in the same message.  The words can be nonsense like
"asdfasdfasdfasdf", or large words in the language of your choice.  The
goal is for each message to be unique, but still have similar chunks that
a compression algorithm can detect and compress.
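A minimal sketch of that generation approach (illustrative only; the `DICT` contents and sizes are assumptions, not anything taken from ProducerPerformance.scala, and a real run would use a dictionary of roughly a thousand words):

```java
import java.util.Random;

public class DictMessageGen {
    // Small illustrative dictionary; real benchmarks would use ~1000 entries.
    private static final String[] DICT = {
        "asdfasdfasdfasdf", "entertainment", "subcategory", "bitrate",
        "throughput", "qwertyqwerty", "zxcvbnmzxcvbnm", "compression"
    };

    /** Builds a message of exactly targetBytes by sampling dictionary words. */
    static String randomMessage(int targetBytes, Random rnd) {
        StringBuilder sb = new StringBuilder(targetBytes + 16);
        while (sb.length() < targetBytes) {
            sb.append(DICT[rnd.nextInt(DICT.length)]).append(' ');
        }
        // Slice to the configured message size, mirroring the perf test's option.
        return sb.substring(0, targetBytes);
    }

    public static void main(String[] args) {
        System.out.println(randomMessage(80, new Random()));
    }
}
```

Each message is unique, yet repeated words give the codec realistic chunks to find, which is the property Erik describes above.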

-Erik


On 8/25/15, 6:47 AM, "Prabhjot Bharaj" 
<prabhbha...@gmail.com> wrote:

>Hi,
>
>I have been trying to use kafka-producer-perf-test.sh to arrive at certain
>benchmarks.
>When I try to run it with --compression-codec values of 1, 2 and 3, I
>notice increased throughput compared to NoCompressionCodec
>
>But when I checked ProducerPerformance.scala, I saw that
>`producer.send` is getting data from the method `generateProducerData`.
>But this data is just an empty array of bytes.
>
>Now, as per my basic understanding of compression algorithms, I think a
>byte sequence of zeros will eventually result in a very small message,
>because of which I thought I might be observing better throughput.
>
>So, in line: 247 of ProducerPerformance.scala, I did this minor code
>change:-
>
>
>
>*val message =
>"qopwr11591UPD113582260001AS1IL1-1N/A1Entertainment1-1an-example.com1-1-1-
>1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,MED,HIGH,HD,.mp4.csm
>il/bitrate=11subcategory
>71Title
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
>n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
>ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
>71Title
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-r1VR-11591UPD113582260001AS1IL1-1N/A1Entertainment1-1a
>n-example.com1-1-1-1-1-1-1-1011413/011413_factor_points_FNC_,LOW,MED_LOW,M
>ED,HIGH,HD,.mp4.csmil/bitrate=11subcategory
>71Title
>10^D1-1-111-1-1-1-1-1-111-1-1-1-1-115101-1-1-1-1126112491-1-1-1-1-1-1-1-1-
>1-1-1-1-1-1-111-1-1-"message.getBytes().slice(0,msgSize)*
>
>
>This makes sure that I have a big message, and I can slice that
>message to the message size passed in the command line options
>
>
>But the problem is that when I try running the same with
>--compression-codec values of 1, 2 or 3, I still see ASCII data
>(i.e. only uncompressed data)
>
>
>I want to ask whether this is a bug. And, using
>kafka-producer-perf-test.sh, how can I send my own compressed data?
>
>
>Thanks,
>
>Prabhj

[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Rajasekar Elango (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711376#comment-14711376
 ] 

Rajasekar Elango commented on KAFKA-1690:
-

SSLv3 has been disabled since JDK 7u75. So if this works in 1.7.0_51 
but not in 1.7.0_79, it's likely that the Kafka SSL implementation is using SSLv3. 
[~NavjotBhardwaj] You can try commenting out the property 
jdk.tls.disabledAlgorithms=SSLv3 in 
$JAVA_HOME/jre/lib/security/java.security to confirm whether this is 
the case.
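One way to inspect what the running JVM has disabled, without editing any files, is to query the security property directly (plain JDK API; on 7u75 and later the value typically includes SSLv3):

```java
import java.security.Security;

public class DisabledAlgos {
    public static void main(String[] args) {
        // Reads the value loaded from $JAVA_HOME/jre/lib/security/java.security
        // at JVM startup, including any jdk.tls.disabledAlgorithms override.
        String disabled = Security.getProperty("jdk.tls.disabledAlgorithms");
        System.out.println("jdk.tls.disabledAlgorithms = " + disabled);
    }
}
```

Running this under each JDK (1.7.0_51 vs 1.7.0_79) would show directly whether SSLv3 moved into the disabled list between the two versions.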

Thanks,
Raja.

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Prabhjot Bharaj
Hi Erik,

Thanks for your inputs.

How can we measure round-trip latency using kafka-producer-perf-test.sh?

Or any other tool?

Regards,
Prabhjot

[jira] [Created] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-08-25 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-2472:
-

 Summary: Fix kafka ssl configs to not throw warnings
 Key: KAFKA-2472
 URL: https://issues.apache.org/jira/browse/KAFKA-2472
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3


This is a follow-up fix on kafka-1690.
[2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
striker was supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig)
[2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) Ensuring registered broker host:port is unique

2015-08-25 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Summary: Ensuring registered broker host:port is unique  (was: ensuring 
registered broker host:port is unique)

> Ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Edward Ribeiro
>  Labels: newbie
> Attachments: KAFKA-1811.patch, KAFKA_1811.patch
>
>
> Currently, we expect each registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2385) zookeeper-shell does not work

2015-08-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711441#comment-14711441
 ] 

João Reis commented on KAFKA-2385:
--

It seems to be a Zookeeper problem as stated here: 
https://issues.apache.org/jira/browse/ZOOKEEPER-1897

Sorry about this comment.

> zookeeper-shell does not work
> -
>
> Key: KAFKA-2385
> URL: https://issues.apache.org/jira/browse/KAFKA-2385
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Jiangjie Qin
>Assignee: Flavio Junqueira
> Fix For: 0.8.3
>
>
> The zookeeper shell shipped with Kafka does not work because jline jar is 
> missing.
> [jqin@jqin-ld1 bin]$ ./zookeeper-shell.sh localhost:2181
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is disabled
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-08-25 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711500#comment-14711500
 ] 

Gwen Shapira commented on KAFKA-2472:
-

I'm wondering if we still need the unknown config warnings at all - the entire 
system was designed to pass arguments to SSL, serializers, reporters, etc. 
Unknown configs are expected at this point.

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-08-25 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711506#comment-14711506
 ] 

Sriharsha Chintalapani commented on KAFKA-2472:
---

[~gwenshap]  I think we should remove the warning.

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1811 Ensuring registered broker host:por...

2015-08-25 Thread eribeiro
GitHub user eribeiro opened a pull request:

https://github.com/apache/kafka/pull/168

KAFKA-1811 Ensuring registered broker host:port is unique

Adds a ZKLock recipe implementation to guarantee that the host:port pair is 
unique among the brokers registered on ZooKeeper.
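The invariant the PR enforces can be sketched abstractly. In the sketch below a ConcurrentHashMap stands in for the ZooKeeper ephemeral-node/lock check, so this is an analogy of the registration rule, not the PR's actual ZKLock code:

```java
import java.util.concurrent.ConcurrentHashMap;

public class BrokerRegistry {
    // endpoint ("host:port") -> broker id that claimed it
    private final ConcurrentHashMap<String, Integer> endpoints = new ConcurrentHashMap<>();

    /** Atomically claims host:port; returns false if it is already held. */
    boolean register(String host, int port, int brokerId) {
        return endpoints.putIfAbsent(host + ":" + port, brokerId) == null;
    }
}
```

The key property is that the claim is atomic: two brokers racing to register the same host:port cannot both succeed, which is what the ZooKeeper lock recipe provides in the real implementation.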

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eribeiro/kafka KAFKA-1811

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit a9941892833e8e25c36ee0aac8704143863db452
Author: Edward Ribeiro 
Date:   2015-08-25T15:29:12Z

KAFKA-1811 Ensuring registered broker host:port is unique




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1811) Ensuring registered broker host:port is unique

2015-08-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711498#comment-14711498
 ] 

ASF GitHub Bot commented on KAFKA-1811:
---

GitHub user eribeiro opened a pull request:

https://github.com/apache/kafka/pull/168

KAFKA-1811 Ensuring registered broker host:port is unique

Adds a ZKLock recipe implementation to guarantee that the host:port pair is 
unique among the brokers registered on ZooKeeper.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eribeiro/kafka KAFKA-1811

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit a9941892833e8e25c36ee0aac8704143863db452
Author: Edward Ribeiro 
Date:   2015-08-25T15:29:12Z

KAFKA-1811 Ensuring registered broker host:port is unique




> Ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Edward Ribeiro
>  Labels: newbie
> Attachments: KAFKA-1811.patch, KAFKA_1811.patch
>
>
> Currently, we expect each registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2122) Remove controller.message.queue.size Config

2015-08-25 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2122:

Fix Version/s: 0.8.3

> Remove controller.message.queue.size Config
> ---
>
> Key: KAFKA-2122
> URL: https://issues.apache.org/jira/browse/KAFKA-2122
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-2122.patch, KAFKA-2122_2015-04-19_12:44:41.patch
>
>
> A deadlock can happen during a delete topic if controller.message.queue.size 
> is overridden to a custom value. Details are here: 
> https://issues.apache.org/jira/browse/KAFKA-2046?focusedCommentId=14380776&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380776
> Given that KAFKA-1993 is enabling delete topic by default, it would be unsafe 
> to simultaneously allow a configurable controller.message.queue.size



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711565#comment-14711565
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

[~NavjotBhardwaj] it doesn't look like an SSL-related issue. Did you try 
deploying on Windows without SSL and running producers/consumers?

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-25 Thread Guozhang Wang
I thought Gwen's suggestion was to use a separate folder in the same repo
for docs instead of a separate branch; Gwen can correct me if I'm wrong?

Guozhang

On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy 
wrote:

> Hi,
>
>Infra team created git repo for kafka site docs.
>
>Gwen/Guozhang,
>Need your help to create a branch "asf-site" and copy the existing
> svn contents to that branch.
>
> git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
>
>
> https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630
>
> Kumar
>
> On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma  wrote:
>
> > My preference would be to do `2` because it reduces the number of tools
> we
> > need to know. If we want to clone the repo for the generated site, we can
> > use the same tools as we do for the code repo and we can watch for
> changes
> > on GitHub, etc.
> >
> > Ismael
> >
> > On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy 
> > wrote:
> >
> > > Hi All,
> > >
> > > Can we finalize the  approach? So that we can proceed further.
> > >
> > > 1. Gwen's suggestion + existing svn repo
> > > 2. Gwen's suggestion + new git repo for docs
> > >
> > > kumar
> > >
> > > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy <
> ku...@nmsworks.co.in>
> > > wrote:
> > >
> > > >   Also can we migrate svn repo to git repo? This will help us to fix
> > > > occasional  doc changes/bug fixes through github PR.
> > > >
> > > > On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang 
> > > wrote:
> > > >
> > > >> Gwen: I remembered it wrong. We would not need another round of
> > voting.
> > > >>
> > > >> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira 
> > > wrote:
> > > >>
> > > >> > Looking back at this thread, the +1 mention "same repo", so I'm
> not
> > > >> sure a
> > > >> > new vote is required.
> > > >> >
> > > >> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang <
> wangg...@gmail.com>
> > > >> wrote:
> > > >> >
> > > >> > > So I think we have two different approaches here. The original
> > > >> proposal
> > > >> > > from Aseem is to move website from SVN to a separate Git repo,
> and
> > > >> hence
> > > >> > > have separate commits on code / doc changes. For that we have
> > > >> accumulated
> > > >> > > enough binging +1s to move on; Gwen's proposal is to move
> website
> > > into
> > > >> > the
> > > >> > > same repo under a different folder. If people feel they prefer
> > this
> > > >> over
> > > >> > > the previous approach I would like to call for another round of
> > > >> voting.
> > > >> > >
> > > >> > > Guozhang
> > > >> > >
> > > >> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish <
> paliwalash...@gmail.com
> > >
> > > >> > wrote:
> > > >> > >
> > > >> > > > +1 to what Gwen has suggested. This is what we follow in
> Flume.
> > > >> > > >
> > > >> > > > All the latest doc changes are in git, once ready you move
> > changes
> > > >> to
> > > >> > > > svn to update website.
> > > >> > > > The only catch is, when you need to update specific changes to
> > > >> website
> > > >> > > > outside release cycle, need to be a bit careful :)
> > > >> > > >
> > > >> > > > On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira <
> > g...@confluent.io>
> > > >> > wrote:
> > > >> > > > > Yeah, so the way this works in few other projects I worked
> on
> > > is:
> > > >> > > > >
> > > >> > > > > * The code repo has a /docs directory with the latest
> revision
> > > of
> > > >> the
> > > >> > > > docs
> > > >> > > > > (not multiple versions, just one that matches the latest
> state
> > > of
> > > >> > code)
> > > >> > > > > * When you submit a patch that requires doc modification,
> you
> > > >> modify
> > > >> > > all
> > > >> > > > > relevant files in same patch and they get reviewed and
> > committed
> > > >> > > together
> > > >> > > > > (ideally)
> > > >> > > > > * When we release, we copy the docs matching the release and
> > > >> commit
> > > >> > to
> > > >> > > > SVN
> > > >> > > > > website. We also do this occasionally to fix bugs in earlier
> > > docs.
> > > >> > > > > * Release artifacts include a copy of the docs
> > > >> > > > >
> > > >> > > > > Nice to have:
> > > >> > > > > * Docs are in Asciidoc and build generates the HTML.
> Asciidoc
> > is
> > > >> > easier
> > > >> > > > to
> > > >> > > > > edit and review.
> > > >> > > > >
> > > >> > > > > I suggest a similar process for Kafka.
> > > >> > > > >
> > > >> > > > > On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma <
> > ism...@juma.me.uk
> > > >
> > > >> > > wrote:
> > > >> > > > >
> > > >> > > > >> I should clarify: it's not possible unless we add an
> > additional
> > > >> step
> > > >> > > > that
> > > >> > > > >> moves the docs from the code repo to the website repo.
> > > >> > > > >>
> > > >> > > > >> Ismael
> > > >> > > > >>
> > > >> > > > >> On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma <
> > > ism...@juma.me.uk>
> > > >> > > wrote:
> > > >> > > > >>
> > > >> > > > >> > Hi a

[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Sourabh Chandak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711624#comment-14711624
 ] 

Sourabh Chandak commented on KAFKA-1690:


[~sriharsha] Is the simple consumer also updated to support SSL in this?
I don't see any change to the SimpleConsumer in your commit

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711679#comment-14711679
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

[~sourabh0612] SSL, like any other security change, will be limited to the new 
producer and consumer.

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-25 Thread Aditya Auradkar


> On Aug. 22, 2015, 12:45 a.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java, 
> > line 107
> > 
> >
> > This will probably need a versionId as well (as is done in the Scala 
> > response) - i.e., when we move the broker over to use these protocol 
> > schemas.
> 
> Aditya Auradkar wrote:
> Makes sense. Do you want me to tackle this in this patch or should it be 
> tackled in the patch that migrates the broker to use these schemas?
> 
> Joel Koshy wrote:
> I think it would be safer to do it in this patch itself.

Do you think we actually need to check the version id? The Struct contains the 
schema, which should be sufficient to determine the version number, right?


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review96100
---


On Aug. 24, 2015, 5:33 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33378/
> ---
> 
> (Updated Aug. 24, 2015, 5:33 p.m.)
> 
> 
> Review request for kafka, Joel Koshy and Jun Rao.
> 
> 
> Bugs: KAFKA-2136
> https://issues.apache.org/jira/browse/KAFKA-2136
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Changes are
> - Addressing Joel's comments
> - protocol changes to the fetch request and response to return the 
> throttle_time_ms to clients
> - New producer/consumer metrics to expose the avg and max delay time for a 
> client
> - Test cases.
> - Addressed Joel and Juns comments
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
>  9dc669728e6b052f5c6686fcf1b5696a50538ab4 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 0baf16e55046a2f49f6431e01d52c323c95eddf0 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
> df073a0e76cc5cc731861b9604d0e19a928970e0 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
> eb8951fba48c335095cc43fc3672de1c733e07ff 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
> 715504b32950666e9aa5a260fa99d5f897b2007a 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
> febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
>   
> clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
>  a7c83cac47d41138d47d7590a3787432d675c1b0 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
>  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  8b2aca85fa738180e5420985fddc39a4bf9681ea 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> 5b38f8554898e54800abd65a7415dd0ac41fd958 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> b9efec2efbd3ea0ee12b911f453c47e66ad34894 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> c866180d3680da03e48d374415f10220f6ca68c4 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
>   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
> 7ebc0405d1f309bed9943e7116051d1d8276f200 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
>   core/src/main/scala/kafka/server/ClientQuotaManager.scala 
> 9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 67f0cad802f901f255825aa2158545d7f5e7cc3d 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> fae22d2af8daccd528ac24614290f46ae8f6c797 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> d829e180c3943a90861a12ec184f9b4e4bbafe7d 
>   core/src/main/scala/kafka/server/ThrottledResponse.scala 
> 1f80d5480ccf7c411a02dd90296a7046ede0fae2 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> b4c2a228c3c9872e5817ac58d3022e4903e317b7 
>   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
> caf98e8f2e09d39ab8234b9f4b9478686865e908 
>   core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
> c4b5803917e700965677d53624f1460c1a52bf52 
> 
> Diff: https://reviews.apache.org/r/33378/diff/
> 
> 
> Testing
> ---
> 
> New tests added
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Sourabh Chandak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711698#comment-14711698
 ] 

Sourabh Chandak commented on KAFKA-1690:


[~sriharsha] Sorry if this is a noob question, but can you point me to the new 
API? Also, will the new API support all the functionality of SimpleConsumer?

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711715#comment-14711715
 ] 

Navjot commented on KAFKA-1690:
---

Yes [~harsha_ch] I deployed it on Windows without SSL and it was running 
absolutely fine.

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navjot updated KAFKA-1690:
--
Comment: was deleted

(was: Yes [~harsha_ch] I deployed it on windows without SSL and it was running 
absolutely fine.)

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Helleren, Erik
Prabhjot,
You can’t do it with producer perf test, but it’s relatively simple to 
implement: have the message body include a timestamp of when your producer 
produces, and have the consumer look at the difference between the 
timestamp in the body and the current timestamp.

Or, if you were looking for ack latency, you can use the producer’s async 
callback to measure latency.
-Erik
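The timestamp-in-body idea above can be sketched in plain Java. The names below (RoundTripLatency, buildPayload, latencyMillis) are illustrative, not part of any Kafka tool, and the actual producer.send/consumer.poll calls they would wrap are elided:

```java
import java.nio.charset.StandardCharsets;

// Sketch: measure round-trip latency by embedding the send time in the payload.
public class RoundTripLatency {
    // Producer side: prefix the payload with the send time in milliseconds.
    public static byte[] buildPayload(long sendTimeMs, String body) {
        return (sendTimeMs + "|" + body).getBytes(StandardCharsets.UTF_8);
    }

    // Consumer side: parse the embedded timestamp and diff against "now".
    public static long latencyMillis(byte[] payload, long nowMs) {
        String s = new String(payload, StandardCharsets.UTF_8);
        long sentMs = Long.parseLong(s.substring(0, s.indexOf('|')));
        return nowMs - sentMs;
    }

    public static void main(String[] args) {
        // In a real test the producer would send this payload to a topic and
        // the consumer would call latencyMillis(record.value(), now).
        byte[] payload = buildPayload(System.currentTimeMillis(), "hello");
        System.out.println(latencyMillis(payload, System.currentTimeMillis()) + " ms");
    }
}
```

For ack latency, the diff would instead be taken inside the producer's completion callback, using only producer-side clocks.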

From: Prabhjot Bharaj <prabhbha...@gmail.com>
Date: Tuesday, August 25, 2015 at 9:22 AM
To: Erik Helleren <erik.helle...@cmegroup.com>
Cc: "us...@kafka.apache.org" <us...@kafka.apache.org>, "dev@kafka.apache.org" <dev@kafka.apache.org>
Subject: Re: kafka producer-perf-test.sh compression-codec not working

Hi Erik,

Thanks for your inputs.

How can we measure round trip latency using kafka-producer-perf-test.sh ?

or any other tool ?

Regards,
Prabhjot

On Tue, Aug 25, 2015 at 7:41 PM, Helleren, Erik <erik.helle...@cmegroup.com> wrote:
Prabhjot,
When no compression is being used, it should have only a tiny impact on 
performance.  But when it is enabled, it will make it as though the message 
payload is small and nearly constant, regardless of how large the configured 
message size is.

I think the answer is that this is room for improvement in the perf test, 
especially where compression is concerned.  If you do implement an improvement, 
a patch might be helpful to the community.  But something to consider is that 
throughput alone isn’t the only important performance measure.  Round trip 
latency is also important.
Thanks,
-Erik


From: Prabhjot Bharaj <prabhbha...@gmail.com>
Date: Tuesday, August 25, 2015 at 8:41 AM
To: Erik Helleren <erik.helle...@cmegroup.com>
Cc: "us...@kafka.apache.org" <us...@kafka.apache.org>, "dev@kafka.apache.org" <dev@kafka.apache.org>
Subject: Re: kafka producer-perf-test.sh compression-codec not working

Hi Erik,

I have focused my efforts on the produce side so far. Thanks for making me 
aware that the consumer will decompress automatically.

I'll also consider your point on creating real-life messages.

But I still have one point of confusion:

Why would the current ProducerPerformance.scala compress an array of bytes that 
are all zeros?
That will give better throughput anyway, correct?

Regards,
Prabhjot
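The concern above, that an all-zero payload compresses unrealistically well, can be checked with a small sketch using only the JDK's Deflater (the DEFLATE algorithm behind the gzip codec). The class and method names are illustrative, not part of any Kafka tool:

```java
import java.util.Random;
import java.util.zip.Deflater;

// Sketch: compare how well an all-zero payload compresses vs random noise.
public class CompressionRatioDemo {
    public static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        // Incompressible input can grow slightly, so leave headroom.
        byte[] out = new byte[input.length + 1024];
        int n = deflater.deflate(out);
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        byte[] zeros = new byte[64 * 1024];   // like the perf tool's payload
        byte[] noise = new byte[64 * 1024];   // worst case for any compressor
        new Random(42).nextBytes(noise);
        // Zeros collapse to a few hundred bytes; random noise barely shrinks.
        // A zero payload therefore flatters compressed-throughput numbers.
        System.out.println("zeros -> " + deflatedSize(zeros) + " bytes");
        System.out.println("noise -> " + deflatedSize(noise) + " bytes");
    }
}
```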

On Tue, Aug 25, 2015 at 7:05 PM, Helleren, Erik <erik.helle...@cmegroup.com> wrote:
Hi Prabhjot,
There are two important things to know about kafka compression:  First
uncompression happens automatically in the consumer
(https://cwiki.apache.org/confluence/display/KAFKA/Compression) so you
should see ascii returned on the consumer side. The best way to see if
compression has happened that I know of is to actually look at a packet
capture.

Second, the producer does not compress individual messages, but actually
batches several sequential messages to the same topic and partition
together and compresses that compound message.
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Compression)
Thus, a fixed string will still see far better compression ratios than a 
'typical' real-life message.

Making a real-life-like message isn't easy, and depends heavily on your 
domain. But a general approach would be to generate messages from randomly 
selected words in a dictionary.  Having a dictionary of around a thousand 
long words means there is a reasonable chance of the same word appearing 
multiple times in the same message.  Words can also be nonsense like 
"asdfasdfasdfasdf", or long words in the language of your choice.  The 
goal is for each message to be unique, but still have similar chunks that 
a compression algorithm can detect and compress.

-Erik
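The dictionary approach above can be sketched as follows. MessageGenerator and its word list are illustrative, not part of any Kafka tool:

```java
import java.util.Random;

// Sketch: generate unique-ish messages from a shared vocabulary, so a
// compressor still finds repeated chunks, unlike an all-zero payload.
public class MessageGenerator {
    private static final String[] DICTIONARY = {
        "asdfasdfasdfasdf", "replication", "throughput", "partition",
        "compression", "benchmark", "electroencephalograph", "latency"
    };
    private final Random random;

    public MessageGenerator(long seed) {
        this.random = new Random(seed);
    }

    // Each message is very likely unique, yet built from shared words.
    public String nextMessage(int wordCount) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < wordCount; i++) {
            if (i > 0) sb.append(' ');
            sb.append(DICTIONARY[random.nextInt(DICTIONARY.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        MessageGenerator gen = new MessageGenerator(System.nanoTime());
        System.out.println(gen.nextMessage(12));
    }
}
```

A real perf run would use a dictionary of ~1000 long words, as Erik suggests, and feed each generated message to the producer in place of the zero array.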


On 8/25/15, 6:47 AM, "Prabhjot Bharaj" <prabhbha...@gmail.com> wrote:

>Hi,
>
>I have been trying to use kafka-producer-perf-test.sh to arrive at certain
>benchmarks.
>When I try to run it with --compression-codec values of 1, 2 and 3, I
>notice increased throughput compared to NoCompressionCodec.
>
>But, when I checked the ProducerPerformance.scala, I saw that the
>`producer.send` is getting data from the method `generateProducerData`.
>But this data is just an empty array of bytes.
>
>Now, as per my basic understanding of compression algorithms, I think a
>byte sequence of zeros will eventua

Re: kafka producer-perf-test.sh compression-codec not working

2015-08-25 Thread Prabhjot Bharaj
Hi Erik,
Thanks for these inputs.
I'll implement it.

Regards,
Prabhjot
On Aug 25, 2015 11:53 PM, "Helleren, Erik" <erik.helle...@cmegroup.com> wrote:

> Prabhjot,
> You can’t do it with producer perf test, but it’s relatively simple to
> implement: have the message body include a timestamp of when your producer
> produces, and have the consumer look at the difference between the
> timestamp in the body and the current timestamp.
>
> Or, if you were looking for ack latency, you can use the producer’s async
> callback to measure latency.
> -Erik
>
> From: Prabhjot Bharaj <prabhbha...@gmail.com>
> Date: Tuesday, August 25, 2015 at 9:22 AM
> To: Erik Helleren <erik.helle...@cmegroup.com>
> Cc: "us...@kafka.apache.org" <us...@kafka.apache.org>, "dev@kafka.apache.org" <dev@kafka.apache.org>
> Subject: Re: kafka producer-perf-test.sh compression-codec not working
>
> Hi Erik,
>
> Thanks for your inputs.
>
> How can we measure round trip latency using kafka-producer-perf-test.sh ?
>
> or any other tool ?
>
> Regards,
> Prabhjot
>
> On Tue, Aug 25, 2015 at 7:41 PM, Helleren, Erik <erik.helle...@cmegroup.com> wrote:
> Prabhjot,
> When no compression is being used, it should have only a tiny impact on
> performance.  But when it is enabled, it will make it as though the message
> payload is small and nearly constant, regardless of how large the
> configured message size is.
>
> I think the answer is that this is room for improvement in the perf
> test, especially where compression is concerned.  If you do implement an
> improvement, a patch might be helpful to the community.  But something to
> consider is that throughput alone isn’t the only important performance
> measure.  Round trip latency is also important.
> Thanks,
> -Erik
>
>
> From: Prabhjot Bharaj <prabhbha...@gmail.com>
> Date: Tuesday, August 25, 2015 at 8:41 AM
> To: Erik Helleren <erik.helle...@cmegroup.com>
> Cc: "us...@kafka.apache.org" <us...@kafka.apache.org>, "dev@kafka.apache.org" <dev@kafka.apache.org>
> Subject: Re: kafka producer-perf-test.sh compression-codec not working
>
> Hi Erik,
>
> I have focused my efforts on the produce side so far. Thanks for making me
> aware that the consumer will decompress automatically.
>
> I'll also consider your point on creating real-life messages.
>
> But I still have one point of confusion:
>
> Why would the current ProducerPerformance.scala compress an array of bytes
> that are all zeros?
> That will give better throughput anyway, correct?
>
> Regards,
> Prabhjot
>
> On Tue, Aug 25, 2015 at 7:05 PM, Helleren, Erik <erik.helle...@cmegroup.com> wrote:
> Hi Prabhjot,
> There are two important things to know about kafka compression:  First
> uncompression happens automatically in the consumer
> (https://cwiki.apache.org/confluence/display/KAFKA/Compression) so you
> should see ascii returned on the consumer side. The best way to see if
> compression has happened that I know of is to actually look at a packet
> capture.
>
> Second, the producer does not compress individual messages, but actually
> batches several sequential messages to the same topic and partition
> together and compresses that compound message.
> (https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Compression)
> Thus, a fixed string will still see far better compression ratios than a
> 'typical' real-life message.
>
> Making a real-life-like message isn't easy, and depends heavily on your
> domain. But a general approach would be to generate messages from randomly
> selected words in a dictionary.  Having a dictionary of around a thousand
> long words means there is a reasonable chance of the same word appearing
> multiple times in the same message.  Words can also be nonsense like
> "asdfasdfasdfasdf", or long words in the language of your choice.  The
> goal is for each message to be unique, but still have similar chunks that
> a compression algorithm can detect and compress.
>
> -Erik
>
>
> On 8/25/15, 6:47 AM, "Prabhjot Bharaj" <prabhbha...@gmail.com> wrote:
>
> >Hi,
> >
> >I have been trying to use kafka-producer-perf-test.sh to arrive at certain
> >benchmarks.
> >When I try to run it with --compression-codec values of 1, 2 and 3, I
> >notice increased throughput

[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711733#comment-14711733
 ] 

Ismael Juma commented on KAFKA-1690:


[~sourabh0612], it would be better to take this to the users mailing list 
(http://kafka.apache.org/contact.html). The javadoc for the new consumer can be 
found at 
http://kafka.apache.org/083/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-25 Thread Navjot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711741#comment-14711741
 ] 

Navjot commented on KAFKA-1690:
---

Yes [~harsha_ch] I deployed it on Windows and producers/consumers without SSL 
were running absolutely fine.

> Add SSL support to Kafka Broker, Producer and Consumer
> --
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
> KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
> KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
> KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
> KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
> KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
> KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
> KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-25 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711745#comment-14711745
 ] 

Aditya A Auradkar commented on KAFKA-2136:
--

Updated reviewboard https://reviews.apache.org/r/33378/diff/ against branch origin/trunk

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-25 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated Aug. 25, 2015, 6:30 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Addressing Joel's comments


Merging


Changing variable name


Addressing Joel's comments


Addressing Joel's comments


Addressing comments


Addressing joels comments


Addressing joels comments


Addressed Joels comments


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
9dc669728e6b052f5c6686fcf1b5696a50538ab4 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
0baf16e55046a2f49f6431e01d52c323c95eddf0 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
df073a0e76cc5cc731861b9604d0e19a928970e0 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
715504b32950666e9aa5a260fa99d5f897b2007a 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 a7c83cac47d41138d47d7590a3787432d675c1b0 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
8b2aca85fa738180e5420985fddc39a4bf9681ea 
  core/src/main/scala/kafka/api/FetchRequest.scala 
5b38f8554898e54800abd65a7415dd0ac41fd958 
  core/src/main/scala/kafka/api/FetchResponse.scala 
b9efec2efbd3ea0ee12b911f453c47e66ad34894 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
c866180d3680da03e48d374415f10220f6ca68c4 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
7ebc0405d1f309bed9943e7116051d1d8276f200 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 
  core/src/main/scala/kafka/server/ClientQuotaManager.scala 
9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
fae22d2af8daccd528ac24614290f46ae8f6c797 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
d829e180c3943a90861a12ec184f9b4e4bbafe7d 
  core/src/main/scala/kafka/server/ThrottledResponse.scala 
1f80d5480ccf7c411a02dd90296a7046ede0fae2 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
b4c2a228c3c9872e5817ac58d3022e4903e317b7 
  core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
caf98e8f2e09d39ab8234b9f4b9478686865e908 
  core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
c4b5803917e700965677d53624f1460c1a52bf52 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



Re: Review Request 33378: Patch for KAFKA-2136

2015-08-25 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated Aug. 25, 2015, 6:30 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Changes are
- Addressing Joel's comments
- protocol changes to the fetch request and response to return the 
throttle_time_ms to clients
- New producer/consumer metrics to expose the avg and max delay time for a 
client
- Test cases.
- Addressed Joel's and Jun's comments
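In KIP-13 terms, the client-side shape of this change can be sketched as follows (a hedged illustration with hypothetical class and field names, not Kafka's actual wire format or metrics API): the response carries a throttle_time_ms value, and the client folds it into avg/max delay metrics.

```java
import java.util.ArrayList;
import java.util.List;

public class ThrottleMetricsSketch {
    // Stand-in for a Fetch/Produce response carrying the new throttle_time_ms field
    static class Response {
        final int throttleTimeMs;
        Response(int throttleTimeMs) { this.throttleTimeMs = throttleTimeMs; }
    }

    // Minimal avg/max tracker standing in for the client-side metrics sensor
    static final List<Integer> delays = new ArrayList<>();

    static void onResponse(Response r) {
        delays.add(r.throttleTimeMs); // record the broker-reported throttle delay
    }

    public static void main(String[] args) {
        onResponse(new Response(0));
        onResponse(new Response(30));
        onResponse(new Response(12));
        double avg = delays.stream().mapToInt(Integer::intValue).average().orElse(0);
        int max = delays.stream().mapToInt(Integer::intValue).max().orElse(0);
        System.out.println("throttle-time avg=" + avg + " max=" + max);
    }
}
```

The actual patch wires the field through the Fetch and Produce responses and exposes the avg and max delay through the new producer and consumer metrics.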


Diffs
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
9dc669728e6b052f5c6686fcf1b5696a50538ab4 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
0baf16e55046a2f49f6431e01d52c323c95eddf0 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
df073a0e76cc5cc731861b9604d0e19a928970e0 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
715504b32950666e9aa5a260fa99d5f897b2007a 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 a7c83cac47d41138d47d7590a3787432d675c1b0 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
8b2aca85fa738180e5420985fddc39a4bf9681ea 
  core/src/main/scala/kafka/api/FetchRequest.scala 
5b38f8554898e54800abd65a7415dd0ac41fd958 
  core/src/main/scala/kafka/api/FetchResponse.scala 
b9efec2efbd3ea0ee12b911f453c47e66ad34894 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
c866180d3680da03e48d374415f10220f6ca68c4 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
7ebc0405d1f309bed9943e7116051d1d8276f200 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 
  core/src/main/scala/kafka/server/ClientQuotaManager.scala 
9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
fae22d2af8daccd528ac24614290f46ae8f6c797 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
d829e180c3943a90861a12ec184f9b4e4bbafe7d 
  core/src/main/scala/kafka/server/ThrottledResponse.scala 
1f80d5480ccf7c411a02dd90296a7046ede0fae2 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
b4c2a228c3c9872e5817ac58d3022e4903e317b7 
  core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
caf98e8f2e09d39ab8234b9f4b9478686865e908 
  core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
c4b5803917e700965677d53624f1460c1a52bf52 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-25 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2136:
-
Attachment: KAFKA-2136_2015-08-25_11:29:52.patch

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.





Can I get permission to create proposal

2015-08-25 Thread Abhishek Nigam
For KAFKA-1778 (pinning the controller to a broker), I want to create a
proposal detailing the design.

My user id is: anigam

-Abhishek


[jira] [Updated] (KAFKA-2367) Add Copycat runtime data API

2015-08-25 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2367:

Status: Patch Available  (was: Open)

> Add Copycat runtime data API
> 
>
> Key: KAFKA-2367
> URL: https://issues.apache.org/jira/browse/KAFKA-2367
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
>
> Design the API used for runtime data in Copycat. This API is used to 
> construct schemas and records that Copycat processes. This needs to be a 
> fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
> support complex, varied data types that may be input from/output to many data 
> systems.
> This issue should also address the serialization interfaces used 
> within Copycat, which translate the runtime data into serialized byte[] form. 
> It is important that these be considered together because the data format can 
> be used in multiple ways (records, partition IDs, partition offsets), so it 
> and the corresponding serializers must be sufficient for all these use cases.





[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-25 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711838#comment-14711838
 ] 

Guozhang Wang commented on KAFKA-1387:
--

Thanks [~fpj] for the patch. Here are some high-level comments:

1. Will mixing direct ZK usage with ZkClient violate ordering? AFAIK ZkClient 
orders all events fired by watchers and hands them to the user callbacks 
one-by-one; if we use ZK's Watcher directly, could its callback be called 
out-of-order relative to other events?

2. If we get a Code.OK in CreateCallback, do we still need to trigger 
ZooKeeper.exists with ExistsCallback again?

3. For the consumer / server registration case in particular, we try to handle 
parent path creation in ZkUtils.makeSurePersistentPathExists, so I feel we 
should expose the problem that the parent path does not exist yet instead of 
trying to hide it in createRecursive.

> Kafka getting stuck creating ephemeral node it has already created when two 
> zookeeper sessions are established in a very short period of time
> -
>
> Key: KAFKA-1387
> URL: https://issues.apache.org/jira/browse/KAFKA-1387
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>Assignee: Flavio Junqueira
>Priority: Blocker
>  Labels: newbie, patch, zkclient-problems
> Attachments: KAFKA-1387.patch, kafka-1387.patch
>
>
> Kafka broker re-registers itself in zookeeper every time handleNewSession() 
> callback is invoked.
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
>  
> Now imagine the following sequence of events.
> 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
> the zkClient, but not invoked yet.
> 2) Zookeeper session reestablishes again, queueing callback second time.
> 3) First callback is invoked, creating /broker/[id] ephemeral path.
> 4) Second callback is invoked and it tries to create the /broker/[id] path 
> using the createEphemeralPathExpectConflictHandleZKBug() function. But the 
> path already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
> stuck in an infinite loop.
> Seems like the controller election code has the same issue.
> I am able to reproduce this issue on the 0.8.1 branch from GitHub using the 
> following configs.
> # zookeeper
> tickTime=10
> dataDir=/tmp/zk/
> clientPort=2101
> maxClientCnxns=0
> # kafka
> broker.id=1
> log.dir=/tmp/kafka
> zookeeper.connect=localhost:2101
> zookeeper.connection.timeout.ms=100
> zookeeper.sessiontimeout.ms=100
> Just start kafka and zookeeper and then pause zookeeper several times using 
> Ctrl-Z.
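The loop in step 4 can be sketched as follows (a minimal, hypothetical simulation; a HashMap stands in for ZooKeeper, and none of these names are Kafka's actual code): the retry loop only terminates if it recognizes that the existing ephemeral node was created by its own session.

```java
import java.util.HashMap;
import java.util.Map;

public class EphemeralNodeDemo {
    static final Map<String, String> zk = new HashMap<>(); // stand-in for ZooKeeper

    // Sketch of an ephemeral-node registration that tolerates a node
    // already created by the same session (the case the bug misses).
    static void createEphemeral(String path, String sessionData) {
        while (true) {
            if (!zk.containsKey(path)) { zk.put(path, sessionData); return; }
            // Without this check, a second queued handleNewSession() callback
            // would spin forever on a node its own session already created.
            if (sessionData.equals(zk.get(path))) return;
            // Otherwise: wait for the stale owner's session to expire (omitted).
        }
    }

    public static void main(String[] args) {
        createEphemeral("/brokers/ids/1", "session-A"); // first queued callback
        createEphemeral("/brokers/ids/1", "session-A"); // second callback: node exists
        System.out.println("registered: " + zk.get("/brokers/ids/1"));
    }
}
```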





[jira] [Created] (KAFKA-2473) Not able to release version with patch jira-1591

2015-08-25 Thread Amar (JIRA)
Amar created KAFKA-2473:
---

 Summary: Not able to release version with patch jira-1591
 Key: KAFKA-2473
 URL: https://issues.apache.org/jira/browse/KAFKA-2473
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Amar


Hi,

We are seeing a lot of unnecessary logging in server.log. According to JIRA, it 
was fixed in KAFKA-1591, but I am not able to find the release that contains 
the fix. Can you please advise?

https://issues.apache.org/jira/browse/KAFKA-1591 

Thanks
Amar





[jira] [Updated] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Amar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amar updated KAFKA-2473:

Summary: Not able to find release version with patch jira-1591  (was: Not 
able to release version with patch jira-1591)

> Not able to find release version with patch jira-1591
> -
>
> Key: KAFKA-2473
> URL: https://issues.apache.org/jira/browse/KAFKA-2473
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Amar
>
> Hi,
> We are seeing a lot of unnecessary logging in server.log. According to JIRA, 
> it was fixed in KAFKA-1591, but I am not able to find the release that 
> contains the fix. Can you please advise?
> https://issues.apache.org/jira/browse/KAFKA-1591 
> Thanks
> Amar





[jira] [Updated] (KAFKA-2461) request logger no longer logs extra information in debug mode

2015-08-25 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2461:

Fix Version/s: 0.8.3

> request logger no longer logs extra information in debug mode
> -
>
> Key: KAFKA-2461
> URL: https://issues.apache.org/jira/browse/KAFKA-2461
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
>
> Currently request logging calls are identical for trace and debug:
> {code}
> if(requestLogger.isTraceEnabled)
>   requestLogger.trace("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> else if(requestLogger.isDebugEnabled)
>   requestLogger.debug("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> {code}
> I think in the past (3 refactoring steps ago), we used to print more 
> information about specific topics and partitions in debug mode.
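For illustration, one way the two levels could differ again (a hedged sketch using java.util.logging in place of Kafka's log4j setup; the method and field names are hypothetical): the debug branch restores the per-topic detail the report says was lost, while trace keeps the timing summary.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RequestLogDemo {
    // Stand-in for kafka.request.logger; FINEST ~ trace, FINE ~ debug
    static final Logger requestLogger = Logger.getLogger("kafka.request.logger");

    static String completedRequestLine(String requestDesc, String topicDetail, long totalTime) {
        if (requestLogger.isLoggable(Level.FINEST)) {
            // trace: the timing summary, as in the snippet above
            return String.format("Completed request:%s totalTime:%d", requestDesc, totalTime);
        } else if (requestLogger.isLoggable(Level.FINE)) {
            // debug: additionally names topics/partitions, restoring the lost detail
            return String.format("Completed request:%s topics:%s totalTime:%d",
                    requestDesc, topicDetail, totalTime);
        }
        return null;
    }

    public static void main(String[] args) {
        requestLogger.setLevel(Level.FINE); // "debug" enabled, "trace" disabled
        System.out.println(completedRequestLine("FetchRequest", "[test-topic/0]", 12));
    }
}
```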





[jira] [Commented] (KAFKA-2461) request logger no longer logs extra information in debug mode

2015-08-25 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711994#comment-14711994
 ] 

Gwen Shapira commented on KAFKA-2461:
-

We'll need it in 0.8.3, since we are losing diagnostic capability without this 
(the ability to attribute specific request messages to topics).

> request logger no longer logs extra information in debug mode
> -
>
> Key: KAFKA-2461
> URL: https://issues.apache.org/jira/browse/KAFKA-2461
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
>
> Currently request logging calls are identical for trace and debug:
> {code}
> if(requestLogger.isTraceEnabled)
>   requestLogger.trace("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> else if(requestLogger.isDebugEnabled)
>   requestLogger.debug("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> {code}
> I think in the past (3 refactoring steps ago), we used to print more 
> information about specific topics and partitions in debug mode.





[jira] [Commented] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712009#comment-14712009
 ] 

Edward Ribeiro commented on KAFKA-2473:
---

Hi Amar, please post this kind of question to the mailing list 
(us...@kafka.apache.org) before opening a JIRA. It is preferable to open a 
ticket only after confirming it is a bug.

Best regards!

> Not able to find release version with patch jira-1591
> -
>
> Key: KAFKA-2473
> URL: https://issues.apache.org/jira/browse/KAFKA-2473
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Amar
>
> Hi,
> We are seeing a lot of unnecessary logging in server.log. According to JIRA, 
> it was fixed in KAFKA-1591, but I am not able to find the release that 
> contains the fix. Can you please advise?
> https://issues.apache.org/jira/browse/KAFKA-1591 
> Thanks
> Amar





[jira] [Comment Edited] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712009#comment-14712009
 ] 

Edward Ribeiro edited comment on KAFKA-2473 at 8/25/15 9:30 PM:


Hi Amar, please, post this kind of question on mailing list first 
(us...@kafka.apache.org) before opening a JIRA. Open a ticket only after 
confirming this as a bug.

Best regards!


was (Author: eribeiro):
Hi Amar, please, post this kind of question on mailing list first 
(us...@kafka.apache.org) before opening a JIRA. It is preferable to open a 
ticket only after confirming this as a bug.

Best regards!

> Not able to find release version with patch jira-1591
> -
>
> Key: KAFKA-2473
> URL: https://issues.apache.org/jira/browse/KAFKA-2473
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Amar
>
> Hi,
> We are seeing a lot of unnecessary logging in server.log. According to JIRA, 
> it was fixed in KAFKA-1591, but I am not able to find the release that 
> contains the fix. Can you please advise?
> https://issues.apache.org/jira/browse/KAFKA-1591 
> Thanks
> Amar





[jira] [Comment Edited] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712009#comment-14712009
 ] 

Edward Ribeiro edited comment on KAFKA-2473 at 8/25/15 9:33 PM:


Hi Amar, please, post this kind of question on mailing list first 
(us...@kafka.apache.org) before opening a JIRA. Open a ticket only after 
confirming this as a bug. Also, make sure you provide a link to a gist or 
pastebin of the logs you are seeing on your email.

Best regards!


was (Author: eribeiro):
Hi Amar, please, post this kind of question on mailing list first 
(us...@kafka.apache.org) before opening a JIRA. Open a ticket only after 
confirming this as a bug.

Best regards!

> Not able to find release version with patch jira-1591
> -
>
> Key: KAFKA-2473
> URL: https://issues.apache.org/jira/browse/KAFKA-2473
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Amar
>
> Hi,
> We are seeing a lot of unnecessary logging in server.log. According to JIRA, 
> it was fixed in KAFKA-1591, but I am not able to find the release that 
> contains the fix. Can you please advise?
> https://issues.apache.org/jira/browse/KAFKA-1591 
> Thanks
> Amar





[GitHub] kafka pull request: KAFKA-2461: request logger no longer logs extr...

2015-08-25 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/169

KAFKA-2461: request logger no longer logs extra information in debug mode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2461

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/169.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #169


commit 6cce97cef5ab52c394a94c8a2062f543ee6d4d81
Author: asingh 
Date:   2015-08-25T22:32:56Z

KAFKA-2461: request logger no longer logs extra information in debug mode




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2461) request logger no longer logs extra information in debug mode

2015-08-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712096#comment-14712096
 ] 

ASF GitHub Bot commented on KAFKA-2461:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/169

KAFKA-2461: request logger no longer logs extra information in debug mode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2461

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/169.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #169


commit 6cce97cef5ab52c394a94c8a2062f543ee6d4d81
Author: asingh 
Date:   2015-08-25T22:32:56Z

KAFKA-2461: request logger no longer logs extra information in debug mode




> request logger no longer logs extra information in debug mode
> -
>
> Key: KAFKA-2461
> URL: https://issues.apache.org/jira/browse/KAFKA-2461
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
>
> Currently request logging calls are identical for trace and debug:
> {code}
> if(requestLogger.isTraceEnabled)
>   requestLogger.trace("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> else if(requestLogger.isDebugEnabled)
>   requestLogger.debug("Completed request:%s from connection %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>     .format(requestDesc, connectionId, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> {code}
> I think in the past (3 refactoring steps ago), we used to print more 
> information about specific topics and partitions in debug mode.





[jira] [Resolved] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Amar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amar resolved KAFKA-2473.
-
Resolution: Not A Problem

> Not able to find release version with patch jira-1591
> -
>
> Key: KAFKA-2473
> URL: https://issues.apache.org/jira/browse/KAFKA-2473
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Amar
>
> Hi,
> We are seeing a lot of unnecessary logging in server.log. According to JIRA, 
> it was fixed in KAFKA-1591, but I am not able to find the release that 
> contains the fix. Can you please advise?
> https://issues.apache.org/jira/browse/KAFKA-1591 
> Thanks
> Amar





[jira] [Commented] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712201#comment-14712201
 ] 

Ewen Cheslack-Postava commented on KAFKA-2468:
--

Does this actually solve the problem? The docs on Runtime.exit say:

bq. If this method is invoked after the virtual machine has begun its shutdown 
sequence then if shutdown hooks are being run this method will block 
indefinitely. If shutdown hooks have already been run and on-exit finalization 
has been enabled then this method halts the virtual machine with the given 
status code if the status is nonzero; otherwise, it blocks indefinitely. 

The issue here seems to be that System.exit gets invoked due to an exception 
from KafkaServerStartable.startup: that invokes the shutdown hook, which 
invokes KafkaServerStartable.shutdown, which calls KafkaServer.shutdown, which 
throws an exception, and then KafkaServerStartable's exception handler invokes 
System.exit. If we replace one call with Runtime.exit, doesn't the passage 
above imply it will also block indefinitely? The first System.exit in this 
scenario still invokes Runtime.exit (the first call), and the subsequent 
Runtime.exit call, via the shutdown hook's call to 
KafkaServerStartable.shutdown, ends up waiting on itself: it blocks until 
shutdown hooks are complete, but it is itself running in a shutdown hook.

It seems like a better solution would be to just use a flag: set it to true 
after startup() returns, then in shutdown(), check it before invoking 
KafkaServer.shutdown() so we never produce the IllegalStateException.
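That flag idea could look roughly like this (a hypothetical sketch, not Kafka's actual KafkaServerStartable): an AtomicBoolean set once startup() completes, checked in shutdown() so the shutdown hook becomes a no-op when startup never finished.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StartableSketch {
    private final AtomicBoolean startupComplete = new AtomicBoolean(false);

    public void startup() {
        // ... real startup work would go here ...
        startupComplete.set(true); // only reached if startup finished cleanly
    }

    /** Returns true if a real shutdown was performed, false if skipped. */
    public boolean shutdown() {
        if (!startupComplete.get()) {
            // Startup never finished (e.g. SIGINT mid-startup): skip the real
            // shutdown so no IllegalStateException is ever produced.
            return false;
        }
        // ... call the real KafkaServer.shutdown() here ...
        return true;
    }

    public static void main(String[] args) {
        StartableSketch interrupted = new StartableSketch();
        System.out.println(interrupted.shutdown()); // startup never ran

        StartableSketch normal = new StartableSketch();
        normal.startup();
        System.out.println(normal.shutdown()); // startup completed
    }
}
```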

> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer, on receiving a SIGINT, will try to shut down; if this happens 
> while the server is starting up, it will deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
> [0x]
>java.lang.Thread.State: RUN

[jira] [Comment Edited] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712225#comment-14712225
 ] 

Ashish K Singh edited comment on KAFKA-2468 at 8/26/15 12:13 AM:
-

[~ewencp] unlike exit(), halt() forcibly terminates the JVM. Below is an 
excerpt from 
[here|http://geekexplains.blogspot.com/2008/06/runtimeexit-vs-runtimehalt-in-java.html]:

{quote}
You might have noticed so far that the difference between the two methods is 
that Runtime.exit() invokes the shutdown sequence of the underlying JVM whereas 
Runtime.halt() forcibly terminates the JVM process. So, Runtime.exit() causes 
the registered shutdown hooks to be executed and then also lets all the 
uninvoked finalizers to be executed before the JVM process shuts down whereas 
Runtime.halt() simply terminates the JVM process immediately and abruptly.
{quote}

I have tested that the current PR resolves the issue. I did think about 
protecting the exits with a flag; however, exit() can be called in Kafka.scala 
as well, and it does not make much sense to wait for anything in the catch 
block of shutdown.


was (Author: singhashish):
[~ewencp] unlike exit(), halt() forcibly terminates the jvm. Below is an 
excerpt from 
[here](http://geekexplains.blogspot.com/2008/06/runtimeexit-vs-runtimehalt-in-java.html).

{quote}
You might have noticed so far that the difference between the two methods is 
that Runtime.exit() invokes the shutdown sequence of the underlying JVM whereas 
Runtime.halt() forcibly terminates the JVM process. So, Runtime.exit() causes 
the registered shutdown hooks to be executed and then also lets all the 
uninvoked finalizers to be executed before the JVM process shuts down whereas 
Runtime.halt() simply terminates the JVM process immediately and abruptly.
{quote}

I have tested that the current PR resolves the issue. I did think about 
protecting exits with a flag. However, exit() can be called in Kafka.scala as 
well. Also, it does not make a lot of sense to wait for anything in catch block 
of shutdown.
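The exit()/halt() difference described in the excerpt can be observed with a tiny stand-alone demo (not Kafka code): System.exit runs registered shutdown hooks before the JVM terminates, while Runtime.getRuntime().halt(0) at the same point would terminate immediately and skip them.

```java
public class ExitVsHaltDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("shutdown hook ran")));
        System.out.println("calling System.exit");
        // System.exit triggers the JVM shutdown sequence, so the hook above runs.
        // Runtime.getRuntime().halt(0) here instead would terminate the process
        // abruptly, and "shutdown hook ran" would never be printed.
        System.exit(0);
    }
}
```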

> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer, on receiving a SIGINT, will try to shut down; if this happens 
> while the server is starting up, it will deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.Loc

[jira] [Commented] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712225#comment-14712225
 ] 

Ashish K Singh commented on KAFKA-2468:
---

[~ewencp] unlike exit(), halt() forcibly terminates the JVM. Below is an 
excerpt from 
[here|http://geekexplains.blogspot.com/2008/06/runtimeexit-vs-runtimehalt-in-java.html].

{quote}
You might have noticed so far that the difference between the two methods is 
that Runtime.exit() invokes the shutdown sequence of the underlying JVM whereas 
Runtime.halt() forcibly terminates the JVM process. So, Runtime.exit() causes 
the registered shutdown hooks to be executed and then also lets all the 
uninvoked finalizers to be executed before the JVM process shuts down whereas 
Runtime.halt() simply terminates the JVM process immediately and abruptly.
{quote}

I have tested that the current PR resolves the issue. I did consider 
protecting exits with a flag; however, exit() can also be called from 
Kafka.scala. It also does not make much sense to wait for anything in the 
catch block during shutdown.

> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer, on receiving a SIGINT, will try to shut down; if this happens 
> while the server is starting up, it will deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" daemon prio=5 tid=0x7fc08c82b000 nid=0x5303 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" daemon prio=5 tid=0x7fc08c82a000 nid=0x5103 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" daemon prio=5 tid=0x7fc08c829800 nid=0x4f03 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "Surrogate Locker Thread (Concurrent GC)" daemon prio=5 
> tid=0x7fc08d002000 nid=0x4
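The dump above reduces to a toy model (not Kafka code): one thread holds a lock, standing in for the java.lang.Shutdown class lock, while joining a second thread that needs that same lock — exactly the SIGINT-handler/shutdown-hook cycle shown. A 500 ms join timeout is substituted for the real unbounded join so the demo terminates instead of hanging:

```java
public class ShutdownDeadlockDemo {
    // Returns true if the "hook" thread was still blocked after the timed
    // join, i.e. the modeled deadlock occurred.
    static boolean simulate() throws InterruptedException {
        final Object shutdownLock = new Object();  // stands in for the Shutdown class lock

        Thread hook = new Thread(() -> {
            synchronized (shutdownLock) {          // models the hook calling System.exit()
                // would run shutdown logic here
            }
        }, "shutdown-hook");

        boolean stuck;
        synchronized (shutdownLock) {              // models the SIGINT handler thread
            hook.start();
            hook.join(500);                        // the real join has no timeout
            stuck = hook.isAlive();                // still blocked -> deadlock in real life
        }
        hook.join();                               // lock released; hook can now finish
        return stuck;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("modeled deadlock occurred: " + simulate());
    }
}
```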

[jira] [Commented] (KAFKA-2388) subscribe(topic)/unsubscribe(topic) should either take a callback to allow user to handle exceptions or it should be synchronous.

2015-08-25 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712236#comment-14712236
 ] 

Jason Gustafson commented on KAFKA-2388:


[~onurkaraman] I've updated the patch to address your comments. I'm thinking 
maybe we punt on onError for now so we can get this into trunk. We can always 
add that in a separate patch after we've defined it a little better. What do 
you think?

> subscribe(topic)/unsubscribe(topic) should either take a callback to allow 
> user to handle exceptions or it should be synchronous.
> -
>
> Key: KAFKA-2388
> URL: https://issues.apache.org/jira/browse/KAFKA-2388
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>
> According to the mailing list discussion on the consumer interface, we'll 
> replace:
> {code}
> public void subscribe(String... topics);
> public void subscribe(TopicPartition... partitions);
> public Set<TopicPartition> subscriptions();
> {code}
> with:
> {code}
> void subscribe(List<String> topics, RebalanceCallback callback);
> void assign(List<TopicPartition> partitions);
> List<String> subscriptions();
> List<TopicPartition> assignments();
> {code}
> We don't need the unsubscribe APIs anymore.
> The RebalanceCallback would look like:
> {code}
> interface RebalanceCallback {
>   void onAssignment(List<TopicPartition> partitions);
>   void onRevocation(List<TopicPartition> partitions);
>   // handle non-existing topics, etc.
>   void onError(Exception e);
> }
> {code}
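As a hypothetical illustration of how a user might implement the proposed callback, here is a self-contained sketch; TopicPartition is modeled as a plain String so the example compiles on its own, and every name other than the callback methods quoted above is invented:

```java
import java.util.ArrayList;
import java.util.List;

// Local stand-in for the proposed interface; TopicPartition modeled as String.
interface RebalanceCallback {
    void onAssignment(List<String> partitions);
    void onRevocation(List<String> partitions);
    void onError(Exception e);
}

// Hypothetical user implementation tracking currently owned partitions.
class OwnedPartitionsCallback implements RebalanceCallback {
    final List<String> owned = new ArrayList<>();

    @Override public void onAssignment(List<String> partitions) {
        owned.addAll(partitions);        // start work on newly assigned partitions
    }

    @Override public void onRevocation(List<String> partitions) {
        owned.removeAll(partitions);     // e.g. commit offsets before losing them
    }

    @Override public void onError(Exception e) {
        System.err.println("rebalance error: " + e.getMessage());
    }
}
```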



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36652: Patch for KAFKA-2351

2015-08-25 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review96463
---

Ship it!


Ship It!

- Joel Koshy


On Aug. 24, 2015, 10:50 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated Aug. 24, 2015, 10:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 649812d9f8014edbd9e99113a0f9eaf186360bcc 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-08-25 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-2351:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk.

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch, KAFKA-2351_2015-07-21_14:58:13.patch, 
> KAFKA-2351_2015-07-23_21:36:52.patch, KAFKA-2351_2015-08-13_13:10:05.patch, 
> KAFKA-2351_2015-08-24_15:50:41.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete; the latch is then never 
> counted down and the broker cannot shut down.
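A common fix pattern for this kind of bug — sketched here with invented names, not Kafka's actual Acceptor code — is to count the latch down in a finally block so no exception path can strand the waiters:

```java
import java.util.concurrent.CountDownLatch;

// Sketch: always signal shutdown completion, even if the accept loop throws.
class AcceptorSketch implements Runnable {
    final CountDownLatch shutdownLatch = new CountDownLatch(1);

    @Override public void run() {
        try {
            // accept loop; may throw when the selector/socket is closed mid-shutdown
        } finally {
            shutdownLatch.countDown();   // guaranteed to run on every exit path
        }
    }

    void awaitShutdown() throws InterruptedException {
        shutdownLatch.await();           // no longer risks blocking forever
    }
}
```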



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-25 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-08-25_17:59:22.patch

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 
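As a rough illustration of what a pluggable authorizer means here — the real interface is defined in KIP-11, and every name in this sketch is an assumption, not the committed Kafka API:

```java
import java.util.Set;

// Hypothetical minimal authorizer contract: the broker asks a plugin whether
// a principal may perform an operation on a resource.
interface SimpleAuthorizer {
    boolean authorize(String principal, String operation, String resource);
}

// One possible plugin: allow a fixed set of principals everything.
class PrincipalAllowListAuthorizer implements SimpleAuthorizer {
    private final Set<String> allowedPrincipals;

    PrincipalAllowListAuthorizer(Set<String> allowedPrincipals) {
        this.allowedPrincipals = allowedPrincipals;
    }

    @Override
    public boolean authorize(String principal, String operation, String resource) {
        // A real implementation would consult ACLs per (resource, operation).
        return allowedPrincipals.contains(principal);
    }
}
```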



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-08-25 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712282#comment-14712282
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-08-25 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated Aug. 26, 2015, 12:59 a.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Now addressing Ismael's comments. Case sensitive checks.


Addressing Jun's comments.


Merge remote-tracking branch 'origin/trunk' into az

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala
core/src/main/scala/kafka/server/KafkaServer.scala

Deleting KafkaConfigDefTest


Addressing comments from Ismael.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az


Consolidating KafkaPrincipal.


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 a3567af96e4f90d62587b31d5b14e7911ba9baf4 
  
clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java 
277b6efb8cf4ad2c81fdcf694bf7c233e48cf214 
  
clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
 PRE-CREATION 
  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
d547a01cf7098f216a3775e1e1901c5794e1b24c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
17db4fa3c3a146f03a35dd7671ad1b06d122bb59 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
3da666f73227fc7ef7093e3790546344065f6825 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



Re: Review Request 34492: Patch for KAFKA-2210

2015-08-25 Thread Parth Brahmbhatt


> On Aug. 23, 2015, 9:29 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala, line 21
> > 
> >
> > There is a java version of KafkaPrincipal. Could we consolidate?

Consolidated. Converted the Authorizer's principal to a Java class and moved 
it under client/common/security/auth. Let me know if I misunderstood what you 
meant by this comment.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review96129
---


On Aug. 26, 2015, 12:59 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Aug. 26, 2015, 12:59 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  a3567af96e4f90d62587b31d5b14e7911ba9baf4 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  277b6efb8cf4ad2c81fdcf694bf7c233e48cf214 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 67f0cad802f901f255825aa2158545d7f5e7cc3d 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> d547a01cf7098f216a3775e1e1901c5794e1b24c 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 17db4fa3c3a146f03a35dd7671ad1b06d122bb59 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 3da666f73227fc7ef7093e3790546344065f6825 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 33378: Patch for KAFKA-2136

2015-08-25 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review96470
---

Ship it!


Ship It!

- Joel Koshy


On Aug. 25, 2015, 6:30 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33378/
> ---
> 
> (Updated Aug. 25, 2015, 6:30 p.m.)
> 
> 
> Review request for kafka, Joel Koshy and Jun Rao.
> 
> 
> Bugs: KAFKA-2136
> https://issues.apache.org/jira/browse/KAFKA-2136
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Changes are
> - Addressing Joel's comments
> - protocol changes to the fetch request and response to return the 
> throttle_time_ms to clients
> - New producer/consumer metrics to expose the avg and max delay time for a 
> client
> - Test cases.
> - Addressed Joel and Jun's comments
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
>  9dc669728e6b052f5c6686fcf1b5696a50538ab4 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 0baf16e55046a2f49f6431e01d52c323c95eddf0 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
> df073a0e76cc5cc731861b9604d0e19a928970e0 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
> eb8951fba48c335095cc43fc3672de1c733e07ff 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
> 715504b32950666e9aa5a260fa99d5f897b2007a 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
> febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
>   
> clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
>  a7c83cac47d41138d47d7590a3787432d675c1b0 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
>  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  8b2aca85fa738180e5420985fddc39a4bf9681ea 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> 5b38f8554898e54800abd65a7415dd0ac41fd958 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> b9efec2efbd3ea0ee12b911f453c47e66ad34894 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> c866180d3680da03e48d374415f10220f6ca68c4 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
>   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
> 7ebc0405d1f309bed9943e7116051d1d8276f200 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
>   core/src/main/scala/kafka/server/ClientQuotaManager.scala 
> 9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 67f0cad802f901f255825aa2158545d7f5e7cc3d 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> fae22d2af8daccd528ac24614290f46ae8f6c797 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> d829e180c3943a90861a12ec184f9b4e4bbafe7d 
>   core/src/main/scala/kafka/server/ThrottledResponse.scala 
> 1f80d5480ccf7c411a02dd90296a7046ede0fae2 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> b4c2a228c3c9872e5817ac58d3022e4903e317b7 
>   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
> caf98e8f2e09d39ab8234b9f4b9478686865e908 
>   core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
> c4b5803917e700965677d53624f1460c1a52bf52 
> 
> Diff: https://reviews.apache.org/r/33378/diff/
> 
> 
> Testing
> ---
> 
> New tests added
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Jenkins build is back to normal : Kafka-trunk #598

2015-08-25 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2145) An option to add topic owners.

2015-08-25 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712319#comment-14712319
 ] 

Parth Brahmbhatt commented on KAFKA-2145:
-

[~junrao] Pinging again for review. 

> An option to add topic owners. 
> ---
>
> Key: KAFKA-2145
> URL: https://issues.apache.org/jira/browse/KAFKA-2145
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>
> We need to expose a way for users to identify the users/groups that share 
> ownership of a topic. We discussed adding this as part of 
> https://issues.apache.org/jira/browse/KAFKA-2035 and agreed that it will be 
> simpler to add the owner as a log config. 
> The owner field can be used for auditing and also by the authorization layer 
> to grant access without having to explicitly configure ACLs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-25 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712324#comment-14712324
 ] 

Joel Koshy commented on KAFKA-2136:
---

Thanks for the patches - pushed to trunk. Can you update:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

Can you also file a quotas documentation ticket and set its fix-version to 
0.8.3?

We can close this ticket after that.

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the FetchResponse and ProduceResponse objects. Add client-side metrics on 
> the new producer and consumer to expose the delay time.
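The avg/max delay bookkeeping the description mentions can be sketched as follows; this is not the Kafka metrics API, just the arithmetic behind such a sensor, with all names invented:

```java
// Toy sensor for the throttle_time_ms values returned in broker responses.
class ThrottleTimeSensor {
    private long count;
    private long totalMs;
    private long maxMs;

    void record(long throttleTimeMs) {   // called once per throttled response
        count++;
        totalMs += throttleTimeMs;
        maxMs = Math.max(maxMs, throttleTimeMs);
    }

    double avg() { return count == 0 ? 0.0 : (double) totalMs / count; }
    long max()   { return maxMs; }
}
```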



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2388) subscribe(topic)/unsubscribe(topic) should either take a callback to allow user to handle exceptions or it should be synchronous.

2015-08-25 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712347#comment-14712347
 ] 

Onur Karaman commented on KAFKA-2388:
-

Sounds okay to me. I'll try to take a look at the patch later today.

> subscribe(topic)/unsubscribe(topic) should either take a callback to allow 
> user to handle exceptions or it should be synchronous.
> -
>
> Key: KAFKA-2388
> URL: https://issues.apache.org/jira/browse/KAFKA-2388
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>
> According to the mailing list discussion on the consumer interface, we'll 
> replace:
> {code}
> public void subscribe(String... topics);
> public void subscribe(TopicPartition... partitions);
> public Set<TopicPartition> subscriptions();
> {code}
> with:
> {code}
> void subscribe(List<String> topics, RebalanceCallback callback);
> void assign(List<TopicPartition> partitions);
> List<String> subscriptions();
> List<TopicPartition> assignments();
> {code}
> We don't need the unsubscribe APIs anymore.
> The RebalanceCallback would look like:
> {code}
> interface RebalanceCallback {
>   void onAssignment(List<TopicPartition> partitions);
>   void onRevocation(List<TopicPartition> partitions);
>   // handle non-existing topics, etc.
>   void onError(Exception e);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-25 Thread Joel Koshy


> On Aug. 21, 2015, 12:13 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/api/FetchResponse.scala, line 175
> > 
> >
> > Since (in the event of multiple calls) this grouping would be repeated, 
> > should we just have `responseSize` take the `FetchResponse` object and have 
> > that look up the `lazy val dataGroupedByTopic`? That said, I think the 
> > original version should have had `sizeInBytes` as a `lazy val` as well.
> 
> Aditya Auradkar wrote:
> FetchResponse.responseSize is called from KafkaApis in order to figure 
> out what value to record. We cannot pass in a FetchResponse object, because 
> the object doesn't exist yet: the throttle time is not available at that point.
> 
> Should I change the signature to accept dataGroupedByTopic instead of a 
> TopicPartition -> FetchResponsePartitionData map?
> 
> Joel Koshy wrote:
> Got it - yes we could do that. There is also the pre-existing issue of 
> `sizeInBytes` breaking the laziness of `dataGroupedByTopic` which we can 
> address.

I realized later that this now causes two explicit group-by's on the 
server-side and one on the consumer-side. So the non-lazy val may have worked 
better in practice.


- Joel
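The lazy-val trade-off under discussion can be sketched in Java: cache the group-by on first use so repeated calls compute it once, at the cost of holding the grouped map alive. All names are illustrative, and topic-partitions are modeled as "topic-partition" strings:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Models Scala's `lazy val dataGroupedByTopic`: compute the grouping once.
class GroupedResponseData {
    private final List<String> topicPartitions;
    private Map<String, List<String>> groupedByTopic;  // computed on first use
    int groupings = 0;                                 // instrumentation only

    GroupedResponseData(List<String> tps) { this.topicPartitions = tps; }

    Map<String, List<String>> dataGroupedByTopic() {
        if (groupedByTopic == null) {
            groupings++;
            groupedByTopic = topicPartitions.stream()
                .collect(Collectors.groupingBy(
                    tp -> tp.substring(0, tp.lastIndexOf('-'))));  // topic prefix
        }
        return groupedByTopic;
    }
}
```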


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review96009
---


On Aug. 25, 2015, 6:30 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33378/
> ---
> 
> (Updated Aug. 25, 2015, 6:30 p.m.)
> 
> 
> Review request for kafka, Joel Koshy and Jun Rao.
> 
> 
> Bugs: KAFKA-2136
> https://issues.apache.org/jira/browse/KAFKA-2136
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Changes are
> - Addressing Joel's comments
> - protocol changes to the fetch request and response to return the 
> throttle_time_ms to clients
> - New producer/consumer metrics to expose the avg and max delay time for a 
> client
> - Test cases.
> - Addressed Joel and Jun's comments
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
>  9dc669728e6b052f5c6686fcf1b5696a50538ab4 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 0baf16e55046a2f49f6431e01d52c323c95eddf0 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
> df073a0e76cc5cc731861b9604d0e19a928970e0 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
> eb8951fba48c335095cc43fc3672de1c733e07ff 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
> 715504b32950666e9aa5a260fa99d5f897b2007a 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
> febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
>   
> clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
>  a7c83cac47d41138d47d7590a3787432d675c1b0 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
>  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  8b2aca85fa738180e5420985fddc39a4bf9681ea 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> 5b38f8554898e54800abd65a7415dd0ac41fd958 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> b9efec2efbd3ea0ee12b911f453c47e66ad34894 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> c866180d3680da03e48d374415f10220f6ca68c4 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
>   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
> 7ebc0405d1f309bed9943e7116051d1d8276f200 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
>   core/src/main/scala/kafka/server/ClientQuotaManager.scala 
> 9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 67f0cad802f901f255825aa2158545d7f5e7cc3d 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> fae22d2af8daccd528ac24614290f46ae8f6c797 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> d829e180c3943a90861a12ec184f9b4e4bbafe7d 
>   core/src/main/scala/kafka/server/ThrottledResponse.scala 
> 1f80d5480ccf7c411a02dd90296a7046ede0fae2 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> b4c2a228c3c9872e5817ac58d3022e4903e317b7 
>   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
> caf98e8f2e09d39ab8234b9f4b9478686865e908 
>   core

[jira] [Created] (KAFKA-2474) Add caching for converted Copycat schemas in JSONConverter

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2474:


 Summary: Add caching for converted Copycat schemas in JSONConverter
 Key: KAFKA-2474
 URL: https://issues.apache.org/jira/browse/KAFKA-2474
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


From discussion of KAFKA-2367:

bq. Caching of conversion of schemas. In the JSON implementation we're 
including, we're probably being pretty wasteful right now since every record 
has to translate both the schema and data to JSON. We should definitely be 
doing some caching here. I think an LRU using an IdentityHashMap should be 
fine. However, this does assume that connectors are good about reusing schemas 
(defining them up front, or if they are dynamic they should have their own 
cache of schemas and be able to detect when they can be reused).
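As a minimal sketch of the identity-keyed LRU cache described in the quote above (class and method names are hypothetical, not the eventual Kafka implementation), one could wrap schema references in identity-comparing keys and let `LinkedHashMap`'s access order drive eviction:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: an LRU cache whose keys are compared by reference (==),
// matching the IdentityHashMap idea above, while LinkedHashMap's access order
// provides the LRU eviction. Assumes connectors reuse schema instances.
final class IdentityLruCache<K, V> {
    // Wrapper giving identity semantics to hash lookups.
    private static final class IdKey<K> {
        final K key;
        IdKey(K key) { this.key = key; }
        @Override public boolean equals(Object o) {
            return o instanceof IdKey && ((IdKey<?>) o).key == key;
        }
        @Override public int hashCode() { return System.identityHashCode(key); }
    }

    private final Map<IdKey<K>, V> cache;

    IdentityLruCache(final int maxEntries) {
        // accessOrder=true makes iteration order least-recently-used first.
        this.cache = new LinkedHashMap<IdKey<K>, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<IdKey<K>, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    synchronized V get(K key) { return cache.get(new IdKey<>(key)); }
    synchronized void put(K key, V value) { cache.put(new IdKey<>(key), value); }
}
```

Used this way, converting the same schema instance twice would pay the JSON translation cost only once; schemas built fresh per record would defeat the cache, which is the connector-behavior caveat noted above.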



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2475) Reduce copycat configs to only specify a converter or serializer, not both

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2475:


 Summary: Reduce copycat configs to only specify a converter or 
serializer, not both
 Key: KAFKA-2475
 URL: https://issues.apache.org/jira/browse/KAFKA-2475
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Ultimately, all we care about is getting from byte[] -> Copycat data API. The 
current split of the interfaces makes it easy to reuse existing serializers, 
but you still have to implement a new class.

It will be simpler, both conceptually and by requiring fewer configs, to 
combine both of these steps into a single API. This also allows certain formats 
to preserve more information across the conversion (e.g. for primitive types in 
the schema, which could otherwise lose certain schema information).
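A sketch of what such a combined API could look like (interface and method names here are purely illustrative, not the actual Copycat interfaces):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical combined interface: a single API from byte[] to the Copycat
// data API and back, replacing the separate serializer + converter pair.
// Names are illustrative only.
interface Converter {
    byte[] fromCopycatData(Object value); // Copycat data -> serialized bytes
    Object toCopycatData(byte[] bytes);   // serialized bytes -> Copycat data
}

// Trivial illustrative implementation for plain string data.
class StringConverter implements Converter {
    @Override public byte[] fromCopycatData(Object value) {
        return ((String) value).getBytes(StandardCharsets.UTF_8);
    }
    @Override public Object toCopycatData(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

With one interface, a worker needs only a single config entry per side of the conversion, and format implementations control the whole byte[] round trip.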





[jira] [Created] (KAFKA-2476) Define logical types for Copycat data API

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2476:


 Summary: Define logical types for Copycat data API
 Key: KAFKA-2476
 URL: https://issues.apache.org/jira/browse/KAFKA-2476
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


We need some common types like datetime and decimal. This boils down to 
defining the schemas for these types, along with documenting their semantics.
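As a hedged sketch of what defining one such type might involve, here is one possible decimal logical type, following the common convention (as in Avro) of serializing the unscaled integer value and carrying the scale as a fixed schema parameter; all names are illustrative, not the eventual API:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Hypothetical sketch of a decimal logical type: the wire format is the
// unscaled integer's big-endian bytes, and the scale is a fixed parameter
// carried in the schema rather than in each record.
final class Decimal {
    static final String LOGICAL_NAME = "decimal"; // hypothetical schema name

    // Serialize: store only the unscaled value; the scale lives in the schema.
    static byte[] fromLogical(int schemaScale, BigDecimal value) {
        if (value.scale() != schemaScale)
            throw new IllegalArgumentException("BigDecimal scale does not match schema scale");
        return value.unscaledValue().toByteArray();
    }

    // Deserialize: recombine the unscaled bytes with the schema's scale.
    static BigDecimal toLogical(int schemaScale, byte[] bytes) {
        return new BigDecimal(new BigInteger(bytes), schemaScale);
    }
}
```

The semantics to document would include exactly this kind of contract: the byte encoding, where the scale lives, and what happens on a scale mismatch.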





[jira] [Commented] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712397#comment-14712397
 ] 

Ewen Cheslack-Postava commented on KAFKA-2468:
--

[~ashi.khur...@gmail.com] Ok, makes sense. I just got stuck on the exit methods 
and completely missed that it was calling halt instead. LGTM.

> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer on receiving a SIGINT will try to shutdown and if this happens 
> while the server is starting up, it will get into deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" daemon prio=5 tid=0x7fc08c82b000 nid=0x5303 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" daemon prio=5 tid=0x7fc08c82a000 nid=0x5103 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" daemon prio=5 tid=0x7fc08c829800 nid=0x4f03 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "Surrogate Locker Thread (Concurrent GC)" daemon prio=5 
> tid=0x7fc08d002000 nid=0x400b waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Finalizer" daemon prio=5 tid=0x7fc08d012800 nid=0x3b03 in Object.wait() 
> [0x000117ee6000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   - locked <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
> "Reference Handler" daemon prio=5 tid=0x7fc08c803000 nid=0x3903 in 
> Object.

[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-25 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712401#comment-14712401
 ] 

Gwen Shapira commented on KAFKA-2367:
-

My opinions / comments on the above:

* Logical types are important (timestamp, decimals) - but can be done separately

* I don't see much value in non-ordered versions (they don't give you any 
information you don't get from the schema itself), so an int makes more sense. 
We also need to support the lack of a version. While we're at it, we should also 
support the lack of a schema (since some connectors won't know the schema or 
won't care)

* If we recommend ByteBuffer, I don't know if we should offer byte[]. I don't 
feel strongly either way though.

* I like the new explicit schemas! Much less error-prone.

* Follow-up JIRA for caching; it's important.

* Lack of unions is fine, IMO Avro unions are somewhat misused and we have 
better support for nulls ;)

* Lack of IndexedRecord is fine given the current API.

* I agree with how you defined the default+nullable behavior and will watch out 
during the review. 

> Add Copycat runtime data API
> 
>
> Key: KAFKA-2367
> URL: https://issues.apache.org/jira/browse/KAFKA-2367
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
>
> Design the API used for runtime data in Copycat. This API is used to 
> construct schemas and records that Copycat processes. This needs to be a 
> fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
> support complex, varied data types that may be input from/output to many data 
> systems.
> This issue should also address the serialization interfaces used 
> within Copycat, which translate the runtime data into serialized byte[] form. 
> It is important that these be considered together because the data format can 
> be used in multiple ways (records, partition IDs, partition offsets), so it 
> and the corresponding serializers must be sufficient for all these use cases.





[jira] [Commented] (KAFKA-2468) SIGINT during Kafka server startup can leave server deadlocked

2015-08-25 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712413#comment-14712413
 ] 

Ashish K Singh commented on KAFKA-2468:
---

[~guozhang] could you review and help with getting this committed?

> SIGINT during Kafka server startup can leave server deadlocked
> --
>
> Key: KAFKA-2468
> URL: https://issues.apache.org/jira/browse/KAFKA-2468
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaServer on receiving a SIGINT will try to shutdown and if this happens 
> while the server is starting up, it will get into deadlock.
> Thread dump after deadlock
> {code}
> 2015-08-24 22:03:52
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fc08e827800 nid=0x5807 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-2" prio=5 tid=0x7fc08b9de000 nid=0x6b03 waiting for monitor entry 
> [0x00011ad3a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - waiting to lock <0x0007bae86ac0> (a java.lang.Class for 
> java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:109)
>   at java.lang.System.exit(System.java:962)
>   at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
>   at kafka.Kafka$$anon$1.run(Kafka.scala:65)
> "SIGINT handler" daemon prio=5 tid=0x7fc08ca51800 nid=0x6503 in 
> Object.wait() [0x00011aa31000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1281)
>   - locked <0x0007bcb40610> (a kafka.Kafka$$anon$1)
>   at java.lang.Thread.join(Thread.java:1355)
>   at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
>   at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
>   at java.lang.Shutdown.runHooks(Shutdown.java:123)
>   at java.lang.Shutdown.sequence(Shutdown.java:167)
>   at java.lang.Shutdown.exit(Shutdown.java:212)
>   - locked <0x0007bae86ac0> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Terminator$1.handle(Terminator.java:52)
>   at sun.misc.Signal$1.run(Signal.java:212)
>   at java.lang.Thread.run(Thread.java:745)
> "RMI TCP Accept-0" daemon prio=5 tid=0x7fc08c164000 nid=0x5c07 runnable 
> [0x000119fe8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>   at java.net.ServerSocket.accept(ServerSocket.java:498)
>   at 
> sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
>   at 
> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "Service Thread" daemon prio=5 tid=0x7fc08d015000 nid=0x5503 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" daemon prio=5 tid=0x7fc08c82b000 nid=0x5303 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" daemon prio=5 tid=0x7fc08c82a000 nid=0x5103 waiting 
> on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" daemon prio=5 tid=0x7fc08c829800 nid=0x4f03 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "Surrogate Locker Thread (Concurrent GC)" daemon prio=5 
> tid=0x7fc08d002000 nid=0x400b waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Finalizer" daemon prio=5 tid=0x7fc08d012800 nid=0x3b03 in Object.wait() 
> [0x000117ee6000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   - locked <0x0007bae05568> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
> "Reference Handler" daemon prio=5 tid=0x7fc08c803000 nid=0x3903 in 
> Object.wait() [0x000117de3000]
>java.lang.Thread.State: WAITING (on object monitor)
>

Re: Issue when enabling SSL on broker

2015-08-25 Thread Sriharsha Chintalapani
Hi Xiang,
         Did you try following the instructions here: 
https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka ?
What's the output of openssl s_client, and which version of Java and OS are you 
using?

Thanks,
Harsha


On August 25, 2015 at 8:42:18 PM, Xiang Zhou (Samuel) (zhou...@gmail.com) wrote:

no cipher suites in common 
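For reference, one way to narrow down a "no cipher suites in common" failure is to compare the cipher suites each JVM enables by default. A small illustrative sketch (the class name here is arbitrary):

```java
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

// Sketch: print the cipher suites this JRE enables by default. Running it on
// both the client and the broker JVMs and diffing the output helps show why
// the SSL handshake finds no suite in common.
public class CipherCheck {
    static String[] enabledCipherSuites() {
        try {
            return SSLContext.getDefault().getDefaultSSLParameters().getCipherSuites();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        for (String suite : enabledCipherSuites())
            System.out.println(suite);
    }
}
```

An empty intersection between the two lists, or a keystore whose key type no enabled suite supports, typically produces exactly the error quoted above.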

[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-25 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712472#comment-14712472
 ] 

Jay Kreps commented on KAFKA-2367:
--

On unions: they were kind of a disaster at LinkedIn. Avro supports them, so 
clever engineers would use them, but exactly zero other data systems in the 
world allow them (Hive had them, but they didn't work, at least at that time). 
So it was a great way to produce data that nothing but other Java code could 
use. Eventually people kept having to go back and un-union things, and finally 
they were just forbidden (except for nullables, of course).

> Add Copycat runtime data API
> 
>
> Key: KAFKA-2367
> URL: https://issues.apache.org/jira/browse/KAFKA-2367
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
>
> Design the API used for runtime data in Copycat. This API is used to 
> construct schemas and records that Copycat processes. This needs to be a 
> fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
> support complex, varied data types that may be input from/output to many data 
> systems.
> This issue should also address the serialization interfaces used 
> within Copycat, which translate the runtime data into serialized byte[] form. 
> It is important that these be considered together because the data format can 
> be used in multiple ways (records, partition IDs, partition offsets), so it 
> and the corresponding serializers must be sufficient for all these use cases.





Re: Issue when enabling SSL on broker

2015-08-25 Thread Sriharsha Chintalapani
Hi,
      It turns out there was a bug in the instructions in the wiki. I fixed it; 
can you please retry generating the truststore and keystore following 
https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka ?
Check out the section "All of the above steps in a bash script" to generate the 
keystores.

Thanks,
Harsha


On August 25, 2015 at 8:56:24 PM, Sriharsha Chintalapani (ka...@harsha.io) 
wrote:

Hi Xiang,
         Did you try following the instructions here: 
https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka ?
What's the output of openssl s_client, and which version of Java and OS are you 
using?

Thanks,
Harsha


On August 25, 2015 at 8:42:18 PM, Xiang Zhou (Samuel) (zhou...@gmail.com) wrote:

no cipher suites in common