[jira] [Commented] (KAFKA-3044) Consumer.poll doesnot return messages when poll interval is less

2015-12-28 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073212#comment-15073212
 ] 

Jay Kreps commented on KAFKA-3044:
--

+1 on the doc change; let's not try to make the contract of poll() that it 
always returns messages. I think that would be hard to live up to (and is in 
any case irrelevant, since you can always hit the timeout).

> Consumer.poll doesnot return messages when poll interval is less
> 
>
> Key: KAFKA-3044
> URL: https://issues.apache.org/jira/browse/KAFKA-3044
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Praveen Devarao
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> When seeking to a particular position in the consumer and calling poll with 
> a timeout param of 0, the consumer does not come back with data even though 
> data has already been published via a producer. If the timeout is increased 
> slowly in chunks of 100ms, then at 700ms the consumer returns the record on 
> the first call to poll.
> Docs 
> [http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)]
>  for poll state that if the timeout is 0 then data will be returned 
> immediately, but the observed behaviour is that no data is returned.
> The test code I am using can be found here 
> https://gist.github.com/praveend/013dcab01ebb8c7e2f2d
> I have created a topic with data published as below and then running the test 
> program [ConsumerPollTest.java]
> $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
> --replication-factor 1 --partitions 1 --topic mytopic
> $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
> Hello
> Hai
> bye
> $ java ConsumerPollTest
> I have published these 3 lines of data to Kafka only once; later on I just 
> use the above program with different poll intervals.
> Let me know if I am missing anything or interpreting it wrongly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2929: Migrate duplicate error mapping fu...

2015-12-28 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2929: Migrate duplicate error mapping fu...

2015-12-28 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/616

KAFKA-2929: Migrate duplicate error mapping functionality

Deprecates ErrorMapping.scala in core in favor of Errors.java in common. 
The duplicated exceptions in core are deprecated as well, to ensure the 
mapping stays correct.
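To illustrate the idea behind the migration (a hedged sketch only: the class, 
exception choices, and code values below are hypothetical, not Kafka's actual 
Errors.java tables), keeping a single exception-to-code mapping avoids the two 
copies in core and common drifting apart:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative single error-mapping table, analogous in spirit to keeping one
// Errors.java instead of a parallel ErrorMapping.scala. The exception classes
// and code values here are made up for the example.
public class ErrorCodes {
    private static final Map<Class<? extends Exception>, Short> CODES = new HashMap<>();
    static {
        CODES.put(IllegalStateException.class, (short) 1);
        CODES.put(IllegalArgumentException.class, (short) 2);
    }

    // Look up the wire error code for an exception; -1 stands in for "unknown".
    public static short codeFor(Exception e) {
        return CODES.getOrDefault(e.getClass(), (short) -1);
    }
}
```

With one table, any exception deprecated or renamed only has to be updated in 
one place for the mapping to stay consistent.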

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka error-mapping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/616.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #616


commit eba3b41309cf9f0436a0159dffa63fa51d89beb7
Author: Grant Henke 
Date:   2015-12-16T17:55:33Z

KAFKA-2929: Migrate duplicate error mapping functionality






[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072726#comment-15072726
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and consistency issues we should migrate from 
> ErrorMapping.scala in core to Errors.java in common.
> When the old clients are removed, ErrorMapping.scala and the old exceptions 
> should be removed as well.





[jira] [Created] (KAFKA-3044) Consumer.poll doesnot return messages when poll interval is less

2015-12-28 Thread Praveen Devarao (JIRA)
Praveen Devarao created KAFKA-3044:
--

 Summary: Consumer.poll doesnot return messages when poll interval 
is less
 Key: KAFKA-3044
 URL: https://issues.apache.org/jira/browse/KAFKA-3044
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Praveen Devarao
 Fix For: 0.9.0.1


When seeking to a particular position in the consumer and calling poll with a 
timeout param of 0, the consumer does not come back with data even though data 
has already been published via a producer. If the timeout is increased slowly 
in chunks of 100ms, then at 700ms the consumer returns the record on the first 
call to poll.

Docs 
[http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)]
 for poll state that if the timeout is 0 then data will be returned 
immediately, but the observed behaviour is that no data is returned.

The test code I am using can be found here 
https://gist.github.com/praveend/013dcab01ebb8c7e2f2d

I have created a topic with data published as below and then running the test 
program [ConsumerPollTest.java]

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic mytopic
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
Hello
Hai
bye
$ java ConsumerPollTest

I have published these 3 lines of data to Kafka only once; later on I just use 
the above program with different poll intervals.

Let me know if I am missing anything or interpreting it wrongly.





[jira] [Updated] (KAFKA-3044) Consumer.poll doesnot return messages when poll interval is less

2015-12-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3044:
-
Assignee: Jason Gustafson






[jira] [Commented] (KAFKA-3044) Consumer.poll doesnot return messages when poll interval is less

2015-12-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073016#comment-15073016
 ] 

Guozhang Wang commented on KAFKA-3044:
--

Hi [~praveend] I think the Java docs need to be reworded, since "returns 
immediately with any records that are available now" could be a bit misleading. 
What it actually means is: if there is already some data fetched from the 
server and buffered at the consumer client, return it immediately; otherwise 
return empty.

Assigning to [~hachikuji] to review and change the java docs. 
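The buffered-vs-remote distinction described above can be modeled with a toy 
sketch (this is emphatically not KafkaConsumer internals; the class, the 
simulated fetch latency, and the queues are all invented for illustration): 
poll(0) only surfaces records that were already fetched, while a timeout at 
least as long as one fetch round trip lets data arrive.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of the documented poll(timeout) contract: records already buffered
// client-side are returned immediately; otherwise poll(0) comes back empty even
// though data exists on the "broker", and a longer timeout allows a fetch.
public class PollSketch {
    private final Queue<String> serverLog = new ArrayDeque<>();    // records on the broker
    private final Queue<String> clientBuffer = new ArrayDeque<>(); // records fetched so far
    private final long fetchLatencyMs;                             // pretend round-trip time

    public PollSketch(long fetchLatencyMs) { this.fetchLatencyMs = fetchLatencyMs; }

    public void publish(String record) { serverLog.add(record); }

    // Fetch from the "server" only if the caller can wait one round trip,
    // then drain and return whatever is buffered (possibly nothing).
    public List<String> poll(long timeoutMs) {
        if (clientBuffer.isEmpty() && timeoutMs >= fetchLatencyMs) {
            clientBuffer.addAll(serverLog); // simulate a completed fetch
            serverLog.clear();
        }
        List<String> out = new ArrayList<>(clientBuffer);
        clientBuffer.clear();
        return out;
    }
}
```

Under this model, the reporter's observation (empty results until the timeout 
crosses roughly one fetch latency) is exactly the expected behaviour rather 
than a bug.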






[jira] [Created] (KAFKA-3045) ZkNodeChangeNotificationListener shouldn't log interrupted exception as error

2015-12-28 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3045:
--

 Summary: ZkNodeChangeNotificationListener shouldn't log 
interrupted exception as error
 Key: KAFKA-3045
 URL: https://issues.apache.org/jira/browse/KAFKA-3045
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.9.0.0
Reporter: Jun Rao


Saw the following when running /opt/kafka/bin/kafka-acls.sh --authorizer 
kafka.security.auth.SimpleAclAuthorizer.

[2015-12-28 08:04:39,382] ERROR Error processing notification change for path = 
/kafka-acl-changes and notification= [acl_changes_04, 
acl_changes_03, acl_changes_02, acl_changes_01, 
acl_changes_00] : (kafka.common.ZkNodeChangeNotificationListener)
org.I0Itec.zkclient.exception.ZkInterruptedException: java.lang.InterruptedException
	at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:997)
	at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1090)
	at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1085)
	at kafka.utils.ZkUtils.readDataMaybeNull(ZkUtils.scala:525)
	at kafka.security.auth.SimpleAclAuthorizer.kafka$security$auth$SimpleAclAuthorizer$$getAclsFromZk(SimpleAclAuthorizer.scala:213)
	at kafka.security.auth.SimpleAclAuthorizer$AclChangedNotificaitonHandler$.processNotification(SimpleAclAuthorizer.scala:273)
	at kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:84)
	at kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:84)
	at scala.Option.map(Option.scala:146)
	at kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:84)
	at kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:79)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:79)
	at kafka.common.ZkNodeChangeNotificationListener$NodeChangeListener$.handleChildChange(ZkNodeChangeNotificationListener.scala:121)
	at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
	at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Caused by: java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Object.wait(Object.java:502)
	at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
	at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:119)
	at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1094)
	at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1090)
	at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:985)

When SimpleAclAuthorizer terminates, we close the zkclient, which interrupts 
the watcher processor thread. Since this is expected, we shouldn't log it as an 
error.
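One common pattern for this kind of fix (a sketch of the general idea only, 
not necessarily the patch Kafka applied; the class and method names below are 
invented) is to walk the cause chain and downgrade the log severity when the 
root cause is an expected shutdown interrupt:

```java
// Decide the log level for a notification-processing failure: an
// InterruptedException anywhere in the cause chain means a normal shutdown
// interrupt, which is expected and should not be logged as an error.
public class LogLevelOnError {
    public static String levelFor(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof InterruptedException) {
                return "DEBUG"; // expected during close(), not worth an ERROR
            }
        }
        return "ERROR"; // genuinely unexpected failure
    }
}
```

In the trace above, the ZkInterruptedException wraps an InterruptedException, 
so a check like this would keep the shutdown path quiet while still reporting 
real ZooKeeper failures at ERROR.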





[jira] [Commented] (KAFKA-3046) add ByteBuffer Serializer

2015-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073536#comment-15073536
 ] 

ASF GitHub Bot commented on KAFKA-3046:
---

GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/718

KAFKA-3046: add ByteBuffer Serializer

https://issues.apache.org/jira/browse/KAFKA-3046

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/718.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #718


commit 0f39db647e2a1a3189aba6d2fa96aa4c1156bf86
Author: Xin Wang 
Date:   2015-12-29T05:57:13Z

add bytebuffer ser

commit 711a8b594bd7a2a5eacb958b74ab85ba10d97158
Author: Xin Wang 
Date:   2015-12-29T05:58:49Z

add bytebuffer deser




> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
>






[GitHub] kafka pull request: KAFKA-3046: add ByteBuffer Serializer.

2015-12-28 Thread vesense
GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/718

KAFKA-3046: add ByteBuffer Serializer

https://issues.apache.org/jira/browse/KAFKA-3046

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/718.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #718


commit 0f39db647e2a1a3189aba6d2fa96aa4c1156bf86
Author: Xin Wang 
Date:   2015-12-29T05:57:13Z

add bytebuffer ser

commit 711a8b594bd7a2a5eacb958b74ab85ba10d97158
Author: Xin Wang 
Date:   2015-12-29T05:58:49Z

add bytebuffer deser






[jira] [Created] (KAFKA-3046) add ByteBuffer Serializer

2015-12-28 Thread Xin Wang (JIRA)
Xin Wang created KAFKA-3046:
---

 Summary: add ByteBuffer Serializer
 Key: KAFKA-3046
 URL: https://issues.apache.org/jira/browse/KAFKA-3046
 Project: Kafka
  Issue Type: New Feature
  Components: clients
Reporter: Xin Wang








Re: [VOTE] KIP-32 Add CreateTime and LogAppendTime to Kafka message.

2015-12-28 Thread Becket Qin
Thanks Guozhang, Gwen and Neha for the comments. Sorry for the late reply; I 
only have occasional Gmail access from my phone...

I just updated the wiki for KIP-32.

Gwen,

Yes, the migration plan is what you described.

I agree with your comments on the version. 
I changed message.format.version to use the release version.
I did not change the internal version, we can discuss this in a separate 
thread. 

Thanks,

Jiangjie (Becket) Qin



> On Dec 24, 2015, at 5:38 AM, Guozhang Wang  wrote:
> 
> Also I agree with Gwen that such changes may worth a 0.10 release or even
> 1.0, having it in 0.9.1 would be quite confusing to users.
> 
> Guozhang
> 
>> On Wed, Dec 23, 2015 at 1:36 PM, Guozhang Wang  wrote:
>> 
>> Becket,
>> 
>> Please let us know once you have updated the wiki page regarding the
>> migration plan. Thanks!
>> 
>> Guozhang
>> 
>>> On Wed, Dec 23, 2015 at 11:52 AM, Gwen Shapira  wrote:
>>> 
>>> Thanks Becket, Anne and Neha for responding to my concern.
>>> 
>>> I had an offline discussion with Anne where she helped me understand the
>>> migration process. It isn't as bad as it looks in the KIP :)
>>> 
>>> If I understand it correctly, the process (for users) will be:
>>> 
>>> 1. Prepare for upgrade (set format.version = 0, ApiVersion = 0.9.0)
>>> 2. Rolling upgrade of brokers
>>> 3. Bump ApiVersion to 0.9.0-1, so fetch requests between brokers will use
>>> the new protocol
>>> 4. Start upgrading clients
>>> 5. When "enough" clients are upgraded, bump format.version to 1 (rolling).
>>> 
>>> Becket, can you confirm?
>>> 
>>> Assuming this is the process, I'm +1 on the change.
>>> 
>>> Reminder to coders and reviewers that pull-requests with user-facing
>>> changes should include documentation changes as well as code changes.
>>> And a polite request to try to be helpful to users on when to use
>>> create-time and when to use log-append-time as configuration - this is not
>>> a trivial decision.
>>> 
>>> A separate point I'm going to raise in a different thread is that we need
>>> to streamline our versions a bit:
>>> 1. I'm afraid that 0.9.0-1 will be confusing to users who care about
>>> released versions (what if we forget to change it before the release? Is
>>> it
>>> meaningful enough to someone running off trunk?), we need to come up with
>>> something that will work for both LinkedIn and everyone else.
>>> 2. ApiVersion has real version numbers. message.format.version has
>>> sequence
>>> numbers. This makes us look pretty silly :)
>>> 
>>> My version concerns can be addressed separately and should not hold back
>>> this KIP.
>>> 
>>> Gwen
>>> 
>>> 
>>> 
>>> On Tue, Dec 22, 2015 at 11:01 PM, Becket Qin 
>>> wrote:
>>> 
 Hi Anna,
 
 Thanks for initiating the voting process. I did not start the voting
 process because there were still some ongoing discussion with Jun about
>>> the
 timestamp regarding compressed messages. That is why the wiki page
>>> hasn't
 reflected the latest conversation as Guozhang pointed out.
 
 Like Neha said I think we have reached general agreement on this KIP. So
 it is probably fine to start the KIP voting. At least we draw more
 attention to the KIP even if there are some new discussion to bring up.
 
 Regarding the upgrade plan, given we decided to implement KIP-31 and
 KIP-32 in the same patch to avoid change binary protocol twice, the
>>> upgrade
 plan was mostly discussed on the discussion thread of KIP-31.
 
 Since the voting has been initiated, I will update the wiki with latest
 conversation to avoid further confusion.
 
 BTW, I actually have started coding work on KIP-31 and KIP-32 and will
 focus on the patch before I return from vacation in mid Jan because I
>>> have
 no LinkedIn VPN access in China anyway...
 
 Thanks,
 
 Jiangjie
 
> On Dec 23, 2015, at 12:31 PM, Anna Povzner  wrote:
> 
> Hi Gwen,
> 
> I just wanted to point out that I just started the vote. Becket wrote
>>> the
> proposal and led the discussions.
> 
> What I understood from reading the discussion thread, the migration
>>> plan
> was discussed at the KIP meeting, and not much on the mailing list
 itself.
> 
> My question about the migration plan was same as Guozhang wrote: The
>>> case
> when an upgraded broker receives an old producer request. The
>>> proposal is
> for the broker to fill in the timestamp field with the current time at
 the
> broker. Cons: it goes against the definition of CreateTime type of the
> timestamp (we are "over-writing" it at the broker). Pros: It looks
>>> like
> most of the use-cases would actually want that behavior, because
 otherwise
> timestamp is useless and they will need to support an old,
>>> pre-timestamp,
> behavior. E.g., if we modify log retention policy to 

[jira] [Updated] (KAFKA-3046) add ByteBuffer Serializer

2015-12-28 Thread Xin Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Wang updated KAFKA-3046:

Description: 
ByteBuffer is widely used in many scenarios (e.g., storm-sql can specify Kafka 
as an external data source, where ByteBuffer can be used for the value 
serializer). Adding a ByteBuffer Serializer officially will be convenient for 
users.

  was:ByteBuffer is widely used in many scenarios. Adding ByteBuffer 
Serializer officially will be convenient for users to use.
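The core of such a serializer is just emitting the buffer's remaining bytes as 
a byte[]. A minimal sketch (shown as a plain static method rather than an 
implementation of Kafka's Serializer interface, so the example stands alone; 
the class name is hypothetical):

```java
import java.nio.ByteBuffer;

// Sketch of what a ByteBuffer value serializer might do: copy the bytes
// between position and limit into a fresh byte[], without disturbing the
// caller's buffer state.
public class ByteBufferSerializerSketch {
    public static byte[] serialize(ByteBuffer data) {
        if (data == null) {
            return null; // null payloads pass through, matching serializer conventions
        }
        ByteBuffer ro = data.asReadOnlyBuffer(); // leave the caller's position untouched
        byte[] out = new byte[ro.remaining()];
        ro.get(out);
        return out;
    }
}
```

A real implementation would wrap this in the Serializer interface's 
serialize(topic, data) method (and may be able to avoid the copy when the 
buffer is array-backed), but the byte extraction above is the essential step.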







[jira] [Commented] (KAFKA-3044) Consumer.poll doesnot return messages when poll interval is less

2015-12-28 Thread Praveen Devarao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073597#comment-15073597
 ] 

Praveen Devarao commented on KAFKA-3044:


Hi [~guozhang] and [~jkreps]

OK.

Also, could we recommend a value for timeout [based on some assumed factors]?

Thanks

Praveen






[jira] [Updated] (KAFKA-3046) add ByteBuffer Serializer

2015-12-28 Thread Xin Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Wang updated KAFKA-3046:

Description: ByteBuffer is widely used in many scenarios. Adding ByteBuffer 
Serializer officially will be convenient for users to use.  (was: 
ByteBuffer is widely used in many scenarios. Adding will be convenient for 
users to use)



