[jira] [Assigned] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2017-01-31 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3940:
-

Assignee: (was: Ishita Mandhan)

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>  Labels: newbie, reliability
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-3522) Consider adding version information into rocksDB storage format

2017-01-31 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3522:
-

Assignee: (was: Ishita Mandhan)

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for when we change the data format in the future, 
> and hence having some version info stored along with the storage file / 
> directory would be useful.
> This information could even be kept outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]





[jira] [Commented] (KAFKA-3522) Consider adding version information into rocksDB storage format

2017-01-31 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847779#comment-15847779
 ] 

Ishita Mandhan commented on KAFKA-3522:
---

[~ewencp] I don't think I'll be able to work on this anymore; I hope it gets 
picked up soon by someone else!

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: architecture
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for when we change the data format in the future, 
> and hence having some version info stored along with the storage file / 
> directory would be useful.
> This information could even be kept outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]





[jira] [Assigned] (KAFKA-2544) Replication tools wiki page needs to be updated

2017-01-31 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-2544:
-

Assignee: (was: Ishita Mandhan)

> Replication tools wiki page needs to be updated
> ---
>
> Key: KAFKA-2544
> URL: https://issues.apache.org/jira/browse/KAFKA-2544
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Priority: Minor
>  Labels: documentation, newbie
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is 
> outdated; it mentions tools which have been heavily refactored or replaced by 
> other tools, e.g. the add partition tool, the list/create topics tools, etc.
> Please have the replication tools wiki page updated.





[jira] [Assigned] (KAFKA-4320) Log compaction docs update

2017-01-31 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-4320:
-

Assignee: (was: Ishita Mandhan)

> Log compaction docs update
> --
>
> Key: KAFKA-4320
> URL: https://issues.apache.org/jira/browse/KAFKA-4320
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dustin Cote
>Priority: Minor
>  Labels: newbie
>
> The log compaction docs are out of date. At the least, they say that log 
> compaction is disabled by default, which is not true as of 0.9.0.1. The whole 
> section probably needs a once-over to make sure it's in line with what is 
> currently there. This is the section:
> [http://kafka.apache.org/documentation#design_compactionconfig]





[jira] [Assigned] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2017-01-27 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-4220:
-

Assignee: Vahid Hashemian  (was: Ishita Mandhan)

> Clean up & provide better error message when incorrect argument types are 
> provided in the command line client
> -
>
> Key: KAFKA-4220
> URL: https://issues.apache.org/jira/browse/KAFKA-4220
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ishita Mandhan
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When the argument provided to a command line statement is not of the right 
> type, a stack trace is returned. This can be replaced by a cleaner error 
> message that is easier for the user to read and understand.
> For example-
> bin/kafka-console-consumer.sh --new-consumer --bootstrap-server 
> localhost:9092 --topic foo --timeout-ms abc
> 'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
> number.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2017-01-27 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-2857:
-

Assignee: Vahid Hashemian  (was: Ishita Mandhan)

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Vahid Hashemian
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist, 
> because it could also be thrown if all replicas of an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish the case where a coordinator 
> is not available from the case where a coordinator doesn't exist.





[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-11-28 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701279#comment-15701279
 ] 

Ishita Mandhan commented on KAFKA-3940:
---

[~ijuma] I had a quick question about creating a second PR for this jira as I 
don't know how two PRs that resolve the same jira should be formatted. The 
change in the current PR is to convert dir.mkdirs() to 
Files.createDirectories(dir.toPath). Once this goes in, I want to create a PR 
to convert the file.deletes() to Files.delete(file.toPath). Is there a set 
format for the title that I should follow since it will be the second PR 
resolving the same jira issue? Should I just do something like "KAFKA-3940 Part 
2: Log should check the return value of dir.mkdirs()"? 

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-01 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626723#comment-15626723
 ] 

Ishita Mandhan commented on KAFKA-4307:
---

Sure, take your time! Was just checking to see if it was up for grabs :)

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?





[jira] [Assigned] (KAFKA-2544) Replication tools wiki page needs to be updated

2016-11-01 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-2544:
-

Assignee: Ishita Mandhan

> Replication tools wiki page needs to be updated
> ---
>
> Key: KAFKA-2544
> URL: https://issues.apache.org/jira/browse/KAFKA-2544
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Ishita Mandhan
>Priority: Minor
>  Labels: documentation, newbie
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is 
> outdated; it mentions tools which have been heavily refactored or replaced by 
> other tools, e.g. the add partition tool, the list/create topics tools, etc.
> Please have the replication tools wiki page updated.





[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-01 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626328#comment-15626328
 ] 

Ishita Mandhan commented on KAFKA-4307:
---

Are you working on this, [~manasvigupta]?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?





[jira] [Assigned] (KAFKA-4320) Log compaction docs update

2016-11-01 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-4320:
-

Assignee: Ishita Mandhan

> Log compaction docs update
> --
>
> Key: KAFKA-4320
> URL: https://issues.apache.org/jira/browse/KAFKA-4320
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dustin Cote
>Assignee: Ishita Mandhan
>Priority: Minor
>  Labels: newbie
>
> The log compaction docs are out of date. At the least, they say that log 
> compaction is disabled by default, which is not true as of 0.9.0.1. The whole 
> section probably needs a once-over to make sure it's in line with what is 
> currently there. This is the section:
> [http://kafka.apache.org/documentation#design_compactionconfig]





[jira] [Work started] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2016-09-26 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4220 started by Ishita Mandhan.
-
> Clean up & provide better error message when incorrect argument types are 
> provided in the command line client
> -
>
> Key: KAFKA-4220
> URL: https://issues.apache.org/jira/browse/KAFKA-4220
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ishita Mandhan
>Assignee: Ishita Mandhan
>Priority: Minor
>
> When the argument provided to a command line statement is not of the right 
> type, a stack trace is returned. This can be replaced by a cleaner error 
> message that is easier for the user to read and understand.
> For example-
> bin/kafka-console-consumer.sh --new-consumer --bootstrap-server 
> localhost:9092 --topic foo --timeout-ms abc
> 'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
> number.





[jira] [Updated] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2016-09-25 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan updated KAFKA-4220:
--
Description: 
When the argument provided to a command line statement is not of the right 
type, a stack trace is returned. This can be replaced by a cleaner error 
message that is easier for the user to read and understand.

For example-
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 
--topic foo --timeout-ms abc
'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
number.
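The kind of check being proposed can be sketched as follows. This is a hypothetical standalone helper (the class and method names are illustrative, not part of Kafka's actual console tools):

```java
// Hypothetical sketch of the proposed behavior: catch the parse failure
// and print a clean message instead of letting a stack trace escape.
public class ArgCheck {

    // Returns a human-readable result for a --timeout-ms value.
    static String checkTimeout(String value) {
        try {
            long ms = Long.parseLong(value);
            return "timeout-ms = " + ms;
        } catch (NumberFormatException e) {
            return "'" + value + "' is an incorrect type for the --timeout-ms "
                    + "parameter, which expects a number.";
        }
    }

    public static void main(String[] args) {
        // prints: 'abc' is an incorrect type for the --timeout-ms parameter, which expects a number.
        System.out.println(checkTimeout("abc"));
    }
}
```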

> Clean up & provide better error message when incorrect argument types are 
> provided in the command line client
> -
>
> Key: KAFKA-4220
> URL: https://issues.apache.org/jira/browse/KAFKA-4220
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ishita Mandhan
>Assignee: Ishita Mandhan
>Priority: Minor
>
> When the argument provided to a command line statement is not of the right 
> type, a stack trace is returned. This can be replaced by a cleaner error 
> message that is easier for the user to read and understand.
> For example-
> bin/kafka-console-consumer.sh --new-consumer --bootstrap-server 
> localhost:9092 --topic foo --timeout-ms abc
> 'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
> number.





[jira] [Created] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2016-09-25 Thread Ishita Mandhan (JIRA)
Ishita Mandhan created KAFKA-4220:
-

 Summary: Clean up & provide better error message when incorrect 
argument types are provided in the command line client
 Key: KAFKA-4220
 URL: https://issues.apache.org/jira/browse/KAFKA-4220
 Project: Kafka
  Issue Type: Bug
Reporter: Ishita Mandhan
Assignee: Ishita Mandhan
Priority: Minor








[jira] [Work started] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-08-25 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3940 started by Ishita Mandhan.
-
> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Commented] (KAFKA-3522) Consider adding version information into rocksDB storage format

2016-08-22 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431288#comment-15431288
 ] 

Ishita Mandhan commented on KAFKA-3522:
---

Just wanted to check in to see if now is a good time to bring this back up. I'm 
looking for a small feature in streams to work on, so if there's some other 
minor feature you'd rather I divert my attention to at the moment, that's fine 
too :)

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for when we change the data format in the future, 
> and hence having some version info stored along with the storage file / 
> directory would be useful.
> This information could even be kept outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]





[jira] [Commented] (KAFKA-3929) Add prefix for underlying clients configs in StreamConfig

2016-07-21 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15388216#comment-15388216
 ] 

Ishita Mandhan commented on KAFKA-3929:
---

I've started but haven't made much progress on it, so if the fix needs to be 
done ASAP, you can pick it up.

> Add prefix for underlying clients configs in StreamConfig
> -
>
> Key: KAFKA-3929
> URL: https://issues.apache.org/jira/browse/KAFKA-3929
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api
>
> There are a couple of configs that have the same name for producer / consumer 
> configs, e.g. take a look at {{CommonClientConfigs}}, and also for producer / 
> consumer interceptors there are commonly named configs as well.
> This is semi-related to KAFKA-3740 since we need to add "sub-class" configs 
> for RocksDB as well, and we'd better have some prefix mechanism for such 
> hierarchical configs in general.





[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-07-19 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384663#comment-15384663
 ] 

Ishita Mandhan commented on KAFKA-3940:
---

I think we should convert dir.mkdirs() to Files.createDirectory. I have a 
patch ready that I can submit, and then you can take a look at it and share 
feedback. I'm not sure whether there's a better way to collaborate, but I'd be 
open to trying a different method.
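The difference between the two calls can be sketched as a standalone illustration (this is not the actual Log.scala change; Kafka's core is Scala, but the APIs involved are plain Java):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirsDemo {
    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "mkdirs-demo");

        // Old style: File.mkdirs() reports failure only via its boolean return
        // value. If the result is ignored and the directory was not created,
        // a later dir.listFiles() returns null, causing a confusing
        // NullPointerException far from the real cause.
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Failed to create directory " + dir);
        }

        // New style: Files.createDirectories throws an IOException with a
        // descriptive message immediately if the directory cannot be created
        // (and is a no-op if it already exists).
        Path created = Files.createDirectories(dir.toPath());
        System.out.println(Files.isDirectory(created)); // prints: true
    }
}
```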

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-07-13 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375692#comment-15375692
 ] 

Ishita Mandhan commented on KAFKA-3940:
---

Hi Jim, I got a notification of your PR because I had this bug assigned to me, 
and I was working on it myself. I reviewed your PR, and I think the fix should 
be more involved: more occurrences of dir.mkdirs() and File.delete could be 
addressed, as discussed above. What are your thoughts? Do you mind 
collaborating on this patch set?

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Commented] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-07-11 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372078#comment-15372078
 ] 

Ishita Mandhan commented on KAFKA-3940:
---

If dir is changed from File to Files here 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L78,
 several functions will not be available for the type Files and alternative 
implementations will need to be created for each of those calls. Is this what 
you meant or am I misunderstanding something?

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Assigned] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2016-07-08 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3940:
-

Assignee: Ishita Mandhan

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {





[jira] [Commented] (KAFKA-3929) Add prefix for underlying clients configs in StreamConfig

2016-07-06 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364802#comment-15364802
 ] 

Ishita Mandhan commented on KAFKA-3929:
---

So, have a function, say appConfigsWithPrefix, that is called in 
StreamsConfig's getConsumerConfigs function and is responsible for adding a 
"kafka.consumer" prefix?
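That idea can be sketched roughly as follows. appConfigsWithPrefix is the name floated above; the helper below is only an assumed illustration of the prefix-stripping side, not the actual StreamsConfig code:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixDemo {

    // Illustrative helper: select entries whose keys start with the given
    // prefix and return them with the prefix stripped, e.g.
    // "kafka.consumer.max.poll.records" -> "max.poll.records".
    static Map<String, Object> configsWithPrefix(Map<String, Object> all, String prefix) {
        Map<String, Object> out = new HashMap<>();
        for (Map.Entry<String, Object> e : all.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("kafka.consumer.max.poll.records", 100);
        props.put("kafka.producer.linger.ms", 5);
        // Only the consumer-prefixed entry survives, with the prefix removed.
        System.out.println(configsWithPrefix(props, "kafka.consumer."));
    }
}
```

A getConsumerConfigs-style method could then pass the stripped map straight to the underlying consumer, so identically named producer and consumer configs no longer collide.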

> Add prefix for underlying clients configs in StreamConfig
> -
>
> Key: KAFKA-3929
> URL: https://issues.apache.org/jira/browse/KAFKA-3929
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api
>
> There are a couple of configs that have the same name for producer / consumer 
> configs, e.g. take a look at {{CommonClientConfigs}}, and also for producer / 
> consumer interceptors there are commonly named configs as well.
> This is semi-related to KAFKA-3740 since we need to add "sub-class" configs 
> for RocksDB as well, and we'd better have some prefix mechanism for such 
> hierarchical configs in general.





[jira] [Commented] (KAFKA-3929) Add prefix for underlying clients configs in StreamConfig

2016-07-05 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363051#comment-15363051
 ] 

Ishita Mandhan commented on KAFKA-3929:
---

Are you thinking of an implementation similar to KAFKA-3740 for the 
CommonClientConfigs, where every config (or just a subset of them) gets either 
a consumer or producer prefix?

> Add prefix for underlying clients configs in StreamConfig
> -
>
> Key: KAFKA-3929
> URL: https://issues.apache.org/jira/browse/KAFKA-3929
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api
>
> There are a couple of configs that have the same name for producer / consumer 
> configs, e.g. take a look at {{CommonClientConfigs}}, and also for producer / 
> consumer interceptors there are commonly named configs as well.
> This is semi-related to KAFKA-3740 since we need to add "sub-class" configs 
> for RocksDB as well, and we'd better have some prefix mechanism for such 
> hierarchical configs in general.





[jira] [Assigned] (KAFKA-3929) Add prefix for underlying clients configs in StreamConfig

2016-07-05 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3929:
-

Assignee: Ishita Mandhan

> Add prefix for underlying clients configs in StreamConfig
> -
>
> Key: KAFKA-3929
> URL: https://issues.apache.org/jira/browse/KAFKA-3929
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api
>
> There are a couple of configs that have the same name for producer / consumer 
> configs, e.g. take a look at {{CommonClientConfigs}}, and also for producer / 
> consumer interceptors there are commonly named configs as well.
> This is semi-related to KAFKA-3740 since we need to add "sub-class" configs 
> for RocksDB as well, and we'd better have some prefix mechanism for such 
> hierarchical configs in general.





[jira] [Comment Edited] (KAFKA-3876) Transient test failure: kafka.api.PlaintextConsumerTest.testExpandingTopicSubscriptions

2016-07-05 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362993#comment-15362993
 ] 

Ishita Mandhan edited comment on KAFKA-3876 at 7/5/16 6:48 PM:
---

I saw the error in testPatternSubscription too - 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4559/


was (Author: imandhan):
I saw a similar one too - 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4559/

> Transient test failure: 
> kafka.api.PlaintextConsumerTest.testExpandingTopicSubscriptions
> ---
>
> Key: KAFKA-3876
> URL: https://issues.apache.org/jira/browse/KAFKA-3876
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> Failed in a recent build:
> {code}
> java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not 
> propagated after 5000 ms
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:771)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:812)
>   at 
> kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:240)
>   at 
> kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:239)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:239)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:80)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:60)
> {code}
> Standard out:
> {code}
> [2016-06-19 09:09:20,081] WARN Client session timed out, have not heard from 
> server in 4002ms for sessionid 0x15567ebcc160001 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,602] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160001, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:20,081] WARN Client session timed out, have not heard from 
> server in 4001ms for sessionid 0x15567ebcc160003 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,598] WARN Client session timed out, have not heard from 
> server in 4001ms for sessionid 0x15567ebcc160002 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,613] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160003, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:21,613] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160002, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:21,602] WARN Client session timed out, have not heard from 
> server in 4084ms for sessionid 0x15567ebcc16 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,615] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc16, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:11:26,866] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567edb4d10002, 

[jira] [Commented] (KAFKA-3876) Transient test failure: kafka.api.PlaintextConsumerTest.testExpandingTopicSubscriptions

2016-07-05 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362993#comment-15362993
 ] 

Ishita Mandhan commented on KAFKA-3876:
---

I saw a similar one too - 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4559/

> Transient test failure: 
> kafka.api.PlaintextConsumerTest.testExpandingTopicSubscriptions
> ---
>
> Key: KAFKA-3876
> URL: https://issues.apache.org/jira/browse/KAFKA-3876
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> Failed in a recent build:
> {code}
> java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not 
> propagated after 5000 ms
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:771)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:812)
>   at 
> kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:240)
>   at 
> kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:239)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:239)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:80)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:60)
> {code}
> Standard out:
> {code}
> [2016-06-19 09:09:20,081] WARN Client session timed out, have not heard from 
> server in 4002ms for sessionid 0x15567ebcc160001 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,602] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160001, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:20,081] WARN Client session timed out, have not heard from 
> server in 4001ms for sessionid 0x15567ebcc160003 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,598] WARN Client session timed out, have not heard from 
> server in 4001ms for sessionid 0x15567ebcc160002 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,613] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160003, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:21,613] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc160002, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:09:21,602] WARN Client session timed out, have not heard from 
> server in 4084ms for sessionid 0x15567ebcc16 
> (org.apache.zookeeper.ClientCnxn:1108)
> [2016-06-19 09:09:21,615] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567ebcc16, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-06-19 09:11:26,866] WARN caught end of stream exception 
> (org.apache.zookeeper.server.NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x15567edb4d10002, likely client has closed socket
>   at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
>   at 
> 

[jira] [Commented] (KAFKA-3522) Consider adding version information into rocksDB storage format

2016-06-27 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352172#comment-15352172
 ] 

Ishita Mandhan commented on KAFKA-3522:
---

I'd like to work on this once a consensus has been reached, and if this is 
something suitable for someone new to Kafka (I might end up asking a lot of 
questions here :) )
[~guozhang] When you say version indicator file, are you thinking of some sort 
of file that simply stores the RocksDB version number as JSON and is checked 
during startup in KTableStoreSupplier.java? Or a file that stores the various 
configurable values (currently hardcoded) in RocksDBStore.java? Or both?

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for when we change the data format in the future, 
> and hence having some version info stored along with the storage file / 
> directory would be useful.
> And this information could even live outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3522) Consider adding version information into rocksDB storage format

2016-06-23 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3522:
-

Assignee: Ishita Mandhan

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for when we change the data format in the future, 
> and hence having some version info stored along with the storage file / 
> directory would be useful.
> And this information could even live outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2016-06-23 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347448#comment-15347448
 ] 

Ishita Mandhan commented on KAFKA-2857:
---

Thanks [~hachikuji]! I just created a PR for this jira.

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ishita Mandhan
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist, 
> because it could also be thrown if all replicas for an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish if a coordinator is not 
> available from the case where a coordinator doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3897) Improve Unit Tests for New Consumer's Regex Subscription

2016-06-23 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3897:
-

Assignee: Ishita Mandhan

> Improve Unit Tests for New Consumer's Regex Subscription
> 
>
> Key: KAFKA-3897
> URL: https://issues.apache.org/jira/browse/KAFKA-3897
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, unit tests
>Affects Versions: 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Ishita Mandhan
>Priority: Minor
>
> The new consumer's unit tests do not currently do a good job of testing a 
> variety of scenarios. For example, the issue reported in KAFKA-3854 is a very 
> simple use case that is not picked by unit tests. It would be great to add a 
> number of unit tests to cover some use cases around regex subscription.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2016-06-23 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347175#comment-15347175
 ] 

Ishita Mandhan commented on KAFKA-2857:
---

So it seems like we need to check whether the offsets topic exists, and we can 
do this check right before the call to findCoordinator() in the describeGroup() 
function here - 
https://github.com/apache/kafka/blob/404b696bea58aca17fbe528aed03cb3c94516c39/core/src/main/scala/kafka/admin/AdminClient.scala#L125.
If the offsets topic doesn't exist, we can throw an exception; based on my 
understanding, this is the only check we need to resolve the jira. Does this 
seem like the right approach to take?
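As a rough illustration of the proposed ordering, here is a self-contained Java sketch. The metadata lookup is a hypothetical stand-in (a plain topic set), not the actual AdminClient API:

```java
import java.util.Collections;
import java.util.Set;

public class DescribeGroupCheck {
    // Hypothetical stand-in for a cluster metadata lookup; the real check
    // would ask the brokers whether __consumer_offsets has been created yet.
    static boolean offsetsTopicExists(Set<String> clusterTopics) {
        return clusterTopics.contains("__consumer_offsets");
    }

    // Proposed ordering: verify the offsets topic exists before calling
    // findCoordinator(), so a missing group is not reported as an
    // unavailable coordinator.
    static String describeGroup(Set<String> clusterTopics, String group) {
        if (!offsetsTopicExists(clusterTopics)) {
            return "Consumer group `" + group + "` does not exist (offsets topic not yet created).";
        }
        return "proceeding to findCoordinator() for group `" + group + "`";
    }

    public static void main(String[] args) {
        System.out.println(describeGroup(Collections.emptySet(), "foo"));
        System.out.println(describeGroup(Collections.singleton("__consumer_offsets"), "foo"));
    }
}
```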

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ishita Mandhan
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist, 
> because it could also be thrown if all replicas for an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish if a coordinator is not 
> available from the case where a coordinator doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3185) Allow users to cleanup internal data

2016-06-16 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334945#comment-15334945
 ] 

Ishita Mandhan commented on KAFKA-3185:
---

Ah okay, sounds good - I'll look for something else. Thank you!

> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently the internal data is managed completely by the Kafka Streams 
> framework and users cannot clean it up actively. This results in a bad 
> out-of-the-box user experience, especially for running demo programs, since it 
> leaves behind internal data (changelog topics, RocksDB files, etc.) that needs 
> to be cleaned up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> function call to clean up these internal data programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3185) Allow users to cleanup internal data

2016-06-16 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334860#comment-15334860
 ] 

Ishita Mandhan commented on KAFKA-3185:
---

[~guozhang] Do you think this is a bug a newbie(aka me) could work on? If not, 
could you point me to something? Thanks

> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Blocker
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently the internal data is managed completely by the Kafka Streams 
> framework and users cannot clean it up actively. This results in a bad 
> out-of-the-box user experience, especially for running demo programs, since it 
> leaves behind internal data (changelog topics, RocksDB files, etc.) that needs 
> to be cleaned up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> function call to clean up these internal data programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2016-06-10 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-2857:
-

Assignee: Ishita Mandhan

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ishita Mandhan
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist, 
> because it could also be thrown if all replicas for an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish if a coordinator is not 
> available from the case where a coordinator doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2016-06-10 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15325152#comment-15325152
 ] 

Ishita Mandhan commented on KAFKA-2857:
---

Working on this with [~vahid], and we aren't sure what the part about all 
replicas for an offset topic partition being down means. If all 
__consumer_offsets partitions are down, wouldn't all the brokers be down as 
well (meaning that all brokers keep the same copy of __consumer_offsets)?

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist, 
> because it could also be thrown if all replicas for an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish if a coordinator is not 
> available from the case where a coordinator doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3762) Log.loadSegments() should log the message in exception

2016-06-07 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319487#comment-15319487
 ] 

Ishita Mandhan commented on KAFKA-3762:
---

[~junrao] Is this what you had in mind? ^
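For illustration, a minimal Java sketch of folding the exception's own message into the warning, so the root cause of the sanity-check failure is preserved. The names and message format are illustrative only - the real code lives in Scala in Log.loadSegments():

```java
public class IndexWarning {
    // Build the warning string from both the exception message and the path,
    // instead of dropping the IllegalArgumentException's message on the floor.
    static String corruptedIndexWarning(String indexPath, Exception e) {
        return String.format(
            "Found a corrupted index file due to %s, %s, deleting and rebuilding index...",
            e.getMessage(), indexPath);
    }

    public static void main(String[] args) {
        Exception cause = new IllegalArgumentException("Corrupt index found");
        System.out.println(corruptedIndexWarning("/tmp/00000000000000000000.index", cause));
    }
}
```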

> Log.loadSegments() should log the message in exception
> --
>
> Key: KAFKA-3762
> URL: https://issues.apache.org/jira/browse/KAFKA-3762
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we have the following code. It would be useful to log 
> the error message in IllegalArgumentException.
> {code}
> if (indexFile.exists()) {
>   try {
>     segment.index.sanityCheck()
>   } catch {
>     case e: java.lang.IllegalArgumentException =>
>       warn("Found a corrupted index file, %s, deleting and rebuilding index..."
>         .format(indexFile.getAbsolutePath))
>       indexFile.delete()
>       segment.recover(config.maxMessageSize)
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3755) tightening the offset check in ReplicaFetcherThread

2016-06-01 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311436#comment-15311436
 ] 

Ishita Mandhan commented on KAFKA-3755:
---

[~junrao] What would be the easiest way to produce an error to test it out?

> tightening the offset check in ReplicaFetcherThread
> ---
>
> Key: KAFKA-3755
> URL: https://issues.apache.org/jira/browse/KAFKA-3755
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>
> Currently, in ReplicaFetcherThread.processPartitionData(), we have the 
> following code to make sure that the fetchOffset matches the log end offset.
> {code}
> if (fetchOffset != replica.logEndOffset.messageOffset)
>   throw new RuntimeException("Offset mismatch for partition %s: fetched offset = %d, log end offset = %d."
>     .format(topicAndPartition, fetchOffset, replica.logEndOffset.messageOffset))
> {code}
> It would be useful to further assert that the first offset in the messageSet 
> to be appended to the log is >= the log end offset.
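The tightened check could be sketched as follows, in plain Java with illustrative types (not Kafka's Scala internals, where this logic would live in ReplicaFetcherThread.processPartitionData()):

```java
public class FetchOffsetCheck {
    static void validate(long fetchOffset, long logEndOffset, long firstOffsetInBatch) {
        // Existing check: the fetch must have started exactly at the log end offset.
        if (fetchOffset != logEndOffset)
            throw new RuntimeException(String.format(
                "Offset mismatch: fetched offset = %d, log end offset = %d.",
                fetchOffset, logEndOffset));
        // Proposed tightening: the first offset in the message set to be
        // appended must not fall below the log end offset, or the append
        // would overlap already-written entries.
        if (firstOffsetInBatch < logEndOffset)
            throw new RuntimeException(String.format(
                "First offset %d in fetched message set is smaller than log end offset %d.",
                firstOffsetInBatch, logEndOffset));
    }

    public static void main(String[] args) {
        validate(5, 5, 5); // passes: the batch starts at the log end offset
    }
}
```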



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3762) Log.loadSegments() should log the message in exception

2016-05-27 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3762:
-

Assignee: Ishita Mandhan

> Log.loadSegments() should log the message in exception
> --
>
> Key: KAFKA-3762
> URL: https://issues.apache.org/jira/browse/KAFKA-3762
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>  Labels: newbie
>
> In Log.loadSegments(), we have the following code. It would be useful to log 
> the error message in IllegalArgumentException.
> {code}
> if (indexFile.exists()) {
>   try {
>     segment.index.sanityCheck()
>   } catch {
>     case e: java.lang.IllegalArgumentException =>
>       warn("Found a corrupted index file, %s, deleting and rebuilding index..."
>         .format(indexFile.getAbsolutePath))
>       indexFile.delete()
>       segment.recover(config.maxMessageSize)
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3755) tightening the offset check in ReplicaFetcherThread

2016-05-25 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3755:
-

Assignee: Ishita Mandhan

> tightening the offset check in ReplicaFetcherThread
> ---
>
> Key: KAFKA-3755
> URL: https://issues.apache.org/jira/browse/KAFKA-3755
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Ishita Mandhan
>
> Currently, in ReplicaFetcherThread.processPartitionData(), we have the 
> following code to make sure that the fetchOffset matches the log end offset.
> {code}
> if (fetchOffset != replica.logEndOffset.messageOffset)
>   throw new RuntimeException("Offset mismatch for partition %s: fetched offset = %d, log end offset = %d."
>     .format(topicAndPartition, fetchOffset, replica.logEndOffset.messageOffset))
> {code}
> It would be useful to further assert that the first offset in the messageSet 
> to be appended to the log is >= the log end offset.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-05-25 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300716#comment-15300716
 ] 

Ishita Mandhan commented on KAFKA-3158:
---

Thanks [~hachikuji]! I just submitted a PR for it

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Ishita Mandhan
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.
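A sketch of state-specific messages, assuming the group state from the DescribeGroup response is available at this point; the state names here are an illustrative subset, not the full protocol enumeration:

```java
public class GroupStateMessage {
    // Subset of group states a DescribeGroup response can report (illustrative).
    enum GroupState { DEAD, PREPARING_REBALANCE, STABLE }

    // One message per state instead of the ambiguous
    // "does not exist or is rebalancing".
    static String message(String group, GroupState state) {
        switch (state) {
            case DEAD:
                return "Consumer group `" + group + "` does not exist.";
            case PREPARING_REBALANCE:
                return "Consumer group `" + group + "` is rebalancing.";
            default:
                return "Consumer group `" + group + "` is in state " + state + ".";
        }
    }

    public static void main(String[] args) {
        System.out.println(message("foo", GroupState.DEAD));
        System.out.println(message("foo", GroupState.PREPARING_REBALANCE));
    }
}
```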



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-05-23 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296877#comment-15296877
 ] 

Ishita Mandhan commented on KAFKA-3158:
---

[~jasong35] How can I reproduce the error? I'm having trouble killing a 
consumer group and running the describe statement at the same time. I've tried 
using watch and pipe with no luck.

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Ishita Mandhan
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3511) Provide built-in aggregators sum() and avg() in Kafka Streams DSL

2016-05-17 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286487#comment-15286487
 ] 

Ishita Mandhan commented on KAFKA-3511:
---

Sure, [~enothereska], I could definitely use some help. I'm just getting back 
from vacation so I haven't done much yet; you can get started with the 
template/initial functions.

> Provide built-in aggregators sum() and avg() in Kafka Streams DSL
> -
>
> Key: KAFKA-3511
> URL: https://issues.apache.org/jira/browse/KAFKA-3511
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Currently we only have one built-in aggregate function count() in the Kafka 
> Streams DSL, but we want to add more aggregation functions like sum() and 
> avg().
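To illustrate the accumulator shapes involved, here is a self-contained Java sketch of sum and avg over a batch of values. The real DSL operators would fold these incrementally per key inside a state store; the names here are illustrative, not the Streams API:

```java
import java.util.Arrays;
import java.util.List;

public class BuiltInAggregators {
    // sum() folds directly into a single long accumulator.
    static long sum(List<Long> values) {
        long acc = 0L;
        for (long v : values) acc += v;
        return acc;
    }

    // avg() needs a two-part accumulator {sum, count}, which is why it cannot
    // be expressed as a plain reduce over the values alone.
    static double avg(List<Long> values) {
        long sum = 0L, count = 0L;
        for (long v : values) { sum += v; count++; }
        return count == 0 ? 0.0 : (double) sum / count;
    }

    public static void main(String[] args) {
        List<Long> values = Arrays.asList(1L, 2L, 3L);
        System.out.println(sum(values)); // prints 6
        System.out.println(avg(values)); // prints 2.0
    }
}
```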



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-04-19 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248867#comment-15248867
 ] 

Ishita Mandhan commented on KAFKA-3158:
---

Yes, I'd like to give it a shot. Assigning it to myself. Thanks [~ijuma]

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-04-19 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3158:
-

Assignee: Ishita Mandhan

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Ishita Mandhan
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-04-19 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248852#comment-15248852
 ] 

Ishita Mandhan edited comment on KAFKA-3158 at 4/19/16 10:57 PM:
-

Neha, are you working on this bug? [~nehanarkhede]


was (Author: imandhan):
Neha, are you working on this bug?

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Neha Narkhede
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group
> state is exposed in the DescribeGroups response, we should be able to give a
> different message for each case.





[jira] [Commented] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-04-19 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248852#comment-15248852
 ] 

Ishita Mandhan commented on KAFKA-3158:
---

Neha, are you working on this bug?

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Neha Narkhede
>Priority: Minor
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group
> state is exposed in the DescribeGroups response, we should be able to give a
> different message for each case.





[jira] [Assigned] (KAFKA-3511) Provide built-in aggregators sum() and avg() in Kafka Streams DSL

2016-04-13 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-3511:
-

Assignee: Ishita Mandhan

> Provide built-in aggregators sum() and avg() in Kafka Streams DSL
> -
>
> Key: KAFKA-3511
> URL: https://issues.apache.org/jira/browse/KAFKA-3511
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: api, newbie
> Fix For: 0.10.1.0
>
>
> Currently we only have one built-in aggregate function count() in the Kafka 
> Streams DSL, but we want to add more aggregation functions like sum() and 
> avg().
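Each built-in aggregate reduces to an initializer plus a per-record adder, which is the shape of the Initializer/Aggregator pair that the Streams DSL's aggregate operation expects. A minimal plain-Java sketch of the logic that sum() and avg() would encapsulate — illustrative only; the method names are assumptions, not the eventual DSL API:

```java
import java.util.List;

public class BuiltInAggregates {

    // sum(): the initializer produces 0, and the adder folds each
    // record's value into the running aggregate.
    static long sum(List<Long> values) {
        long agg = 0L;                      // initializer
        for (long v : values) agg += v;     // adder, applied once per record
        return agg;
    }

    // avg(): needs a running count alongside the running sum, so its
    // intermediate aggregate is effectively a (sum, count) pair.
    static double avg(List<Long> values) {
        long sum = 0L, count = 0L;
        for (long v : values) { sum += v; count++; }
        return count == 0 ? 0.0 : (double) sum / count;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1L, 2L, 3L)));   // 6
        System.out.println(avg(List.of(1L, 2L, 3L)));   // 2.0
    }
}
```

Note the design difference: sum() aggregates in the value's own type, while avg() cannot be expressed as a single running value and needs a composite intermediate state, which is why it is the more interesting of the two to add.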


