[GitHub] kafka pull request: update the command to run a particular test me...

2015-10-16 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/326

update the command to run a particular test method in readme
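For context, Gradle's built-in test filtering is what such a command relies
on; an illustrative form (test name taken from the project's own suite; the
actual README text is whatever this PR says) would be:

    $ ./gradlew core:test --tests kafka.log.LogTest.testTruncateTo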



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka minor-readme

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/326.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #326


commit 98d3a3e30692ae1805567892e0cc17819517c283
Author: Dong Lin 
Date:   2015-10-17T06:44:19Z

update the command to run a particular test method in readme




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #36

2015-10-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2515: Handle oversized messages properly in new consumer

[wangguoz] KAFKA-2397: add leave group request to force coordinator trigger

[wangguoz] KAFKA-2665: Add images to code github

--
[...truncated 4250 lines...]

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testNormalLifecycle 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testMissingTopic PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note:  uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

org.apache.kafka.copycat.json.JsonConverterTest > longToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToJsonConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
nullSchemaAndMapNonStringKeysToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTe

Build failed in Jenkins: kafka-trunk-jdk7 #696

2015-10-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2397: add leave group request to force coordinator trigger

[wangguoz] KAFKA-2665: Add images to code github

--
[...truncated 322 lines...]
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:javadoc
:kafka-trunk-jdk7:core:javadoc
cache taskArtifacts.bin () is corrupt. Discarding.
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element '' does not exist.
[ant:scaladoc] :277: warning: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] :278: warning: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 14 feature warning(s); re-run with -feature for details
[ant:scaladoc] :72: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] :32: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] :137: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] :120: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] :97: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] :152: warning: Could not find any member to link for "java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava
Note:  uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:clients:javadoc
:kafka-trunk-jdk7:log4j-appender:compileJava
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes
:kafka-trunk-jdk7:log4j-appender:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^


Re: [DISCUSS] KIP-37 - Add namespaces in Kafka

2015-10-16 Thread Ashish Singh
On Thu, Oct 15, 2015 at 1:30 PM, Jiangjie Qin wrote:

> Hey Jay,
>
> If we allow the consumer to subscribe to /*/my-event, does that mean we
> allow the consumer to consume across namespaces?

That is the idea. If a user has permissions then yes, he should be able to
consume from as many namespaces as he wants.


> In that case it seems not "hierarchical" but more like name-field filtering,
> i.e. the user can choose to consume from topics where datacenter={x,y},
> topic_name={my-topic1,my-topic2}. Am I understanding right?
>
I think it is still hierarchical, though with possible filtering (as you
said).
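
For reference, a minimal sketch of what that kind of cross-namespace
subscription could look like with the new consumer's existing regex
subscription, assuming namespaces were encoded as topic-name prefixes like
"chicago-datacenter.my-event" (all names illustrative):

    import java.util.Collection;
    import java.util.Properties;
    import java.util.regex.Pattern;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class AggregateFeedConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "aggregate-feed");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            // The analogue of subscribing to /*/my-event: match "my-event"
            // under any top-level "namespace" prefix.
            consumer.subscribe(Pattern.compile(".*\\.my-event"),
                new ConsumerRebalanceListener() {
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) { }
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                });
        }
    }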

>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Wed, Oct 14, 2015 at 12:49 PM, Jay Kreps  wrote:
>
> > Hey Jason,
> >
> > I actually think this is one of the advantages. The problem we have today
> > is that you can't really do bidirectional replication between clusters
> > because it would actually be a feedback loop.
> >
> > So the intended use would be that you would have a structure where the
> > top-level directory was DIFFERENT but the topic names were the same, so if
> > you maintain
> >   /chicago-datacenter/actual-topics
> >   /oregon-datacenter/actual-topics
> >   etc.
> > Then you replicate
> >   /chicago-datacenter/* => /oregon-datacenter
> > and
> >   /oregon-datacenter/* => /chicago-datacenter
> >
> > People who want the aggregate feed subscribe to /*/my-event.
> >
> > The nice thing about this is it gives a unified namespace across all
> > locations.
> >
> > Basically exactly what we do now but you no longer need to add new
> > clusters to get the namespacing.
> >
> > -Jay
> >
> >
> > On Wed, Oct 14, 2015 at 11:24 AM, Jason Gustafson wrote:
> >
> > > Hey Ashish, thanks for the write-up. I think having a namespace
> > > capability is a useful feature for Kafka, in particular with the
> > > addition of the authorization layer. I probably prefer Jay's
> > > hierarchical approach if we're going to embed the namespace in the
> > > topic name since it seems more general. That said, one advantage of
> > > having a namespace independent of the topic name is that it simplifies
> > > replication between namespaces a bit since you don't have to parse and
> > > rewrite topic names. Assuming that hierarchical topics will happen
> > > eventually anyway, I imagine a common pattern would be to preserve the
> > > same directory structure in multiple namespaces, so having an easy
> > > mechanism for applications to switch between them would be nice. The
> > > namespace is kind of analogous to a chroot in this case. Of course you
> > > can achieve the same thing by having a configurable topic prefix, just
> > > you have to do all the topic rewriting, which I'm guessing will be a
> > > little annoying to implement in all of the clients and tools. However,
> > > the tradeoff (as you mention in the KIP) is that all request schemas
> > > have to be updated, which is also annoying.
> > >
> > > -Jason
> > >
> > > On Wed, Oct 14, 2015 at 12:03 AM, Ashish Singh wrote:
> > >
> > > > On Mon, Oct 12, 2015 at 7:37 PM, Gwen Shapira wrote:
> > > >
> > > > > This works really nicely from the consumer side, but what about the
> > > > > producer? If there are no more topics, do we allow producing to a
> > > > > directory and have the Partitioner hash-partition messages between
> > > > > all partitions in the multiple levels in a directory?
> > > > >
> > > > Good point.
> > > >
> > > > I am personally in favor of maintaining current behavior for the
> > > > producer, i.e., letting users produce only to a topic. This is
> > > > different for consumers, where the suggested behavior is in line with
> > > > current behavior: one can use regex subscription to achieve the same
> > > > even today.
> > > >
> > > > >
> > > > > Also, I think we want to preserve the consumer terminology of
> > > > > "subscribe" to topics / directories, but "assign" partitions - since
> > > > > the consumer behavior is different in those cases.
> > > > >
> > > > > On Mon, Oct 12, 2015 at 7:16 PM, Jay Kreps wrote:
> > > > >
> > > > > > Okay this is similar to what I think we have talked about before.
> > > > > > Let me elaborate on the idea that I think has been floating
> > > > > > around--it's pretty similar with a few differences.
> > > > > >
> > > > > > I think what you are calling the "default namespace" is basically
> > > > > > what I would call the "current working directory" with paths not
> > > > > > beginning with '/' being interpreted relative to this directory as
> > > > > > in the fs.
> > > > > >
> > > > > > One thing you have to work out is what levels in this hierarchy
> > > > > > you can actually subscribe to. I think you are assuming only what
> > > > > > we currently consider a "topic", i.e. the first level of
> > > > > > directories but not the partitions or parent dirs, would be
> > > > > > subscribable. If you think about it,

[jira] [Created] (KAFKA-2668) Add a metric that records the total number of metrics

2015-10-16 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-2668:
-

 Summary: Add a metric that records the total number of metrics
 Key: KAFKA-2668
 URL: https://issues.apache.org/jira/browse/KAFKA-2668
 Project: Kafka
  Issue Type: Improvement
Reporter: Joel Koshy
 Fix For: 0.9.1


Sounds recursive and weird, but this would have been useful while debugging 
KAFKA-2664.
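
A rough sketch of how this could look with the common metrics library (metric
and group names illustrative, not a committed design):

{code}
import java.util.HashMap;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Measurable;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;

public class MetricCountGauge {
    // Registers a gauge whose value is the size of the metrics registry itself.
    public static void register(final Metrics metrics) {
        MetricName name = new MetricName("metric-count", "kafka-metrics-count",
            "total number of registered metrics", new HashMap<String, String>());
        metrics.addMetric(name, new Measurable() {
            public double measure(MetricConfig config, long now) {
                return metrics.metrics().size();
            }
        });
    }
}
{code}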



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2665) Docs: Images that are part of the documentation are not part of the code github

2015-10-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2665.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 325
[https://github.com/apache/kafka/pull/325]

> Docs: Images that are part of the documentation are not part of the code 
> github
> ---
>
> Key: KAFKA-2665
> URL: https://issues.apache.org/jira/browse/KAFKA-2665
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> This means that:
> 1. We don't include them in the docs release
> 2. It's awkward to modify them or add new documentation with images
> I suggest we store the images under docs/images.
> This also means that every version of the docs in the site (starting at 
> 0.9.0.0) will have its own images directory (otherwise we can't safely modify 
> them if the architecture changes)





[jira] [Commented] (KAFKA-2665) Docs: Images that are part of the documentation are not part of the code github

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961603#comment-14961603
 ] 

ASF GitHub Bot commented on KAFKA-2665:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/325


> Docs: Images that are part of the documentation are not part of the code 
> github
> ---
>
> Key: KAFKA-2665
> URL: https://issues.apache.org/jira/browse/KAFKA-2665
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> This means that:
> 1. We don't include them in the docs release
> 2. It's awkward to modify them or add new documentation with images
> I suggest we store the images under docs/images.
> This also means that every version of the docs in the site (starting at 
> 0.9.0.0) will have its own images directory (otherwise we can't safely modify 
> them if the architecture changes)





[GitHub] kafka pull request: KAFKA-2665: Docs: Images that are part of the ...

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/325




[jira] [Updated] (KAFKA-2665) Docs: Images that are part of the documentation are not part of the code github

2015-10-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2665:
-
Assignee: Gwen Shapira

> Docs: Images that are part of the documentation are not part of the code 
> github
> ---
>
> Key: KAFKA-2665
> URL: https://issues.apache.org/jira/browse/KAFKA-2665
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> This means that:
> 1. We don't include them in the docs release
> 2. It's awkward to modify them or add new documentation with images
> I suggest we store the images under docs/images.
> This also means that every version of the docs in the site (starting at 
> 0.9.0.0) will have its own images directory (otherwise we can't safely modify 
> them if the architecture changes)





[jira] [Updated] (KAFKA-2397) leave group request

2015-10-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2397:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 103
[https://github.com/apache/kafka/pull/103]

> leave group request
> ---
>
> Key: KAFKA-2397
> URL: https://issues.apache.org/jira/browse/KAFKA-2397
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Let's say every consumer in a group has session timeout s. Currently, if a 
> consumer leaves the group, the worst case time to stabilize the group is 2s 
> (s to detect the consumer failure + s for the rebalance window). If a 
> consumer instead can declare they are leaving the group, the worst case time 
> to stabilize the group would just be the s associated with the rebalance 
> window.
> This is a low priority optimization!





[jira] [Commented] (KAFKA-2664) Adding a new metric with several pre-existing metrics is very expensive

2015-10-16 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961595#comment-14961595
 ] 

Joel Koshy commented on KAFKA-2664:
---

[~gwenshap] in general yes it could, but if we did register per-connection 
metrics they would be unlikely to cause as much of an issue as per-client-id 
metrics with clients that improperly generate a new client-id for every 
reconnect. This is because you would (typically) have on the order of a few 
hundred or low thousands of connection-ids; and once those have been 
registered you wouldn't need to add any more, even if many of those clients 
reconnect frequently. That said, this is currently disabled (i.e., we don't 
register per-connection metrics) in the server-side selector.

bq. 1. Can you specify which git-hash you reverted to?

The version we rolled back to does include KAFKA-1928, if that's what you are 
asking, and multi-port support as well; but since those per-connection metrics 
are disabled, it is probably irrelevant here.

bq. 2. Did you profile the connection? Or is this an educated guess of where 
time went?

I forgot to mention this above, but after the above episode, when a mild 
suspicion fell on quota metrics, I put together a separate stress test for 
quota metrics - the easiest way to observe this is to synthetically call 
{{QuotaManager.recordAndMaybeThrottle}} in a loop and profile it. Most of the 
time is spent in the copy-on-write map and map resizes. So yes, it was an 
educated guess until today, when we deliberately reproduced this in production 
and attached a profiler to the broker to verify that the higher local times 
were due to the creation of per-client-id quota metrics.
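
To make the copy-on-write cost concrete, here is a hedged stand-alone sketch
(not the actual stress test): every write copies the entire map, so
registering one more client-id against ~50k existing entries costs a full
50k-entry copy, and a burst of such registrations adds up fast.

{code}
import java.util.HashMap;
import java.util.Map;

public class CopyOnWriteCost {
    // Mimics a copy-on-write put: every write copies the whole map.
    static <K, V> Map<K, V> cowPut(Map<K, V> current, K key, V value) {
        Map<K, V> copy = new HashMap<>(current);
        copy.put(key, value);
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Integer> metrics = new HashMap<>();
        for (int i = 0; i < 50000; i++)
            metrics.put("client-" + i, i); // pre-existing client-ids (plain puts for setup)

        long start = System.nanoTime();
        metrics = cowPut(metrics, "new-client", 0); // one registration copies all 50k entries
        System.out.printf("single copy-on-write insert took %.2f ms%n",
            (System.nanoTime() - start) / 1e6);
    }
}
{code}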

> Adding a new metric with several pre-existing metrics is very expensive
> ---
>
> Key: KAFKA-2664
> URL: https://issues.apache.org/jira/browse/KAFKA-2664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Fix For: 0.9.0.1
>
>
> I know the summary sounds expected, but we recently ran into a socket server 
> request queue backup that I suspect was caused by a combination of improperly 
> implemented applications that reconnect with a different (random) client-id 
> each time; and the fact that for quotas we now register a new quota 
> metric-set for each client-id.
> So here is what happened: a broker went down and a handful of other brokers 
> started seeing queue times go up significantly. This caused the request 
> queue to back up, which caused socket timeouts and a further deluge of 
> reconnects. The only way we could get out of this was to fire-wall the broker 
> and downgrade to a version without quotas (or I think it would have worked to 
> just restart the broker).
> My guess is that there were a ton of pre-existing client-id metrics. I don’t 
> know for sure but I’m basing that on the fact that there were several new 
> unique client-ids showing up in the public access logs and request local 
> times for fetches started going up inexplicably. (It would have been useful 
> to have a metric for the number of metrics.) So it turns out that in the 
> above scenario (with say 50k pre-existing client-ids), the avg local time for 
> fetch can go up to the order of 50-100ms (at least with tests on a linux box) 
> largely due to the time taken to create new metrics; and that’s because we 
> use a copy-on-write map underneath. If you have enough (say, hundreds) of 
> clients re-connecting at the same time with new client-ids, that can cause 
> the request queues to start backing up and the overall queuing system to 
> become unstable; and the line starts to spill out of the building.
> I think this is a fairly new scenario with quotas - i.e., I don't think the 
> past per-X metrics (per-topic, for example) creation rate would ever come 
> this close.
> To be clear, the clients are clearly doing the wrong thing but I think the 
> broker can and should protect itself adequately against such rogue scenarios.





[jira] [Commented] (KAFKA-2397) leave group request

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961597#comment-14961597
 ] 

ASF GitHub Bot commented on KAFKA-2397:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/103


> leave group request
> ---
>
> Key: KAFKA-2397
> URL: https://issues.apache.org/jira/browse/KAFKA-2397
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Let's say every consumer in a group has session timeout s. Currently, if a 
> consumer leaves the group, the worst case time to stabilize the group is 2s 
> (s to detect the consumer failure + s for the rebalance window). If a 
> consumer instead can declare they are leaving the group, the worst case time 
> to stabilize the group would just be the s associated with the rebalance 
> window.
> This is a low priority optimization!





[GitHub] kafka pull request: KAFKA-2397: leave group request

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/103




[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961587#comment-14961587
 ] 

ASF GitHub Bot commented on KAFKA-2515:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/318


> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.
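
A hedged sketch of the kind of guard the report asks for (names illustrative,
not necessarily the merged fix): if a fetch returned bytes for a partition but
no complete record could be parsed, the first message must exceed the fetch
size, so fail loudly instead of silently retrying the same offset forever.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OversizedMessageCheck {
    private static final Logger log = LoggerFactory.getLogger(OversizedMessageCheck.class);

    // Called after parsing a fetch response for one partition.
    public static void ensureProgress(String topicPartition, long fetchOffset,
                                      int bytesReturned, int recordsParsed,
                                      int maxFetchBytes) {
        if (bytesReturned > 0 && recordsParsed == 0) {
            log.error("A message at offset {} of {} is larger than the fetch size {}; "
                    + "increase the fetch size, or the consumer will make no progress.",
                fetchOffset, topicPartition, maxFetchBytes);
            throw new IllegalStateException(
                "Message too large to fetch at offset " + fetchOffset);
        }
    }
}
{code}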





[jira] [Resolved] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2515.
--
Resolution: Fixed

Issue resolved by pull request 318
[https://github.com/apache/kafka/pull/318]

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.





[GitHub] kafka pull request: KAFKA-2515: Handle oversized messages properly...

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/318




Build failed in Jenkins: kafka-trunk-jdk8 #35

2015-10-16 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2484: Add schema projection utilities

--
[...truncated 2800 lines...]

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalid

[jira] [Created] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-10-16 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2667:
--

 Summary: Copycat KafkaBasedLogTest.testSendAndReadToEnd transient 
failure
 Key: KAFKA-2667
 URL: https://issues.apache.org/jira/browse/KAFKA-2667
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


Seen in recent builds:
{code}
org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
{code}





[jira] [Commented] (KAFKA-2665) Docs: Images that are part of the documentation are not part of the code github

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961534#comment-14961534
 ] 

ASF GitHub Bot commented on KAFKA-2665:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/325

KAFKA-2665: Docs: Images that are part of the documentation are not p…

…art of the code github

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2665

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/325.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #325


commit ae4eac32fd4d7bed2c95a6e2d4eedb2f3ce12665
Author: Gwen Shapira 
Date:   2015-10-16T23:33:14Z

KAFKA-2665: Docs: Images that are part of the documentation are not part of 
the code github




> Docs: Images that are part of the documentation are not part of the code 
> github
> ---
>
> Key: KAFKA-2665
> URL: https://issues.apache.org/jira/browse/KAFKA-2665
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> This means that:
> 1. We don't include them in the docs release
> 2. It's awkward to modify them or add new documentation with images
> I suggest we store the images under docs/images.
> This also means that every version of the docs in the site (starting at 
> 0.9.0.0) will have its own images directory (otherwise we can't safely modify 
> them if the architecture changes)





[GitHub] kafka pull request: KAFKA-2665: Docs: Images that are part of the ...

2015-10-16 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/325

KAFKA-2665: Docs: Images that are part of the documentation are not p…

…art of the code github

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2665

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/325.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #325


commit ae4eac32fd4d7bed2c95a6e2d4eedb2f3ce12665
Author: Gwen Shapira 
Date:   2015-10-16T23:33:14Z

KAFKA-2665: Docs: Images that are part of the documentation are not part of 
the code github






Build failed in Jenkins: kafka-trunk-jdk8 #34

2015-10-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2594: Add InMemoryLRUCacheStore as a preliminary method for

--
[...truncated 6255 lines...]

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testTaskClass 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note:  uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

org.apache.kafka.copycat.json.JsonConverterTest > longToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToJsonConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
nullSchemaAndMapNonStringKeysToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use u

Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-10-16 Thread Allen Wang
The KIP is updated to include rack as an optional property for the broker.
Please take a look and let me know if more details are needed.

For the case where some brokers have a rack and some do not, the current KIP
uses the fail-fast behavior. If there are concerns, we can further discuss
this in the email thread or the next hangout.
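
For concreteness, the optional property could be set per broker in
server.properties roughly as follows (the property name here is illustrative;
the KIP text is authoritative):

    # server.properties on a broker in rack 1
    broker.rack=rack-1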



On Thu, Oct 15, 2015 at 10:42 AM, Allen Wang  wrote:

> That's a good question. I can think of three actions if the rack
> information is incomplete:
>
> 1. Treat the node without a rack as if it is on its own unique rack
> 2. Disregard all rack information and fall back to the current algorithm
> 3. Fail fast
>
> Now that I think about it, options one and three make more sense. The
> reason for failing fast is that a user's mistake of not providing the rack
> may never be found if we tolerate it; the assignment may then not be rack
> aware as the user expected, which creates debugging problems when things
> fail.
>
> What do you think? If not fail-fast, is there any way we can make the user
> error stand out?
>
>
> On Thu, Oct 15, 2015 at 10:17 AM, Gwen Shapira  wrote:
>
>> Thanks! Just to clarify, when some brokers have a rack assignment and some
>> don't, do we act like none of them have it? Or like those without an
>> assignment are in their own rack?
>>
>> The first scenario is good when first setting up rack-awareness, but the
>> second makes more sense for ongoing maintenance (I can totally see someone
>> adding a node and forgetting to set the rack property; we don't want this
>> to change behavior for anything except the new node).
>>
>> What do you think?
>>
>> Gwen
>>
>> On Thu, Oct 15, 2015 at 10:13 AM, Allen Wang wrote:
>>
>> > For scenario 1:
>> >
>> > - Add the rack information to the broker property file or dynamically
>> > set it in the wrapper code to bootstrap the Kafka server. You would do
>> > that for all brokers and restart the brokers one by one.
>> >
>> > In this scenario, the complete broker-to-rack mapping may not be
>> > available until every broker is restarted. During that time we fall back
>> > to the default replica assignment algorithm.
>> >
>> > For scenario 2:
>> >
>> > - Add the rack information to the broker property file or dynamically
>> > set it in the wrapper code and start the broker.
>> >
>> >
>> > On Wed, Oct 14, 2015 at 2:36 PM, Gwen Shapira wrote:
>> >
>> > > Can you clarify the workflow for the following scenarios:
>> > >
>> > > 1. I currently have 6 brokers and want to add rack information for each
>> > > 2. I'm adding a new broker and I want to specify which rack it belongs
>> > > on while adding it.
>> > >
>> > > Thanks!
>> > >
>> > > On Tue, Oct 13, 2015 at 2:21 PM, Allen Wang wrote:
>> > >
>> > > > We discussed the KIP in the hangout today. The recommendation is to
>> > > > make rack a broker property in ZooKeeper. For users with existing
>> > > > rack information stored somewhere, they would need to retrieve the
>> > > > information at broker startup and dynamically set the rack property,
>> > > > which can be implemented as a wrapper to bootstrap the broker. There
>> > > > will be no interface or pluggable implementation to retrieve the
>> > > > rack information.
>> > > >
>> > > > The assumption is that you always need to restart the broker to make
>> > > > a change to the rack.
>> > > >
>> > > > Once the rack becomes a broker property, it will be possible to make
>> > > > rack part of the metadata to help the consumer choose which in-sync
>> > > > replica to consume from as part of the future consumer enhancement.
>> > > >
>> > > > I will update the KIP.
>> > > >
>> > > > Thanks,
>> > > > Allen
>> > > >
>> > > >
>> > > > On Thu, Oct 8, 2015 at 9:23 AM, Allen Wang wrote:
>> > > >
>> > > > > I attended Tuesday's KIP hangout but this KIP was not discussed
>> > > > > due to the time constraint.
>> > > > >
>> > > > > However, after hearing the discussion of KIP-35, I have the
>> > > > > feeling that incompatibility (caused by a new broker property)
>> > > > > between brokers with different versions will be solved there. In
>> > > > > addition, having rack in the broker properties as metadata may
>> > > > > also help consumers in the future. So I am open to adding the rack
>> > > > > property to the broker.
>> > > > >
>> > > > > Hopefully we can discuss this in the next KIP hangout.
>> > > > >
>> > > > > On Wed, Sep 30, 2015 at 2:46 PM, Allen Wang wrote:
>> > > > >
>> > > > >> Can you send me the information on the next KIP hangout?
>> > > > >>
>> > > > >> Currently the broker-rack mapping is not cached. In KafkaApis,
>> > > > >> RackLocator.getRackInfo() is called each time the mapping is
>> > > > >> needed for auto topic creation. This will ensure the latest
>> > > > >> mapping is used at any time.
>> > > > >>
>> > > > >> The ability to get the complete mapping makes it simple to reuse
>> > > > >> the same interface in command line tools.
>> > > > >>
>> > > > >>
>> >

[jira] [Commented] (KAFKA-2484) Add schema projection utilities

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961476#comment-14961476
 ] 

ASF GitHub Bot commented on KAFKA-2484:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/307


> Add schema projection utilities
> ---
>
> Key: KAFKA-2484
> URL: https://issues.apache.org/jira/browse/KAFKA-2484
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since Copycat has support for versioned schemas and connectors may encounter 
> different versions of the same schema, it will be useful for some connectors 
> to be able to project between different versions of a schema, or have an 
> automatic way to try to project to a target schema (e.g. an existing database 
> table the connector is trying to write data to).
> These utilities should be pretty small because the complex types we support 
> are fairly limited. The primary code required will be for Structs. However, 
> we should take care in designing these utilities since there may be 
> performance implications. For example, when projecting between two schemas, 
> it would be better to come up with a plan object that can efficiently perform 
> the projection and be able to reuse that plan many times.
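
As a rough illustration of the plan-object idea (API names hypothetical, not
Copycat's actual utilities): resolve the source-to-target field mapping once,
then apply it cheaply to many records.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical reusable projection plan between two versions of a struct-like
// schema: field positions are resolved once, then reused for many records.
public class ProjectionPlan {
    private final List<int[]> moves = new ArrayList<>(); // {sourceIndex, targetIndex}
    private final int targetSize;

    public ProjectionPlan(List<String> sourceFields, List<String> targetFields) {
        this.targetSize = targetFields.size();
        for (int t = 0; t < targetFields.size(); t++) {
            int s = sourceFields.indexOf(targetFields.get(t));
            if (s >= 0)
                moves.add(new int[]{s, t}); // field present in both versions
            // else: leave the target field null (a real plan would apply defaults)
        }
    }

    public Object[] project(Object[] sourceValues) {
        Object[] target = new Object[targetSize];
        for (int[] m : moves)
            target[m[1]] = sourceValues[m[0]];
        return target;
    }
}
{code}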





[jira] [Updated] (KAFKA-2484) Add schema projection utilities

2015-10-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2484:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 307
[https://github.com/apache/kafka/pull/307]

> Add schema projection utilities
> ---
>
> Key: KAFKA-2484
> URL: https://issues.apache.org/jira/browse/KAFKA-2484
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since Copycat has support for versioned schemas and connectors may encounter 
> different versions of the same schema, it will be useful for some connectors 
> to be able to project between different versions of a schema, or have an 
> automatic way to try to project to a target schema (e.g. an existing database 
> table the connector is trying to write data to).
> These utilities should be pretty small because the complex types we support 
> are fairly limited. The primary code required will be for Structs. However, 
> we should take care in designing these utilities since there may be 
> performance implications. For example, when projecting between two schemas, 
> it would be better to come up with a plan object that can efficiently perform 
> the projection and be able to reuse that plan many times.





[GitHub] kafka pull request: KAFKA-2484: Add schema projection utilities

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/307




[jira] [Updated] (KAFKA-2484) Add schema projection utilities

2015-10-16 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei updated KAFKA-2484:
--
Status: Patch Available  (was: In Progress)

> Add schema projection utilities
> ---
>
> Key: KAFKA-2484
> URL: https://issues.apache.org/jira/browse/KAFKA-2484
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since Copycat has support for versioned schemas and connectors may encounter 
> different versions of the same schema, it will be useful for some connectors 
> to be able to project between different versions of a schema, or have an 
> automatic way to try to project to a target schema (e.g. an existing database 
> table the connector is trying to write data to).
> These utilities should be pretty small because the complex types we support 
> are fairly limited. The primary code required will be for Structs. However, 
> we should take care in designing these utilities since there may be 
> performance implications. For example, when projecting between two schemas, 
> it would be better to come up with a plan object that can efficiently perform 
> the projection and be able to reuse that plan many times.





[jira] [Updated] (KAFKA-2484) Add schema projection utilities

2015-10-16 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2484:
-
Reviewer: Gwen Shapira

> Add schema projection utilities
> ---
>
> Key: KAFKA-2484
> URL: https://issues.apache.org/jira/browse/KAFKA-2484
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since Copycat has support for versioned schemas and connectors may encounter 
> different versions of the same schema, it will be useful for some connectors 
> to be able to project between different versions of a schema, or have an 
> automatic way to try to project to a target schema (e.g. an existing database 
> table the connector is trying to write data to).
> These utilities should be pretty small because the complex types we support 
> are fairly limited. The primary code required will be for Structs. However, 
> we should take care in designing these utilities since there may be 
> performance implications. For example, when projecting between two schemas, 
> it would be better to come up with a plan object that can efficiently perform 
> the projection and be able to reuse that plan many times.





[jira] [Created] (KAFKA-2665) Docs: Images that are part of the documentation are not part of the code github

2015-10-16 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2665:
---

 Summary: Docs: Images that are part of the documentation are not 
part of the code github
 Key: KAFKA-2665
 URL: https://issues.apache.org/jira/browse/KAFKA-2665
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


This means that:
1. We don't include them in the docs release
2. It's awkward to modify them or add new documentation with images

I suggest we store the images under docs/images.
This also means that every version of the docs in the site (starting at 
0.9.0.0) will have its own images directory (otherwise we can't safely modify 
them if the architecture changes)





[jira] [Created] (KAFKA-2666) Docs: Automatically generate documentation from config classes

2015-10-16 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2666:
---

 Summary: Docs: Automatically generate documentation from config 
classes
 Key: KAFKA-2666
 URL: https://issues.apache.org/jira/browse/KAFKA-2666
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Our new config classes (KafkaConfig, ProducerConfig and ConsumerConfig) have a 
main() method that can automatically generate documentation.

It would be nice if our build/release process could automatically generate 
these docs and plug them into the documentation in a way that makes sense.
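
For example, each class's main() can be run and its output captured
(invocation illustrative; classpath and output details vary by build):

{code}
$ java -cp "clients/build/libs/*" org.apache.kafka.clients.producer.ProducerConfig > producer_config.html
{code}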





[jira] [Resolved] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-10-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2412.
-
Resolution: Fixed

In 0.9.0.0 we plan to auto-generate the configuration docs from code, so all 
the necessary docs will be there automatically.

> Documentation bug: Add information for key.serializer and value.serializer to 
> New Producer Config sections
> --
>
> Key: KAFKA-2412
> URL: https://issues.apache.org/jira/browse/KAFKA-2412
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeremy Fields
>Assignee: Grayson Chao
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2412-r1.diff, KAFKA-2412.diff
>
>
> As key.serializer and value.serializer are required options when using the 
> new producer, they should be mentioned in the documentation ( here and svn 
> http://kafka.apache.org/documentation.html#newproducerconfigs )
> Appropriate values for these options exist in javadoc and producer.java 
> examples; however, not everyone is reading those, as is the case for anyone 
> setting up a producer.config file for mirrormaker.
> A sensible default should be suggested, such as
> org.apache.kafka.common.serialization.StringSerializer
> Or at least a mention of the key.serializer and value.serializer options 
> along with a link to javadoc
> Thanks
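
A minimal producer.config sketch reflecting the suggestion above (broker
address illustrative):

{code}
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
{code}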





[jira] [Commented] (KAFKA-2640) Add tests for ZK authentication

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961427#comment-14961427
 ] 

ASF GitHub Bot commented on KAFKA-2640:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/324

KAFKA-2640: Add tests for ZK authentication

I've added a couple of initial tests to verify the functionality. I've 
tested that the JAAS config file loads properly and SASL with DIGEST-MD5 works 
with ZooKeeper. 
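
For context, a ZooKeeper DIGEST-MD5 setup conventionally uses a JAAS file
along these lines, where the Server section (ZooKeeper side) defines the
accepted user and the Client section (broker/client side) supplies the
credentials; the names and passwords below are placeholders, not the values
used in the patch:

    Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_kafka="kafka-secret";
    };

    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="kafka"
        password="kafka-secret";
    };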

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/324.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #324


commit 6a1ca42c41f0e577e33bf92cdc6aa6ec3a8da237
Author: flavio junqueira 
Date:   2015-10-12T21:55:10Z

Initial pass, main code compiles

commit afeafabdcefc2dd93f28ab5e23041be7ebe08f3b
Author: flavio junqueira 
Date:   2015-10-13T12:10:43Z

Changes to tests to accommodate the refactoring of ZkUtils.

commit 66b116aace0990182d76b6591b50491f072b95cb
Author: flavio junqueira 
Date:   2015-10-13T12:59:06Z

Removed whitespaces.

commit 36c3720c734c003d306af76846a5531d844fdfc3
Author: flavio junqueira 
Date:   2015-10-13T13:27:43Z

KAFKA-2639: Added close() method to ZkUtils

commit a7ae4337f93c194f56456bd3b96c2f874e248190
Author: flavio junqueira 
Date:   2015-10-13T13:49:21Z

KAFKA-2639: Fixed PartitionAssignorTest

commit 78ee23d52ec0b4163c510b581910a610cd343c89
Author: flavio junqueira 
Date:   2015-10-13T14:09:50Z

KAFKA-2639: Fixed ReplicaManagerTest

commit 2e888de090cf4b33a09e6184217191c3563aea9d
Author: flavio junqueira 
Date:   2015-10-13T14:26:33Z

KAFKA-2639: Fixed ZKPathTest

commit b94fd4bba9b8a29b7bbab62c74a4214b74504704
Author: flavio junqueira 
Date:   2015-10-13T16:03:05Z

KAFKA-2639: Made isSecure a parameter of the factory methods for ZkUtils

commit bd46f61800a5e47e5d4c15df2c5721303bc4c65e
Author: flavio junqueira 
Date:   2015-10-13T16:49:05Z

KAFKA-2639: Fixed KafkaConfigTest.

commit 8c69f239a0bd255b0379d761970996f380b00cd6
Author: flavio junqueira 
Date:   2015-10-14T08:38:17Z

KAFKA-2639: Removing config via KafkaConfig.

commit 00a816957c85601af0b4434768174bd1de91fe34
Author: flavio junqueira 
Date:   2015-10-14T08:41:04Z

KAFKA-2639: Removed whitespaces.

commit 6b2fd2af854c9e762331391723286fe53c91f2b3
Author: flavio junqueira 
Date:   2015-10-14T11:36:25Z

Adding initial configuration and support to set acls

commit 311612f533a8378b7c922016e632c3b703acc844
Author: flavio junqueira 
Date:   2015-10-14T11:49:54Z

KAFKA-2639: Removed unrelated comment from ZkUtils.

commit f76c72a71fd386e2dfe2fb0351d07f2b9eca0bef
Author: flavio junqueira 
Date:   2015-10-14T13:43:07Z

KAFKA-2641: First cut at the ZK Security Migration Tool.

commit fb9a52a5becf910981f7bc42634f96a047f5750b
Author: flavio junqueira 
Date:   2015-10-14T15:12:55Z

KAFKA-2639: Moved isSecure to JaasUtils in clients.

commit 8314c7f4a91bca7b48c0909d46211ba1db6ecb3b
Author: flavio junqueira 
Date:   2015-10-14T17:14:29Z

KAFKA-2639: Covering more zk system properties.

commit 76a802d612f531d0a36c3ceab8c2a1f8964b558b
Author: flavio junqueira 
Date:   2015-10-14T17:31:48Z

KAFKA-2639: Small update to log message and exception message in JaasUtils.

commit 45e39b6874e5082b659ad04c72cd693cf1bc26d8
Author: flavio junqueira 
Date:   2015-10-15T08:41:37Z

KAFKA-2641: Adding script and moving the tool to the admin package.

commit 83e1dc545a72c762ff7ced24dd22cdcbb1204e74
Author: flavio junqueira 
Date:   2015-10-15T08:51:02Z

Merge branch 'KAFKA-2639' into KAFKA-2641

commit 02f1ae2d514bab264d561e606d652ecd7316d600
Author: flavio junqueira 
Date:   2015-10-15T09:30:06Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2641

commit fd799b02dabc958da96d138d8f8791f8aa35cc70
Author: flavio junqueira 
Date:   2015-10-15T09:35:39Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2639

commit cb0c751f01a65ac882a880185ec090b979ff6007
Author: flavio junqueira 
Date:   2015-10-15T10:50:27Z

KAFKA-2641: Polished migration tool code.

commit 58e17b2d0a2d53c53050efc68cb83de7649b9ea2
Author: flavio junqueira 
Date:   2015-10-15T12:06:11Z

KAFKA-2639: create->apply and isSecure->isZkSecurityEnabled

commit 943f30d428b2c6ddb1c731c91ae4e8b2e6b177c7
Author: flavio junqueira 
Date:   2015-10-16T08:54:51Z

Merge branch 'KAFKA-2639' into KAFKA-2641

commit 3e9d11cbbdcb8edd67a013672489cf27d8d3dcaa
Author: flavio junqueira 
Date:   2015-10-16T09:17:49Z

KAFKA-2640: First test case along with conf file.

commit 4a672033ad67299d5a9c9806198458a397b939aa
Author: flavio junqueira 
Date:   2015-10-16T09:23:48Z

KAFKA-2641: Fixed a couple of compilation issues with tests.

commit 967ded176464eb9eada6051cb0ed7113438af0d0
Author: flavio junqueira 
Date:   2015-10-16T09:24:07Z

Merge branch 'KAFKA-2641' into KAFKA-2640

commit 53ccead1f8300f18ccc46

[GitHub] kafka pull request: KAFKA-2640: Add tests for ZK authentication

2015-10-16 Thread fpj
GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/324

KAFKA-2640: Add tests for ZK authentication

I've added a couple of initial tests to verify the functionality. I've 
tested that the JAAS config file loads properly and SASL with DIGEST-MD5 works 
with ZooKeeper. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/324.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #324



[jira] [Updated] (KAFKA-2594) Add a key-value store that is a fixed-capacity in-memory LRU cache

2015-10-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2594:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 256
[https://github.com/apache/kafka/pull/256]

> Add a key-value store that is a fixed-capacity in-memory LRU cache 
> ---
>
> Key: KAFKA-2594
> URL: https://issues.apache.org/jira/browse/KAFKA-2594
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Randall Hauch
>Assignee: Randall Hauch
> Fix For: 0.9.0.0
>
>
> The current {{KeyValueStore}} implementations are not limited in size, and 
> thus are less useful for some use cases. This subtask will add a simple 
> key-value store that maintains in memory at most a maximum number of entries 
> that were recently read or written. When the cache size reaches the capacity 
> and a new entry is to be added, the least recently used entry will be 
> automatically purged from the cache. This key-value store will extend 
> {{MeteredKeyValueStore}} for monitoring and recording of changes to a backing 
> topic, enabling recovery of the cache contents from the replicated state.
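
The eviction behavior described above can be sketched with an access-ordered
LinkedHashMap (a minimal illustration only, not the actual
InMemoryLRUCacheStore added by the patch):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU sketch: access-ordered map that purges the least
    // recently used entry once the fixed capacity is exceeded.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public LruCache(int capacity) {
            super(16, 0.75f, true); // accessOrder = true => LRU ordering
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity; // evict the LRU entry on overflow
        }
    }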



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2594) Add a key-value store that is a fixed-capacity in-memory LRU cache

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961401#comment-14961401
 ] 

ASF GitHub Bot commented on KAFKA-2594:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/256


> Add a key-value store that is a fixed-capacity in-memory LRU cache 
> ---
>
> Key: KAFKA-2594
> URL: https://issues.apache.org/jira/browse/KAFKA-2594
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Randall Hauch
>Assignee: Randall Hauch
> Fix For: 0.9.0.0
>
>
> The current {{KeyValueStore}} implementations are not limited in size, and 
> thus are less useful for some use cases. This subtask will add a simple 
> key-value store that maintains in memory at most a maximum number of entries 
> that were recently read or written. When the cache size reaches the capacity 
> and a new entry is to be added, the least recently used entry will be 
> automatically purged from the cache. This key-value store will extend 
> {{MeteredKeyValueStore}} for monitoring and recording of changes to a backing 
> topic, enabling recovery of the cache contents from the replicated state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2594 Added InMemoryLRUCacheStore

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/256


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2664) Adding a new metric with several pre-existing metrics is very expensive

2015-10-16 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961345#comment-14961345
 ] 

Gwen Shapira commented on KAFKA-2664:
-

I'm not certain this is just quotas. We added the use of 
o.a.k.common.network.Selector into SocketServer, which adds a bunch of 
per-connection metrics. We tried to make it efficient, but this may have added 
significant overhead too.

I'm wondering:
1. Can you specify which git-hash you reverted to?
2. Did you profile the connection? Or is this an educated guess of where time 
went?

50-100ms to create a connection is pretty bad, so I think it's a great idea to 
improve our efficiency there.

> Adding a new metric with several pre-existing metrics is very expensive
> ---
>
> Key: KAFKA-2664
> URL: https://issues.apache.org/jira/browse/KAFKA-2664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Fix For: 0.9.0.1
>
>
> I know the summary sounds expected, but we recently ran into a socket server 
> request queue backup that I suspect was caused by a combination of improperly 
> implemented applications that reconnect with a different (random) client-id 
> each time; and the fact that for quotas we now register a new quota 
> metric-set for each client-id.
> So here is what happened: a broker went down and a handful of other brokers 
> started seeing queue times go up significantly. This caused the request 
> queue to back up, which caused socket timeouts and a further deluge of 
> reconnects. The only way we could get out of this was to fire-wall the broker 
> and downgrade to a version without quotas (or I think it would have worked to 
> just restart the broker).
> My guess is that there were a ton of pre-existing client-id metrics. I don’t 
> know for sure but I’m basing that on the fact that there were several new 
> unique client-ids showing up in the public access logs and request local 
> times for fetches started going up inexplicably. (It would have been useful 
> to have a metric for the number of metrics.) So it turns out that in the 
> above scenario (with say 50k pre-existing client-ids), the avg local time for 
> fetch can go up to the order of 50-100ms (at least with tests on a linux box) 
> largely due to the time taken to create new metrics; and that’s because we 
> use a copy-on-write map underneath. If you have enough (say, hundreds) of 
> clients re-connecting at the same time with new client-id's, that can cause 
> the request queues to start backing up and the overall queuing system to 
> become unstable; and the line starts to spill out of the building.
> I think this is a fairly new scenario with quotas - i.e., I don’t think the 
> past per-X metrics (per-topic, for example) creation rate would ever come this 
> close.
> To be clear, the clients are clearly doing the wrong thing but I think the 
> broker can and should protect itself adequately against such rogue scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2456) Disable SSLv3 for ssl.enabledprotocols config on client & broker side

2015-10-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961321#comment-14961321
 ] 

Ismael Juma commented on KAFKA-2456:


That's right, Ben. My suggestion is to use the JDK default for enabled 
protocols, document that and mention that protocols that are no longer secure 
are disabled by the JDK.
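
For users who would rather pin the protocol list explicitly than rely on the
JDK default, the relevant setting is ssl.enabled.protocols; for example (the
exact list worth enabling depends on the JDK in use):

    # Exclude SSLv3 explicitly by enumerating only TLS protocols.
    ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1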

> Disable SSLv3 for ssl.enabledprotocols config on client & broker side
> -
>
> Key: KAFKA-2456
> URL: https://issues.apache.org/jira/browse/KAFKA-2456
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
> SSLv3; we should not allow this as it is deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2456) Disable SSLv3 for ssl.enabledprotocols config on client & broker side

2015-10-16 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961293#comment-14961293
 ] 

Sriharsha Chintalapani commented on KAFKA-2456:
---

[~benstopford] please go for it.

> Disable SSLv3 for ssl.enabledprotocols config on client & broker side
> -
>
> Key: KAFKA-2456
> URL: https://issues.apache.org/jira/browse/KAFKA-2456
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
> SSLv3; we should not allow this as it is deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2664) Adding a new metric with several pre-existing metrics is very expensive

2015-10-16 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-2664:
-

 Summary: Adding a new metric with several pre-existing metrics is 
very expensive
 Key: KAFKA-2664
 URL: https://issues.apache.org/jira/browse/KAFKA-2664
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
 Fix For: 0.9.0.1


I know the summary sounds expected, but we recently ran into a socket server 
request queue backup that I suspect was caused by a combination of improperly 
implemented applications that reconnect with a different (random) client-id 
each time; and the fact that for quotas we now register a new quota metric-set 
for each client-id.

So here is what happened: a broker went down and a handful of other brokers 
started seeing queue times go up significantly. This caused the request queue 
to back up, which caused socket timeouts and a further deluge of reconnects. The 
only way we could get out of this was to fire-wall the broker and downgrade to 
a version without quotas (or I think it would have worked to just restart the 
broker).

My guess is that there were a ton of pre-existing client-id metrics. I don’t 
know for sure but I’m basing that on the fact that there were several new 
unique client-ids showing up in the public access logs and request local times 
for fetches started going up inexplicably. (It would have been useful to have a 
metric for the number of metrics.) So it turns out that in the above scenario 
(with say 50k pre-existing client-ids), the avg local time for fetch can go up 
to the order of 50-100ms (at least with tests on a linux box) largely due to 
the time taken to create new metrics; and that’s because we use a copy-on-write 
map underneath. If you have enough (say, hundreds) of clients re-connecting at 
the same time with new client-id's, that can cause the request queues to start 
backing up and the overall queuing system to become unstable; and the line 
starts to spill out of the building.

I think this is a fairly new scenario with quotas - i.e., I don’t think the 
past per-X metrics (per-topic, for example) creation rate would ever come this 
close.

To be clear, the clients are clearly doing the wrong thing but I think the 
broker can and should protect itself adequately against such rogue scenarios.
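
To make the copy-on-write cost concrete, here is a stripped-down sketch of
that map pattern (an illustration in the spirit of the clients'
CopyOnWriteMap, not the actual class): every put copies all existing entries,
so registering a new metric with 50k pre-existing client-ids pays for a
50k-entry copy.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a copy-on-write map: reads are lock-free, but every
    // write copies the entire map, so insertion cost grows with size.
    public final class CopyOnWriteMapSketch<K, V> {
        private volatile Map<K, V> current = new HashMap<>();

        public synchronized void put(K key, V value) {
            Map<K, V> copy = new HashMap<>(current); // O(size) copy per write
            copy.put(key, value);
            current = copy;                          // readers never block
        }

        public V get(K key) {
            return current.get(key);                 // no locking on the read path
        }
    }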



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2663) Add quota-delay time to request processing time break-up

2015-10-16 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-2663:
-

 Summary: Add quota-delay time to request processing time break-up
 Key: KAFKA-2663
 URL: https://issues.apache.org/jira/browse/KAFKA-2663
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Aditya Auradkar
 Fix For: 0.9.0.1


This is probably not critical for 0.9 but should be easy to fix:

If a request is delayed due to quotas, I think the remote time will go up 
artificially - or maybe response queue time (haven’t checked). We should add a 
new quotaDelayTime to the request handling time break-up.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2456) Disable SSLv3 for ssl.enabledprotocols config on client & broker side

2015-10-16 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961208#comment-14961208
 ] 

Ben Stopford commented on KAFKA-2456:
-

[~ijuma][~harsha_ch] It looks like this is actually disabled in 1.7 (a quick test 
gives an error about the protocol being deprecated) but I guess it wouldn't hurt 
to exclude it explicitly. 

Harsha - if you're not working on this, let me know and I'll do a quick PR

> Disable SSLv3 for ssl.enabledprotocols config on client & broker side
> -
>
> Key: KAFKA-2456
> URL: https://issues.apache.org/jira/browse/KAFKA-2456
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
> SSLv3; we should not allow this as it is deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


LogStartOffset MBean

2015-10-16 Thread Sonal Jain
Hi,

I have a technical question regarding the LogStartOffset MBean. Please let me
know if this is not the right forum to ask this question.

Question: I want to calculate lag in the consumer and want to create a lag
metric. I have defined JMX MBeans. I have 3 consumers running with 3
partitions each. The LogStartOffset MBean gives a value which is 3 times the
value of the offset (for example, if the offset is 170 then it gives 510). Do I
need to define a clientId as one of the dimensions to get the proper value?
Also the LogStartOffset takes some time (maybe due to log retention
policy) to get the updated offset value after the consumer consumes and commits
the message. Is there any other MBean available which can give me the correct
LogStartOffset value?

"jmxMetrics": [
{
  "id": "LogStartOffset",
  "objectName":
"kafka.log:name=LogStartOffset,type=Log,topic=*,partition=*",
  "attribute": “Value”,
  "metricType": "Number"
}
  ]
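
A likely explanation for the 3x value is that the wildcard pattern matches one
MBean per partition and the collector sums the matches. One way to check is to
enumerate the per-partition MBeans individually; a sketch against a local
MBean server (topic name illustrative; a remote broker would be reached
through a JMXConnector instead):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class LogStartOffsetProbe {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // The wildcard matches one MBean per partition; summing the
            // matches yields ~3x a single partition's value with 3 partitions.
            ObjectName pattern = new ObjectName(
                "kafka.log:type=Log,name=LogStartOffset,topic=mytopic,partition=*");
            for (ObjectName name : server.queryNames(pattern, null)) {
                Object value = server.getAttribute(name, "Value");
                System.out.println(name.getKeyProperty("partition") + " -> " + value);
            }
        }
    }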

Thanks,
Sonal


[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961064#comment-14961064
 ] 

Guozhang Wang commented on KAFKA-2515:
--

Yup, in the latest patch we will throw RecordTooLarge in poll() calls.
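
Assuming that behavior, application code could react along these lines (a
sketch only; the handling strategy is illustrative):

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.RecordTooLargeException;

    public class PollLoopSketch {
        // Sketch: surface the oversized-message condition to application code.
        static void pollOnce(KafkaConsumer<byte[], byte[]> consumer) {
            try {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                // process records...
            } catch (RecordTooLargeException e) {
                // e.g. log it, skip past the offending offset, or raise the
                // consumer's max fetch size and retry.
            }
        }
    }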

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #33

2015-10-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2120: cleaning follow-up

[junrao] KAFKA-2419 - Fix to prevent background thread from getting created when

--
[...truncated 362 lines...]
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
:kafka-trunk-jdk8:core:javadoc
cache taskArtifacts.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:277:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:278:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 14 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:clients:javadoc
:kafka-trunk-jdk8:log4j-appender:compileJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
1 warning

:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes
:kafka-trunk-jdk8:log4j-appender:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^


[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14960997#comment-14960997
 ] 

Onur Karaman commented on KAFKA-2515:
-

+1 on Jason's comment. With just the log message, it seems like it would be 
harder to react to the large message programmatically. What does 
ZookeeperConsumerConnector do?

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14960994#comment-14960994
 ] 

ASF GitHub Bot commented on KAFKA-2419:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/323


> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2419.

Resolution: Fixed

Issue resolved by pull request 323
[https://github.com/apache/kafka/pull/323]

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2515:

Assignee: Guozhang Wang  (was: Ben Stopford)

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2419 - Fix to prevent background thread ...

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/323


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-16 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford reassigned KAFKA-2515:
---

Assignee: Ben Stopford  (was: Jason Gustafson)

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2120) Add a request timeout to NetworkClient

2015-10-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14960985#comment-14960985
 ] 

ASF GitHub Bot commented on KAFKA-2120:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/320


> Add a request timeout to NetworkClient
> --
>
> Key: KAFKA-2120
> URL: https://issues.apache.org/jira/browse/KAFKA-2120
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
> KAFKA-2120_2015-07-29_15:57:02.patch, KAFKA-2120_2015-08-10_19:55:18.patch, 
> KAFKA-2120_2015-08-12_10:59:09.patch, KAFKA-2120_2015-09-03_15:12:02.patch, 
> KAFKA-2120_2015-09-04_17:49:01.patch, KAFKA-2120_2015-09-09_16:45:44.patch, 
> KAFKA-2120_2015-09-09_18:56:18.patch, KAFKA-2120_2015-09-10_21:38:55.patch, 
> KAFKA-2120_2015-09-11_14:54:15.patch, KAFKA-2120_2015-09-15_18:57:20.patch, 
> KAFKA-2120_2015-09-18_19:27:48.patch, KAFKA-2120_2015-09-28_16:13:02.patch
>
>
> Currently NetworkClient does not have a timeout setting for requests. So if 
> no response is received for a request, for example because the broker is down, 
> the request will never be completed.
> Request timeout will also be used as an implicit timeout for some methods such 
> as KafkaProducer.flush() and KafkaProducer.close().
> KIP-19 is created for this public interface change.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
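
Per KIP-19, the client setting this introduces should be request.timeout.ms,
e.g. (value illustrative):

    # Fail a request if no response arrives within 30 seconds.
    request.timeout.ms=30000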



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2120

2015-10-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/320


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2662) Make ConsumerIterator thread-safe for multiple threads in different Kafka groups

2015-10-16 Thread Andrew Pennebaker (JIRA)
Andrew Pennebaker created KAFKA-2662:


 Summary: Make ConsumerIterator thread-safe for multiple threads in 
different Kafka groups
 Key: KAFKA-2662
 URL: https://issues.apache.org/jira/browse/KAFKA-2662
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: Andrew Pennebaker
Assignee: Neha Narkhede


The API for obtaining a ConsumerIterator requires a group parameter, implying 
that ConsumerIterators are thread-safe, as long as each thread is in a 
different Kafka group. However, in practice, attempting to call hasNext() on 
ConsumerIterators for a thread in one group, and for a thread in another group, 
results in an InvalidStateException.

In the future, can we please make ConsumerIterator thread-safe, for a common 
use case of one consumer thread per group?
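
Until then, a workaround consistent with the 0.8.2 high-level consumer is to
give each group its own ConsumerConnector, and therefore its own iterators,
rather than sharing iterators across threads; a sketch (connection settings
illustrative):

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class PerGroupConnector {
        // One connector (and one set of iterators) per group, never shared.
        static ConsumerConnector connectorFor(String groupId) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // illustrative
            props.put("group.id", groupId);
            return Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        }
    }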



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-16 Thread Todd Palino (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14960809#comment-14960809
 ] 

Todd Palino commented on KAFKA-2017:


Just to throw in my 2 cents here, I don't think that persisting this state in a 
special topic in Kafka is a bad idea. My only concern is that we have seen 
issues with the offsets already from time to time, and we'll want to make sure 
we take those lessons learned and handle them from the start. The ones I am 
aware of are:

1) Creation of the special topic at cluster initialization. If we specify an RF 
of N for the special topic, then the brokers must make this happen. The first 
broker that comes up can't create it with an RF of 1 and own all the 
partitions. Either it must reject all operations that would use the special 
topic until N brokers are members of the cluster and the topic can be created, or 
it must create the topic in such a way that as soon as there are N brokers 
available the RF is corrected to the configured number.

2) Load of the special topic into local cache. Whenever a coordinator loads the 
special topic, there is a period of time while it is loading state where it 
cannot service requests. We've seen problems with this related to log 
compaction, where the partitions were excessively large, but I can see as we 
move an increasing number of (group, partition) tuples over to Kafka-committed 
offsets it could become a scale issue very easily. This should not be a big 
deal for group state information, as that should always be smaller than the 
offset information for the group, but we may want to create a longer term plan 
for handling auto-scaling of the special topics (the ability to increase the 
number of partitions and move group information from the partition it used to 
hash to to the one it hashes to after scaling).

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Kafka synchronous mode performance issue

2015-10-16 Thread Dinesh J
Hi All,

We are using Kafka-Net (C#) for our Kafka messaging system. We tested the
Producer producing messages to Kafka in Synchronous and Asynchronous
mode, and Asynchronous mode performs much better than Synchronous.

Synchronous mode takes 2 minutes to produce 1000 messages, whereas
Asynchronous mode takes 2 seconds to produce 1000 messages.

Is there any limitation to using Asynchronous mode?

(*Like message ordering problems or message loss*)

Can we use Asynchronous mode in a production environment?

Can anyone please advise?

Thanks,
Dinesh
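
For comparison, the same trade-off with the Java client: a synchronous send
blocks on a full broker round trip per message, while asynchronous sends are
batched and pipelined, with failures surfacing in a callback. Note that
retries can reorder messages unless in-flight requests are limited, so
ordering is a real consideration for asynchronous production. A sketch with
illustrative settings:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SyncVsAsync {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            ProducerRecord<String, String> record = new ProducerRecord<>("test", "hello");

            // Synchronous: block on the future; one broker round trip per message.
            producer.send(record).get();

            // Asynchronous: batched and pipelined; errors arrive in the callback.
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null)
                        exception.printStackTrace(); // retry/log as appropriate
                }
            });

            producer.close(); // flushes outstanding async sends
        }
    }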