[GitHub] kafka-site pull request #86: MINOR: Updating Rabobank description and Zalando...

2017-09-25 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/86

MINOR: Updating Rabobank description and Zalando image in powered-by & 
streams page

@guozhangwang @dguy Rabobank provided updated text, so this PR updates it.
Please review. Thanks!!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/86.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #86


commit f8df8287ad012ec5279dc4378c0fefe2edd2cebc
Author: Manjula K 
Date:   2017-09-26T03:06:36Z

MINOR: Updating Rabobank text and Zalando image




---


[GitHub] kafka pull request #3960: KAFKA-5975: No response when deleting topics and d...

2017-09-25 Thread mmolimar
GitHub user mmolimar opened a pull request:

https://github.com/apache/kafka/pull/3960

KAFKA-5975: No response when deleting topics and delete.topic.enable=false



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mmolimar/kafka KAFKA-5975

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3960.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3960


commit f22a5547f3d9573e3f8bfbe3c624dc6eab595c8f
Author: Mario Molina 
Date:   2017-09-26T02:10:58Z

Fix response when deleting topics and delete.topic.enable=false




---


[jira] [Created] (KAFKA-5975) No response when deleting topics and delete.topic.enable=false

2017-09-25 Thread Mario Molina (JIRA)
Mario Molina created KAFKA-5975:
---

 Summary: No response when deleting topics and 
delete.topic.enable=false
 Key: KAFKA-5975
 URL: https://issues.apache.org/jira/browse/KAFKA-5975
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0
Reporter: Mario Molina
Priority: Minor
 Fix For: 0.11.0.1, 0.11.0.2


When trying to delete topics using the KafkaAdminClient while the server is 
configured with 'delete.topic.enable=false', the client gets no response and 
fails with a timeout error. This is because the DelayedCreatePartitions object 
cannot complete the operation.

This bug fix modifies the handling of the DELETE_TOPICS KafkaApis key to take 
into account that the flag can be disabled, swallowing the error: that is, the 
topic is never removed and no error is returned to the client.
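
A hedged repro sketch (bootstrap address, topic name and timeout are 
placeholders):

{code}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopicRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // With delete.topic.enable=false the broker never completes the
            // delayed operation, so this future times out rather than failing
            // with a meaningful error.
            admin.deleteTopics(Collections.singleton("my-topic"))
                 .all().get(30, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            System.err.println("Timed out waiting for deletion: " + e);
        } catch (ExecutionException e) {
            System.err.println("Deletion failed: " + e.getCause());
        }
    }
}
{code}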



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5865) Expiring batches with idempotence enabled could cause data loss.

2017-09-25 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta resolved KAFKA-5865.
-
Resolution: Fixed

This is fixed in 1.0.0 by the changes in 
https://github.com/apache/kafka/pull/3743

> Expiring batches with idempotence enabled could cause data loss.
> 
>
> Key: KAFKA-5865
> URL: https://issues.apache.org/jira/browse/KAFKA-5865
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Currently we have a problem with this line:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java#L282
> Because we can reset the producer id and return after draining batches, it 
> means that we can drain batches for some partitions, then find a batch has 
> expired, and then return. But the batches which were drained are now no 
> longer in the producer queue, and haven't been sent. Thus they are totally 
> lost, and the callbacks will never be invoked.
> This is already fixed in https://github.com/apache/kafka/pull/3743 , but 
> opening this in case we want to fix it in 0.11.0.2 as well.
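
For context, a minimal sketch of an idempotent producer (addresses and topic 
are placeholders) whose send callback would silently never fire for a batch 
lost this way:

{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        // Under the bug, a drained-then-expired batch is lost
                        // without this callback ever being invoked.
                        if (exception != null)
                            System.err.println("Send failed: " + exception);
                    });
        }
    }
}
{code}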



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5974) Removed unused parameter in public interface

2017-09-25 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5974:
--

 Summary: Removed unused parameter in public interface
 Key: KAFKA-5974
 URL: https://issues.apache.org/jira/browse/KAFKA-5974
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0, 1.0.0
Reporter: Matthias J. Sax


The method {{ProcessorContext#register}} has a parameter {{loggingEnabled}} that 
is unused. We should remove it eventually. However, this is a breaking change 
to a public API, so we need a KIP and a proper upgrade path.
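
For reference, the current signature and the shape it would presumably take 
once a deprecation cycle completes (the second form is an assumption, not a 
decided API):

{code}
// Current (0.11.x / 1.0.0): loggingEnabled is accepted but ignored.
void register(StateStore store, boolean loggingEnabled, StateRestoreCallback stateRestoreCallback);

// Presumed post-KIP shape, after deprecation and removal:
void register(StateStore store, StateRestoreCallback stateRestoreCallback);
{code}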



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5960) Producer uses unsupported ProduceRequest version against older brokers

2017-09-25 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5960.

Resolution: Fixed

> Producer uses unsupported ProduceRequest version against older brokers
> --
>
> Key: KAFKA-5960
> URL: https://issues.apache.org/jira/browse/KAFKA-5960
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 1.0.0
>
>
> Errors were recently reported from a trunk producer on an 0.11.0.0 broker:
> {code}
> org.apache.kafka.common.errors.UnsupportedVersionException: The broker does 
> not support the requested version 5 for api PRODUCE. Supported versions are 0 
> to 3.
> {code}
> This is likely a regression introduced in KAFKA-5793. We should be using the 
> latest version that the broker supports.
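
A minimal sketch of the intended selection logic (the numbers are illustrative; 
the real producer learns the broker's range from its ApiVersions response):

{code}
// Client capability vs. broker capability for the PRODUCE API.
short latestClientVersion = 5; // trunk client
short brokerMaxVersion = 3;    // advertised by an 0.11.0.0 broker

// Never send a version the broker does not support.
short usableVersion = (short) Math.min(latestClientVersion, brokerMaxVersion);
{code}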



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3944: KAFKA-5960; Fix regression in produce version sele...

2017-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3944


---


[jira] [Created] (KAFKA-5973) ShutdownableThread catching errors can lead to partial, hard-to-diagnose broker failure

2017-09-25 Thread Tom Crayford (JIRA)
Tom Crayford created KAFKA-5973:
---

 Summary: ShutdownableThread catching errors can lead to partial, 
hard-to-diagnose broker failure
 Key: KAFKA-5973
 URL: https://issues.apache.org/jira/browse/KAFKA-5973
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0, 0.11.0.1
Reporter: Tom Crayford
Priority: Minor
 Fix For: 1.0.0, 0.11.0.2


When any broker {{ShutdownableThread}} subclass crashes due to an uncaught
exception, the broker is left running in a weird, bad state: some threads are
not running, yet the broker may still be serving traffic to users while no
longer performing its usual operations.

This is problematic, because monitoring may say that "the broker is up and 
fine", but in fact it is not healthy.

At Heroku we've been mitigating this by monitoring all threads that "should" be
running on a broker and alerting when a given thread isn't running for some
reason.

Things that use {{ShutdownableThread}} that can crash and leave a broker/the 
controller in a bad state:
- log cleaner
- replica fetcher threads
- controller to broker send threads
- controller topic deletion threads
- quota throttling reapers
- io threads
- network threads
- group metadata management threads

Some of these can have disastrous consequences, and nearly all of them 
crashing for any reason is a cause for alert.
But users probably shouldn't have to know about all the internals of Kafka and 
run thread dumps periodically as part of normal operations.

There are a few potential options here:

1. On the crash of any {{ShutdownableThread}}, shut down the whole broker process

We could crash the whole broker when an individual thread dies. I think this is 
pretty reasonable; it's better to have a very visible breakage than a very hard-
to-detect one.
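
A minimal sketch of option 1 (thread name and the failure are simulated):

{code}
public class CrashOnThreadDeath {
    public static void main(String[] args) throws InterruptedException {
        Thread fetcher = new Thread(() -> {
            throw new IllegalStateException("simulated uncaught error");
        }, "replica-fetcher-0");
        fetcher.setUncaughtExceptionHandler((t, e) -> {
            // Fail loudly and visibly rather than limping along half-broken.
            System.err.println("Thread " + t.getName() + " died: " + e);
            Runtime.getRuntime().halt(1);
        });
        fetcher.start();
        fetcher.join();
    }
}
{code}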

2. Add some healthcheck JMX bean to detect these thread crashes

Users having to audit all of Kafka's source code on each new release and track 
a list of "threads that should be running" is... pretty silly. We could instead 
expose a JMX bean of some kind indicating threads that died due to uncaught 
exceptions.

3. Do nothing, but add documentation around monitoring/logging that exposes 
this error

These thread deaths *do* emit log lines, but it's not clear or obvious to 
users that they need to monitor and alert on them. The project could add 
documentation making that explicit.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3959: KAFKA-5901 Added Connect metrics specific to sourc...

2017-09-25 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3959

KAFKA-5901 Added Connect metrics specific to source tasks

Added Connect metrics specific to source tasks. This PR is built on top of 
#3911, and when that is merged this can be rebased and merged.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-5901

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3959.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3959


commit 2b96e8ba1f0cd1fd77898281c9d7dd3cb60c4834
Author: Randall Hauch 
Date:   2017-09-19T22:09:07Z

KAFKA-5900 Corrected Percentiles and Histogram metric components

Expanded the unit tests, and corrected several issues found during the 
expanded testing of these classes, which don't appear to have been used yet.

commit 7563c1414adc3343161c9382114baded06989d47
Author: Randall Hauch 
Date:   2017-09-14T23:17:31Z

KAFKA-5900 Added worker task metrics common to both sink and source tasks

commit 418ec59d941e38b947b44bee27439b51b06ccd24
Author: Randall Hauch 
Date:   2017-09-25T18:04:34Z

KAFKA-5900 Added JavaDoc and improved indentation

commit b55bfafbc8e840371fad42508608e3aa14e546d4
Author: Randall Hauch 
Date:   2017-09-20T15:15:14Z

KAFKA-5901 Added source task metrics




---


[jira] [Created] (KAFKA-5972) Flatten SMT does not work with null values

2017-09-25 Thread Tomas Zuklys (JIRA)
Tomas Zuklys created KAFKA-5972:
---

 Summary: Flatten SMT does not work with null values
 Key: KAFKA-5972
 URL: https://issues.apache.org/jira/browse/KAFKA-5972
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Tomas Zuklys
Priority: Minor
 Attachments: kafka-transforms.patch

Hi,

I noticed a bug in the Flatten SMT while testing the different SMTs that are 
provided out of the box.

Flatten SMT does not work as expected with schemaless JSON that has properties 
with null values. 

Example json:
{code}
  {A={D=dValue, B=null, C=cValue}}
{code}

The issue is in the if statement that checks for null values.
Current version:
{code}
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
      final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
      Object value = entry.getValue();
      if (value == null) {
          newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
          return; // bug: aborts flattening of all remaining entries
      }
...
{code}
should be
{code}
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
      final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
      Object value = entry.getValue();
      if (value == null) {
          newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
          continue; // fix: skip this entry but keep flattening the rest
      }
{code}

I have attached a patch containing the fix for this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3958: Re-enable KafkaAdminClientTest#testHandleTimeout

2017-09-25 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3958

Re-enable KafkaAdminClientTest#testHandleTimeout



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka 
re-enable-KafkaAdminClientTest.testHandleTimeout

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3958


commit d928a2c8b3524727721ac3e7651a5dc506d13571
Author: Colin P. Mccabe 
Date:   2017-09-25T19:17:51Z

Re-enable KafkaAdminClientTest#testHandleTimeout




---


[jira] [Created] (KAFKA-5971) Broker keeps running even though not registered in ZK

2017-09-25 Thread Igor Canadi (JIRA)
Igor Canadi created KAFKA-5971:
--

 Summary: Broker keeps running even though not registered in ZK
 Key: KAFKA-5971
 URL: https://issues.apache.org/jira/browse/KAFKA-5971
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0
Reporter: Igor Canadi


We had a curious situation happen to our Kafka cluster running version 
0.11.0.0. One of the brokers was happily running, even though its ID was not 
registered in ZooKeeper under `/brokers/ids`.

Based on the logs, it appears that the broker restarted very quickly and there 
was a node under `/brokers/ids/2` still present from the previous run. However, 
in that case I'd expect the broker to try again or just exit. In reality it 
continued running without any errors in the logs.
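
As a hedged sketch (connection string and broker id are placeholders), the 
registration can be checked directly against ZooKeeper with the plain client:

```
import java.util.List;

import org.apache.zookeeper.ZooKeeper;

public class BrokerRegistrationCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 6000, event -> { });
        try {
            List<String> ids = zk.getChildren("/brokers/ids", false);
            System.out.println("Registered broker ids: " + ids);
            if (!ids.contains("2")) {
                System.err.println("Broker 2 is running but not registered!");
            }
        } finally {
            zk.close();
        }
    }
}
```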

Here's the relevant part of the logs: 
```
[2017-09-06 23:50:26,095] INFO Opening socket connection to server 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181. Will not attempt to 
authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,096] INFO Socket connection established to 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,099] WARN Unable to reconnect to ZooKeeper service, 
session 0x15e4477405f1d40 has expired (org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,099] INFO zookeeper state changed (Expired) 
(org.I0Itec.zkclient.ZkClient)
[2017-09-06 23:50:26,099] INFO Unable to reconnect to ZooKeeper service, 
session 0x15e4477405f1d40 has expired, closing socket connection 
(org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,099] INFO Initiating client connection, 
connectString=zookeeper:2181 sessionTimeout=6000 
watcher=org.I0Itec.zkclient.ZkClient@2cb4893b (org.apache.zookeeper.ZooKeeper)
[2017-09-06 23:50:26,102] INFO EventThread shut down for session: 
0x15e4477405f1d40 (org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,107] INFO Opening socket connection to server 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181. Will not attempt to 
authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,108] INFO Socket connection established to 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,111] INFO Session establishment complete on server 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181, sessionid = 
0x15e599a1a3e0013, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-09-06 23:50:26,112] INFO zookeeper state changed (SyncConnected) 
(org.I0Itec.zkclient.ZkClient)
[2017-09-06 23:50:26,114] INFO re-registering broker info in ZK for broker 2 
(kafka.server.KafkaHealthcheck$SessionExpireListener)
[2017-09-06 23:50:26,115] INFO Creating /brokers/ids/2 (is it secure? false) 
(kafka.utils.ZKCheckedEphemeral)
[2017-09-06 23:50:26,123] INFO Result of znode creation is: NODEEXISTS 
(kafka.utils.ZKCheckedEphemeral)
[2017-09-06 23:50:26,124] ERROR Error handling event ZkEvent[New session event 
sent to kafka.server.KafkaHealthcheck$SessionExpireListener@699f40a0] 
(org.I0Itec.zkclient.ZkEventThread)
java.lang.RuntimeException: A broker is already registered on the path 
/brokers/ids/2. This probably indicates that you either have configured a 
brokerid that is already in use, or else you have shutdown this broker and 
restarted it faster than the zookeeper timeout so it
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:417)
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:403)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:70)
at 
kafka.server.KafkaHealthcheck$SessionExpireListener.handleNewSession(KafkaHealthcheck.scala:104)
at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:736)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:72)
[2017-09-06 23:51:42,257] INFO [Group Metadata Manager on Broker 2]: Removed 0 
expired offsets in 0 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)
[2017-09-07 00:00:06,198] INFO Unable to read additional data from server 
sessionid 0x15e599a1a3e0013, likely server has closed socket, closing socket 
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2017-09-07 00:00:06,354] INFO zookeeper state changed (Disconnected) 
(org.I0Itec.zkclient.ZkClient)
[2017-09-07 00:00:07,675] INFO Opening socket connection to server 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181. Will not attempt to 
authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-09-07 00:00:07,676] INFO Socket connection established to 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2017-09-07 00:00:07,680] INFO Session establishment complete on server 
zookeeper.kafka.svc.cluster.local/100.66.99.54:2181, 
```

Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Tom Bentley
Hi Ismael,

On 25 September 2017 at 17:51, Ismael Juma  wrote:

> We don't have this policy today for what it's worth.
>

Thanks for the clarification. On re-reading I realise I misinterpreted
Guozhang Wang's suggestion when 1.0.0 was first mooted:


> Just to clarify, my proposal is that moving forward beyond the next release
> we will not make any public API breaking changes in any of the major or
> minor releases, but will only mark them as "deprecated", and deprecated
> public APIs will only be considered for removal as early as the next major
> release: so if we mark the scala consumer APIs as deprecated in 1.0.0, we
> should only consider removing them in 2.0.0 or even later.


So this would mean that if a deprecation got into 1.x the feature could be
removed in 1.(x+1) at the earliest, right? I will update the KIP.

btw isn't this a point which should be included in
https://issues.apache.org/jira/browse/KAFKA-5637 ?

Thanks,

Tom


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Tom Bentley
Hi Mickael,

Thanks for the reply.

Thanks for the KIP. Is this meant to supersede KIP-170?
> If so, one of our key requirements was to be able to access the
> topics/partitions list from the policy, so an administrator could
> enforce a partition limit for example.
>

It's not meant to replace KIP-170 because there are things in KIP-170 which
aren't purely about policy (the change in requests and responses, for
example), which KIP-201 doesn't propose to implement. Obviously there is
overlap when it comes to policies, and both KIPs start with different
motivations for the policy changes they propose. I think it makes sense for
a single KIP to address both sets of use cases if possible. I'm happy for that
to be KIP-201 if that suits you.

I think the approach taken in KIP-170, of a Provider interface for
obtaining information that's not intrinsic to the request and a method to
inject that provider into the policy, is a good one. It retains a clean
distinction between the request metadata itself and the wider cluster
metadata, which I think is a good thing. If you're happy Mickael, I'll
update KIP-201 with something similar.


> Also instead of simply having the Java Principal object, could we have
> the KafkaPrincipal? So policies could take advantage of a custom
> KafkaPrincipal object (KIP-189).
>

Certainly.


Re: [DISCUSS] KIP-190: Handle client-ids consistently between clients and brokers

2017-09-25 Thread Ewen Cheslack-Postava
On Mon, Sep 25, 2017 at 3:35 AM, Mickael Maison 
wrote:

> Hi Ewen,
>
> I understand your point of view and ideally we'd have one convention
> for handling all user provided strings. This KIP reused the
> sanitization mechanism we had in place for user principals.
>
> I think both ways have pros and cons but what I like about early
> sanitization (as is currently) is that it's consistent. While
> monitoring tools have to know about the sanitization, all metrics are
> sanitized the same way before being passed to reporters and that
> includes all "fields" in the metric name (client-id, user).
>

How is just passing the actual values to the reporters any less consistent?
The "sanitization" process that was in place was really only for internal
purposes, right? i.e. so that we'd have a path for ZK that ZK could handle?

I think the real question is why JmxReporter is being special-cased?


>
> Moving the sanitization into JMXReporter is a publicly visible change
> as it would affect the metrics we pass to other reporters.
>

How would this be any more publicly visible than the change already being
made? In fact, since we haven't really specified what reporters should
accept, if anything the change to the sanitized strings is more of a
publicly visible change (you need to understand exactly what transformation
is being applied) than the change I am suggesting (which could be
considered a bug fix that now just fixes support for certain client IDs and
only affects JMX metric names because of JMX limitations).
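
A minimal sketch of mangling only at the JMX boundary (the helper is
illustrative; ObjectName.quote is a standard JMX API, though quoting vs.
substitution is an open choice):

```
import javax.management.ObjectName;

public class JmxBoundaryMangling {
    // Other reporters receive the raw client-id; only the JMX reporter
    // escapes characters that are illegal in an ObjectName value.
    static String jmxSafe(String rawClientId) {
        return ObjectName.quote(rawClientId);
    }

    public static void main(String[] args) {
        System.out.println(jmxSafe("client:id,with\"odd\"chars"));
    }
}
```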

-Ewen


>
>
>
>
> On Fri, Sep 22, 2017 at 8:53 PM, Ewen Cheslack-Postava
>  wrote:
> > Hi all,
> >
> > In working on the patch for KIP-196: Add metrics to Kafka Connect
> > framework, we realized that we have worker and connector/task IDs that
> are
> > to be included in metrics and those don't currently have constraints on
> > naming. I'd prefer to avoid adding naming restrictions or mangling names
> > unnecessarily, and for users that define a custom metrics reporter the
> name
> > mangling may be unexpected since their metrics system may not have the
> same
> > limitations as JMX.
> >
> > The text of the KIP is pretty JMX specific and doesn't really define
> where
> > this mangling happens. Currently, it is being applied essentially as
> early
> > as possible. I would like to propose moving the name mangling into the
> > JmxReporter itself so the only impact is on JMX metrics, but other
> metrics
> > reporters would see the original. In other words, leave system-specific
> > name mangling up to that system.
> >
> > In the JmxReporter, the mangling could remain the same (though I think
> the
> > mangling for principals is an internal implementation detail, whereas
> this
> > name mangling is user-visible). The quota metrics on the broker would now
> > be inconsistent with the others, but I think trying to be less
> JMX-specific
> > given that we support pluggable reporters is the right direction to go.
> >
> > I think in practice this has no publicly visible impact and definitely no
> > compatibility concerns, it just moves where we're doing the JMX name
> > mangling. However, since discussion about metric naming/character
> > substitutions had happened here recently I wanted to raise it here and
> make
> > sure there would be agreement on this direction.
> >
> > (Long term I'd also like to get the required instantiation of JmxReporter
> > removed as well, but that requires its own KIP.)
> >
> > Thanks,
> > Ewen
> >
> > On Thu, Sep 14, 2017 at 2:09 AM, Tom Bentley 
> wrote:
> >
> >> Hi Mickael,
> >>
> >> I was just wondering why the restriction was imposed for Java clients
> the
> >> first place, do you know?
> >>
> >> Cheers,
> >>
> >> Tom
> >>
> >> On 14 September 2017 at 09:16, Ismael Juma  wrote:
> >>
> >> > Thanks for the KIP Mickael. I suggest starting a vote.
> >> >
> >> > Ismael
> >> >
> >> > On Mon, Aug 21, 2017 at 2:51 PM, Mickael Maison <
> >> mickael.mai...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi all,
> >> > >
> >> > > I have created a KIP to cleanup the way client-ids are handled by
> >> > > brokers and clients.
> >> > >
> >> > > Currently the Java clients have some restrictions on the client-ids
> >> > > that are not enforced by the brokers. Using 3rd party clients,
> >> > > client-ids containing any characters can be used causing some
> strange
> >> > > behaviours in the way brokers handle metrics and quotas.
> >> > >
> >> > > Feedback is appreciated.
> >> > >
> >> > > Thanks
> >> > >
> >> >
> >>
>


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Ismael Juma
On Mon, Sep 25, 2017 at 5:32 PM, Tom Bentley  wrote:

> > bq. If this KIP is accepted for Kafka 1.1.0 this removal could happen in
> > Kafka 3.0.0
> >
> > There would be no Kafka 2.0 ?
> >
>
> As I understand it, a deprecation has to exist for a complete major version
> number cycle before the feature can be removed. So deprecations that are
> added in 1.x (x>0) have to remain in all 2.y before removal in 3. Did I
> understand the policy wrong?
>

We don't have this policy today for what it's worth.

Ismael


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Mickael Maison
Hi Tom,

Thanks for the KIP. Is this meant to supersede KIP-170?
If so, one of our key requirements was to be able to access the
topics/partitions list from the policy, so an administrator could
enforce a partition limit for example.

Also instead of simply having the Java Principal object, could we have
the KafkaPrincipal? So policies could take advantage of a custom
KafkaPrincipal object (KIP-189).

On Mon, Sep 25, 2017 at 5:32 PM, Tom Bentley  wrote:
> Hi Ted,
>
> Thanks for the feedback!
>
> bq. topic.action.policy.class.name
>>
>> Since the policy would cover more than one action, how about using actions
>> for the second word ?
>>
>
> Good point, done.
>
>
>> For TopicState interface, the abstract modifier for its methods are not
>> needed.
>>
>
> Fixed.
>
> bq. KIP-113
>>
>> Mind adding more to the above bullet ?
>>
>
> I guess I intended to elaborate on this, but forgot to. I guess the
> question is:
>
> a) Whether AlterReplicaDir should be covered by a policy, and if so
> b) should it be covered by this policy.
>
> Thinking about it some more I don't think it should be covered by this
> policy, so I have removed this bullet. Please shout if you disagree.
>
>
>> bq. If this KIP is accepted for Kafka 1.1.0 this removal could happen in
>> Kafka 3.0.0
>>
>> There would be no Kafka 2.0 ?
>>
>
> As I understand it, a deprecation has to exist for a complete major version
> number cycle before the feature can be removed. So deprecations that are
> added in 1.x (x>0) have to remain in all 2.y before removal in 3. Did I
> understand the policy wrong?


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Ted Yu
bq. deprecations that are added in 1.x (x>0) have to remain in all 2.y

Makes sense.

It is fine to exclude KIP-113 from your KIP.

Thanks

On Mon, Sep 25, 2017 at 9:32 AM, Tom Bentley  wrote:

> Hi Ted,
>
> Thanks for the feedback!
>
> bq. topic.action.policy.class.name
> >
> > Since the policy would cover more than one action, how about using
> actions
> > for the second word ?
> >
>
> Good point, done.
>
>
> > For TopicState interface, the abstract modifier for its methods are not
> > needed.
> >
>
> Fixed.
>
> bq. KIP-113
> >
> > Mind adding more to the above bullet ?
> >
>
> I guess I intended to elaborate on this, but forgot to. I guess the
> question is:
>
> a) Whether AlterReplicaDir should be covered by a policy, and if so
> b) should it be covered by this policy.
>
> Thinking about it some more I don't think it should be covered by this
> policy, so I have removed this bullet. Please shout if you disagree.
>
>
> > bq. If this KIP is accepted for Kafka 1.1.0 this removal could happen in
> > Kafka 3.0.0
> >
> > There would be no Kafka 2.0 ?
> >
>
> As I understand it, a deprecation has to exist for a complete major version
> number cycle before the feature can be removed. So deprecations that are
> added in 1.x (x>0) have to remain in all 2.y before removal in 3. Did I
> understand the policy wrong?
>


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Tom Bentley
Hi Ted,

Thanks for the feedback!

bq. topic.action.policy.class.name
>
> Since the policy would cover more than one action, how about using actions
> for the second word ?
>

Good point, done.


> For TopicState interface, the abstract modifier for its methods are not
> needed.
>

Fixed.

bq. KIP-113
>
> Mind adding more to the above bullet ?
>

I guess I intended to elaborate on this, but forgot to. I guess the
question is:

a) Whether AlterReplicaDir should be covered by a policy, and if so
b) should it be covered by this policy.

Thinking about it some more I don't think it should be covered by this
policy, so I have removed this bullet. Please shout if you disagree.


> bq. If this KIP is accepted for Kafka 1.1.0 this removal could happen in
> Kafka 3.0.0
>
> There would be no Kafka 2.0 ?
>

As I understand it, a deprecation has to exist for a complete major version
number cycle before the feature can be removed. So deprecations that are
added in 1.x (x>0) have to remain in all 2.y before removal in 3. Did I
understand the policy wrong?


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Ted Yu
bq. topic.action.policy.class.name

Since the policy would cover more than one action, how about using actions
for the second word ?

For TopicState interface, the abstract modifier for its methods are not
needed.

bq. KIP-113

Mind adding more to the above bullet ?

bq. If this KIP is accepted for Kafka 1.1.0 this removal could happen in
Kafka 3.0.0

There would be no Kafka 2.0 ?

Cheers

On Mon, Sep 25, 2017 at 3:34 AM, Tom Bentley  wrote:

> Hi,
>
> I'd like to start a discussion for KIP-201. The basic point is that new
> AdminClient APIs for modifying topics should have a configurable policy to
> allow the administrator to veto the modifications. Just adding a
> ModifyTopicPolicy would make for awkwardness by having separate policies
> for creation and modification, so the KIP proposes unifying both of these
> under a single new policy.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 201%3A+Rationalising+Policy+interfaces
>
> Cheers,
>
> Tom
>


[jira] [Resolved] (KAFKA-5489) Failing test: InternalTopicIntegrationTest.shouldCompactTopicsForStateChangelogs

2017-09-25 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-5489.
---
Resolution: Cannot Reproduce

Closing this as I haven't seen it fail in a while and I'm unable to reproduce 
it. We can re-open if it occurs again.

> Failing test: 
> InternalTopicIntegrationTest.shouldCompactTopicsForStateChangelogs
> 
>
> Key: KAFKA-5489
> URL: https://issues.apache.org/jira/browse/KAFKA-5489
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Damian Guy
>  Labels: test
>
> Test failed with
> {noformat}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.kafka.streams.integration.InternalTopicIntegrationTest.shouldCompactTopicsForStateChangelogs(InternalTopicIntegrationTest.java:173)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3438: KAFKA-3465: Clarify warning message of ConsumerOff...

2017-09-25 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/3438


---


Re: [VOTE] KIP-202

2017-09-25 Thread Richard Yu
The vote has passed with 5 +1s. We are now closing the vote.

On Mon, Sep 25, 2017 at 1:18 AM, Guozhang Wang  wrote:

> If no one else has opinions or votes on this thread, Richard could you close
> the voting phase then?
>
> On Sat, Sep 23, 2017 at 4:11 PM, Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > On 19 Sep 2017 12:27 am, "Richard Yu" 
> wrote:
> >
> > > Hello, I would like to start a VOTE thread on KIP-202.
> > >
> > > Thanks.
> > >
> >
>
>
>
> --
> -- Guozhang
>


[GitHub] kafka pull request #3957: MINOR: Reject JoinGroup request from first member ...

2017-09-25 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/3957

MINOR: Reject JoinGroup request from first member with empty 
partition.assignment.strategy



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka JOIN-GROUP-EMPTY-PROTOCOL

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3957.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3957


commit 4fc87ffdae80dfe5f7c4634249fe64f3100746ca
Author: Manikumar Reddy 
Date:   2017-09-25T12:40:46Z

MINOR: Reject JoinGroup request from first member with empty 
partition.assignment.strategy




---


Re: KIP-203: Add toLowerCase support to sasl.kerberos.principal.to.local rule

2017-09-25 Thread Manikumar
Hi all,

If there are no further concerns, I will start a VOTE thread for this
minor KIP.

Thanks

On Tue, Sep 19, 2017 at 3:14 PM, Manikumar 
wrote:

> Thanks for the reviews.
>
> @Ted
> Updated the KIP.
>
> @Tom
> I think we can interpret usernames as locale-independent strings. I am
> planning to use Locale.ENGLISH for case conversion.
> Updated the KIP. Let me know if you have any concerns.
>
> On Tue, Sep 19, 2017 at 12:58 AM, Tom Bentley 
> wrote:
>
>> What locale is used for the case conversion, the JVM default one or a
>> specific one?
>>
>> On 18 Sep 2017 5:31 pm, "Manikumar"  wrote:
>>
>> > Hi all,
>> >
>> > I've created a small KIP to extend the sasl.kerberos.principal.to.local
>> > rule syntax to convert short names to lower case.
>> >
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > 203%3A+Add+toLowerCase+support+to+sasl.kerberos.principal.to.local+rule
>> >
>> >
>> > Please have a look at the KIP.
>> >
>> > Thanks.
>> > Manikumar
>> >
>>
>
>
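
A small demonstration of why pinning the locale matters (the classic Turkish
dotless-i pitfall):

```
import java.util.Locale;

public class LowerCaseDemo {
    public static void main(String[] args) {
        // Under a Turkish locale, "I" lowercases to a dotless i.
        System.out.println("IBM".toLowerCase(new Locale("tr", "TR"))); // "ıbm"
        // Locale.ENGLISH gives the locale-independent result we want.
        System.out.println("IBM".toLowerCase(Locale.ENGLISH));         // "ibm"
    }
}
```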


[GitHub] kafka pull request #3956: KAFKA-5970: Avoid locking group in operations that...

2017-09-25 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3956

KAFKA-5970: Avoid locking group in operations that lock DelayedProduce



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka 
KAFKA-5970-delayedproduce-deadlock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3956.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3956


commit 27238fe16a31bad09cc82785a7bb62d347f2c7ac
Author: Rajini Sivaram 
Date:   2017-09-25T11:40:28Z

KAFKA-5970: Avoid locking group in operations that lock DelayedProduce




---


[jira] [Created] (KAFKA-5970) Deadlock due to locking of DelayedProduce and group

2017-09-25 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-5970:
-

 Summary: Deadlock due to locking of DelayedProduce and group
 Key: KAFKA-5970
 URL: https://issues.apache.org/jira/browse/KAFKA-5970
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
Priority: Critical
 Fix For: 1.0.0


From a local run of TransactionsBounceTest. It looks like we hold the group 
lock while completing a DelayedProduce, which in turn may acquire the group lock.
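
A minimal, non-Kafka illustration of the lock cycle (thread bodies are 
placeholders); the actual deadlock report from the run follows:

{code}
final Object groupLock = new Object();   // stands in for GroupMetadata
final Object produceLock = new Object(); // stands in for DelayedProduce

// Thread A: holds the group lock, then tries to complete the operation.
new Thread(() -> {
    synchronized (groupLock) {
        synchronized (produceLock) { /* tryComplete */ }
    }
}).start();

// Thread B: holds the operation lock; completion touches group state.
new Thread(() -> {
    synchronized (produceLock) {
        synchronized (groupLock) { /* handleTxnCompletion */ }
    }
}).start();
{code}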

{quote}
Found one Java-level deadlock:
=
"kafka-request-handler-7":
  waiting to lock monitor 0x7fe08891fb08 (object 0x00074a9fbc50, a 
kafka.coordinator.group.GroupMetadata),
  which is held by "kafka-request-handler-4"
"kafka-request-handler-4":
  waiting to lock monitor 0x7fe0869e4408 (object 0x000749be7bb8, a 
kafka.server.DelayedProduce),
  which is held by "kafka-request-handler-3"
"kafka-request-handler-3":
  waiting to lock monitor 0x7fe08891fb08 (object 0x00074a9fbc50, a 
kafka.coordinator.group.GroupMetadata),
  which is held by "kafka-request-handler-4"

Java stack information for the threads listed above:
===
"kafka-request-handler-7":
at 
kafka.coordinator.group.GroupMetadataManager$$anonfun$handleTxnCompletion$1.apply(GroupMetadataManager.scala:752)
  waiting to lock <0x00074a9fbc50> (a 
kafka.coordinator.group.GroupMetadata)
at 
kafka.coordinator.group.GroupMetadataManager$$anonfun$handleTxnCompletion$1.apply(GroupMetadataManager.scala:750)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at 
kafka.coordinator.group.GroupMetadataManager.handleTxnCompletion(GroupMetadataManager.scala:750)
at 
kafka.coordinator.group.GroupCoordinator.handleTxnCompletion(GroupCoordinator.scala:439)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$maybeSendResponseCallback$1(KafkaApis.scala:1556)
at 
kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$1$$anonfun$apply$20.apply(KafkaApis.scala:1614)
at 
kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$1$$anonfun$apply$20.apply(KafkaApis.scala:1614)
at kafka.server.DelayedProduce.onComplete(DelayedProduce.scala:134)
at 
kafka.server.DelayedOperation.forceComplete(DelayedOperation.scala:66)
at kafka.server.DelayedProduce.tryComplete(DelayedProduce.scala:116)
at kafka.server.DelayedProduce.safeTryComplete(DelayedProduce.scala:76)
  locked <0x00074b21c968> (a kafka.server.DelayedProduce)
at 
kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:338)
at 
kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:244)
at 
kafka.server.ReplicaManager.tryCompleteDelayedProduce(ReplicaManager.scala:284)
at 
kafka.cluster.Partition.tryCompleteDelayedRequests(Partition.scala:434)
at 
kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:285)
at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1290)
at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1286)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:1286)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:786)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:598)
at kafka.server.KafkaApis.handle(KafkaApis.scala:100)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
at java.lang.Thread.run(Thread.java:748)
"kafka-request-handler-4":
at kafka.server.DelayedProduce.safeTryComplete(DelayedProduce.scala:75)
  waiting to lock <0x000749be7bb8> (a kafka.server.DelayedProduce)
at 
kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:338)
at 
kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:244)
at 
kafka.server.ReplicaManager.tryCompleteDelayedProduce(ReplicaManager.scala:284)
at 
kafka.cluster.Partition.tryCompleteDelayedRequests(Partition.scala:434)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:516)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:707)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:691)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.Tra

Re: [DISCUSS] KIP-190: Handle client-ids consistently between clients and brokers

2017-09-25 Thread Mickael Maison
Hi Ewen,

I understand your point of view and ideally we'd have one convention
for handling all user provided strings. This KIP reused the
sanitization mechanism we had in place for user principals.

I think both ways have pros and cons but what I like about early
sanitization (as is currently) is that it's consistent. While
monitoring tools have to know about the sanitization, all metrics are
sanitized the same way before being passed to reporters and that
includes all "fields" in the metric name (client-id, user).

Moving the sanitization into JMXReporter is a publicly visible change
as it would affect the metrics we pass to other reporters.




On Fri, Sep 22, 2017 at 8:53 PM, Ewen Cheslack-Postava
 wrote:
> Hi all,
>
> In working on the patch for KIP-196: Add metrics to Kafka Connect
> framework, we realized that we have worker and connector/task IDs that are
> to be included in metrics and those don't currently have constraints on
> naming. I'd prefer to avoid adding naming restrictions or mangling names
> unnecessarily, and for users that define a custom metrics reporter the name
> mangling may be unexpected since their metrics system may not have the same
> limitations as JMX.
>
> The text of the KIP is pretty JMX specific and doesn't really define where
> this mangling happens. Currently, it is being applied essentially as early
> as possible. I would like to propose moving the name mangling into the
> JmxReporter itself so the only impact is on JMX metrics, but other metrics
> reporters would see the original. In other words, leave system-specific
> name mangling up to that system.
>
> In the JmxReporter, the mangling could remain the same (though I think the
> mangling for principals is an internal implementation detail, whereas this
> name mangling is user-visible). The quota metrics on the broker would now
> be inconsistent with the others, but I think trying to be less JMX-specific
> given that we support pluggable reporters is the right direction to go.
>
> I think in practice this has no publicly visible impact and definitely no
> compatibility concerns, it just moves where we're doing the JMX name
> mangling. However, since discussion about metric naming/character
> substitutions had happened here recently I wanted to raise it here and make
> sure there would be agreement on this direction.
>
> (Long term I'd also like to get the required instantiation of JmxReporter
> removed as well, but that requires its own KIP.)
>
> Thanks,
> Ewen
>
> On Thu, Sep 14, 2017 at 2:09 AM, Tom Bentley  wrote:
>
>> Hi Mickael,
>>
>> I was just wondering why the restriction was imposed for Java clients the
>> first place, do you know?
>>
>> Cheers,
>>
>> Tom
>>
>> On 14 September 2017 at 09:16, Ismael Juma  wrote:
>>
>> > Thanks for the KIP Mickael. I suggest starting a vote.
>> >
>> > Ismael
>> >
>> > On Mon, Aug 21, 2017 at 2:51 PM, Mickael Maison <
>> mickael.mai...@gmail.com>
>> > wrote:
>> >
>> > > Hi all,
>> > >
>> > > I have created a KIP to cleanup the way client-ids are handled by
>> > > brokers and clients.
>> > >
>> > > Currently the Java clients have some restrictions on the client-ids
>> > > that are not enforced by the brokers. Using 3rd party clients,
>> > > client-ids containing any characters can be used causing some strange
>> > > behaviours in the way brokers handle metrics and quotas.
>> > >
>> > > Feedback is appreciated.
>> > >
>> > > Thanks
>> > >
>> >
>>


[DISCUSS] KIP-201: Rationalising Policy interfaces

2017-09-25 Thread Tom Bentley
Hi,

I'd like to start a discussion for KIP-201. The basic point is that new
AdminClient APIs for modifying topics should have a configurable policy to
allow the administrator to veto the modifications. Just adding a
ModifyTopicPolicy would make for awkwardness by having separate policies
for creation and modification, so the KIP proposes unifying both of these
under a single new policy.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-201%3A+Rationalising+Policy+interfaces
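
For context, a minimal example against today's creation-only policy that the
KIP proposes to generalise (the naming rule itself is illustrative):

```
import java.util.Map;

import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Today this can only veto creation; KIP-201 would let one policy also
// veto modifications.
public class NoReservedPrefixPolicy implements CreateTopicPolicy {
    @Override public void configure(Map<String, ?> configs) { }
    @Override public void close() { }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        if (requestMetadata.topic().startsWith("__"))
            throw new PolicyViolationException("Topics may not use the reserved '__' prefix");
    }
}
```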

Cheers,

Tom


[GitHub] kafka pull request #3955: KAFKA-5969: Use correct error message when the JSO...

2017-09-25 Thread scholzj
GitHub user scholzj opened a pull request:

https://github.com/apache/kafka/pull/3955

KAFKA-5969: Use correct error message when the JSON file is invalid

When an invalid JSON file is passed to the 
bin/kafka-preferred-replica-election.sh / PreferredReplicaLeaderElectionCommand 
tool, it gives the misleading error `Preferred replica election data is empty`. 
This PR replaces it with a correct error message.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scholzj/kafka KAFKA-5969

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3955.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3955


commit 45bd0251c02b0d960e015b6f626323ba2eeb413c
Author: Jakub Scholz 
Date:   2017-09-25T09:11:57Z

Use correct error message when the JSON file is invalid




---


[jira] [Created] (KAFKA-5969) bin/kafka-preferred-replica-election.sh gives misleading error when invalid JSON file is passed as parameter

2017-09-25 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-5969:
---

 Summary: bin/kafka-preferred-replica-election.sh gives misleading 
error when invalid JSON file is passed as parameter
 Key: KAFKA-5969
 URL: https://issues.apache.org/jira/browse/KAFKA-5969
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Jakub Scholz


When an invalid JSON file is passed to the bin/kafka-preferred-replica-election.sh 
/ PreferredReplicaLeaderElectionCommand tool, it gives a misleading error:
{code}
kafka.admin.AdminOperationException: Preferred replica election data is empty
at 
kafka.admin.PreferredReplicaLeaderElectionCommand$.parsePreferredReplicaElectionData(PreferredReplicaLeaderElectionCommand.scala:97)
at 
kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:66)
at 
kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
{code}

It suggests that the data is empty rather than invalid, which can confuse people. 
The exception text should be fixed.
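
A hedged sketch of the distinction the fix should draw (Jackson's readTree is 
a real API; the validator shape and messages are illustrative):

{code}
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ElectionDataValidator {
    static void validate(String jsonData) {
        if (jsonData == null || jsonData.trim().isEmpty())
            throw new IllegalArgumentException("Preferred replica election data is empty");
        try {
            new ObjectMapper().readTree(jsonData); // throws on malformed input
        } catch (IOException e) {
            throw new IllegalArgumentException(
                    "Preferred replica election data is not valid JSON: " + e.getMessage(), e);
        }
    }
}
{code}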



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5968) Remove all broker metrics during shutdown

2017-09-25 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-5968:
-

 Summary: Remove all broker metrics during shutdown
 Key: KAFKA-5968
 URL: https://issues.apache.org/jira/browse/KAFKA-5968
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Rajini Sivaram
 Fix For: 1.1.0


RequestMetrics on the broker is currently a static variable, created when the 
broker starts up, but the metrics are never removed. This makes it hard to test 
broker metrics since there could be metrics left over from previous tests. We 
should ensure that all metrics are cleared even for brokers.
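
A hedged sketch of one way to make them removable (the Yammer registry calls
exist; the wrapper and metric names are assumptions):

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.MetricName;

public class RemovableRequestMetrics {
    private final Set<MetricName> registered = ConcurrentHashMap.newKeySet();

    public void newMeter(String group, String type, String name) {
        MetricName mn = new MetricName(group, type, name);
        Metrics.defaultRegistry().newMeter(mn, "requests", TimeUnit.SECONDS);
        registered.add(mn);
    }

    // Invoked from shutdown so no metrics leak into the next (test) broker.
    public void removeAll() {
        for (MetricName mn : registered)
            Metrics.defaultRegistry().removeMetric(mn);
        registered.clear();
    }
}
{code}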



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #2165: KAFKA-4440: Make producer RecordMetadata non-final

2017-09-25 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/2165


---


Re: [VOTE] KIP-202

2017-09-25 Thread Guozhang Wang
If no one else has opinions or votes on this thread, Richard could you close
the voting phase then?

On Sat, Sep 23, 2017 at 4:11 PM, Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding).
>
> On 19 Sep 2017 12:27 am, "Richard Yu"  wrote:
>
> > Hello, I would like to start a VOTE thread on KIP-202.
> >
> > Thanks.
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #3949: MINOR: Update Streams quickstart to create output ...

2017-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3949


---


[GitHub] kafka pull request #3954: KAFKA-5758: Don't fail fetch request if replica is...

2017-09-25 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3954

KAFKA-5758: Don't fail fetch request if replica is no longer a follower for 
a partition

We log a warning instead, which is what we also do if the partition hasn't 
been created
yet.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-5758-dont-fail-fetch-request-if-replica-is-not-follower

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3954.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3954


commit 7e12d408619b41a37cc40f4c294d47875df312ce
Author: Ismael Juma 
Date:   2017-09-25T07:33:46Z

KAFKA-5758: Don't fail fetch request if replica is no longer a follower for 
a partition

We log a warning instead, which is what we also do if the partition hasn't 
been created
yet.




---