[jira] [Resolved] (KAFKA-5690) kafka-acls command should be able to list per principal

2018-09-16 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-5690.
-
Resolution: Fixed

> kafka-acls command should be able to list per principal
> ---
>
> Key: KAFKA-5690
> URL: https://issues.apache.org/jira/browse/KAFKA-5690
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Koelli Mungee
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.1.0
>
>
> Currently the `kafka-acls` command has a `--list` option that can only list per 
> resource (--topic, --group, or --cluster). To see the ACLs granted to a 
> particular principal, the user has to iterate through the entire list to work 
> out which privileges that principal has been granted. An option to list ACLs 
> per principal would simplify this process.
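The filtering the merged patch performs (keep only ACL entries whose principal matches the one requested, and drop resources left with no entries) can be sketched in isolation. The types below are simplified stand-ins, not the real kafka.security.auth classes:

```java
// Minimal sketch of per-principal ACL listing (KIP-357): given a map of
// resource name -> ACL entries, keep only entries for the requested
// principal, dropping resources that end up empty.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AclListingSketch {
    /** One ACL entry: which principal may perform which operation. */
    static final class AclEntry {
        final String principal;
        final String operation;
        AclEntry(String principal, String operation) {
            this.principal = principal;
            this.operation = operation;
        }
    }

    /** Keep only the ACL entries granted to the given principal. */
    static Map<String, List<AclEntry>> aclsForPrincipal(
            Map<String, List<AclEntry>> resourceToAcls, String principal) {
        Map<String, List<AclEntry>> filtered = new HashMap<>();
        for (Map.Entry<String, List<AclEntry>> e : resourceToAcls.entrySet()) {
            List<AclEntry> matching = e.getValue().stream()
                    .filter(acl -> acl.principal.equals(principal))
                    .collect(Collectors.toList());
            if (!matching.isEmpty()) {
                filtered.put(e.getKey(), matching);
            }
        }
        return filtered;
    }
}
```

This mirrors the `mapValues(...).filter(_._2.nonEmpty)` shape of the Scala patch below; the resolution resolving this issue for 2.1.0 exposes it through a new listing option on `kafka-acls` per KIP-357.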



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-5690) kafka-acls command should be able to list per principal

2018-09-16 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated KAFKA-5690:

Fix Version/s: 2.1.0






[jira] [Commented] (KAFKA-5690) kafka-acls command should be able to list per principal

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16616616#comment-16616616
 ] 

ASF GitHub Bot commented on KAFKA-5690:
---

lindong28 closed pull request #5633: KAFKA-5690: Add support to list ACLs for a 
given principal (KIP-357)
URL: https://github.com/apache/kafka/pull/5633
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/core/src/main/scala/kafka/admin/AclCommand.scala b/core/src/main/scala/kafka/admin/AclCommand.scala
index c2dda33d5ab..ad375d20572 100644
--- a/core/src/main/scala/kafka/admin/AclCommand.scala
+++ b/core/src/main/scala/kafka/admin/AclCommand.scala
@@ -138,10 +138,22 @@ object AclCommand extends Logging {
     def listAcls(): Unit = {
       withAdminClient(opts) { adminClient =>
         val filters = getResourceFilter(opts, dieIfNoResourceFound = false)
+        val listPrincipals = getPrincipals(opts, opts.listPrincipalsOpt)
         val resourceToAcls = getAcls(adminClient, filters)
 
-        for ((resource, acls) <- resourceToAcls)
-          println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+        if (listPrincipals.isEmpty) {
+          for ((resource, acls) <- resourceToAcls)
+            println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+        } else {
+          listPrincipals.foreach(principal => {
+            println(s"ACLs for principal `$principal`")
+            val filteredResourceToAcls = resourceToAcls.mapValues(acls =>
+              acls.filter(acl => principal.toString.equals(acl.principal))).filter(entry => entry._2.nonEmpty)
+
+            for ((resource, acls) <- filteredResourceToAcls)
+              println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+          })
+        }
       }
     }
 
@@ -237,13 +249,20 @@ object AclCommand extends Logging {
     def listAcls(): Unit = {
       withAuthorizer() { authorizer =>
         val filters = getResourceFilter(opts, dieIfNoResourceFound = false)
+        val listPrincipals = getPrincipals(opts, opts.listPrincipalsOpt)
 
-        val resourceToAcls: Iterable[(Resource, Set[Acl])] =
-          if (filters.isEmpty) authorizer.getAcls()
-          else filters.flatMap(filter => getAcls(authorizer, filter))
-
-        for ((resource, acls) <- resourceToAcls)
-          println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+        if (listPrincipals.isEmpty) {
+          val resourceToAcls = getFilteredResourceToAcls(authorizer, filters)
+          for ((resource, acls) <- resourceToAcls)
+            println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+        } else {
+          listPrincipals.foreach(principal => {
+            println(s"ACLs for principal `$principal`")
+            val resourceToAcls = getFilteredResourceToAcls(authorizer, filters, Some(principal))
+            for ((resource, acls) <- resourceToAcls)
+              println(s"Current ACLs for resource `$resource`: $Newline ${acls.map("\t" + _).mkString(Newline)} $Newline")
+          })
+        }
       }
     }
 
@@ -256,9 +275,23 @@
       )
     }
 
-    private def getAcls(authorizer: Authorizer, filter: ResourcePatternFilter): Map[Resource, Set[Acl]] =
-      authorizer.getAcls()
-        .filter { case (resource, acl) => filter.matches(resource.toPattern) }
+    private def getFilteredResourceToAcls(authorizer: Authorizer, filters: Set[ResourcePatternFilter],
+                                          listPrincipal: Option[KafkaPrincipal] = None): Iterable[(Resource, Set[Acl])] = {
+      if (filters.isEmpty)
+        if (listPrincipal.isEmpty)
+          authorizer.getAcls()
+        else
+          authorizer.getAcls(listPrincipal.get)
+      else filters.flatMap(filter => getAcls(authorizer, filter, listPrincipal))
+    }
+
+    private def getAcls(authorizer: Authorizer, filter: ResourcePatternFilter,
+                        listPrincipal: Option[KafkaPrincipal] = None): Map[Resource, Set[Acl]] =
+      if (listPrincipal.isEmpty)
+        authorizer.getAcls().filter { case (resource, acl) => filter.matches(resource.toPattern) }
+      else
+        authorizer.getAcls(listPrincipal.get).filter { case (resource, acl) => filter.matches(resource.toPattern) }
+
   }
 
   private def getResourceToAcls(opts: AclCommandOptions): Map[Resource, Set[Acl]] = {
@@ -521,6 +554,12 @@ object AclCommand extends Logging {
       .describedAs

[jira] [Commented] (KAFKA-5690) kafka-acls command should be able to list per principal

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16616617#comment-16616617
 ] 

ASF GitHub Bot commented on KAFKA-5690:
---

omkreddy opened a new pull request #5633: KAFKA-5690: Add support to list ACLs 
for a given principal (KIP-357)
URL: https://github.com/apache/kafka/pull/5633
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org







[jira] [Reopened] (KAFKA-5690) kafka-acls command should be able to list per principal

2018-09-16 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin reopened KAFKA-5690:
-






[jira] [Resolved] (KAFKA-5789) Deleted topic is recreated when consumer subscribe the deleted one

2018-09-16 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5789.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-7320/KIP-361

> Deleted topic is recreated when consumer subscribe the deleted one
> --
>
> Key: KAFKA-5789
> URL: https://issues.apache.org/jira/browse/KAFKA-5789
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Takafumi Saito
>Priority: Major
>  Labels: needs-kip
>
> When auto.create.topics.enable is set to true on the broker, some deleted 
> topics will be re-created: if a consumer that subscribes to the deleted topic 
> still exists, the broker will create a topic with the same name.
> It is not necessary for consumers to trigger topic creation, so automatic 
> topic creation on the consumer side should be disabled.
> I attach the log output from our broker.
> It shows that a topic (topic_1) was deleted at 12:02:24,672, but the same 
> topic was created shortly thereafter:
> {code:java}
> [2017-08-22 12:02:24,666] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions topic_1-0 (kafka.server.ReplicaFetcherManager)
> [2017-08-22 12:02:24,666] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions  (kafka.server.ReplicaFetcherManager)
> [2017-08-22 12:02:24,667] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions topic_1-0 (kafka.server.ReplicaFetcherManager)
> [2017-08-22 12:02:24,672] INFO Log for partition topic_1-0 is renamed to /data/topic_1-0.ad490e8326704ae6a6fd9f6399c29614-delete and is scheduled for deletion (kafka.log.LogManager)
> [2017-08-22 12:02:24,736] INFO Loading producer state from offset 0 for partition topic_1-0 with message format version 2 (kafka.log.Log)
> [2017-08-22 12:02:24,736] INFO Completed load of log topic_1-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
> [2017-08-22 12:02:24,737] INFO Created log for partition [topic_1,0] in /data with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 6, max.message.bytes -> 112, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 60480, segment.bytes -> 1073741824, retention.ms -> 8640, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
> [2017-08-22 12:02:24,738] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions topic_1-0 (kafka.server.ReplicaFetcherManager)
> [2017-08-22 12:02:24,738] INFO [ReplicaFetcherManager on broker 1] Added fetcher for partitions List([topic_1-0, initOffset 0 to broker BrokerEndPoint(2,sbx-patriot-kafka02.amb-patriot.incvb.io,9092)] ) (kafka.server.ReplicaFetcherManager)
> [2017-08-22 12:02:25,200] INFO [ReplicaFetcherThread-0-2]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in topic_1-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
> [2017-08-22 12:02:25,200] INFO Truncating topic_1-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
> {code}
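Both the broker-side workaround and the consumer-side switch that KIP-361 (cited in the duplicate resolution above) introduced in Kafka 2.3 are plain configuration settings. A sketch of the relevant properties:

```properties
# Broker (server.properties): do not auto-create topics when a client
# requests metadata for a nonexistent topic.
auto.create.topics.enable=false

# Consumer (Kafka 2.3+, KIP-361): opt this consumer out of triggering
# broker-side auto-creation even when the broker allows it.
allow.auto.create.topics=false
```

With the broker flag alone, producers also lose auto-creation; the consumer-side flag is the targeted fix this issue asked for.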





[jira] [Updated] (KAFKA-7344) Return early when all tasks are assigned in StickyTaskAssignor#assignActive

2018-09-16 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-7344:
--
Description: 
After re-assigning existing active tasks to clients that previously had the 
same active task, there is a chance that {{taskIds.size() == assigned.size()}}, 
i.e. all tasks are assigned.
The method continues with:

{code}
final Set<TaskId> unassigned = new HashSet<>(taskIds);
unassigned.removeAll(assigned);
{code}
We can check the above condition and return early, before allocating the HashSet.

A similar optimization can be done before the following (around line 112):
{code}
// assign any remaining unassigned tasks
final List<TaskId> sortedTasks = new ArrayList<>(unassigned);
{code}

  was:
After re-assigning existing active tasks to clients that previously had the 
same active task, there is chance that {{taskIds.size() == assigned.size()}}, 
i.e. all tasks are assigned .
The method continues with:
{code}
final Set unassigned = new HashSet<>(taskIds);
unassigned.removeAll(assigned);
{code}
We can check the above condition and return early before allocating HashSet.

Similar optimization can be done before the following (around line 112):
{code}
// assign any remaining unassigned tasks
final List sortedTasks = new ArrayList<>(unassigned);
{code}


> Return early when all tasks are assigned in StickyTaskAssignor#assignActive
> ---
>
> Key: KAFKA-7344
> URL: https://issues.apache.org/jira/browse/KAFKA-7344
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Ted Yu
>Assignee: kevin.chen
>Priority: Minor
>  Labels: optimization
>
> After re-assigning existing active tasks to clients that previously had the 
> same active task, there is a chance that {{taskIds.size() == assigned.size()}}, 
> i.e. all tasks are assigned.
> The method continues with:
> {code}
> final Set<TaskId> unassigned = new HashSet<>(taskIds);
> unassigned.removeAll(assigned);
> {code}
> We can check the above condition and return early, before allocating the HashSet.
> A similar optimization can be done before the following (around line 112):
> {code}
> // assign any remaining unassigned tasks
> final List<TaskId> sortedTasks = new ArrayList<>(unassigned);
> {code}
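The proposed early return can be sketched in isolation. This is a simplified stand-in for `StickyTaskAssignor#assignActive` (Integer stands in for TaskId), not the actual Streams code:

```java
// Sketch of the early-return optimization: when every task is already
// assigned, skip allocating the scratch HashSet altogether.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class EarlyReturnSketch {
    static Set<Integer> remainingTasks(Set<Integer> taskIds, Set<Integer> assigned) {
        // assigned is always a subset of taskIds here, so equal sizes
        // mean "everything is assigned": nothing left to compute.
        if (taskIds.size() == assigned.size()) {
            return Collections.emptySet();
        }
        // Otherwise fall through to the original copy-and-subtract logic.
        Set<Integer> unassigned = new HashSet<>(taskIds);
        unassigned.removeAll(assigned);
        return unassigned;
    }
}
```

The size comparison is O(1), so the fast path costs nothing when tasks remain and saves a full set copy when none do.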





[jira] [Updated] (KAFKA-7276) Consider using re2j to speed up regex operations

2018-09-16 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-7276:
--
Description: 
https://github.com/google/re2j

re2j claims to do linear-time regular expression matching in Java.

Its benefit is most obvious for deeply nested regex (such as a | b | c | d).
We should consider using re2j to speed up regex operations.

  was:
https://github.com/google/re2j

re2j claims to do linear time regular expression matching in Java.

Its benefit is most obvious for deeply nested regex (such as a | b | c | d).

We should consider using re2j to speed up regex operations.


> Consider using re2j to speed up regex operations
> 
>
> Key: KAFKA-7276
> URL: https://issues.apache.org/jira/browse/KAFKA-7276
> Project: Kafka
>  Issue Type: Task
>  Components: packaging
>Reporter: Ted Yu
>Assignee: kevin.chen
>Priority: Major
>
> https://github.com/google/re2j
> re2j claims to do linear time regular expression matching in Java.
> Its benefit is most obvious for deeply nested regex (such as a | b | c | d).
> We should consider using re2j to speed up regex operations.
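re2j publishes the same `Pattern`/`Matcher` shape as `java.util.regex`, so the claimed migration is largely an import swap (`java.util.regex.Pattern` to `com.google.re2j.Pattern`), provided the patterns avoid features re2j does not support, such as backreferences. The sketch below uses the standard-library classes so it is self-contained:

```java
// A flat alternation like the one mentioned in the issue, compiled once
// and reused. Swapping the import to com.google.re2j.Pattern would keep
// this code unchanged (assumption: only the re2j-supported regex subset
// is used).
import java.util.regex.Pattern;

public class RegexSketch {
    static final Pattern ALTERNATION = Pattern.compile("a|b|c|d");

    static boolean matchesAlternation(String s) {
        return ALTERNATION.matcher(s).matches();
    }
}
```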





[jira] [Commented] (KAFKA-7316) Use of filter method in KTable.scala may result in StackOverflowError

2018-09-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16616834#comment-16616834
 ] 

Ted Yu commented on KAFKA-7316:
---

Can this be resolved?

> Use of filter method in KTable.scala may result in StackOverflowError
> -
>
> Key: KAFKA-7316
> URL: https://issues.apache.org/jira/browse/KAFKA-7316
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Priority: Major
>  Labels: scala
> Fix For: 2.0.1, 2.1.0
>
> Attachments: 7316.v4.txt
>
>
> In this thread:
> http://search-hadoop.com/m/Kafka/uyzND1dNbYKXzC4F1?subj=Issue+in+Kafka+2+0+0+
> Druhin reported seeing a StackOverflowError when using the filter method from 
> KTable.scala.
> This can be reproduced with the following change:
> {code}
> diff --git a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala b/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
> index 3d1bab5..e0a06f2 100644
> --- a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
> +++ b/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
> @@ -58,6 +58,7 @@ class StreamToTableJoinScalaIntegrationTestImplicitSerdes extends StreamToTableJ
>      val userClicksStream: KStream[String, Long] = builder.stream(userClicksTopic)
>      val userRegionsTable: KTable[String, String] = builder.table(userRegionsTopic)
> +    userRegionsTable.filter { case (_, count) => true }
>      // Compute the total per region by summing the individual click counts per region.
>      val clicksPerRegion: KTable[String, Long] =
> {code}
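A common way such a StackOverflowError arises in a thin wrapper API like the Scala KTable: the wrapper's method must delegate to the wrapped instance, and if the call inside resolves back to the wrapper's own method, every invocation recurses until the stack overflows. This is a simplified illustration in plain Java, not the actual KTable code:

```java
// Stand-ins for the underlying table and its wrapper, showing the
// delegation a wrapper method must perform to avoid self-recursion.
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class WrapperSketch {
    /** Stand-in for the underlying (Java-side) table. */
    static final class JTable {
        final List<Long> rows;
        JTable(List<Long> rows) { this.rows = rows; }
        JTable filter(Predicate<Long> p) {
            return new JTable(rows.stream().filter(p).collect(Collectors.toList()));
        }
    }

    /** Stand-in for the thin wrapper (the Scala facade). */
    static final class STable {
        final JTable inner;
        STable(JTable inner) { this.inner = inner; }
        // Correct: delegate to the wrapped object.
        STable filter(Predicate<Long> p) {
            return new STable(inner.filter(p));
        }
        // Buggy shape (do NOT write this): `return this.filter(p);`
        // would call itself forever and throw StackOverflowError.
    }
}
```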





[jira] [Comment Edited] (KAFKA-7175) Make version checking logic more flexible in streams_upgrade_test.py

2018-09-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574181#comment-16574181
 ] 

Ted Yu edited comment on KAFKA-7175 at 9/16/18 6:15 PM:


Thanks for taking this, Ray.


was (Author: yuzhih...@gmail.com):
Thanks for taking this, Ray.

> Make version checking logic more flexible in streams_upgrade_test.py
> 
>
> Key: KAFKA-7175
> URL: https://issues.apache.org/jira/browse/KAFKA-7175
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Ted Yu
>Assignee: Ray Chiang
>Priority: Major
>  Labels: newbie++
>
> During debugging of the system test failure for KAFKA-5037, it was re-discovered 
> that the version numbers inside version-probing-related messages are hard-coded 
> in streams_upgrade_test.py.
> This is inflexible.
> We should correlate the latest version from the Java class with the expected 
> version numbers.
> Matthias made the following suggestion:
> We should also make this more generic and test upgrades from 3 -> 4, 3 -> 5, 
> and 4 -> 5. The current code only goes from the latest version to the future 
> version.





[jira] [Commented] (KAFKA-7131) Update release script to generate announcement email text

2018-09-16 Thread bibin sebastian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617098#comment-16617098
 ] 

bibin sebastian commented on KAFKA-7131:


[@ewencp|https://github.com/ewencp] [@mjsax|https://github.com/mjsax] can you 
please review the PR?

> Update release script to generate announcement email text
> -
>
> Key: KAFKA-7131
> URL: https://issues.apache.org/jira/browse/KAFKA-7131
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias J. Sax
>Assignee: bibin sebastian
>Priority: Minor
>  Labels: newbie
>
> When a release is finalized, we send out an email to announce the release. 
> At the moment, we have a template in the wiki 
> ([https://cwiki.apache.org/confluence/display/KAFKA/Release+Process|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]).
>  However, the template needs some manual changes to fill in the release 
> number, number of contributors, etc.
> Some parts could be automated – the corresponding commands are documented in 
> the wiki already.





[jira] [Comment Edited] (KAFKA-7131) Update release script to generate announcement email text

2018-09-16 Thread bibin sebastian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617098#comment-16617098
 ] 

bibin sebastian edited comment on KAFKA-7131 at 9/17/18 5:20 AM:
-

[~mjsax] [~ewencp] can you please review the PR?


was (Author: bibin84):
[@ewencp|https://github.com/ewencp] [@mjsax|https://github.com/mjsax] can you 
please the PR?



