Repository: kafka-site
Updated Branches:
  refs/heads/asf-site cab4453c0 -> 3811ec372


More edits on 0.10.2 web docs after the release

Ping derrickdoo ewencp for reviews.

Author: Guozhang Wang <wangg...@gmail.com>

Reviewers: Derrick Or, Jun Rao

Closes #47 from guozhangwang/KMINOR-post-0.10.2-streams


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/3811ec37
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/3811ec37
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/3811ec37

Branch: refs/heads/asf-site
Commit: 3811ec372fe67acf89435c39af1a5046378e6710
Parents: cab4453
Author: Guozhang Wang <wangg...@gmail.com>
Authored: Wed Feb 22 21:27:52 2017 -0800
Committer: Guozhang Wang <wangg...@gmail.com>
Committed: Wed Feb 22 21:27:52 2017 -0800

----------------------------------------------------------------------
 0102/api.html                    |   2 +-
 0102/generated/topic_config.html |   4 +-
 0102/introduction.html           |   6 +-
 0102/ops.html                    | 130 ++++++++-
 0102/streams.html                | 502 +++++++++++++++++++---------------
 0102/toc.html                    |   4 +-
 0102/upgrade.html                |  48 +---
 0102/uses.html                   |  49 +++-
 events.html                      |   8 +-
 9 files changed, 461 insertions(+), 292 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/api.html
----------------------------------------------------------------------
diff --git a/0102/api.html b/0102/api.html
index 9b9cd96..de0bb1d 100644
--- a/0102/api.html
+++ b/0102/api.html
@@ -66,7 +66,7 @@
        Examples showing how to use this library are given in the
        <a 
href="/{{version}}/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html"
 title="Kafka 0.10.2 Javadoc">javadocs</a>
        <p>
-       Additional documentation on using the Streams API is available <a 
href="/documentation.html#streams">here</a>.
+       Additional documentation on using the Streams API is available <a 
href="/{{version}}/documentation/streams">here</a>.
        <p>
        To use Kafka Streams you can use the following maven dependency:
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/generated/topic_config.html
----------------------------------------------------------------------
diff --git a/0102/generated/topic_config.html b/0102/generated/topic_config.html
index 8974147..d390707 100644
--- a/0102/generated/topic_config.html
+++ b/0102/generated/topic_config.html
@@ -21,11 +21,11 @@
 <tr>
 <td>flush.ms</td><td>This setting allows specifying a time interval at which 
we will force an fsync of data written to the log. For example if this was set 
to 1000 we would fsync after 1000 ms had passed. In general we recommend you 
not set this and use replication for durability and allow the operating 
system's background flush capabilities as it is more 
efficient.</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>log.flush.interval.ms</td><td>medium</td></tr>
 <tr>
-<td>follower.replication.throttled.replicas</td><td>A list of replicas for 
which log replication should be throttled on the follower side. The list should 
describe a set of replicas in the form 
[PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the 
wildcard '*' can be used to throttle all replicas for this 
topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@7be8c2a2</td><td>follower.replication.throttled.replicas</td><td>medium</td></tr>
+<td>follower.replication.throttled.replicas</td><td>A list of replicas for 
which log replication should be throttled on the follower side. The list should 
describe a set of replicas in the form 
[PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the 
wildcard '*' can be used to throttle all replicas for this 
topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@1060b431</td><td>follower.replication.throttled.replicas</td><td>medium</td></tr>
 <tr>
 <td>index.interval.bytes</td><td>This setting controls how frequently Kafka 
adds an index entry to it's offset index. The default setting ensures that we 
index a message roughly every 4096 bytes. More indexing allows reads to jump 
closer to the exact position in the log but makes the index larger. You 
probably don't need to change 
this.</td><td>int</td><td>4096</td><td>[0,...]</td><td>log.index.interval.bytes</td><td>medium</td></tr>
 <tr>
-<td>leader.replication.throttled.replicas</td><td>A list of replicas for which 
log replication should be throttled on the leader side. The list should 
describe a set of replicas in the form 
[PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the 
wildcard '*' can be used to throttle all replicas for this 
topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@7be8c2a2</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
+<td>leader.replication.throttled.replicas</td><td>A list of replicas for which 
log replication should be throttled on the leader side. The list should 
describe a set of replicas in the form 
[PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the 
wildcard '*' can be used to throttle all replicas for this 
topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@1060b431</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
 <tr>
 <td>max.message.bytes</td><td>This is largest message size Kafka will allow to 
be appended. Note that if you increase this size you must also increase your 
consumer's fetch size so they can fetch messages this 
large.</td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/introduction.html
----------------------------------------------------------------------
diff --git a/0102/introduction.html b/0102/introduction.html
index 7672a51..556aa02 100644
--- a/0102/introduction.html
+++ b/0102/introduction.html
@@ -43,7 +43,7 @@
       <ul style="float: left; width: 40%;">
       <li>The <a href="/documentation.html#producerapi">Producer API</a> 
allows an application to publish a stream of records to one or more Kafka 
topics.
       <li>The <a href="/documentation.html#consumerapi">Consumer API</a> 
allows an application to subscribe to one or more topics and process the stream 
of records produced to them.
-    <li>The <a href="/documentation.html#streams">Streams API</a> allows an 
application to act as a <i>stream processor</i>, consuming an input stream from 
one or more topics and producing an output stream to one or more output topics, 
effectively transforming the input streams to output streams.
+    <li>The <a href="/documentation/streams">Streams API</a> allows an 
application to act as a <i>stream processor</i>, consuming an input stream from 
one or more topics and producing an output stream to one or more output topics, 
effectively transforming the input streams to output streams.
     <li>The <a href="/documentation.html#connect">Connector API</a> allows 
building and running reusable producers or consumers that connect Kafka topics 
to existing applications or data systems. For example, a connector to a 
relational database might capture every change to a table.
   </ul>
       <img src="/{{version}}/images/kafka-apis.png" style="float: right; 
width: 50%;">
@@ -171,7 +171,7 @@
   For example, a retail application might take in input streams of sales and 
shipments, and output a stream of reorders and price adjustments computed off 
this data.
   </p>
   <p>
-  It is possible to do simple processing directly using the producer and 
consumer APIs. However for more complex transformations Kafka provides a fully 
integrated <a href="/documentation.html#streams">Streams API</a>. This allows 
building applications that do non-trivial processing that compute aggregations 
off of streams or join streams together.
+  It is possible to do simple processing directly using the producer and 
consumer APIs. However for more complex transformations Kafka provides a fully 
integrated <a href="/documentation/streams">Streams API</a>. This allows 
building applications that do non-trivial processing that compute aggregations 
off of streams or join streams together.
   </p>
   <p>
   This facility helps solve the hard problems this type of application faces: 
handling out-of-order data, reprocessing input as code changes, performing 
stateful computations, etc.
@@ -203,4 +203,4 @@
   </p>
 </script>
 
-<div class="p-introduction"></div>
\ No newline at end of file
+<div class="p-introduction"></div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/ops.html
----------------------------------------------------------------------
diff --git a/0102/ops.html b/0102/ops.html
index a3423a7..9232f65 100644
--- a/0102/ops.html
+++ b/0102/ops.html
@@ -14,8 +14,8 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-
 <script id="ops-template" type="text/x-handlebars-template">
+  
   Here is some information on actually running Kafka as a production system 
based on usage and experience at LinkedIn. Please send us any additional tips 
you know of.
 
   <h3><a id="basic_ops" href="#basic_ops">6.1 Basic Kafka Operations</a></h3>
@@ -842,9 +842,9 @@
       </tr>
   </tbody></table>
 
-  <h4><a id="selector_monitoring" href="#selector_monitoring">Common 
monitoring metrics for producer/consumer/connect</a></h4>
+  <h4><a id="selector_monitoring" href="#selector_monitoring">Common 
monitoring metrics for producer/consumer/connect/streams</a></h4>
 
-  The following metrics are available on producer/consumer/connector 
instances.  For specific metrics, please see following sections.
+  The following metrics are available on producer/consumer/connector/streams instances. For specific metrics, please see the following sections.
 
   <table class="data-table">
     <tbody>
@@ -931,9 +931,9 @@
     </tbody>
   </table>
 
-  <h4><a id="common_node_monitoring" href="#common_node_monitoring">Common 
Per-broker metrics for producer/consumer/connect</a></h4>
+  <h4><a id="common_node_monitoring" href="#common_node_monitoring">Common 
Per-broker metrics for producer/consumer/connect/streams</a></h4>
 
-  The following metrics are available on producer/consumer/connector 
instances.  For specific metrics, please see following sections.
+  The following metrics are available on producer/consumer/connector/streams instances. For specific metrics, please see the following sections.
 
   <table class="data-table">
     <tbody>
@@ -1314,7 +1314,125 @@
     </tbody>
   </table>
 
-  <h5><a id="others_monitoring" href="#others_monitoring">Others</a></h5>
+
+
+  <h4><a id="kafka_streams_monitoring" 
href="#kafka_streams_monitoring">Streams Monitoring</a></h4>
+
+  A Kafka Streams instance contains all the producer and consumer metrics as 
well as additional metrics specific to streams. By default Kafka Streams has 
metrics with two recording levels: debug and info. The debug level records all 
metrics, while the info level records only the thread-level metrics.  Use the 
following configuration option to specify which metrics you want collected:
+<pre>metrics.recording.level="info"</pre>
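As a minimal sketch (assuming the level is passed like any other Streams property, using the literal key shown above rather than a named constant), the recording level can also be set programmatically when building the <code>StreamsConfig</code>:
<pre>
Properties settings = new Properties();
settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-monitored-app");
settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
// record all metrics, including the debug-level task, processor-node, and state-store metrics below
settings.put("metrics.recording.level", "debug");
StreamsConfig config = new StreamsConfig(settings);
</pre>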
+
+<h5><a id="kafka_streams_thread_monitoring" 
href="#kafka_streams_thread_monitoring">Thread Metrics</a></h5>
+All the following metrics have a recording level of <code>info</code>:
+<table class="data-table">
+    <tbody>
+      <tr>
+        <th>Metric/Attribute name</th>
+        <th>Description</th>
+        <th>Mbean name</th>
+      </tr>
+      <tr>
+        <td>[commit | poll | process | punctuate]-latency-[avg | max]</td>
+        <td>The [average | maximum] execution time in ms, for the respective 
operation, across all running tasks of this thread.</td>
+        <td>kafka.streams:type=stream-metrics,thread.client-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>[commit | poll | process | punctuate]-rate</td>
+        <td>The average number of respective operations per second across all 
tasks.</td>
+        <td>kafka.streams:type=stream-metrics,thread.client-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>task-created-rate</td>
+        <td>The average number of newly created tasks per second.</td>
+        <td>kafka.streams:type=stream-metrics,thread.client-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>task-closed-rate</td>
+        <td>The average number of tasks closed per second.</td>
+        <td>kafka.streams:type=stream-metrics,thread.client-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>skipped-records-rate</td>
+        <td>The average number of skipped records per second. </td>
+        <td>kafka.streams:type=stream-metrics,thread.client-id=([-.\w]+)</td>
+      </tr>
+ </tbody>
+</table>
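Because these metrics are registered as JMX MBeans under the names listed above, a minimal sketch like the following (standard JMX API only; the attribute and MBean names are taken from the table above, error handling omitted) can read the thread-level metrics from inside the running application:
<pre>
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StreamsMetricsDump {
    // call this from within the JVM that runs the Kafka Streams instance
    public static void dumpThreadMetrics() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // match every stream thread registered in this JVM
        for (ObjectName name : server.queryNames(new ObjectName("kafka.streams:type=stream-metrics,*"), null)) {
            System.out.println(name + " process-rate=" + server.getAttribute(name, "process-rate"));
        }
    }
}
</pre>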
+
+<h5><a id="kafka_streams_task_monitoring" 
href="#kafka_streams_task_monitoring">Task Metrics</a></h5>
+All the following metrics have a recording level of <code>debug</code>:
+ <table class="data-table">
+      <tbody>
+      <tr>
+        <th>Metric/Attribute name</th>
+        <th>Description</th>
+        <th>Mbean name</th>
+      </tr>
+      <tr>
+        <td>commit-latency-[avg | max]</td>
+        <td>The [average | maximum] commit time in ns for this task. </td>
+        
<td>kafka.streams:type=stream-task-metrics,streams-task-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>commit-rate</td>
+        <td>The average number of commit calls per second. </td>
+        
<td>kafka.streams:type=stream-task-metrics,streams-task-id=([-.\w]+)</td>
+      </tr>
+ </tbody>
+</table>
+
+ <h5><a id="kafka_streams_node_monitoring" 
href="#kafka_streams_node_monitoring">Processor Node Metrics</a></h5>
+All the following metrics have a recording level of <code>debug</code>:
+ <table class="data-table">
+      <tbody>
+      <tr>
+        <th>Metric/Attribute name</th>
+        <th>Description</th>
+        <th>Mbean name</th>
+      </tr>
+      <tr>
+        <td>[process | punctuate | create | destroy]-latency-[avg | max]</td>
+        <td>The [average | maximum] execution time in ns, for the respective 
operation. </td>
+        <td>kafka.streams:type=stream-processor-node-metrics, 
processor-node-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>[process | punctuate | create | destroy]-rate</td>
+        <td>The average number of respective operations per second. </td>
+        <td>kafka.streams:type=stream-processor-node-metrics, 
processor-node-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>forward-rate</td>
+        <td>The average rate of records being forwarded downstream, from 
source nodes only, per second. </td>
+        <td>kafka.streams:type=stream-processor-node-metrics, 
processor-node-id=([-.\w]+)</td>
+      </tr>
+ </tbody>
+ </table>
+
+ <h5><a id="kafka_streams_store_monitoring" 
href="#kafka_streams_store_monitoring">State Store Metrics</a></h5>
+All the following metrics have a recording level of <code>debug</code>:
+
+ <table class="data-table">
+      <tbody>
+      <tr>
+        <th>Metric/Attribute name</th>
+        <th>Description</th>
+        <th>Mbean name</th>
+      </tr>
+      <tr>
+        <td>[put | put-if-absent | get | delete | put-all | all | range | 
flush | restore]-latency-[avg | max]</td>
+        <td>The [average | maximum] execution time in ns, for the respective operation.</td>
+        <td>kafka.streams:type=stream-[store-type]-metrics</td>
+      </tr>
+        <tr>
+        <td>[put | put-if-absent | get | delete | put-all | all | range | 
flush | restore]-rate</td>
+        <td>The average rate of respective operations per second for this 
store.</td>
+        <td>kafka.streams:type=stream-[store-type]-metrics</td>
+      </tr>
+      
+    </tbody>
+</table>
+
+
+  <h4><a id="others_monitoring" href="#others_monitoring">Others</a></h4>
 
   We recommend monitoring GC time and other stats and various server stats 
such as CPU utilization, I/O service time, etc.
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/streams.html
----------------------------------------------------------------------
diff --git a/0102/streams.html b/0102/streams.html
index 94ce7a9..6461f86 100644
--- a/0102/streams.html
+++ b/0102/streams.html
@@ -39,7 +39,7 @@
                 </ul>
             </li>
             <li>
-                <a href="#streams_upgrade">Upgrade Guide and API Changes</a>
+                <a href="#streams_upgrade_and_api">Upgrade Guide and API 
Changes</a>
             </li>
         </ol>
 
@@ -230,7 +230,7 @@
 
         <p>
         Kafka Streams provides so-called <b>state stores</b>, which can be 
used by stream processing applications to store and query data,
-        which is an important capability when implementing stateful 
operations. The <a href="streams_dsl">Kafka Streams DSL</a>, for example, 
automatically creates
+        which is an important capability when implementing stateful 
operations. The <a href="#streams_dsl">Kafka Streams DSL</a>, for example, 
automatically creates
         and manages such state stores when you are calling stateful operators 
such as <code>join()</code> or <code>aggregate()</code>, or when you are 
windowing a stream.
         </p>
 
@@ -257,7 +257,7 @@
         <p>
         In addition, Kafka Streams makes sure that the local state stores are 
robust to failures, too. For each state store, it maintains a replicated 
changelog Kafka topic in which it tracks any state updates.
         These changelog topics are partitioned as well so that each local 
state store instance, and hence the task accessing the store, has its own 
dedicated changelog topic partition.
-        <a href="/documentation/#compaction">Log compaction</a> is enabled on 
the changelog topics so that old data can be purged safely to prevent the 
topics from growing indefinitely.
+        <a href="/{{version}}/documentation/#compaction">Log compaction</a> is 
enabled on the changelog topics so that old data can be purged safely to 
prevent the topics from growing indefinitely.
         If tasks run on a machine that fails and are restarted on another 
machine, Kafka Streams guarantees to restore their associated state stores to 
the content before the failure by
         replaying the corresponding changelog topics prior to resuming the 
processing on the newly started tasks. As a result, failure handling is 
completely transparent to the end user.
         </p>
@@ -266,14 +266,14 @@
         Note that the cost of task (re)initialization typically depends 
primarily on the time for restoring the state by replaying the state stores' 
associated changelog topics.
         To minimize this restoration time, users can configure their 
applications to have <b>standby replicas</b> of local states (i.e. fully 
replicated copies of the state).
         When a task migration happens, Kafka Streams then attempts to assign a 
task to an application instance where such a standby replica already exists in 
order to minimize
-        the task (re)initialization cost. See 
<code>num.standby.replicas</code> at the <a 
href="/documentation/#streamsconfigs">Kafka Streams Configs</a> Section.
+        the task (re)initialization cost. See 
<code>num.standby.replicas</code> at the <a 
href="/{{version}}/documentation/#streamsconfigs">Kafka Streams Configs</a> 
Section.
         </p>
         <br>
 
         <h2><a id="streams_developer" href="#streams_developer">Developer 
Guide</a></h2>
 
         <p>
-        There is a <a 
href="/documentation/#quickstart_kafkastreams">quickstart</a> example that 
provides how to run a stream processing program coded in the Kafka Streams 
library.
+        There is a <a href="/{{version}}/documentation/#quickstart_kafkastreams">quickstart</a> example that shows how to run a stream processing program coded in the Kafka Streams library.
         This section focuses on how to write, configure, and execute a Kafka 
Streams application.
         </p>
 
@@ -306,60 +306,60 @@
         The following example <code>Processor</code> implementation defines a 
simple word-count algorithm:
         </p>
 
-        <pre>
-            public class MyProcessor extends Processor&lt;String, String&gt; {
-                private ProcessorContext context;
-                private KeyValueStore&lt;String, Long&gt; kvStore;
+<pre>
+public class MyProcessor implements Processor&lt;String, String&gt; {
+    private ProcessorContext context;
+    private KeyValueStore&lt;String, Long&gt; kvStore;
 
-                @Override
-                @SuppressWarnings("unchecked")
-                public void init(ProcessorContext context) {
-                    // keep the processor context locally because we need it 
in punctuate() and commit()
-                    this.context = context;
+    @Override
+    @SuppressWarnings("unchecked")
+    public void init(ProcessorContext context) {
+        // keep the processor context locally because we need it in 
punctuate() and commit()
+        this.context = context;
 
-                    // call this processor's punctuate() method every 1000 
milliseconds.
-                    this.context.schedule(1000);
+        // call this processor's punctuate() method every 1000 milliseconds.
+        this.context.schedule(1000);
 
-                    // retrieve the key-value store named "Counts"
-                    this.kvStore = (KeyValueStore&lt;String, Long&gt;) 
context.getStateStore("Counts");
-                }
+        // retrieve the key-value store named "Counts"
+        this.kvStore = (KeyValueStore&lt;String, Long&gt;) 
context.getStateStore("Counts");
+    }
 
-                @Override
-                public void process(String dummy, String line) {
-                    String[] words = line.toLowerCase().split(" ");
+    @Override
+    public void process(String dummy, String line) {
+        String[] words = line.toLowerCase().split(" ");
 
-                    for (String word : words) {
-                        Long oldValue = this.kvStore.get(word);
+        for (String word : words) {
+            Long oldValue = this.kvStore.get(word);
 
-                        if (oldValue == null) {
-                            this.kvStore.put(word, 1L);
-                        } else {
-                            this.kvStore.put(word, oldValue + 1L);
-                        }
-                    }
-                }
+            if (oldValue == null) {
+                this.kvStore.put(word, 1L);
+            } else {
+                this.kvStore.put(word, oldValue + 1L);
+            }
+        }
+    }
 
-                @Override
-                public void punctuate(long timestamp) {
-                    KeyValueIterator&lt;String, Long&gt; iter = 
this.kvStore.all();
+    @Override
+    public void punctuate(long timestamp) {
+        KeyValueIterator&lt;String, Long&gt; iter = this.kvStore.all();
 
-                    while (iter.hasNext()) {
-                        KeyValue&lt;String, Long&gt; entry = iter.next();
-                        context.forward(entry.key, entry.value.toString());
-                    }
+        while (iter.hasNext()) {
+            KeyValue&lt;String, Long&gt; entry = iter.next();
+            context.forward(entry.key, entry.value.toString());
+        }
 
-                    iter.close();
-                    // commit the current processing progress
-                    context.commit();
-                }
+        iter.close();
+        // commit the current processing progress
+        context.commit();
+    }
 
-                @Override
-                public void close() {
-                    // close the key-value store
-                    this.kvStore.close();
-                }
-            };
-        </pre>
+    @Override
+    public void close() {
+        // close the key-value store
+        this.kvStore.close();
+    }
+};
+</pre>
 
         <p>
         In the above implementation, the following actions are performed:
@@ -379,31 +379,31 @@
         by connecting these processors together:
         </p>
 
-        <pre>
-            TopologyBuilder builder = new TopologyBuilder();
+<pre>
+TopologyBuilder builder = new TopologyBuilder();
 
-            builder.addSource("SOURCE", "src-topic")
-                // add "PROCESS1" node which takes the source processor 
"SOURCE" as its upstream processor
-                .addProcessor("PROCESS1", () -> new MyProcessor1(), "SOURCE")
+builder.addSource("SOURCE", "src-topic")
+    // add "PROCESS1" node which takes the source processor "SOURCE" as its 
upstream processor
+    .addProcessor("PROCESS1", () -> new MyProcessor1(), "SOURCE")
 
-                // add "PROCESS2" node which takes "PROCESS1" as its upstream 
processor
-                .addProcessor("PROCESS2", () -> new MyProcessor2(), "PROCESS1")
+    // add "PROCESS2" node which takes "PROCESS1" as its upstream processor
+    .addProcessor("PROCESS2", () -> new MyProcessor2(), "PROCESS1")
 
-                // add "PROCESS3" node which takes "PROCESS1" as its upstream 
processor
-                .addProcessor("PROCESS3", () -> new MyProcessor3(), "PROCESS1")
+    // add "PROCESS3" node which takes "PROCESS1" as its upstream processor
+    .addProcessor("PROCESS3", () -> new MyProcessor3(), "PROCESS1")
 
-                // add the sink processor node "SINK1" that takes Kafka topic 
"sink-topic1"
-                // as output and the "PROCESS1" node as its upstream processor
-                .addSink("SINK1", "sink-topic1", "PROCESS1")
+    // add the sink processor node "SINK1" that takes Kafka topic "sink-topic1"
+    // as output and the "PROCESS1" node as its upstream processor
+    .addSink("SINK1", "sink-topic1", "PROCESS1")
 
-                // add the sink processor node "SINK2" that takes Kafka topic 
"sink-topic2"
-                // as output and the "PROCESS2" node as its upstream processor
-                .addSink("SINK2", "sink-topic2", "PROCESS2")
+    // add the sink processor node "SINK2" that takes Kafka topic "sink-topic2"
+    // as output and the "PROCESS2" node as its upstream processor
+    .addSink("SINK2", "sink-topic2", "PROCESS2")
 
-                // add the sink processor node "SINK3" that takes Kafka topic 
"sink-topic3"
-                // as output and the "PROCESS3" node as its upstream processor
-                .addSink("SINK3", "sink-topic3", "PROCESS3");
-        </pre>
+    // add the sink processor node "SINK3" that takes Kafka topic "sink-topic3"
+    // as output and the "PROCESS3" node as its upstream processor
+    .addSink("SINK3", "sink-topic3", "PROCESS3");
+</pre>
 
         There are several steps in the above code to build the topology, and 
here is a quick walk through:
 
@@ -423,13 +423,13 @@
         In the following example, a persistent key-value store named 
“Counts” with key type <code>String</code> and value type <code>Long</code> 
is created.
         </p>
 
-        <pre>
-            StateStoreSupplier countStore = Stores.create("Counts")
-              .withKeys(Serdes.String())
-              .withValues(Serdes.Long())
-              .persistent()
-              .build();
-        </pre>
+<pre>
+StateStoreSupplier countStore = Stores.create("Counts")
+    .withKeys(Serdes.String())
+    .withValues(Serdes.Long())
+    .persistent()
+    .build();
+</pre>
 
         <p>
         To take advantage of these state stores, developers can use the 
<code>TopologyBuilder.addStateStore</code> method when building the
@@ -437,24 +437,24 @@
         state store with the existing processor nodes through 
<code>TopologyBuilder.connectProcessorAndStateStores</code>.
         </p>
 
-        <pre>
-            TopologyBuilder builder = new TopologyBuilder();
+<pre>
+TopologyBuilder builder = new TopologyBuilder();
 
-            builder.addSource("SOURCE", "src-topic")
+builder.addSource("SOURCE", "src-topic")
 
-                .addProcessor("PROCESS1", MyProcessor1::new, "SOURCE")
-                // add the created state store "COUNTS" associated with 
processor "PROCESS1"
-                .addStateStore(countStore, "PROCESS1")
-                .addProcessor("PROCESS2", MyProcessor3::new /* the 
ProcessorSupplier that can generate MyProcessor3 */, "PROCESS1")
-                .addProcessor("PROCESS3", MyProcessor3::new /* the 
ProcessorSupplier that can generate MyProcessor3 */, "PROCESS1")
+    .addProcessor("PROCESS1", MyProcessor1::new, "SOURCE")
+    // add the created state store "Counts" associated with processor "PROCESS1"
+    .addStateStore(countStore, "PROCESS1")
+    .addProcessor("PROCESS2", MyProcessor3::new /* the ProcessorSupplier that 
can generate MyProcessor3 */, "PROCESS1")
+    .addProcessor("PROCESS3", MyProcessor3::new /* the ProcessorSupplier that 
can generate MyProcessor3 */, "PROCESS1")
 
-                // connect the state store "COUNTS" with processor "PROCESS2"
-                .connectProcessorAndStateStores("PROCESS2", "COUNTS");
+    // connect the state store "Counts" with processor "PROCESS2"
+    .connectProcessorAndStateStores("PROCESS2", "Counts")
 
-                .addSink("SINK1", "sink-topic1", "PROCESS1")
-                .addSink("SINK2", "sink-topic2", "PROCESS2")
-                .addSink("SINK3", "sink-topic3", "PROCESS3");
-        </pre>
+    .addSink("SINK1", "sink-topic1", "PROCESS1")
+    .addSink("SINK2", "sink-topic2", "PROCESS2")
+    .addSink("SINK3", "sink-topic3", "PROCESS3");
+</pre>
 
         In the next section we present another way to build the processor 
topology: the Kafka Streams DSL.
         <br>
@@ -470,7 +470,7 @@
 
         <p>
         Before we discuss concepts such as aggregations in Kafka Streams we 
must first introduce tables, and most importantly the relationship between 
tables and streams:
-        the so-called <a 
href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying";>stream-table
 duality</a>.
+        the so-called <a 
href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying/";>stream-table
 duality</a>.
         Essentially, this duality means that a stream can be viewed as a 
table, and vice versa. Kafka's log compaction feature, for example, exploits 
this duality.
         </p>
 
@@ -511,9 +511,9 @@
 
         To illustrate the difference between KStreams and 
KTables/GlobalKTables, let’s imagine the following two data records are being 
sent to the stream:
 
-        <pre>
-            ("alice", 1) --> ("alice", 3)
-        </pre>
+<pre>
+("alice", 1) --> ("alice", 3)
+</pre>
 
        If these records were a KStream and the stream processing application were to sum the values, it would return <code>4</code>. If these records were a KTable or GlobalKTable, the result would be <code>3</code>, since the last record would be considered an update.
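For illustration only, here is a sketch of that difference using the grouped-stream API described later in this document (the topic and store names are hypothetical, and <code>builder</code> is a <code>KStreamBuilder</code> as in the examples below):
<pre>
// As a record stream (KStream), both records count: "alice" -> 1 + 3 = 4.
KTable&lt;String, Long&gt; streamSum = builder.stream(Serdes.String(), Serdes.Long(), "user-clicks")
    .groupByKey(Serdes.String(), Serdes.Long())
    .reduce((v1, v2) -> v1 + v2, "stream-sum-store");

// As a changelog stream (KTable), the later record overwrites the earlier one: "alice" -> 3.
KTable&lt;String, Long&gt; latest = builder.table(Serdes.String(), Serdes.Long(), "user-clicks", "latest-value-store");
</pre>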
 
@@ -525,13 +525,13 @@
         from a single topic).
         </p>
 
-        <pre>
-            KStreamBuilder builder = new KStreamBuilder();
+<pre>
+KStreamBuilder builder = new KStreamBuilder();
 
-            KStream&lt;String, GenericRecord&gt; source1 = 
builder.stream("topic1", "topic2");
-            KTable&lt;String, GenericRecord&gt; source2 = 
builder.table("topic3", "stateStoreName");
-            GlobalKTable&lt;String, GenericRecord&gt; source2 = 
builder.globalTable("topic4", "globalStoreName");
-        </pre>
+KStream&lt;String, GenericRecord&gt; source1 = builder.stream("topic1", 
"topic2");
+KTable&lt;String, GenericRecord&gt; source2 = builder.table("topic3", 
"stateStoreName");
+GlobalKTable&lt;String, GenericRecord&gt; source3 = builder.globalTable("topic4", "globalStoreName");
+</pre>
 
         <h4><a id="streams_dsl_windowing" 
href="#streams_dsl_windowing">Windowing a stream</a></h4>
         A stream processor may need to divide data records into time buckets, 
i.e. to <b>window</b> the stream by time. This is usually needed for join and 
aggregation operations, etc. Kafka Streams currently defines the following 
types of windows:
@@ -539,6 +539,13 @@
         <li><b>Hopping time windows</b> are windows based on time intervals. 
They model fixed-sized, (possibly) overlapping windows. A hopping window is 
defined by two properties: the window's size and its advance interval (aka 
"hop"). The advance interval specifies by how much a window moves forward 
relative to the previous one. For example, you can configure a hopping window 
with a size of 5 minutes and an advance interval of 1 minute. Since hopping windows can overlap, a data record may belong to more than one such window.</li>
         <li><b>Tumbling time windows</b> are a special case of hopping time 
windows and, like the latter, are windows based on time intervals. They model 
fixed-size, non-overlapping, gap-less windows. A tumbling window is defined by 
a single property: the window's size. A tumbling window is a hopping window 
whose window size is equal to its advance interval. Since tumbling windows 
never overlap, a data record will belong to one and only one window.</li>
         <li><b>Sliding windows</b> model a fixed-size window that slides 
continuously over the time axis; here, two data records are said to be included 
in the same window if the difference of their timestamps is within the window 
size. Thus, sliding windows are not aligned to the epoch, but on the data 
record timestamps. In Kafka Streams, sliding windows are used only for join 
operations, and can be specified through the <code>JoinWindows</code> 
class.</li>
+        <li><b>Session windows</b> are used to aggregate key-based events into 
sessions.
+            Sessions represent a period of activity separated by a defined gap 
of inactivity.
+            Any events processed that fall within the inactivity gap of any 
existing sessions are merged into the existing sessions.
+            If the event falls outside of the session gap, then a new session 
will be created.
+            Session windows are tracked independently across keys (e.g. 
windows of different keys typically have different start and end times) and 
their sizes vary (even windows for the same key typically have different sizes);
+            as such, session windows can't be pre-computed and are instead derived from analyzing the timestamps of the data records
+            (a minimal usage sketch follows this list).
+        </li>
         </ul>
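As referenced in the session-window bullet above, a minimal usage sketch (topic and store names are hypothetical; it assumes the <code>SessionWindows</code> overloads listed in the API-changes section below):
<pre>
// Count events per user session, where a session closes after 5 minutes of inactivity.
KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionCounts = builder.stream(Serdes.String(), Serdes.String(), "user-events")
    .groupByKey(Serdes.String(), Serdes.String())
    .count(SessionWindows.with(5 * 60 * 1000L), "session-counts-store");
</pre>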
 
         <p>
@@ -567,7 +574,8 @@
             A new <code>KStream</code> instance representing the result stream 
of the join is returned from this operator.</li>
         </ul>
 
-        Depending on the operands the following join operations are supported: 
<b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>. Their semantics 
are similar to the corresponding operators in relational databases.
+        Depending on the operands the following join operations are supported: 
<b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>.
+        Their <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Join+Semantics";>semantics</a>
 are similar to the corresponding operators in relational databases.
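For example, a minimal sketch of a windowed KStream-KStream inner join (stream names are hypothetical; the default serdes configured in <code>StreamsConfig</code> are assumed):
<pre>
KStream&lt;String, String&gt; clicks = ...;  // e.g. built from builder.stream("clicks")
KStream&lt;String, String&gt; views = ...;   // e.g. built from builder.stream("views")

// join click and view events with the same key that occur within 5 seconds of each other
KStream&lt;String, String&gt; enriched = clicks.join(views,
    (click, view) -> click + "/" + view,  // ValueJoiner combining both values
    JoinWindows.of(5000L));               // sliding join window of 5 seconds
</pre>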
 
         <h5><a id="streams_dsl_aggregations" 
href="#streams_dsl_aggregations">Aggregate a stream</a></h5>
         An <b>aggregation</b> operation takes one input stream, and yields a 
new stream by combining multiple input records into a single output record. 
Examples of aggregations are computing counts or sum. An aggregation over 
record streams usually needs to be performed on a windowing basis because 
otherwise the number of records that must be maintained for performing the 
aggregation may grow indefinitely.
@@ -597,10 +605,10 @@
 
         </p>
 
-        <pre>
-            // written in Java 8+, using lambda expressions
-            KStream&lt;String, GenericRecord&gt; mapped = 
source1.mapValue(record -> record.get("category"));
-        </pre>
+<pre>
+// written in Java 8+, using lambda expressions
+KStream&lt;String, GenericRecord&gt; mapped = source1.mapValues(record -> record.get("category"));
+</pre>
 
         <p>
         Stateless transformations, by definition, do not depend on any state 
for processing, and hence implementation-wise
@@ -611,19 +619,19 @@
         based on them.
         </p>
 
-        <pre>
-            // written in Java 8+, using lambda expressions
-            KTable&lt;Windowed&lt;String&gt;, Long&gt; counts = 
source1.groupByKey().aggregate(
-                () -> 0L,  // initial value
-                (aggKey, value, aggregate) -> aggregate + 1L,   // aggregating 
value
-                TimeWindows.of("counts", 5000L).advanceBy(1000L), // intervals 
in milliseconds
-                Serdes.Long() // serde for aggregated value
-            );
+<pre>
+// written in Java 8+, using lambda expressions
+KTable&lt;Windowed&lt;String&gt;, Long&gt; counts = 
source1.groupByKey().aggregate(
+    () -> 0L,  // initial value
+    (aggKey, value, aggregate) -> aggregate + 1L,   // aggregating value
+    TimeWindows.of("counts", 5000L).advanceBy(1000L), // intervals in 
milliseconds
+    Serdes.Long() // serde for aggregated value
+);
 
-            KStream&lt;String, String&gt; joined = source1.leftJoin(source2,
-                (record1, record2) -> record1.get("user") + "-" + 
record2.get("region");
-            );
-        </pre>
+KStream&lt;String, String&gt; joined = source1.leftJoin(source2,
+    (record1, record2) -> record1.get("user") + "-" + record2.get("region")
+);
+</pre>
 
         <h4><a id="streams_dsl_sink" href="#streams_dsl_sink">Write streams 
back to Kafka</a></h4>
 
@@ -632,21 +640,21 @@
         <code>KStream.to</code> and <code>KTable.to</code>.
         </p>
 
-        <pre>
-            joined.to("topic4");
-        </pre>
+<pre>
+joined.to("topic4");
+</pre>
 
         If your application needs to continue reading and processing the 
records after they have been materialized
         to a topic via <code>to</code> above, one option is to construct a new 
stream that reads from the output topic;
         Kafka Streams provides a convenience method called 
<code>through</code>:
 
-        <pre>
-            // equivalent to
-            //
-            // joined.to("topic4");
-            // materialized = builder.stream("topic4");
-            KStream&lt;String, String&gt; materialized = 
joined.through("topic4");
-        </pre>
+<pre>
+// equivalent to
+//
+// joined.to("topic4");
+// materialized = builder.stream("topic4");
+KStream&lt;String, String&gt; materialized = joined.through("topic4");
+</pre>
         <br>
 
         <h3><a id="streams_execute" href="#streams_execute">Application 
Configuration and Execution</a></h3>
@@ -654,7 +662,7 @@
         <p>
         Besides defining the topology, developers will also need to configure 
their applications
         in <code>StreamsConfig</code> before running it. A complete list of
-        Kafka Streams configs can be found <a 
href="/documentation/#streamsconfigs"><b>here</b></a>.
+        Kafka Streams configs can be found <a 
href="/{{version}}/documentation/#streamsconfigs"><b>here</b></a>.
         </p>
 
         <p>
@@ -662,21 +670,21 @@
         set the necessary parameters, and construct a 
<code>StreamsConfig</code> instance from the <code>Properties</code> instance.
         </p>
 
-        <pre>
-            import java.util.Properties;
-            import org.apache.kafka.streams.StreamsConfig;
+<pre>
+import java.util.Properties;
+import org.apache.kafka.streams.StreamsConfig;
 
-            Properties settings = new Properties();
-            // Set a few key parameters
-            settings.put(StreamsConfig.APPLICATION_ID_CONFIG, 
"my-first-streams-application");
-            settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"kafka-broker1:9092");
-            settings.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, 
"zookeeper1:2181");
-            // Any further settings
-            settings.put(... , ...);
+Properties settings = new Properties();
+// Set a few key parameters
+settings.put(StreamsConfig.APPLICATION_ID_CONFIG, 
"my-first-streams-application");
+settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
+settings.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper1:2181");
+// Any further settings
+settings.put(... , ...);
 
-            // Create an instance of StreamsConfig from the Properties instance
-            StreamsConfig config = new StreamsConfig(settings);
-        </pre>
+// Create an instance of StreamsConfig from the Properties instance
+StreamsConfig config = new StreamsConfig(settings);
+</pre>
 
         <p>
         Apart from Kafka Streams' own configuration parameters you can also 
specify parameters for the Kafka consumers and producers that are used 
internally,
@@ -686,24 +694,24 @@
         If you want to set different values for consumer and producer for such 
a parameter, you can prefix the parameter name with <code>consumer.</code> or 
<code>producer.</code>:
         </p>
 
-        <pre>
-            Properties settings = new Properties();
-            // Example of a "normal" setting for Kafka Streams
-            settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"kafka-broker-01:9092");
+<pre>
+Properties settings = new Properties();
+// Example of a "normal" setting for Kafka Streams
+settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
 
-            // Customize the Kafka consumer settings
-            streamsSettings.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 
60000);
+// Customize the Kafka consumer settings
+settings.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
 
-            // Customize a common client setting for both consumer and producer
-            settings.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, 100L);
+// Customize a common client setting for both consumer and producer
+settings.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, 100L);
 
-            // Customize different values for consumer and producer
-            settings.put("consumer." + ConsumerConfig.RECEIVE_BUFFER_CONFIG, 
1024 * 1024);
-            settings.put("producer." + ProducerConfig.RECEIVE_BUFFER_CONFIG, 
64 * 1024);
-            // Alternatively, you can use
-            
settings.put(StreamsConfig.consumerPrefix(ConsumerConfig.RECEIVE_BUFFER_CONFIG),
 1024 * 1024);
-            
settings.put(StremasConfig.producerConfig(ProducerConfig.RECEIVE_BUFFER_CONFIG),
 64 * 1024);
-        </pre>
+// Customize different values for consumer and producer
+settings.put("consumer." + ConsumerConfig.RECEIVE_BUFFER_CONFIG, 1024 * 1024);
+settings.put("producer." + ProducerConfig.RECEIVE_BUFFER_CONFIG, 64 * 1024);
+// Alternatively, you can use
+settings.put(StreamsConfig.consumerPrefix(ConsumerConfig.RECEIVE_BUFFER_CONFIG),
 1024 * 1024);
+settings.put(StreamsConfig.producerPrefix(ProducerConfig.RECEIVE_BUFFER_CONFIG), 64 * 1024);
+</pre>
 
         <p>
         You can call Kafka Streams from anywhere in your application code.
@@ -716,68 +724,68 @@
         that is used to define a topology; The second argument is an instance 
of <code>StreamsConfig</code> mentioned above.
         </p>
 
-        <pre>
-            import org.apache.kafka.streams.KafkaStreams;
-            import org.apache.kafka.streams.StreamsConfig;
-            import org.apache.kafka.streams.kstream.KStreamBuilder;
-            import org.apache.kafka.streams.processor.TopologyBuilder;
+<pre>
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.kstream.KStreamBuilder;
+import org.apache.kafka.streams.processor.TopologyBuilder;
 
-            // Use the builders to define the actual processing topology, e.g. 
to specify
-            // from which input topics to read, which stream operations 
(filter, map, etc.)
-            // should be called, and so on.
+// Use the builders to define the actual processing topology, e.g. to specify
+// from which input topics to read, which stream operations (filter, map, etc.)
+// should be called, and so on.
 
-            KStreamBuilder builder = ...;  // when using the Kafka Streams DSL
-            //
-            // OR
-            //
-            TopologyBuilder builder = ...; // when using the Processor API
+KStreamBuilder builder = ...;  // when using the Kafka Streams DSL
+//
+// OR
+//
+TopologyBuilder builder = ...; // when using the Processor API
 
-            // Use the configuration to tell your application where the Kafka 
cluster is,
-            // which serializers/deserializers to use by default, to specify 
security settings,
-            // and so on.
-            StreamsConfig config = ...;
+// Use the configuration to tell your application where the Kafka cluster is,
+// which serializers/deserializers to use by default, to specify security 
settings,
+// and so on.
+StreamsConfig config = ...;
 
-            KafkaStreams streams = new KafkaStreams(builder, config);
-        </pre>
+KafkaStreams streams = new KafkaStreams(builder, config);
+</pre>
 
         <p>
         At this point, internal structures have been initialized, but the 
processing is not started yet. You have to explicitly start the Kafka Streams 
thread by calling the <code>start()</code> method:
         </p>
 
-        <pre>
-            // Start the Kafka Streams instance
-            streams.start();
-        </pre>
+<pre>
+// Start the Kafka Streams instance
+streams.start();
+</pre>
 
         <p>
         To catch any unexpected exceptions, you may set an 
<code>java.lang.Thread.UncaughtExceptionHandler</code> before you start the 
application. This handler is called whenever a stream thread is terminated by 
an unexpected exception:
         </p>
 
-        <pre>
-            streams.setUncaughtExceptionHandler(new 
Thread.UncaughtExceptionHandler() {
-                public uncaughtException(Thread t, throwable e) {
-                    // here you should examine the exception and perform an 
appropriate action!
-                }
-            );
-        </pre>
+<pre>
+streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
+    public void uncaughtException(Thread t, Throwable e) {
+        // here you should examine the exception and perform an appropriate action!
+    }
+});
+</pre>
 
         <p>
         To stop the application instance call the <code>close()</code> method:
         </p>
 
-        <pre>
-            // Stop the Kafka Streams instance
-            streams.close();
-        </pre>
+<pre>
+// Stop the Kafka Streams instance
+streams.close();
+</pre>
 
         Now it's time to execute your application that uses the Kafka Streams 
library, which can be run just like any other Java application – there is no 
special magic or requirement on the side of Kafka Streams.
         For example, you can package your Java application as a fat jar file 
and then start the application via:
 
-        <pre>
-            # Start the application in class `com.example.MyStreamsApp`
-            # from the fat jar named `path-to-app-fatjar.jar`.
-            $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
-        </pre>
+<pre>
+# Start the application in class `com.example.MyStreamsApp`
+# from the fat jar named `path-to-app-fatjar.jar`.
+$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
+</pre>
 
         <p>
         When the application instance starts running, the defined processor 
topology will be initialized as one or more stream tasks that can be executed 
in parallel by the stream threads within the instance.
@@ -790,13 +798,21 @@
         </p>
         <br>
 
-        <h2><a id="streams_upgrade" href="#streams_upgrade">Upgrade Guide and 
API Changes</a></h2>
+        <h2><a id="streams_upgrade_and_api" 
href="#streams_upgrade_and_api">Upgrade Guide and API Changes</a></h2>
 
         <p>
-        See the <a href="/documentation/#upgrade_1020_streams">Upgrade 
Section</a> for upgrading a Kafka Streams Application from 0.10.1.x to 0.10.2.0.
+        If you want to upgrade from 0.10.1.x to 0.10.2, see the <a 
href="/{{version}}/documentation/#upgrade_1020_streams">Upgrade Section for 
0.10.2</a>.
+        It highlights incompatible changes you need to consider to upgrade 
your code and application.
+        See <a href="#streams_api_changes_0102">below</a> for a complete list of 0.10.2 API and semantic changes that allow you to advance your application and/or simplify your code base, including the use of new features.
         </p>
 
-        <h3><a id="streams_api_changes" href="#streams_api_changes">Streams 
API changes in 0.10.2.0</a></h3>
+        <p>
+        If you want to upgrade from 0.10.0.x to 0.10.1, see the <a 
href="/{{version}}/documentation/#upgrade_1010_streams">Upgrade Section for 
0.10.1</a>.
+        It highlights incompatible changes you need to consider to upgrade 
your code and application.
+        See <a href="#streams_api_changes_0101">below</a> for a complete list of 0.10.1 API changes that allow you to advance your application and/or simplify your code base, including the use of new features.
+        </p>
+
+        <h3><a id="streams_api_changes_0102" 
href="#streams_api_changes_0102">Streams API changes in 0.10.2.0</a></h3>
 
         <p>
             New methods in <code>KafkaStreams</code>:
@@ -824,46 +840,84 @@
             <li> added methods: <code>#addLatencyAndThroughputSensor()</code>, 
<code>#addThroughputSensor()</code>, <code>#recordThroughput()</code>,
             <code>#addSensor()</code>, <code>#removeSensor()</code> </li>
         </ul>
+
         <p> New methods in <code>TopologyBuilder</code>: </p>
-            <ul>
-                <li> added overloads for <code>#addSource()</code> that allow 
to define a <code>auto.offset.reset</code> policy per source node </li>
-                <li> added methods <code>#addGlobalStore()</code> to add 
global <code>StateStore</code>s </li>
-            </ul>
+        <ul>
+            <li> added overloads for <code>#addSource()</code> that allow defining an <code>auto.offset.reset</code> policy per source node </li>
+            <li> added methods <code>#addGlobalStore()</code> to add global 
<code>StateStore</code>s </li>
+        </ul>
 
         <p> New methods in <code>KStreamBuilder</code>: </p>
-            <ul>
-                <li> added overloads for <code>#stream()</code> and 
<code>#table()</code> that allow to define a <code>auto.offset.reset</code> 
policy per input stream/table </li>
-                <li> <code>#table()</code> always requires store name </li>
-                <li> added method <code>#globalKTable()</code> to create a 
<code>GlobalKTable</code> </li>
-            </ul>
+        <ul>
+            <li> added overloads for <code>#stream()</code> and <code>#table()</code> that allow defining an <code>auto.offset.reset</code> policy per input stream/table </li>
+            <li> added method <code>#globalKTable()</code> to create a 
<code>GlobalKTable</code> </li>
+        </ul>
 
         <p> New joins for <code>KStream</code>: </p>
-            <ul>
-                <li> added overloads for <code>#join()</code> to join with 
<code>KTable</code> </li>
-                <li> added overloads for <code>#join()</code> and 
<code>leftJoin()</code> to join with <code>GlobalKTable</code> </li>
-            </ul>
+        <ul>
+            <li> added overloads for <code>#join()</code> to join with 
<code>KTable</code> </li>
+            <li> added overloads for <code>#join()</code> and 
<code>leftJoin()</code> to join with <code>GlobalKTable</code> </li>
+            <li> note: join semantics in 0.10.2 were improved and thus you might see different results compared to 0.10.0.x and 0.10.1.x
+                 (cf. <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Join+Semantics";>Kafka Streams Join Semantics</a> in the Apache Kafka wiki);
+                 a minimal <code>GlobalKTable</code> join sketch follows this list </li>
+        </ul>
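As referenced in the list above, a minimal sketch of a KStream-GlobalKTable join (topic, store, and helper names such as <code>extractCustomerId()</code> are hypothetical):
<pre>
GlobalKTable&lt;String, String&gt; customers = builder.globalTable("customers", "customers-store");
KStream&lt;String, String&gt; orders = builder.stream("orders");

KStream&lt;String, String&gt; enrichedOrders = orders.join(customers,
    (orderId, order) -> extractCustomerId(order),   // map each stream record to the table's key
    (order, customer) -> order + " | " + customer); // combine order and customer data
</pre>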
 
         <p> Aligned <code>null</code>-key handling for <code>KTable</code> 
joins: </p>
-            <ul>
-                <li> like all other KTable operations, 
<code>KTable-KTable</code> joins do not throw an exception on <code>null</code> 
key records anymore, but drop those records silently </li>
-            </ul>
+        <ul>
+            <li> like all other KTable operations, <code>KTable-KTable</code> 
joins do not throw an exception on <code>null</code> key records anymore, but 
drop those records silently </li>
+        </ul>
 
         <p> New window type <em>Session Windows</em>: </p>
-            <ul>
-                <li> added class <code>SessionWindows</code> to specify 
session windows </li>
-                <li> added overloads for <code>KGroupedStream</code> methods 
<code>#count()</code>, <code>#reduce()</code>, and <code>#aggregate()</code>
-                     to allow session window aggregations </li>
-            </ul>
+        <ul>
+            <li> added class <code>SessionWindows</code> to specify session 
windows </li>
+            <li> added overloads for <code>KGroupedStream</code> methods 
<code>#count()</code>, <code>#reduce()</code>, and <code>#aggregate()</code>
+                 to allow session window aggregations </li>
+        </ul>
 
         <p> Changes to <code>TimestampExtractor</code>: </p>
-            <ul>
-                <li> method <code>#extract()</code> has a second parameter now 
</li>
-                <li> new default timestamp extractor class 
<code>FailOnInvalidTimestamp</code>
-                     (it gives the same behavior as old (and removed) default 
extractor <code>ConsumerRecordTimestampExtractor</code>) </li>
-                <li> new alternative timestamp extractor classes 
<code>LogAndSkipOnInvalidTimestamp</code> and 
<code>UsePreviousTimeOnInvalidTimestamps</code> </li>
-            </ul>
+        <ul>
+            <li> method <code>#extract()</code> now takes a second parameter (see the sketch after this list) </li>
+            <li> new default timestamp extractor class 
<code>FailOnInvalidTimestamp</code>
+                 (it gives the same behavior as old (and removed) default 
extractor <code>ConsumerRecordTimestampExtractor</code>) </li>
+            <li> new alternative timestamp extractor classes <code>LogAndSkipOnInvalidTimestamp</code> and <code>UsePreviousTimeOnInvalidTimestamp</code> </li>
+        </ul>
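As referenced in the list above, a sketch of a custom extractor written against the two-argument interface; it assumes the second argument is the previously extracted (valid) timestamp and falls back to it whenever a record carries an invalid (negative) timestamp:
<pre>
public class FallbackTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord&lt;Object, Object&gt; record, long previousTimestamp) {
        long timestamp = record.timestamp();
        // use the embedded record timestamp if valid, otherwise reuse the previous one
        return timestamp >= 0 ? timestamp : previousTimestamp;
    }
}
</pre>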
 
         <p> Relaxed type constraints of many DSL interfaces, classes, and 
methods (cf. <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-100+-+Relax+Type+constraints+in+Kafka+Streams+API";>KIP-100</a>).
 </p>
+
+        <h3><a id="streams_api_changes_0101" 
href="#streams_api_changes_0101">Streams API changes in 0.10.1.0</a></h3>
+
+        <p> Stream grouping and aggregation split into two methods: </p>
+        <ul>
+            <li> old: KStream #aggregateByKey(), #reduceByKey(), and 
#countByKey() </li>
+            <li> new: KStream#groupByKey() plus KGroupedStream #aggregate(), 
#reduce(), and #count() </li>
+            <li> Example: stream.countByKey() changes to 
stream.groupByKey().count() </li>
+        </ul>
+
+        <p> Auto Repartitioning: </p>
+        <ul>
+            <li> a call to through() after a key-changing operator and before 
an aggregation/join is no longer required </li>
+            <li> Example: stream.selectKey(...).through(...).countByKey() changes to stream.selectKey(...).groupByKey().count() </li>
+        </ul>
+
+        <p> TopologyBuilder: </p>
+        <ul>
+            <li> methods #sourceTopics(String applicationId) and 
#topicGroups(String applicationId) got simplified to #sourceTopics() and 
#topicGroups() </li>
+        </ul>
+
+        <p> DSL: new parameter to specify state store names: </p>
+        <ul>
+            <li> The new Interactive Queries feature requires specifying a store name for all source KTables and window aggregation result KTables (the previous parameter "operator/window name" is now the storeName) </li>
+            <li> KStreamBuilder#table(String topic) changes to #table(String topic, String storeName) </li>
+            <li> KTable#through(String topic) changes to #through(String 
topic, String storeName) </li>
+            <li> KGroupedStream #aggregate(), #reduce(), and #count() require 
additional parameter "String storeName"</li>
+            <li> Example: stream.countByKey(TimeWindows.of("windowName", 
1000)) changes to stream.groupByKey().count(TimeWindows.of(1000), 
"countStoreName") (see the sketch below) </li>
+        </ul>
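+
+        <p> A short sketch of the new store-name parameters; all topic and store 
+        names below are hypothetical: </p>
+        <pre class="brush: java;">
+        KStreamBuilder builder = new KStreamBuilder();
+
+        // source KTables now name their backing state store
+        KTable&lt;String, String&gt; profiles = builder.table("user-profiles", "user-profiles-store");
+
+        // windowed aggregations name their result store as an extra argument
+        KStream&lt;String, String&gt; clicks = builder.stream("user-clicks");
+        KTable&lt;Windowed&lt;String&gt;, Long&gt; clickCounts = clicks
+            .groupByKey()
+            .count(TimeWindows.of(1000), "click-counts-store");
+        </pre>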
+
+        <p> Windowing: </p>
+        <ul>
+            <li> Windows are not named anymore: TimeWindows.of("name", 1000) 
changes to TimeWindows.of(1000) (cf. DSL: new parameter to specify state store 
names) </li>
+            <li> JoinWindows has no default size anymore: 
JoinWindows.of("name").within(1000) changes to JoinWindows.of(1000) (see the sketch below) </li>
+        </ul>
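+
+        <p> A sketch of the unnamed windows, given two hypothetical 
+        <code>KStream&lt;String, String&gt;</code> streams <code>clicks</code> and 
+        <code>impressions</code> (window sizes are in milliseconds): </p>
+        <pre class="brush: java;">
+        // windows carry only a size (or inactivity gap), no name
+        TimeWindows oneSecondWindows = TimeWindows.of(1000);   // was TimeWindows.of("name", 1000)
+
+        // join windows must be sized explicitly
+        KStream&lt;String, String&gt; joined = clicks.join(
+            impressions,
+            (click, impression) -&gt; click + "/" + impression,   // hypothetical value joiner
+            JoinWindows.of(5 * 60 * 1000L));
+        </pre>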
+
 </script>
 
 <!--#include virtual="../includes/_header.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/toc.html
----------------------------------------------------------------------
diff --git a/0102/toc.html b/0102/toc.html
index 792dc4e..787153d 100644
--- a/0102/toc.html
+++ b/0102/toc.html
@@ -144,11 +144,11 @@
                     <li><a 
href="/{{version}}/documentation/streams#streams_dsl">High-Level Streams 
DSL</a></li>
                     <li><a 
href="/{{version}}/documentation/streams#streams_execute">Application 
Configuration and Execution</a></li>
                 </ul>
-                <li><a 
href="/{{version}}/documentation/streams#streams_upgrade">9.5 Upgrade Guide and 
API Changes</a></li>
+                <li><a 
href="/{{version}}/documentation/streams#streams_upgrade_and_api">9.5 Upgrade 
Guide and API Changes</a></li>
             </ul>
         </li>
     </ul>
 
 </script>
 
-<div class="p-toc"></div>
\ No newline at end of file
+<div class="p-toc"></div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/upgrade.html
----------------------------------------------------------------------
diff --git a/0102/upgrade.html b/0102/upgrade.html
index a06eeb6..5976054 100644
--- a/0102/upgrade.html
+++ b/0102/upgrade.html
@@ -47,15 +47,14 @@ Kafka cluster before upgrading your clients. Version 0.10.2 
brokers support 0.8.
 
 <p><b>Note:</b> Bumping the protocol version and restarting can be done any 
time after the brokers were upgraded. It does not have to be immediately after.
 
-<h5><a id="upgrade_1020_streams" href="#upgrade_1020_streams">Upgrading a 
Kafka Streams Application</a></h5>
+<h5><a id="upgrade_1020_streams" href="#upgrade_1020_streams">Upgrading a 
0.10.1 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.1 to 0.10.2 does not 
require a broker upgrade.
          A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 
brokers (it is not possible to connect to 0.10.0 brokers though). </li>
     <li> You need to recompile your code. Just swapping the Kafka Streams 
library jar file will not work and will break your application. </li>
-    <li> <code>KStreamBuilder#table()</code> always requires a store name. 
</li>
-    <li> <code>KTable#through()</code> always requires a store name. </li>
     <li> If you use a custom (i.e., user implemented) timestamp extractor, you 
will need to update this code, because the <code>TimestampExtractor</code> 
interface was changed. </li>
     <li> If you register custom metrics, you will need to update this code, 
because the <code>StreamsMetrics</code> interface was changed. </li>
+    <li> See <a 
href="/{{version}}/documentation/streams#streams_api_changes_0102">Streams API 
changes in 0.10.2</a> for more details. </li>
 </ul>
 
 <h5><a id="upgrade_1020_notable" href="#upgrade_1020_notable">Notable changes 
in 0.10.2.0</a></h5>
@@ -75,7 +74,8 @@ Kafka cluster before upgrading your clients. Version 0.10.2 
brokers support 0.8.
         modifying Zookeeper directly. This eliminates the need for privileges 
to access Zookeeper directly and "StreamsConfig.ZOOKEEPER_CONFIG"
         should not be set in the Streams app any more. If the Kafka cluster is 
secured, Streams apps must have the required security privileges to create new 
topics.</li>
     <li>Several new fields including "security.protocol", 
"connections.max.idle.ms", "retry.backoff.ms", "reconnect.backoff.ms" and 
"request.timeout.ms" were added to
-        StreamsConfig class. User should pay attenntion to the default values 
and set these if needed. For more details please refer to <a 
href="#streamsconfigs">3.5 Kafka Streams Configs</a>.</li>
+        StreamsConfig class. Users should pay attention to the default values 
and set these if needed (see the sketch below). For more details please refer to <a 
href="/{{version}}/documentation/#streamsconfigs">3.5 Kafka Streams 
Configs</a>.</li>
+    <li>The <code>offsets.topic.replication.factor</code> broker config is now 
enforced upon auto topic creation. Internal auto topic creation will fail with 
a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this 
replication factor requirement.</li>
 </ul>
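+
+<p> A sketch of setting a few of the new pass-through client configs on a Streams 
+application; the values shown are illustrative only, not recommendations: </p>
+<pre class="brush: java;">
+Properties props = new Properties();
+props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
+props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
+// new in 0.10.2: these client configs can be set directly on StreamsConfig
+props.put("security.protocol", "PLAINTEXT");
+props.put("request.timeout.ms", "60000");
+props.put("retry.backoff.ms", "1000");
+
+// builder is the application's KStreamBuilder / TopologyBuilder (not shown)
+KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(props));
+</pre>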
 
 <h5><a id="upgrade_1020_new_protocols" href="#upgrade_1020_new_protocols">New 
Protocol Versions</a></h5>
@@ -125,41 +125,11 @@ only support 0.10.1.x or later brokers while 0.10.1.x 
brokers also support older
     <li> Due to the increased number of index files, on some brokers with a 
large number of log segments (e.g. >15K), the log loading process during the 
broker startup could take longer. Based on our experiment, setting 
num.recovery.threads.per.data.dir to one may reduce the log loading time. </li>
 </ul>
 
-<h5><a id="upgrade_1010_streams" href="#upgrade_1010_streams">Streams API 
changes in 0.10.1.0</a></h5>
+<h5><a id="upgrade_1010_streams" href="#upgrade_1010_streams">Upgrading a 
0.10.0 Kafka Streams Application</a></h5>
 <ul>
-    <li> Stream grouping and aggregation split into two methods:
-        <ul>
-            <li> old: KStream #aggregateByKey(), #reduceByKey(), and 
#countByKey() </li>
-            <li> new: KStream#groupByKey() plus KGroupedStream #aggregate(), 
#reduce(), and #count() </li>
-            <li> Example: stream.countByKey() changes to 
stream.groupByKey().count() </li>
-        </ul>
-    </li>
-    <li> Auto Repartitioning:
-        <ul>
-            <li> a call to through() after a key-changing operator and before 
an aggregation/join is no longer required </li>
-            <li> Example: stream.selectKey(...).through(...).countByKey() 
changes to stream.selectKey().groupByKey().count() </li>
-        </ul>
-    </li>
-    <li> TopologyBuilder:
-        <ul>
-            <li> methods #sourceTopics(String applicationId) and 
#topicGroups(String applicationId) got simplified to #sourceTopics() and 
#topicGroups() </li>
-        </ul>
-    </li>
-    <li> DSL: new parameter to specify state store names:
-        <ul>
-            <li> The new Interactive Queries feature requires to specify a 
store name for all source KTables and window aggregation result KTables 
(previous parameter "operator/window name" is now the storeName) </li>
-            <li> KStreamBuilder#table(String topic) changes to #topic(String 
topic, String storeName) </li>
-            <li> KTable#through(String topic) changes to #through(String 
topic, String storeName) </li>
-            <li> KGroupedStream #aggregate(), #reduce(), and #count() require 
additional parameter "String storeName"</li>
-            <li> Example: stream.countByKey(TimeWindows.of("windowName", 
1000)) changes to stream.groupByKey().count(TimeWindows.of(1000), 
"countStoreName") </li>
-        </ul>
-    </li>
-    <li> Windowing:
-        <ul>
-            <li> Windows are not named anymore: TimeWindows.of("name", 1000) 
changes to TimeWindows.of(1000) (cf. DSL: new parameter to specify state store 
names) </li>
-            <li> JoinWindows has no default size anymore: 
JoinWindows.of("name").within(1000) changes to JoinWindows.of(1000) </li>
-        </ul>
-    </li>
+    <li> Upgrading your Streams application from 0.10.0 to 0.10.1 does require 
a <a href="#upgrade_10_1">broker upgrade</a> because a Kafka Streams 0.10.1 
application can only connect to 0.10.1 brokers. </li>
+    <li> There are a couple of API changes that are not backward compatible 
(cf. <a 
href="/{{version}}/documentation/streams#streams_api_changes_0101">Streams API 
changes in 0.10.1</a> for more details).
+         Thus, you need to update and recompile your code. Just swapping the 
Kafka Streams library jar file will not work and will break your application. 
</li>
 </ul>
 
 <h5><a id="upgrade_1010_notable" href="#upgrade_1010_notable">Notable changes 
in 0.10.1.0</a></h5>
@@ -299,7 +269,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients 
should be upgraded to 0.9
 <h5><a id="upgrade_10_notable" href="#upgrade_10_notable">Notable changes in 
0.10.0.0</a></h5>
 
 <ul>
-    <li> Starting from Kafka 0.10.0.0, a new client library named <b>Kafka 
Streams</b> is available for stream processing on data stored in Kafka topics. 
This new client library only works with 0.10.x and upward versioned brokers due 
to message format changes mentioned above. For more information please read <a 
href="#streams_overview">this section</a>.</li>
+    <li> Starting from Kafka 0.10.0.0, a new client library named <b>Kafka 
Streams</b> is available for stream processing on data stored in Kafka topics. 
This new client library only works with 0.10.x and upward versioned brokers due 
to message format changes mentioned above. For more information please read the <a 
href="/{{version}}/documentation/streams">Streams documentation</a>.</li>
     <li> The default value of the configuration parameter 
<code>receive.buffer.bytes</code> is now 64K for the new consumer.</li>
     <li> The new consumer now exposes the configuration parameter 
<code>exclude.internal.topics</code> to restrict internal topics (such as the 
consumer offsets topic) from accidentally being included in regular expression 
subscriptions. By default, it is enabled.</li>
     <li> The old Scala producer has been deprecated. Users should migrate 
their code to the Java producer included in the kafka-clients JAR as soon as 
possible. </li>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/0102/uses.html
----------------------------------------------------------------------
diff --git a/0102/uses.html b/0102/uses.html
index 2d238c2..4e88859 100644
--- a/0102/uses.html
+++ b/0102/uses.html
@@ -15,40 +15,67 @@
  limitations under the License.
 -->
 
-<p> Here is a description of a few of the popular use cases for Apache 
Kafka&trade;. For an overview of a number of these areas in action, see <a 
href="http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying";>this
 blog post</a>. </p>
+<p> Here is a description of a few of the popular use cases for Apache 
Kafka&trade;.
+For an overview of a number of these areas in action, see <a 
href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying/";>this
 blog post</a>. </p>
 
 <h4><a id="uses_messaging" href="#uses_messaging">Messaging</a></h4>
 
-Kafka works well as a replacement for a more traditional message broker. 
Message brokers are used for a variety of reasons (to decouple processing from 
data producers, to buffer unprocessed messages, etc). In comparison to most 
messaging systems Kafka has better throughput, built-in partitioning, 
replication, and fault-tolerance which makes it a good solution for large scale 
message processing applications.
+Kafka works well as a replacement for a more traditional message broker.
+Message brokers are used for a variety of reasons (to decouple processing from 
data producers, to buffer unprocessed messages, etc).
+In comparison to most messaging systems Kafka has better throughput, built-in 
partitioning, replication, and fault-tolerance which makes it a good
+solution for large scale message processing applications.
 <p>
-In our experience messaging uses are often comparatively low-throughput, but 
may require low end-to-end latency and often depend on the strong durability 
guarantees Kafka provides.
+In our experience messaging uses are often comparatively low-throughput, but 
may require low end-to-end latency and often depend on the strong
+durability guarantees Kafka provides.
 <p>
-In this domain Kafka is comparable to traditional messaging systems such as <a 
href="http://activemq.apache.org";>ActiveMQ</a> or <a 
href="https://www.rabbitmq.com";>RabbitMQ</a>.
+In this domain Kafka is comparable to traditional messaging systems such as <a 
href="http://activemq.apache.org";>ActiveMQ</a> or
+<a href="https://www.rabbitmq.com";>RabbitMQ</a>.
 
 <h4><a id="uses_website" href="#uses_website">Website Activity 
Tracking</a></h4>
 
-The original use case for Kafka was to be able to rebuild a user activity 
tracking pipeline as a set of real-time publish-subscribe feeds. This means 
site activity (page views, searches, or other actions users may take) is 
published to central topics with one topic per activity type. These feeds are 
available for subscription for a range of use cases including real-time 
processing, real-time monitoring, and loading into Hadoop or offline data 
warehousing systems for offline processing and reporting.
+The original use case for Kafka was to be able to rebuild a user activity 
tracking pipeline as a set of real-time publish-subscribe feeds.
+This means site activity (page views, searches, or other actions users may 
take) is published to central topics with one topic per activity type.
+These feeds are available for subscription for a range of use cases including 
real-time processing, real-time monitoring, and loading into Hadoop or
+offline data warehousing systems for offline processing and reporting.
 <p>
 Activity tracking is often very high volume as many activity messages are 
generated for each user page view.
 
 <h4><a id="uses_metrics" href="#uses_metrics">Metrics</a></h4>
 
-Kafka is often used for operational monitoring data. This involves aggregating 
statistics from distributed applications to produce centralized feeds of 
operational data.
+Kafka is often used for operational monitoring data.
+This involves aggregating statistics from distributed applications to produce 
centralized feeds of operational data.
 
 <h4><a id="uses_logs" href="#uses_logs">Log Aggregation</a></h4>
 
-Many people use Kafka as a replacement for a log aggregation solution. Log 
aggregation typically collects physical log files off servers and puts them in 
a central place (a file server or HDFS perhaps) for processing. Kafka abstracts 
away the details of files and gives a cleaner abstraction of log or event data 
as a stream of messages. This allows for lower-latency processing and easier 
support for multiple data sources and distributed data consumption.
+Many people use Kafka as a replacement for a log aggregation solution.
+Log aggregation typically collects physical log files off servers and puts 
them in a central place (a file server or HDFS perhaps) for processing.
+Kafka abstracts away the details of files and gives a cleaner abstraction of 
log or event data as a stream of messages.
+This allows for lower-latency processing and easier support for multiple data 
sources and distributed data consumption.
 
-In comparison to log-centric systems like Scribe or Flume, Kafka offers 
equally good performance, stronger durability guarantees due to replication, 
and much lower end-to-end latency.
+In comparison to log-centric systems like Scribe or Flume, Kafka offers 
equally good performance, stronger durability guarantees due to replication,
+and much lower end-to-end latency.
 
 <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream 
Processing</a></h4>
 
-Many users of Kafka process data in processing pipelines consisting of 
multiple stages, where raw input data is consumed from Kafka topics and then 
aggregated, enriched, or otherwise transformed into new topics for further 
consumption or follow-up processing. For example, a processing pipeline for 
recommending news articles might crawl article content from RSS feeds and 
publish it to an "articles" topic; further processing might normalize or 
deduplicate this content and published the cleansed article content to a new 
topic; a final processing stage might attempt to recommend this content to 
users. Such processing pipelines create graphs of real-time data flows based on 
the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream 
processing library called <a href="#streams_overview">Kafka Streams</a> is 
available in Apache Kafka to perform such data processing as described above. 
Apart from Kafka Streams, alternative open source stream processing tools 
include <a h
 ref="https://storm.apache.org/";>Apache Storm</a> and <a 
href="http://samza.apache.org/";>Apache Samza</a>.
+Many users of Kafka process data in processing pipelines consisting of 
multiple stages, where raw input data is consumed from Kafka topics and then
+aggregated, enriched, or otherwise transformed into new topics for further 
consumption or follow-up processing.
+For example, a processing pipeline for recommending news articles might crawl 
article content from RSS feeds and publish it to an "articles" topic;
+further processing might normalize or deduplicate this content and publish 
the cleansed article content to a new topic;
+a final processing stage might attempt to recommend this content to users.
+Such processing pipelines create graphs of real-time data flows based on the 
individual topics.
+Starting in 0.10.0.0, a light-weight but powerful stream processing library 
called <a href="/{{version}}/documentation/streams">Kafka Streams</a>
+is available in Apache Kafka to perform such data processing as described 
above.
+Apart from Kafka Streams, alternative open source stream processing tools 
include <a href="https://storm.apache.org/";>Apache Storm</a> and
+<a href="http://samza.apache.org/";>Apache Samza</a>.
 
 <h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event 
Sourcing</a></h4>
 
-<a href="http://martinfowler.com/eaaDev/EventSourcing.html";>Event sourcing</a> 
is a style of application design where state changes are logged as a 
time-ordered sequence of records. Kafka's support for very large stored log 
data makes it an excellent backend for an application built in this style.
+<a href="http://martinfowler.com/eaaDev/EventSourcing.html";>Event sourcing</a> 
is a style of application design where state changes are logged as a
+time-ordered sequence of records. Kafka's support for very large stored log 
data makes it an excellent backend for an application built in this style.
 
 <h4><a id="uses_commitlog" href="#uses_commitlog">Commit Log</a></h4>
 
-Kafka can serve as a kind of external commit-log for a distributed system. The 
log helps replicate data between nodes and acts as a re-syncing mechanism for 
failed nodes to restore their data. The <a 
href="/documentation.html#compaction">log compaction</a> feature in Kafka helps 
support this usage. In this usage Kafka is similar to <a 
href="http://zookeeper.apache.org/bookkeeper/";>Apache BookKeeper</a> project.
+Kafka can serve as a kind of external commit-log for a distributed system. The 
log helps replicate data between nodes and acts as a re-syncing
+mechanism for failed nodes to restore their data.
+The <a href="/documentation.html#compaction">log compaction</a> feature in 
Kafka helps support this usage.
+In this usage Kafka is similar to the <a 
href="http://zookeeper.apache.org/bookkeeper/";>Apache BookKeeper</a> project.

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/3811ec37/events.html
----------------------------------------------------------------------
diff --git a/events.html b/events.html
index 00d1bdc..c680d39 100644
--- a/events.html
+++ b/events.html
@@ -12,21 +12,21 @@
                        <div style="float:left; width: 28rem;">
                                <h5 style="margin-bottom:0;">North America</h5>
                                <ul>
-                                       <li><a 
href="https://www.meetup.com/http-kafka-apache-org/"; target="_blank">Bay 
Area</a></li>
+                                       <li><a 
href="https://www.meetup.com/http-kafka-apache-org/"; target="_blank">Bay Area, 
CA</a></li>
+                                       <li><a 
href="http://www.meetup.com/Apache-Kafka-San-Francisco/"; target="_blank">San 
Francisco, CA</a></li>
                                        <li><a 
href="https://www.meetup.com/Apache-Kafka-ATL/"; target="_blank">Atlanta, 
GA</a></li>
                                        <li><a 
href="https://www.meetup.com/Austin-Apache-Kafka-Meetup-Stream-Data-Platform/"; 
target="_blank">Austin, TX</a></li>
                                        <li><a 
href="https://www.meetup.com/Chicago-Area-Kafka-Enthusiasts/"; 
target="_blank">Chicago, IL</a></li>
                                        <li><a 
href="https://www.meetup.com/Front-Range-Apache-Kafka/"; target="_blank">Denver, 
CO</a></li>
-                                       <li><a 
href="http://www.meetup.com/Kafka-Montreal-Meetup/"; target="_blank">Montréal, 
Canada</a></li>
                                        <li><a 
href="https://www.meetup.com/Apache-Kafka-NYC/"; target="_blank">New York, 
NY</a></li>
-                                       <li><a 
href="http://www.meetup.com/Apache-Kafka-San-Francisco/"; target="_blank">San 
Francisco, CA</a></li>
                                        <li><a 
href="https://www.meetup.com/Seattle-Apache-Kafka-Meetup/"; 
target="_blank">Seattle, WA</a></li>
-                                       <li><a 
href="https://www.meetup.com/Logger/"; target="_blank">Toronto, Canada</a></li>
                                        <li><a 
href="https://www.meetup.com/Apache-Kafka-DC/"; target="_blank">Washington, 
DC</a></li>
                                        <li><a 
href="https://www.meetup.com/Boston-Apache-kafka-Meetup/"; 
target="_blank">Bostom, MA</a></li>
                                        <li><a 
href="https://www.meetup.com/Minneapolis-Apache-Kafka/"; 
target="_blank">Minneapolis, MN</a></li>
                                        <li><a 
href="https://www.meetup.com/Portland-Apache-Kafka/"; target="_blank">Portland, 
OR</a></li>
                                        <li><a 
href="https://www.meetup.com/Apache-Kafka-Phoenix-Meetup/"; 
target="_blank">Phoenix, AZ</a></li>
+                                       <li><a 
href="https://www.meetup.com/Logger/"; target="_blank">Toronto, Canada</a></li>
+                                       <li><a 
href="https://www.meetup.com/Kafka-Montreal-Meetup/"; target="_blank">Montréal, 
Canada</a></li>
                                </ul>
                        </div>
                        <div style="float:left; width: 28rem;">
