http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/protocol.html
----------------------------------------------------------------------
diff --git a/0102/protocol.html b/0102/protocol.html
new file mode 100644
index 0000000..5285f2e
--- /dev/null
+++ b/0102/protocol.html
@@ -0,0 +1,230 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--#include virtual="../includes/_header.html" -->
+<!--#include virtual="../includes/_top.html" -->
+<div class="content">
+    <!--#include virtual="../includes/_nav.html" -->
+    <div class="right">
+        <h1>Kafka protocol guide</h1>
+
+<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a>.</p>
+
+<ul class="toc">
+    <li><a href="#protocol_preliminaries">Preliminaries</a>
+        <ul>
+            <li><a href="#protocol_network">Network</a>
+            <li><a href="#protocol_partitioning">Partitioning and 
bootstrapping</a>
+            <li><a href="#protocol_partitioning_strategies">Partitioning 
Strategies</a>
+            <li><a href="#protocol_batching">Batching</a>
+            <li><a href="#protocol_compatibility">Versioning and 
Compatibility</a>
+        </ul>
+    </li>
+    <li><a href="#protocol_details">The Protocol</a>
+        <ul>
+            <li><a href="#protocol_types">Protocol Primitive Types</a>
+            <li><a href="#protocol_grammar">Notes on reading the request 
format grammars</a>
+            <li><a href="#protocol_common">Common Request and Response 
Structure</a>
+            <li><a href="#protocol_message_sets">Message Sets</a>
+        </ul>
+    </li>
+    <li><a href="#protocol_constants">Constants</a>
+        <ul>
+            <li><a href="#protocol_error_codes">Error Codes</a>
+            <li><a href="#protocol_api_keys">Api Keys</a>
+        </ul>
+    </li>
+    <li><a href="#protocol_messages">The Messages</a></li>
+    <li><a href="#protocol_philosophy">Some Common Philosophical 
Questions</a></li>
+</ul>
+
+<h4><a id="protocol_preliminaries" 
href="#protocol_preliminaries">Preliminaries</a></h4>
+
+<h5><a id="protocol_network" href="#protocol_network">Network</a></h5>
+
+<p>Kafka uses a binary protocol over TCP. The protocol defines all APIs as request-response message pairs. All messages are size-delimited and are made up of the primitive types described below.</p>
+
+<p>The client initiates a socket connection and then writes a sequence of 
request messages and reads back the corresponding response message. No 
handshake is required on connection or disconnection. TCP is happier if you 
maintain persistent connections used for many requests to amortize the cost of 
the TCP handshake, but beyond this penalty connecting is pretty cheap.</p>
+
+<p>The client will likely need to maintain a connection to multiple brokers, 
as data is partitioned and the clients will need to talk to the server that has 
their data. However it should not generally be necessary to maintain multiple 
connections to a single broker from a single client instance (i.e. connection 
pooling).</p>
+
+<p>The server guarantees that on a single TCP connection, requests will be 
processed in the order they are sent and responses will return in that order as 
well. The broker's request processing allows only a single in-flight request 
per connection in order to guarantee this ordering. Note that clients can (and 
ideally should) use non-blocking IO to implement request pipelining and achieve 
higher throughput. i.e., clients can send requests even while awaiting 
responses for preceding requests since the outstanding requests will be 
buffered in the underlying OS socket buffer. All requests are initiated by the 
client, and result in a corresponding response message from the server except 
where noted.</p>
+
+<p>The server has a configurable maximum limit on request size and any request 
that exceeds this limit will result in the socket being disconnected.</p>
+
+<h5><a id="protocol_partitioning" href="#protocol_partitioning">Partitioning 
and bootstrapping</a></h5>
+
+<p>Kafka is a partitioned system so not all servers have the complete data 
set. Instead recall that topics are split into a pre-defined number of 
partitions, P, and each partition is replicated with some replication factor, 
N. Topic partitions themselves are just ordered "commit logs" numbered 0, 1, 
..., P-1.</p>
+
+<p>All systems of this nature have the question of how a particular piece of 
data is assigned to a particular partition. Kafka clients directly control this 
assignment; the brokers themselves enforce no particular semantics of which 
messages should be published to a particular partition. Rather, to publish 
messages the client directly addresses messages to a particular partition, and 
when fetching messages, fetches from a particular partition. If two clients 
want to use the same partitioning scheme they must use the same method to 
compute the mapping of key to partition.</p>
+
+<p>These requests to publish or fetch data must be sent to the broker that is 
currently acting as the leader for a given partition. This condition is 
enforced by the broker, so a request for a particular partition to the wrong 
broker will result in the NotLeaderForPartition error code (described 
below).</p>
+
+<p>How can the client find out which topics exist, what partitions they have, 
and which brokers currently host those partitions so that it can direct its 
requests to the right hosts? This information is dynamic, so you can't just 
configure each client with some static mapping file. Instead all Kafka brokers 
can answer a metadata request that describes the current state of the cluster: 
what topics there are, which partitions those topics have, which broker is the 
leader for those partitions, and the host and port information for these 
brokers.</p>
+
+<p>In other words, the client needs to somehow find one broker and that broker 
will tell the client about all the other brokers that exist and what partitions 
they host. This first broker may itself go down, so the best practice for a client implementation is to take a list of two or three URLs to bootstrap from. The user can then choose to use a load balancer or just statically configure two or three of their Kafka hosts in the clients.</p>
+
+<p>The client does not need to keep polling to see if the cluster has changed; 
it can fetch metadata once when it is instantiated and cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, or (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested. In summary, a client's approach is:</p>
+<ol>
+    <li>Cycle through a list of "bootstrap" Kafka URLs until we find one we can connect to. Fetch cluster metadata.</li>
+    <li>Process fetch or produce requests, directing them to the appropriate 
broker based on the topic/partitions they send to or fetch from.</li>
+    <li>If we get an appropriate error, refresh the metadata and try 
again.</li>
+</ol>
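+
+<p>As an illustration of the bootstrap step, here is a minimal Java sketch that cycles through a configured list of bootstrap addresses until one accepts a TCP connection. The host names are placeholders for this example, and the metadata request itself is elided; a real client would send it on the connection obtained here.</p>
+
+<pre>
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.util.Arrays;
+import java.util.List;
+
+public class BootstrapExample {
+    public static void main(String[] args) {
+        // Placeholder bootstrap addresses; two or three are usually enough.
+        List&lt;InetSocketAddress&gt; bootstrap = Arrays.asList(
+                new InetSocketAddress("kafka-1.example.com", 9092),
+                new InetSocketAddress("kafka-2.example.com", 9092));
+        for (InetSocketAddress address : bootstrap) {
+            try (Socket socket = new Socket()) {
+                socket.connect(address, 5000);  // 5 second connect timeout
+                System.out.println("Connected to " + address
+                        + "; a real client would now send a metadata request");
+                return;  // the first reachable broker is enough to bootstrap
+            } catch (IOException e) {
+                System.out.println("Could not reach " + address + ", trying the next one");
+            }
+        }
+        System.err.println("No bootstrap broker was reachable");
+    }
+}
+</pre>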
+
+<h5><a id="protocol_partitioning_strategies" 
href="#protocol_partitioning_strategies">Partitioning Strategies</a></h5>
+
+<p>As mentioned above the assignment of messages to partitions is something 
the producing client controls. That said, how should this functionality be 
exposed to the end-user?</p>
+
+<p>Partitioning really serves two purposes in Kafka:</p>
+<ol>
+    <li>It balances data and request load over brokers</li>
+    <li>It serves as a way to divvy up processing among consumer processes 
while allowing local state and preserving order within the partition. We call 
this semantic partitioning.</li>
+</ol>
+
+<p>For a given use case you may care about only one of these or both.</p>
+
+<p>To accomplish load balancing, a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client choose a single partition at random and publish to that. This latter strategy will result in far fewer TCP connections.</p>
+
+<p>Semantic partitioning means using some key in the message to assign 
messages to partitions. For example if you were processing a click message 
stream you might want to partition the stream by the user id so that all data 
for a particular user would go to a single consumer. To accomplish this the 
client can take a key associated with the message and use some hash of this key 
to choose the partition to which to deliver the message.</p>
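+
+<p>A minimal sketch of such a key hash in Java might look like the following. The exact hash function is a client implementation choice; the modulo mapping below is only an illustration, not the hashing used by any particular Kafka client.</p>
+
+<pre>
+public class SemanticPartitioner {
+    // Map a message key onto one of the topic's numPartitions partitions.
+    // Mask the sign bit rather than using Math.abs, which can overflow.
+    static int partitionFor(String key, int numPartitions) {
+        return (key.hashCode() &amp; 0x7fffffff) % numPartitions;
+    }
+
+    public static void main(String[] args) {
+        // All messages for the same user id land in the same partition.
+        System.out.println(partitionFor("user-42", 8));
+    }
+}
+</pre>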
+
+<h5><a id="protocol_batching" href="#protocol_batching">Batching</a></h5>
+
+<p>Our APIs encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages, not a single message, to encourage this. A clever client can make use of this and support an "asynchronous" mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow batching across multiple topics and partitions, so a produce request may contain data to append to many partitions and a fetch request may pull data from many partitions all at once.</p>
+
+<p>The client implementer can choose to ignore this and send everything one at 
a time if they like.</p>
+
+<h5><a id="protocol_compatibility" href="#protocol_compatibility">Versioning 
and Compatibility</a></h5>
+
+<p>The protocol is designed to enable incremental evolution in a backward 
compatible fashion. Our versioning is on a per-API basis, with each version 
consisting of a request and response pair. Each request contains an API key 
that identifies the API being invoked and a version number that indicates the 
format of the request and the expected format of the response.</p>
+
+<p>The intention is that clients will support a range of API versions. When 
communicating with a particular broker, a given client should use the highest 
API version supported by both and indicate this version in their requests.</p>
+
+<p>The server will reject requests with a version it does not support, and 
will always respond to the client with exactly the protocol format the client expects, based on the version the client included in its request. The intended upgrade path is 
that new features would first be rolled out on the server (with the older 
clients not making use of them) and then as newer clients are deployed these 
new features would gradually be taken advantage of.</p>
+
+<p>Our goal is primarily to allow API evolution in an environment where 
downtime is not allowed and clients and servers cannot all be changed at 
once.</p>
+
+<p>Currently all versions are baselined at 0; as we evolve these APIs we will 
indicate the format for each version individually.</p>
+
+<h5><a id="api_versions" href="#api_versions">Retrieving Supported API 
versions</a></h5>
+<p>In order to work against multiple broker versions, clients need to know 
what versions of various APIs a
+    broker supports. The broker exposes this information starting with version 0.10.0.0, as described in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version">KIP-35</a>.
+    Clients should use the supported API versions information to choose the 
highest API version supported by both client and broker. If no such version
+    exists, an error should be reported to the user.</p>
+<p>The following sequence may be used by a client to obtain supported API 
versions from a broker.</p>
+<ol>
+    <li>Client sends <code>ApiVersionsRequest</code> to a broker after 
connection has been established with the broker. If SSL is enabled,
+        this happens after SSL connection has been established.</li>
+    <li>On receiving <code>ApiVersionsRequest</code>, a broker returns its full list of supported ApiKeys and versions regardless of current authentication state (e.g., before SASL authentication on a SASL listener; note, however, that no Kafka protocol requests may take place on an SSL listener before the SSL handshake is finished). If this is considered to leak information about the broker version, a workaround is to use SSL with client authentication, which is performed at an earlier stage of the connection where the <code>ApiVersionsRequest</code> is not available. Also, note that broker versions older than 0.10.0.0 do not support this API and will either ignore the request or close the connection in response to it.</li>
+    <li>If multiple versions of an API are supported by broker and client, 
clients are recommended to use the latest version supported
+        by the broker and itself.</li>
+    <li>Deprecation of a protocol version is done by marking an API version as 
deprecated in the protocol documentation.</li>
+    <li>Supported API versions obtained from a broker are only valid for the 
connection on which that information is obtained.
+        In the event of disconnection, the client should obtain the 
information from the broker again, as the broker might have been
+        upgraded/downgraded in the meantime.</li>
+</ol>
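+
+<p>To make the version-negotiation rule above concrete, here is a small illustrative Java sketch that intersects a client's supported version range for one API with the range the broker advertises and picks the highest common version. The method and parameter names are made up for this example; they are not the actual client classes.</p>
+
+<pre>
+public class VersionNegotiation {
+    // Pick the highest version supported by both client and broker, or fail.
+    static int pickVersion(int clientMin, int clientMax, int brokerMin, int brokerMax) {
+        int lo = Math.max(clientMin, brokerMin);
+        int hi = Math.min(clientMax, brokerMax);
+        if (lo > hi)
+            throw new IllegalStateException("No common API version; report an error to the user");
+        return hi;
+    }
+
+    public static void main(String[] args) {
+        // e.g. the client supports versions 0-3 of an API and the broker advertises 0-2: use 2.
+        System.out.println(pickVersion(0, 3, 0, 2));
+    }
+}
+</pre>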
+
+<h5><a id="sasl_handshake" href="#sasl_handshake">SASL Authentication 
Sequence</a></h5>
+<p>The following sequence is used for SASL authentication:
+<ol>
+  <li>Kafka <code>ApiVersionsRequest</code> may be sent by the client to 
obtain the version ranges of requests supported by the broker. This is 
optional.</li>
+  <li>Kafka <code>SaslHandshakeRequest</code> containing the SASL mechanism 
for authentication is sent by the client. If the requested mechanism is not 
enabled
+    in the server, the server responds with the list of supported mechanisms 
and closes the client connection. If the mechanism is enabled
+    in the server, the server sends a successful response and continues with 
SASL authentication.
+  <li>The actual SASL authentication is now performed. A series of SASL client 
and server tokens corresponding to the mechanism are sent as opaque
+    packets. These packets contain a 32-bit size followed by the token as 
defined by the protocol for the SASL mechanism.
+  <li>If authentication succeeds, subsequent packets are handled as Kafka API 
requests. Otherwise, the client connection is closed.
+</ol>
+<p>For interoperability with 0.9.0.x clients, the first packet received by the 
server is handled as a SASL/GSSAPI client token if it is not a valid
+Kafka request. SASL/GSSAPI authentication is performed starting with this 
packet, skipping the first two steps above.</p>
+
+
+<h4><a id="protocol_details" href="#protocol_details">The Protocol</a></h4>
+
+<h5><a id="protocol_types" href="#protocol_types">Protocol Primitive 
Types</a></h5>
+
+<p>The protocol is built out of the following primitive types.</p>
+
+<p><b>Fixed Width Primitives</b></p>
+
+<p>int8, int16, int32, int64 - Signed integers with the given precision (in 
bits) stored in big endian order.</p>
+
+<p><b>Variable Length Primitives</b></p>
+
+<p>bytes, string - These types consist of a signed integer giving a length N 
followed by N bytes of content. A length of -1 indicates null. string uses an 
int16 for its size, and bytes uses an int32.</p>
+
+<p><b>Arrays</b></p>
+
+<p>This is a notation for handling repeated structures. These will always be 
encoded as an int32 size containing the length N followed by N repetitions of 
the structure which can itself be made up of other primitive types. In the BNF 
grammars below we will show an array of a structure foo as [foo].</p>
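+
+<p>As a rough illustration of these encodings, the following Java sketch writes a few primitives into a big-endian buffer: a fixed-width integer, an int16-length-prefixed string, and an int32-prefixed array. It is a sketch of the encoding rules only, not code taken from any Kafka client.</p>
+
+<pre>
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+public class PrimitiveEncoding {
+    // string: int16 length N (-1 for null) followed by N bytes of content.
+    static void writeString(ByteBuffer buf, String s) {
+        if (s == null) {
+            buf.putShort((short) -1);
+            return;
+        }
+        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
+        buf.putShort((short) bytes.length);
+        buf.put(bytes);
+    }
+
+    public static void main(String[] args) {
+        ByteBuffer buf = ByteBuffer.allocate(64);  // ByteBuffer is big-endian by default
+        buf.putInt(42);                            // int32
+        writeString(buf, "test");                  // string
+        buf.putInt(1);                             // [foo]: array size N = 1 ...
+        buf.putShort((short) 7);                   // ... followed by one int16 element
+        System.out.println("Encoded " + buf.position() + " bytes");
+    }
+}
+</pre>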
+
+<h5><a id="protocol_grammar" href="#protocol_grammar">Notes on reading the 
request format grammars</a></h5>
+
+<p>The <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form">BNF</a>s below give an exact context-free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable names. As always in a BNF, a sequence of productions indicates concatenation. When there are multiple possible productions these are separated with '|' and may be enclosed in parentheses for grouping. The top-level definition is always given first and subsequent sub-parts are indented.</p>
+
+<h5><a id="protocol_common" href="#protocol_common">Common Request and 
Response Structure</a></h5>
+
+<p>All requests and responses originate from the following grammar which will be incrementally described throughout the rest of this document:</p>
+
+<pre>
+RequestOrResponse => Size (RequestMessage | ResponseMessage)
+Size => int32
+</pre>
+
+<table class="data-table"><tbody>
+<tr><th>Field</th><th>Description</th></tr>
+<tr><td>message_size</td><td>The message_size field gives the size of the 
subsequent request or response message in bytes. The client can read requests 
by first reading this 4 byte size as an integer N, and then reading and parsing 
the subsequent N bytes of the request.</td></tr>
+</tbody></table>
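+
+<p>For example, reading one size-delimited message in Java might look like the sketch below: read the 4-byte big-endian size N, then read exactly N payload bytes. Parsing the payload according to the request and response grammars is omitted from this illustration.</p>
+
+<pre>
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+public class ReadFramedMessage {
+    static byte[] readMessage(InputStream in) throws IOException {
+        DataInputStream data = new DataInputStream(in);
+        int size = data.readInt();        // message_size: 4-byte big-endian integer N
+        byte[] payload = new byte[size];
+        data.readFully(payload);          // then read exactly N bytes
+        return payload;
+    }
+
+    public static void main(String[] args) throws IOException {
+        byte[] framed = {0, 0, 0, 3, 1, 2, 3};  // size = 3 followed by 3 payload bytes
+        System.out.println(readMessage(new ByteArrayInputStream(framed)).length);
+    }
+}
+</pre>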
+
+<h5><a id="protocol_message_sets" href="#protocol_message_sets">Message 
Sets</a></h5>
+
+<p>A description of the message set format can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets">here</a>. (KAFKA-3368)</p>
+
+<h4><a id="protocol_constants" href="#protocol_constants">Constants</a></h4>
+
+<h5><a id="protocol_error_codes" href="#protocol_error_codes">Error 
Codes</a></h5>
+<p>We use numeric codes to indicate what problem occurred on the server. These 
can be translated by the client into exceptions or whatever error handling mechanism is appropriate in the client language. Here is a table of the error 
codes currently in use:</p>
+<!--#include virtual="generated/protocol_errors.html" -->
+
+<h5><a id="protocol_api_keys" href="#protocol_api_keys">Api Keys</a></h5>
+<p>The following are the numeric codes that the ApiKey in the request can take 
for each of the below request types.</p>
+<!--#include virtual="generated/protocol_api_keys.html" -->
+
+<h4><a id="protocol_messages" href="#protocol_messages">The Messages</a></h4>
+
+<p>This section gives details on each of the individual API Messages, their 
usage, their binary format, and the meaning of their fields.</p>
+<!--#include virtual="generated/protocol_messages.html" -->
+
+<h4><a id="protocol_philosophy" href="#protocol_philosophy">Some Common 
Philosophical Questions</a></h4>
+
+<p>Some people have asked why we don't use HTTP. There are a number of reasons; the best is that client implementors can make use of some of the more 
advanced TCP features--the ability to multiplex requests, the ability to 
simultaneously poll many connections, etc. We have also found HTTP libraries in 
many languages to be surprisingly shabby.</p>
+
+<p>Others have asked if maybe we shouldn't support many different protocols. 
Our prior experience with this was that it made it very hard to add and test new features if they had to be ported across many protocol implementations. Our feeling is that most users don't really see multiple protocols as a feature; they just want a good reliable client in the language of their choice.</p>
+
+<p>Another question is why we don't adopt XMPP, STOMP, AMQP or an existing 
protocol. The answer to this varies by protocol, but in general the problem is 
that the protocol does determine large parts of the implementation and we 
couldn't do what we are doing if we didn't have control over the protocol. Our 
belief is that it is possible to do better than existing messaging systems have 
in providing a truly distributed messaging system, and to do this we need to 
build something that works differently.</p>
+
+<p>A final question is why we don't use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you manage lots and lots of serialized messages. However we have only a few messages. Support across languages is somewhat spotty (depending on the package). The mapping between the binary log format and the wire protocol is also something we manage somewhat carefully, and this would not be possible with these systems. Finally, we prefer the style of versioning APIs explicitly and checking this to inferring new values as nulls, as it allows more nuanced control of compatibility.</p>
+
+    <script>
+        // Show selected style on nav item
+        $(function() { $('.b-nav__project').addClass('selected'); });
+    </script>
+
+<!--#include virtual="../includes/_footer.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/quickstart.html
----------------------------------------------------------------------
diff --git a/0102/quickstart.html b/0102/quickstart.html
new file mode 100644
index 0000000..763d3e3
--- /dev/null
+++ b/0102/quickstart.html
@@ -0,0 +1,403 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<p>
+This tutorial assumes you are starting fresh and have no existing Kafka or 
ZooKeeper data.
+Since Kafka console scripts are different for Unix-based and Windows 
platforms, on Windows platforms use <code>bin\windows\</code> instead of 
<code>bin/</code>, and change the script extension to <code>.bat</code>.
+</p>
+
+<h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download 
the code</a></h4>
+
+<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz" title="Kafka downloads">Download</a> the 0.10.2.0 release and un-tar it.
+
+<pre>
+&gt; <b>tar -xzf kafka_2.11-0.10.2.0.tgz</b>
+&gt; <b>cd kafka_2.11-0.10.2.0</b>
+</pre>
+
+<h4><a id="quickstart_startserver" href="#quickstart_startserver">Step 2: 
Start the server</a></h4>
+
+<p>
+Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you 
don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
+</p>
+
+<pre>
+&gt; <b>bin/zookeeper-server-start.sh config/zookeeper.properties</b>
+[2013-04-22 15:01:37,495] INFO Reading configuration from: 
config/zookeeper.properties 
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+...
+</pre>
+
+<p>Now start the Kafka server:</p>
+<pre>
+&gt; <b>bin/kafka-server-start.sh config/server.properties</b>
+[2013-04-22 15:01:47,028] INFO Verifying properties 
(kafka.utils.VerifiableProperties)
+[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden 
to 1048576 (kafka.utils.VerifiableProperties)
+...
+</pre>
+
+<h4><a id="quickstart_createtopic" href="#quickstart_createtopic">Step 3: 
Create a topic</a></h4>
+
+<p>Let's create a topic named "test" with a single partition and only one 
replica:</p>
+<pre>
+&gt; <b>bin/kafka-topics.sh --create --zookeeper localhost:2181 
--replication-factor 1 --partitions 1 --topic test</b>
+</pre>
+
+<p>We can now see that topic if we run the list topic command:</p>
+<pre>
+&gt; <b>bin/kafka-topics.sh --list --zookeeper localhost:2181</b>
+test
+</pre>
+<p>Alternatively, instead of manually creating topics you can also configure 
your brokers to auto-create topics when a non-existent topic is published 
to.</p>
+
+<h4><a id="quickstart_send" href="#quickstart_send">Step 4: Send some 
messages</a></h4>
+
+<p>Kafka comes with a command line client that will take input from a file or 
from standard input and send it out as messages to the Kafka cluster. By 
default, each line will be sent as a separate message.</p>
+<p>
+Run the producer and then type a few messages into the console to send to the 
server.</p>
+
+<pre>
+&gt; <b>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
test</b>
+<b>This is a message</b>
+<b>This is another message</b>
+</pre>
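+
+<p>The console producer is convenient for experimentation, but applications typically use Kafka's Java producer client directly. As a rough sketch, sending the same two messages from code (assuming the broker on localhost:9092 and the "test" topic created above) might look like this:</p>
+
+<pre>
+import java.util.Properties;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+public class QuickstartProducer {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+        // Send two messages to the "test" topic; close() flushes them before exiting.
+        try (Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
+            producer.send(new ProducerRecord&lt;&gt;("test", "This is a message"));
+            producer.send(new ProducerRecord&lt;&gt;("test", "This is another message"));
+        }
+    }
+}
+</pre>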
+
+<h4><a id="quickstart_consume" href="#quickstart_consume">Step 5: Start a 
consumer</a></h4>
+
+<p>Kafka also has a command line consumer that will dump out messages to 
standard output.</p>
+
+<pre>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
--topic test --from-beginning</b>
+This is a message
+This is another message
+</pre>
+<p>
+If you have each of the above commands running in a different terminal then 
you should now be able to type messages into the producer terminal and see them 
appear in the consumer terminal.
+</p>
+<p>
+All of the command line tools have additional options; running the command 
with no arguments will display usage information documenting them in more 
detail.
+</p>
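+
+<p>Similar to the producer sketch above, here is a rough programmatic equivalent of the console consumer using Kafka's Java consumer client. The group id is an arbitrary name chosen for this example, and setting <code>auto.offset.reset</code> to "earliest" approximates <code>--from-beginning</code> for a group with no committed offsets.</p>
+
+<pre>
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class QuickstartConsumer {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        props.put("group.id", "quickstart-group");   // arbitrary example group id
+        props.put("auto.offset.reset", "earliest");  // start from the beginning if no offsets exist
+        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+        try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
+            consumer.subscribe(Collections.singletonList("test"));
+            while (true) {
+                // Poll for new messages and print each value, like the console consumer does.
+                ConsumerRecords&lt;String, String&gt; records = consumer.poll(1000);
+                for (ConsumerRecord&lt;String, String&gt; record : records)
+                    System.out.println(record.value());
+            }
+        }
+    }
+}
+</pre>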
+
+<h4><a id="quickstart_multibroker" href="#quickstart_multibroker">Step 6: 
Setting up a multi-broker cluster</a></h4>
+
+<p>So far we have been running against a single broker, but that's no fun. For 
Kafka, a single broker is just a cluster of size one, so nothing much changes 
other than starting a few more broker instances. But just to get a feel for it, 
let's expand our cluster to three nodes (still all on our local machine).</p>
+<p>
+First we make a config file for each of the brokers (on Windows use the 
<code>copy</code> command instead):
+</p>
+<pre>
+&gt; <b>cp config/server.properties config/server-1.properties</b>
+&gt; <b>cp config/server.properties config/server-2.properties</b>
+</pre>
+
+<p>
+Now edit these new files and set the following properties:
+</p>
+<pre>
+
+config/server-1.properties:
+    broker.id=1
+    listeners=PLAINTEXT://:9093
+    log.dir=/tmp/kafka-logs-1
+
+config/server-2.properties:
+    broker.id=2
+    listeners=PLAINTEXT://:9094
+    log.dir=/tmp/kafka-logs-2
+</pre>
+<p>The <code>broker.id</code> property is the unique and permanent name of 
each node in the cluster. We have to override the port and log directory only 
because we are running these all on the same machine and we want to keep the 
brokers from all trying to register on the same port or overwrite each other's 
data.</p>
+<p>
+We already have ZooKeeper and our single node started, so we just need to 
start the two new nodes:
+</p>
+<pre>
+&gt; <b>bin/kafka-server-start.sh config/server-1.properties &amp;</b>
+...
+&gt; <b>bin/kafka-server-start.sh config/server-2.properties &amp;</b>
+...
+</pre>
+
+<p>Now create a new topic with a replication factor of three:</p>
+<pre>
+&gt; <b>bin/kafka-topics.sh --create --zookeeper localhost:2181 
--replication-factor 3 --partitions 1 --topic my-replicated-topic</b>
+</pre>
+
+<p>Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:</p>
+<pre>
+&gt; <b>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic 
my-replicated-topic</b>
+Topic:my-replicated-topic      PartitionCount:1        ReplicationFactor:3     
Configs:
+       Topic: my-replicated-topic      Partition: 0    Leader: 1       
Replicas: 1,2,0 Isr: 1,2,0
+</pre>
+<p>Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.</p>
+<ul>
+  <li>"leader" is the node responsible for all reads and writes for the given 
partition. Each node will be the leader for a randomly selected portion of the 
partitions.
+  <li>"replicas" is the list of nodes that replicate the log for this 
partition regardless of whether they are the leader or even if they are 
currently alive.
+  <li>"isr" is the set of "in-sync" replicas. This is the subset of the 
replicas list that is currently alive and caught-up to the leader.
+</ul>
+<p>Note that in our example node 1 is the leader for the only partition of the 
topic.</p>
+<p>
+We can run the same command on the original topic we created to see where it 
is:
+</p>
+<pre>
+&gt; <b>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic 
test</b>
+Topic:test     PartitionCount:1        ReplicationFactor:1     Configs:
+       Topic: test     Partition: 0    Leader: 0       Replicas: 0     Isr: 0
+</pre>
+<p>So there is no surprise there&mdash;the original topic has no replicas and 
is on server 0, the only server in our cluster when we created it.</p>
+<p>
+Let's publish a few messages to our new topic:
+</p>
+<pre>
+&gt; <b>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
my-replicated-topic</b>
+...
+<b>my test message 1</b>
+<b>my test message 2</b>
+<b>^C</b>
+</pre>
+<p>Now let's consume these messages:</p>
+<pre>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
--from-beginning --topic my-replicated-topic</b>
+...
+my test message 1
+my test message 2
+<b>^C</b>
+</pre>
+
+<p>Now let's test out fault-tolerance. Broker 1 was acting as the leader so 
let's kill it:</p>
+<pre>
+&gt; <b>ps aux | grep server-1.properties</b>
+<i>7564</i> ttys002    0:15.91 
/System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
+&gt; <b>kill -9 7564</b>
+</pre>
+
+On Windows use:
+<pre>
+&gt; <b>wmic process get processid,caption,commandline | find "java.exe" | 
find "server-1.properties"</b>
+java.exe    java  -Xmx1G -Xms1G -server -XX:+UseG1GC ... 
build\libs\kafka_2.10-0.10.2.0.jar"  kafka.Kafka config\server-1.properties    
<i>644</i>
+&gt; <b>taskkill /pid 644 /f</b>
+</pre>
+
+<p>Leadership has switched to one of the slaves and node 1 is no longer in the 
in-sync replica set:</p>
+
+<pre>
+&gt; <b>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic 
my-replicated-topic</b>
+Topic:my-replicated-topic      PartitionCount:1        ReplicationFactor:3     
Configs:
+       Topic: my-replicated-topic      Partition: 0    Leader: 2       
Replicas: 1,2,0 Isr: 2,0
+</pre>
+<p>But the messages are still available for consumption even though the leader 
that took the writes originally is down:</p>
+<pre>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
--from-beginning --topic my-replicated-topic</b>
+...
+my test message 1
+my test message 2
+<b>^C</b>
+</pre>
+
+
+<h4><a id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect">Step 7: 
Use Kafka Connect to import/export data</a></h4>
+
+<p>Writing data from the console and writing it back to the console is a 
convenient place to start, but you'll probably want
+to use data from other sources or export data from Kafka to other systems. For 
many systems, instead of writing custom
+integration code you can use Kafka Connect to import or export data.</p>
+
+<p>Kafka Connect is a tool included with Kafka that imports data into and exports data out of Kafka. It is an extensible tool that runs
+<i>connectors</i>, which implement the custom logic for interacting with an 
external system. In this quickstart we'll see
+how to run Kafka Connect with simple connectors that import data from a file 
to a Kafka topic and export data from a
+Kafka topic to a file.</p>
+
+<p>First, we'll start by creating some seed data to test with:</p>
+
+<pre>
+&gt; <b>echo -e "foo\nbar" > test.txt</b>
+</pre>
+
+<p>Next, we'll start two connectors running in <i>standalone</i> mode, which 
means they run in a single, local, dedicated
+process. We provide three configuration files as parameters. The first is 
always the configuration for the Kafka Connect
+process, containing common configuration such as the Kafka brokers to connect 
to and the serialization format for data.
+The remaining configuration files each specify a connector to create. These 
files include a unique connector name, the connector
+class to instantiate, and any other configuration required by the 
connector.</p>
+
+<pre>
+&gt; <b>bin/connect-standalone.sh config/connect-standalone.properties 
config/connect-file-source.properties config/connect-file-sink.properties</b>
+</pre>
+
+<p>
+These sample configuration files, included with Kafka, use the default local 
cluster configuration you started earlier
+and create two connectors: the first is a source connector that reads lines 
from an input file and produces each to a Kafka topic
+and the second is a sink connector that reads messages from a Kafka topic and 
produces each as a line in an output file.
+</p>
+
+<p>
+During startup you'll see a number of log messages, including some indicating 
that the connectors are being instantiated.
+Once the Kafka Connect process has started, the source connector should start 
reading lines from <code>test.txt</code> and
+producing them to the topic <code>connect-test</code>, and the sink connector 
should start reading messages from the topic <code>connect-test</code>
+and writing them to the file <code>test.sink.txt</code>. We can verify the data 
has been delivered through the entire pipeline
+by examining the contents of the output file:
+</p>
+
+
+<pre>
+&gt; <b>cat test.sink.txt</b>
+foo
+bar
+</pre>
+
+<p>
+Note that the data is being stored in the Kafka topic 
<code>connect-test</code>, so we can also run a console consumer to see the
+data in the topic (or use custom consumer code to process it):
+</p>
+
+
+<pre>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
--topic connect-test --from-beginning</b>
+{"schema":{"type":"string","optional":false},"payload":"foo"}
+{"schema":{"type":"string","optional":false},"payload":"bar"}
+...
+</pre>
+
+<p>The connectors continue to process data, so we can add data to the file and 
see it move through the pipeline:</p>
+
+<pre>
+&gt; <b>echo "Another line" >> test.txt</b>
+</pre>
+
+<p>You should see the line appear in the console consumer output and in the 
sink file.</p>
+
+<h4><a id="quickstart_kafkastreams" href="#quickstart_kafkastreams">Step 8: 
Use Kafka Streams to process data</a></h4>
+
+<p>
+Kafka Streams is Kafka's client library for real-time stream processing and analysis of data stored in Kafka brokers.
+This quickstart example will demonstrate how to run a streaming application 
coded in this library. Here is the gist
+of the <code>WordCountDemo</code> example code (converted to use Java 8 lambda 
expressions for easy reading).
+</p>
+<pre>
+KTable&lt;String, Long&gt; wordCounts = textLines
+    // Split each text line, by whitespace, into words.
+    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
+
+    // Ensure the words are available as record keys for the next aggregate 
operation.
+    .map((key, value) -> new KeyValue<>(value, value))
+
+    // Count the occurrences of each word (record key) and store the results 
into a table named "Counts".
+    .countByKey("Counts")
+</pre>
+
+<p>
+It implements the WordCount
+algorithm, which computes a word occurrence histogram from the input text. 
However, unlike other WordCount examples
+you might have seen before that operate on bounded data, the WordCount demo 
application behaves slightly differently because it is
+designed to operate on an <b>infinite, unbounded stream</b> of data. Similar 
to the bounded variant, it is a stateful algorithm that
+tracks and updates the counts of words. However, since it must assume 
potentially
+unbounded input data, it will periodically output its current state and 
results while continuing to process more data
+because it cannot know when it has processed "all" the input data.
+</p>
+<p>
+We will now prepare input data that will be written to a Kafka topic and subsequently be 
processed by a Kafka Streams application.
+</p>
+
+<pre>
+&gt; <b>echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka 
summit" > file-input.txt</b>
+</pre>
+Or on Windows:
+<pre>
+&gt; <b>echo all streams lead to kafka> file-input.txt</b>
+&gt; <b>echo hello kafka streams>> file-input.txt</b>
+&gt; <b>echo|set /p=join kafka summit>> file-input.txt</b>
+</pre>
+
+<p>
+Next, we create the input topic named <b>streams-file-input</b> and send the input data to it using the console producer (in practice, stream data would likely be flowing continuously into Kafka while the application is up and running):
+</p>
+
+<pre>
+&gt; <b>bin/kafka-topics.sh --create \</b>
+            <b>--zookeeper localhost:2181 \</b>
+            <b>--replication-factor 1 \</b>
+            <b>--partitions 1 \</b>
+            <b>--topic streams-file-input</b>
+</pre>
+
+
+<pre>
+&gt; <b>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
streams-file-input < file-input.txt</b>
+</pre>
+
+<p>
+We can now run the WordCount demo application to process the input data:
+</p>
+
+<pre>
+&gt; <b>bin/kafka-run-class.sh 
org.apache.kafka.streams.examples.wordcount.WordCountDemo</b>
+</pre>
+
+<p>
+There won't be any STDOUT output except log entries as the results are 
continuously written back into another topic named 
<b>streams-wordcount-output</b> in Kafka.
+The demo will run for a few seconds and then, unlike typical stream processing 
applications, terminate automatically.
+</p>
+<p>
+We can now inspect the output of the WordCount demo application by reading 
from its output topic:
+</p>
+
+<pre>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \</b>
+            <b>--topic streams-wordcount-output \</b>
+            <b>--from-beginning \</b>
+            <b>--formatter kafka.tools.DefaultMessageFormatter \</b>
+            <b>--property print.key=true \</b>
+            <b>--property print.value=true \</b>
+            <b>--property 
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \</b>
+            <b>--property 
value.deserializer=org.apache.kafka.common.serialization.LongDeserializer</b>
+</pre>
+
+<p>
+with the following output data being printed to the console:
+</p>
+
+<pre>
+all     1
+lead    1
+to      1
+hello   1
+streams 2
+join    1
+kafka   3
+summit  1
+</pre>
+
+<p>
+Here, the first column is the Kafka message key, and the second column is the 
message value, both in <code>java.lang.String</code> format.
+Note that the output is actually a continuous stream of updates, where each 
data record (i.e. each line in the original output above) is
+an updated count of a single word, aka record key, such as "kafka". For 
multiple records with the same key, each later record is an update of the 
previous one.
+</p>
+
+<p>
+Now you can write more input messages to the <b>streams-file-input</b> topic 
and observe additional messages added
+to <b>streams-wordcount-output</b> topic, reflecting updated word counts 
(e.g., using the console producer and the
+console consumer, as described above).
+</p>
+
+<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>
