It must lead to One - Re: Kafka algebra

2014-07-19 Thread Robert Withers
If a 3D matrix models the set of 4D spaces, then a 4D matrix would describe a 
set of 5D spaces.  If you treat the elements of the matrix as a complex 
sub-matrix, there is no reason it couldn't be a higher-dimension super-matrix.  
Then a higher-dimension matrix could be described by a lower-dimension compound 
matrix.  It must lead to One.

That's transcendentally sweet,
Rob

> On Jul 19, 2014, at 8:08 PM, "Rob Withers"  wrote:
> 
> I think you guys have created a system that demonstrates a new field of 
> mathematics, unless I have missed something in my research on the topic.  If we 
> define the system as being an N-dimensional space of R-dimensional subspaces, 
> does this mean we need to use a real 3D matrix to model the algebra implicit 
> in the implementation?  Here's the catch: I don't mean a 2D matrix modeling 3 
> dimensions (square matrix), I mean a 3D matrix with matrix elements 
> E(x)(y)(z).  The dimensionality of the (z) elements of the matrix would be R, 
> while (x) and (y) would be the number of partitions in a square matrix.  What 
> is the definition of a determinant and such in a matrix that is a volume of 
> elements?  Like with a 4D Voronoi diagram, with the ability to project 3D 
> "surfaces" out of the 4D diagram, a 3D matrix (vatrix?) could project 2D 
> matrices.
> 
> So what are off plane elements doing and what operations can be performed 
> with a 3D vatrix?
> 
> That's neat, thank you for Kafka!
> 
> Rob
> 
>> On Friday, July 18, 2014 at 7:22 PM, Robert Withers 
>>  wrote:
>> 
>> I had some time to consider my suggestion that it be viewed 
>> as a relativistic frame of reference.  Consider the model 
>> where each dimension of the frame of reference for each 
>> consumer is each partition, actually a sub-space with the 
>> dimensionality of the replication factor, but with a single 
>> leader election, so consider it 1 dimension.  The total 
>> dimensionality of the consumer's frame of reference is the 
>> number of partitions, but only assigned partitions are open 
>> to a given consumer's viewpoint.  The offset is the partition 
>> dimension's coordinate and only consumers with an open 
>> dimension can translate the offset.  A rebalance opens or 
>> closes a dimension for a given consumer and can be viewed as 
>> a rotation.  Could Kafka consumption and rebalance (and ISR 
>> leader election) be reduced to matrix operations?
> 
> 


Kafka algebra

2014-07-19 Thread Rob Withers
I think you guys have created a system that demonstrates a new field of 
mathematics, unless I have missed something in my research on the topic.  If we 
define the system as being an N-dimensional space of R-dimensional subspaces, 
does this mean we need to use a real 3D matrix to model the algebra implicit in 
the implementation?  Here's the catch, I don't mean a 2D matrix modeling 3 
dimensions (square matrix), I mean a 3D matrix with matrix elements E(x)(y)(z). 
 The dimensionality of the (z) elements of the matrix would be R, while (x) and 
(y) would be the number of partitions in a square matrix.  What is the 
definition of a determinant and such in a matrix that is a volume of elements?  
Like with a 4D Voronoi diagram, with the ability to project 3D "surfaces" out 
of the 4D diagram, a 3D matrix (vatrix?) could project 2D matrices.

So what are off plane elements doing and what operations can be performed with 
a 3D vatrix?

That's neat, thank you for Kafka!

Rob

> On Friday, July 18, 2014 at 7:22 PM, Robert Withers 
>  wrote:
>
> I had some time to consider my suggestion that it be viewed 
> as a relativistic frame of reference.  Consider the model 
> where each dimension of the frame of reference for each 
> consumer is each partition, actually a sub-space with the 
> dimensionality of the replication factor, but with a single 
> leader election, so consider it 1 dimension.  The total 
> dimensionality of the consumer's frame of reference is the 
> number of partitions, but only assigned partitions are open 
> to a given consumer's viewpoint.  The offset is the partition 
> dimension's coordinate and only consumers with an open 
> dimension can translate the offset.  A rebalance opens or 
> closes a dimension for a given consumer and can be viewed as 
> a rotation.  Could Kafka consumption and rebalance (and ISR 
> leader election) be reduced to matrix operations?
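The 3D matrix ("vatrix") described in the email above can be sketched concretely: treat it as a rank-3 volume with elements E(x)(y)(z) and project 2D matrices by fixing a z coordinate, just as the 4D Voronoi analogy projects 3D "surfaces".  A minimal sketch in Python, with nested lists standing in for the volume (the shapes and the helper name are illustrative, not anything from Kafka):

```python
# A 3D "vatrix" as nested lists: E[x][y][z], where x and y index the
# square matrix of partitions and z indexes the R-dimensional subspace
# (e.g. the replication factor).
X, Y, R = 4, 4, 3
E = [[[x * Y * R + y * R + z for z in range(R)] for y in range(Y)]
     for x in range(X)]

def project(E, z):
    """Project a 2D matrix out of the 3D volume at depth z, analogous
    to projecting a 3D "surface" out of a 4D Voronoi diagram."""
    return [[cell[z] for cell in row] for row in E]

plane0 = project(E, 0)   # the ordinary 2D matrix at z = 0
```

One natural generalization of the determinant question in the email would then be a vector of determinants, one per z-slice of the volume.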




Re: Improving the Kafka client ecosystem

2014-07-19 Thread Mark Roberts
Hi all,

As a client engineer on the python client, I would really appreciate a
separate mailing list for client implementation discussion and a language
agnostic test suite.  What might also be really useful is an enumerated
list of error conditions and the expected behavior for each.
 For instance, what do you do if you have a multi-partition producer that
tries to produce to a non-existent topic?  The metadata request is going to
return nothing, which means you don't know where to send the request at
all.  You could just arbitrarily send it to a broker I guess?
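One way to handle that case is to retry the metadata lookup with backoff and, if the topic still does not exist, surface an error rather than sending the produce request to an arbitrary broker.  The function and parameter names below are hypothetical illustrations, not the API of any actual Kafka client:

```python
import time

def partition_for(topic, key, metadata, retries=3, backoff_s=0.1,
                  fetch_metadata=None):
    """Pick a partition for (topic, key), refreshing metadata when the
    topic is unknown.  fetch_metadata is a caller-supplied callable that
    re-queries the cluster; all names here are illustrative."""
    for attempt in range(retries + 1):
        partitions = metadata.get(topic)
        if partitions:  # topic exists: hash the key onto a partition
            return partitions[hash(key) % len(partitions)]
        if fetch_metadata is None or attempt == retries:
            break
        time.sleep(backoff_s * (2 ** attempt))  # back off, then refresh
        metadata = fetch_metadata()
    # Topic still unknown: surface the error instead of guessing a broker.
    raise LookupError("no metadata for topic %r" % topic)
```

With auto-topic-creation enabled on the broker, the refreshed metadata would eventually contain the topic; without it, the error propagates to the caller.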

At any rate, I have lots of questions about a formalized "certified client"
process.  I'm not against the idea (in fact quite the opposite), but I'm
concerned that non-Java clients will be constrained purely to the currently
existing Java API in the name of client uniformity and standardization.

-Mark



On Sat, Jul 19, 2014 at 12:30 AM, Timothy Chen  wrote:

> A certified client test suite really will benefit all the client
> developers: writing a Kafka client is often not just about talking the
> protocol, but about correctly handling all the cases, errors, and
> situations, and about performance as well.
>
> From my experience writing a C# client, I definitely feel that a lot of test
> scenarios could be generalized and used for all clients.
>
> I was reviewing some other client implementations and there are errors and
> cases they didn't handle. Having a suite that exposes those would let users
> avoid running into the same problems and having to work out whether it's a
> client or a server bug, which is sometimes hard to figure out.
>
> Tim
>
> > On Jul 18, 2014, at 3:57 PM, Jay Kreps  wrote:
> >
> > Basically my thought with getting a separate mailing list was to have
> > a place specifically to discuss issues around clients. I don't see a
> > lot of discussion about them on the main list. I thought perhaps this
> > was because people don't like to ask questions which are about
> > adjacent projects/code bases. But basically whatever will lead to a
> > robust discussion, bug tracking, etc on clients.
> >
> > -Jay
> >
> >> On Fri, Jul 18, 2014 at 3:49 PM, Jun Rao  wrote:
> >> Another important part of eco-system could be around the adaptors of
> >> getting data from other systems into Kafka and vice versa. So, for the
> >> ingestion part, this can include things like getting data from mysql,
> >> syslog, apache server log, etc. For the egress part, this can include
> >> putting Kafka data into HDFS, S3, etc.
> >>
> >> Will a separate mailing list be convenient? Could we just use the Kafka
> >> mailing list?
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >>
> >>> On Fri, Jul 18, 2014 at 2:34 PM, Jay Kreps 
> wrote:
> >>>
> >>> A question was asked in another thread about what was an effective way
> >>> to contribute to the Kafka project for people who weren't very
> >>> enthusiastic about writing Java/Scala code.
> >>>
> >>> I wanted to kind of advocate for an area I think is really important
> >>> and not as good as it could be--the client ecosystem. I think our goal
> >>> is to make Kafka effective as a general purpose, centralized, data
> >>> subscription system. This vision only really works if all your
> >>> applications are able to integrate easily, whatever language they are
> >>> in.
> >>>
> >>> We have a number of pretty good non-java producers. We have been
> >>> lacking the features on the server-side to make writing non-java
> >>> consumers easy. We are fixing that right now as part of the consumer
> >>> work going on right now (which moves a lot of the functionality in the
> >>> java consumer to the server side).
> >>>
> >>> But apart from this I think there may be a lot more we can do to make
> >>> the client ecosystem better.
> >>>
> >>> Here are some concrete ideas. If anyone has additional ideas please
> >>> reply to this thread and share them. If you are interested in picking
> >>> any of these up, please do.
> >>>
> >>> 1. The most obvious way to improve the ecosystem is to help work on
> >>> clients. This doesn't necessarily mean writing new clients, since in
> >>> many cases we already have a client in a given language. I think any
> >>> way we can incentivize fewer, better clients rather than many
> >>> half-working clients we should do. However we are working now on the
> >>> server-side consumer co-ordination so it should now be possible to
> >>> write much simpler consumers.
> >>>
> >>> 2. It would be great if someone put together a mailing list just for
> >>> client developers to share tips, tricks, problems, and so on. We can
> >>> make sure all the main contributors on this too. I think this could be
> >>> a forum for kind of directing improvements in this area.
> >>>
> >>> 3. Help improve the documentation on how to implement a client. We
> >>> have tried to make the protocol spec not just a dry document but also
> >>> have it share best practices, rationale, and intentions. I think this
> >>> could potentially be even better as there is really a range of options
> >>> from a very simple quick implementation to a more complex highly
> >>> optimized version. It would be good to really document some of the
> >>> options and tradeoffs.

Re: Improving the Kafka client ecosystem

2014-07-19 Thread Jay Kreps
Hey Philip,

Yeah I think we have actually done pretty good at getting reasonably
solid clients in a bunch of languages. I just think it is an important
area.

The architecture design patterns idea is fantastic. That would be a
great thing to do.

-Jay



On Fri, Jul 18, 2014 at 11:46 PM, Philip O'Toole
 wrote:
> Thanks Jay -- some good ideas there.
>
> I agree strongly that fewer, more solid, non-Java clients are better than 
> many shallow ones. Interesting that you feel we could do some more work in 
> this area, as I thought it was well served (even if they have proliferated).
>
> One area I would like to see documented better -- and I am considering doing it 
> myself -- is a collection of Kafka "Architectural Design Patterns", all in one 
> place. For example: how to use Kafka to build a staging and test environment 
> (tapping the production flow in a non-destructive manner), how to build 
> robust pipelines to read from and write to, say, Apache Storm, how to deploy a 
> cluster in EC2 (the interaction with Availability Zones), topic vs. partition 
> demuxing, etc. I've yet to see a nice consolidation of this information 
> -- it would not really be about coding, but system design. Ideally it would 
> be reviewed by you committers, but someone else would do the work.
>
> Philip
>
>
> ---
> www.philipotoole.com
>
>
>
> On Friday, July 18, 2014 3:58 PM, Jay Kreps  wrote:
>
>
>
> Basically my thought with getting a separate mailing list was to have
> a place specifically to discuss issues around clients. I don't see a
> lot of discussion about them on the main list. I thought perhaps this
> was because people don't like to ask questions which are about
> adjacent projects/code bases. But basically whatever will lead to a
> robust discussion, bug tracking, etc on clients.
>
> -Jay
>
>
> On Fri, Jul 18, 2014 at 3:49 PM, Jun Rao  wrote:
>> Another important part of eco-system could be around the adaptors of
>> getting data from other systems into Kafka and vice versa. So, for the
>> ingestion part, this can include things like getting data from mysql,
>> syslog, apache server log, etc. For the egress part, this can include
>> putting Kafka data into HDFS, S3, etc.
>>
>> Will a separate mailing list be convenient? Could we just use the Kafka
>> mailing list?
>>
>> Thanks,
>>
>> Jun
>>
>>
>> On Fri, Jul 18, 2014 at 2:34 PM, Jay Kreps  wrote:
>>
>>> A question was asked in another thread about what was an effective way
>>> to contribute to the Kafka project for people who weren't very
>>> enthusiastic about writing Java/Scala code.
>>>
>>> I wanted to kind of advocate for an area I think is really important
>>> and not as good as it could be--the client ecosystem. I think our goal
>>> is to make Kafka effective as a general purpose, centralized, data
>>> subscription system. This vision only really works if all your
>>> applications are able to integrate easily, whatever language they are
>>> in.
>>>
>>> We have a number of pretty good non-java producers. We have been
>>> lacking the features on the server-side to make writing non-java
>>> consumers easy. We are fixing that right now as part of the consumer
>>> work going on right now (which moves a lot of the functionality in the
>>> java consumer to the server side).
>>>
>>> But apart from this I think there may be a lot more we can do to make
>>> the client ecosystem better.
>>>
>>> Here are some concrete ideas. If anyone has additional ideas please
>>> reply to this thread and share them. If you are interested in picking
>>> any of these up, please do.
>>>
>>> 1. The most obvious way to improve the ecosystem is to help work on
>>> clients. This doesn't necessarily mean writing new clients, since in
>>> many cases we already have a client in a given language. I think any
>>> way we can incentivize fewer, better clients rather than many
>>> half-working clients we should do. However we are working now on the
>>> server-side consumer co-ordination so it should now be possible to
>>> write much simpler consumers.
>>>
>>> 2. It would be great if someone put together a mailing list just for
>>> client developers to share tips, tricks, problems, and so on. We can
>>> make sure all the main contributors on this too. I think this could be
>>> a forum for kind of directing improvements in this area.
>>>
>>> 3. Help improve the documentation on how to implement a client. We
>>> have tried to make the protocol spec not just a dry document but also
>>> have it share best practices, rationale, and intentions. I think this
>>> could potentially be even better as there is really a range of options
>>> from a very simple quick implementation to a more complex highly
>>> optimized version. It would be good to really document some of the
>>> options and tradeoffs.
>>>
>>> 4. Come up with a standard way of documenting the features of clients.
>>> In an ideal world it would be possible to get the same information
>>> (author, language, feature set, download link, source code, etc) for
>>> all clients.

Re: Performance/Stress tools

2014-07-19 Thread Steve Morin
Otis,
  Yes, this would work for Kafka, because it only uses Yarn to launch
containers that generate load for performance testing.  It also works in
standalone mode to run on a single machine.  We are currently updating it to
make it easier to use and upgrading its ability to collect statistics in
distributed mode.

The load tests are configurable using Javascript.

-Steve


On Sat, Jul 19, 2014 at 9:48 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:

> Would this work for Kafka, Steve, considering Kafka doesn't use Yarn?
>
> Dayo - we developed https://github.com/sematext/ActionGenerator to feed
> events to things like Elasticsearch, Solr, and MongoDB.  We'd gladly take a
> pull request for Kafka.  SPM for Kafka
> <
> http://blog.sematext.com/2013/10/16/announcement-spm-performance-monitoring-for-kafka/
> >
> may be handy for observing various Kafka metrics while you run your
> performance tests.
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
>
> On Wed, Jul 16, 2014 at 1:07 PM, Steve Morin  wrote:
>
> > We are working on this Yarn-based tool, but it's still in alpha
> > https://github.com/DemandCube/DemandSpike
> >
> >
> > On Wed, Jul 16, 2014 at 11:30 AM, Dayo Oliyide 
> > wrote:
> >
> > > Hi,
> > >
> > > I'm setting up a Kafka Cluster and would like to carry out some
> > > performance/stress tests on different configurations.
> > >
> > > Other than the performance testing scripts that come with Kafka, are
> > there
> > > any other tools that anyone would recommend?
> > >
> > > Regards,
> > > Dayo
> > >
> >
>


Re: New Consumer Design

2014-07-19 Thread Robert Withers
Lock is a bad way to say it; a barrier is better.  I don't think what I am 
saying is even a barrier, since the rebalance would just need to recompute a 
rebalance schedule and submit it.  The only processing delay is to allow a soft 
remove to let the client clean up, before you turn on the new guy, so it lags a 
bit.  Do you think this could work?

Thanks,
Rob
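The "rebalance as rotation" picture in the quoted thread below can be made concrete with a 0/1 assignment matrix: rows are consumers, columns are partitions, and a rebalance is a matrix update that closes a partition dimension for one consumer and opens it for another, one partition per step.  A minimal sketch (the representation and function names are illustrative, not Kafka internals):

```python
# Consumer-partition assignment as a 0/1 matrix: row i = consumer i,
# column j = partition j.  Illustrative sketch, not Kafka internals.
def rebalance(assign, partition, from_c, to_c):
    """Move one partition between consumers: close the dimension for
    from_c and open it for to_c, one partition per step ("creep load")."""
    assert assign[from_c][partition] == 1 and assign[to_c][partition] == 0
    new = [row[:] for row in assign]     # copy; the update is pure
    new[from_c][partition] = 0           # soft-remove: old owner cleans up
    new[to_c][partition] = 1             # then the new owner is turned on
    return new

def open_dimensions(assign, consumer):
    """Partitions visible in this consumer's frame of reference."""
    return [j for j, v in enumerate(assign[consumer]) if v == 1]

# Two consumers, four partitions.
A = [[1, 1, 0, 0],
     [0, 0, 1, 1]]
A2 = rebalance(A, partition=1, from_c=0, to_c=1)
```

In this picture the offset commit before rotation corresponds to reading the old row's coordinate one last time before the 1 moves to the new row.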

> On Jul 18, 2014, at 7:22 PM, Robert Withers  
> wrote:
> 
> Hi Guozhang,
> 
> Thank you for considering my suggestions.  The security layer sounds like the 
> right facet to design for these sorts of capabilities.  Have you considered a 
> chained ocap security model for the broker using hash tokens?  This would 
> provide for per-partition read/write capabilities with QoS context including 
> leases, revocation, debug level and monitoring.  Overkill disappears as no 
> domain-specific info needs to be stored at the brokers, like 
> consumer/partition assignments.  The read ocap for consumer 7/topic 
> bingo/partition 131 could be revoked at the brokers for a partition and 
> subsequent reads would fail the fetch for requests with that ocap token.  You 
> could also dynamically change the log level for a specific consumer/partition.
> 
> There are advantages we could discuss to having finer grained control.  
> Consider that scheduled partition rebalancing could be implemented with no 
> pauses from the perspective of the consumer threads; it looks like single 
> partition lag, as the offset commit occurs before rotation, with no lag to 
> non-rebalanced partitions: rebalance 1 partition per second so as to creep 
> load to a newbie consumer.  It would eliminate a global read lock and even 
> the internal Kafka consumer would never block on IO protocol other than the 
> normal fetch request (and the initial join group request). 
> 
> A global lock acquired through a pull protocol (HeartbeatRequest followed by 
> a JoinGroupRequest) for all live consumers is a much bigger lock than 
> security-based push protocol, as I assume the coordinator will have open 
> sockets to all brokers in order to reach out as needed.  As well, each lock 
> would be independent between partitions and be with only those brokers in ISR 
> for a given partition.  It is a much smaller lock.
> 
> I had some time to consider my suggestion that it be viewed as a relativistic 
> frame of reference.  Consider the model where each dimension of the frame of 
> reference for each consumer is each partition, actually a sub-space with the 
> dimensionality of the replication factor, but with a single leader election, 
> so consider it 1 dimension.  The total dimensionality of the consumer's frame 
> of reference is the number of partitions, but only assigned partitions are 
> open to a given consumer's viewpoint.  The offset is the partition dimension's 
> coordinate and only consumers with an open dimension can translate the 
> offset.  A rebalance opens or closes a dimension for a given consumer and can 
> be viewed as a rotation.  Could Kafka consumption and rebalance (and ISR 
> leader election) be reduced to matrix operations?
> 
> Rob
> 
>> On Jul 18, 2014, at 12:08 PM, Guozhang Wang  wrote:
>> 
>> Hi Rob,
>> 
>> Sorry for the late reply.
>> 
>> If I understand your approach correctly, it requires all brokers to
>> remember the partition assignment of each consumer in order to decide
>> whether or not to authorize the fetch request, correct? If we are indeed
>> going to do such authorization for the security project then maybe it is a
>> good way to go, but otherwise might be an overkill to just support finer
>> grained partition assignment. In addition, instead of requiring a round
>> trip between the coordinator and the consumers for the synchronization
>> barrier, now the coordinator needs to wait for a round trip between itself
>> and other brokers before it can return the join-group request, right?
>> 
>> Guozhang
>> 
>> 
>> On Wed, Jul 16, 2014 at 10:27 AM, Rob Withers 
>> wrote:
>> 
>>> Hi Guozhang,
>>> 
>>> 
>>> 
>>> Currently, the brokers do not know which high-level consumers are reading
>>> which partitions and it is the rebalance between the consumers and the
>>> coordinator which would authorize a consumer to fetch a particular
>>> partition, I think.  Does this mean that when a rebalance occurs, all
>>> consumers must send a JoinGroupRequest and that the coordinator will not
>>> respond to any consumers until all consumers have sent the
>>> JoinGroupRequest, to enable the synchronization barrier?  That has the
>>> potential to be a sizable global delay.
>>> 
>>> 
>>> 
>>> On the assumption that there is only one coordinator for a group, why
>>> couldn't the synchronization barrier be per partition and internal to kafka
>>> and mostly not involve the consumers, other than a chance for offsetCommit
>>> by the consumer losing a partition?   If the brokers have session state and
>>> know the new assignment before the consumer is notified with a
>>> HeartbeatRespo

Re: Performance/Stress tools

2014-07-19 Thread Otis Gospodnetic
Would this work for Kafka, Steve, considering Kafka doesn't use Yarn?

Dayo - we developed https://github.com/sematext/ActionGenerator to feed
events to things like Elasticsearch, Solr, and MongoDB.  We'd gladly take a
pull request for Kafka.  SPM for Kafka

may be handy for observing various Kafka metrics while you run your
performance tests.

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/




On Wed, Jul 16, 2014 at 1:07 PM, Steve Morin  wrote:

> We are working on this Yarn-based tool, but it's still in alpha
> https://github.com/DemandCube/DemandSpike
>
>
> On Wed, Jul 16, 2014 at 11:30 AM, Dayo Oliyide 
> wrote:
>
> > Hi,
> >
> > I'm setting up a Kafka Cluster and would like to carry out some
> > performance/stress tests on different configurations.
> >
> > Other than the performance testing scripts that come with Kafka, are
> there
> > any other tools that anyone would recommend?
> >
> > Regards,
> > Dayo
> >
>


Switch Apache logo URL to HTTPS

2014-07-19 Thread Marcin Zajączkowski
Hi,

Reading the Kafka webpage/documentation over HTTPS (e.g.
https://kafka.apache.org/documentation.html) I spotted that it generates a
warning (in Firefox and Chromium) about a reference to unencrypted
resources. In fact, the Apache logo is always loaded over HTTP -
http://www.apache.org/images/feather-small.png

As that image is also available via HTTPS, I think it would be painless to
switch the "src" in the "img" tag in
https://svn.apache.org/repos/asf/kafka/site/includes/footer.html to
HTTPS, or to use the double-slash (protocol-relative) syntax, which was defined
in the 15+ year-old RFC 1808 and should be supported by recent browsers -
http://stackoverflow.com/a/9632363/313516

Marcin

-- 
http://blog.solidsoft.info/ - Working code is not enough


Re: Improving the Kafka client ecosystem

2014-07-19 Thread Timothy Chen
A certified client test suite really will benefit all the client developers: 
writing a Kafka client is often not just about talking the protocol, but about 
correctly handling all the cases, errors, and situations, and about performance 
as well.

From my experience writing a C# client, I definitely feel that a lot of test 
scenarios could be generalized and used for all clients.

I was reviewing some other client implementations and there are errors and cases 
they didn't handle. Having a suite that exposes those would let users avoid 
running into the same problems and having to work out whether it's a client or a 
server bug, which is sometimes hard to figure out.

Tim

> On Jul 18, 2014, at 3:57 PM, Jay Kreps  wrote:
> 
> Basically my thought with getting a separate mailing list was to have
> a place specifically to discuss issues around clients. I don't see a
> lot of discussion about them on the main list. I thought perhaps this
> was because people don't like to ask questions which are about
> adjacent projects/code bases. But basically whatever will lead to a
> robust discussion, bug tracking, etc on clients.
> 
> -Jay
> 
>> On Fri, Jul 18, 2014 at 3:49 PM, Jun Rao  wrote:
>> Another important part of eco-system could be around the adaptors of
>> getting data from other systems into Kafka and vice versa. So, for the
>> ingestion part, this can include things like getting data from mysql,
>> syslog, apache server log, etc. For the egress part, this can include
>> putting Kafka data into HDFS, S3, etc.
>> 
>> Will a separate mailing list be convenient? Could we just use the Kafka
>> mailing list?
>> 
>> Thanks,
>> 
>> Jun
>> 
>> 
>>> On Fri, Jul 18, 2014 at 2:34 PM, Jay Kreps  wrote:
>>> 
>>> A question was asked in another thread about what was an effective way
>>> to contribute to the Kafka project for people who weren't very
>>> enthusiastic about writing Java/Scala code.
>>> 
>>> I wanted to kind of advocate for an area I think is really important
>>> and not as good as it could be--the client ecosystem. I think our goal
>>> is to make Kafka effective as a general purpose, centralized, data
>>> subscription system. This vision only really works if all your
>>> applications are able to integrate easily, whatever language they are
>>> in.
>>> 
>>> We have a number of pretty good non-java producers. We have been
>>> lacking the features on the server-side to make writing non-java
>>> consumers easy. We are fixing that right now as part of the consumer
>>> work going on right now (which moves a lot of the functionality in the
>>> java consumer to the server side).
>>> 
>>> But apart from this I think there may be a lot more we can do to make
>>> the client ecosystem better.
>>> 
>>> Here are some concrete ideas. If anyone has additional ideas please
>>> reply to this thread and share them. If you are interested in picking
>>> any of these up, please do.
>>> 
>>> 1. The most obvious way to improve the ecosystem is to help work on
>>> clients. This doesn't necessarily mean writing new clients, since in
>>> many cases we already have a client in a given language. I think any
>>> way we can incentivize fewer, better clients rather than many
>>> half-working clients we should do. However we are working now on the
>>> server-side consumer co-ordination so it should now be possible to
>>> write much simpler consumers.
>>> 
>>> 2. It would be great if someone put together a mailing list just for
>>> client developers to share tips, tricks, problems, and so on. We can
>>> make sure all the main contributors on this too. I think this could be
>>> a forum for kind of directing improvements in this area.
>>> 
>>> 3. Help improve the documentation on how to implement a client. We
>>> have tried to make the protocol spec not just a dry document but also
>>> have it share best practices, rationale, and intentions. I think this
>>> could potentially be even better as there is really a range of options
>>> from a very simple quick implementation to a more complex highly
>>> optimized version. It would be good to really document some of the
>>> options and tradeoffs.
>>> 
>>> 4. Come up with a standard way of documenting the features of clients.
>>> In an ideal world it would be possible to get the same information
>>> (author, language, feature set, download link, source code, etc) for
>>> all clients. It would be great to standardize the documentation for
>>> the client as well. For example having one or two basic examples that
>>> are repeated for every client in a standardized way. This would let
>>> someone come to the Kafka site who is not a java developer, and click
>>> on the link for their language and view examples of interacting with
>>> Kafka in the language they know using the client they would eventually
>>> use.
>>> 
>>> 5. Build a Kafka Client Compatibility Kit (KCCK) :-) The idea is this:
>>> anyone who wants to implement a client would implement a simple
>>> command line program with a set of standardized options. The
>>> compatibil