KAFKA-85 raised a good question: what's the right approach to supporting client
bindings for languages other than Java? I don't have a perfect answer and
would like to start a discussion and let everybody weigh in.

The approach that KAFKA-85 took is to rewrite all the logic in our fat
client (both the producer and the consumer) in C#. This means a lot of
code has to be rewritten and maintained, and it's a lot of work if every
language does the same thing.

There are two other approaches that some Apache projects have used to support
different language bindings. The first is to use an RPC code generator to
directly expose the API to other languages. For example, Cassandra uses
Thrift to define the client API and lets Thrift generate language-specific
client code to talk to the server. This approach works well for thin clients. In
Cassandra, the client only serializes/deserializes requests/responses,
and the complex routing logic is on the server. This approach may not work
well with Kafka since our client is relatively fat (there is a lot of code
handling both produce and consume requests in the client library).
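To make the first approach concrete, here's a rough sketch of what a Thrift-style interface definition might look like if the broker API were exposed directly. All names and types here are purely illustrative, not an actual proposal:

```thrift
// Hypothetical IDL sketch -- names and types are illustrative only.
struct Message {
  1: binary payload
}

service KafkaBroker {
  // Note: even with generated clients, the fat-client logic
  // (batching, partitioning, offset tracking) would still have to
  // be reimplemented on top of these raw calls in each language.
  void produce(1: string topic, 2: i32 partition, 3: list<Message> messages),
  list<Message> fetch(1: string topic, 2: i32 partition, 3: i64 offset, 4: i32 maxBytes)
}
```

The IDL only buys us serialization and transport; it doesn't eliminate the per-language fat-client work, which is the core of the concern above.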

The second approach is to have a gateway. For example, HBase also has a
relatively fat Java client. To support other languages, it exposes its API
indirectly through a Java gateway. The gateway API is compiled into different
languages using Thrift. The generated gateway client code is thin, and all
the complicated routing logic lives in the gateway itself. The downside is that
this adds the complexity of maintaining the gateway and adds one extra hop
between the client and the server. Setting these two concerns aside, this
approach probably works well for Kafka producers. However, it's not clear
how it would work for consumers, since they receive data continuously.
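As a sketch of the gateway idea, here is a minimal Java illustration. All class and method names are made up; the fat client is stubbed with an in-memory queue. Produce maps naturally onto a single request/response; consume has to be bent into repeated polls, which is exactly the awkward part for a continuous stream:

```java
// Hypothetical gateway sketch -- names are illustrative, not a real API.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class GatewaySketch {
    // Stand-in for the real fat Java client (which would handle
    // partitioning, broker discovery, offsets, etc.).
    static class FatClientStub {
        private final ConcurrentLinkedQueue<String> log = new ConcurrentLinkedQueue<>();
        void send(String topic, String message) { log.add(message); }
        List<String> fetch(String topic, int maxMessages) {
            List<String> out = new ArrayList<>();
            String m;
            while (out.size() < maxMessages && (m = log.poll()) != null) out.add(m);
            return out;
        }
    }

    // The thin surface a Thrift-generated client would call.
    // produce() is one round trip; consume() forces the continuous
    // stream into a poll-per-request model.
    static class Gateway {
        private final FatClientStub client = new FatClientStub();
        void produce(String topic, String message) { client.send(topic, message); }
        List<String> consume(String topic, int maxMessages) { return client.fetch(topic, maxMessages); }
    }

    public static void main(String[] args) {
        Gateway gw = new Gateway();
        gw.produce("events", "m1");
        gw.produce("events", "m2");
        System.out.println(gw.consume("events", 10)); // prints [m1, m2]
    }
}
```

The producer side looks fine here; for consumers, each non-Java client would have to poll the gateway in a loop (or the gateway would need some push/streaming mechanism), which is the open question.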

Does anyone know how other queuing systems (ActiveMQ, RabbitMQ, etc.) support
non-Java clients?

Thanks,

Jun
