Re: Need some clarification of Kafka and MQTT

2019-10-08 Thread Svante Karlsson
No, you need something that can speak MQTT and send that data to Kafka
(and possibly the other way around). There are many such alternatives, but
Kafka itself knows nothing about MQTT.
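
For illustration, a minimal MQTT-to-Kafka bridge in Python could look like the
sketch below (paho-mqtt 1.x style and kafka-python are just one possible
choice; hosts, ports and topic names are placeholders):

# forward every MQTT message to a Kafka topic, keyed by the MQTT topic name
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

def on_message(client, userdata, msg):
    producer.send("iot-events",
                  key=msg.topic.encode("utf-8"),
                  value=msg.payload)

mqtt_client = mqtt.Client()
mqtt_client.on_message = on_message
mqtt_client.connect("localhost", 1883)
mqtt_client.subscribe("sensors/#")
mqtt_client.loop_forever()

The same loop in reverse (a KafkaConsumer publishing to MQTT) covers the other
direction.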

On Tue, 8 Oct 2019 at 18:01, Sarvesh Gupta wrote:

> Hi,
>
> We are building an IoT-driven solution for industries. I am very new to
> Kafka and I need some clarification, so please help me out with the doubts
> given below.
> We have a use case where we want to connect Kafka with MQTT as the source
> of incoming data, but I am a little bit confused about this. I saw lots of
> blogs and videos where people connect Kafka with MQTT using Confluent,
> Lenses and other third-party libraries. So my question is: can't we connect
> Kafka with MQTT directly, without using any third-party dependencies?
> I would also like to know what the ways to connect Kafka and MQTT are. If
> there is any way to connect Apache Kafka and MQTT directly, without using
> Confluent or other platforms, then please give me some idea about that
> method. We are also using Python for our product, so if there is a Pythonic
> way to do this, please let me know.
>
>
> Thanks and Regards,
> Sarvesh Gupta
>
>


Re: server-side message filter

2019-08-28 Thread Svante Karlsson
The other problem is that Kafka has no idea of the serdes you use for your
messages, i.e. it only sees the key and value as bytes, so it would be
difficult to implement generic filtering in the broker.

For your second problem, it seems that creating separate topics for the
respective destinations solves it.
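
As a rough sketch of that per-destination approach, a small relay that knows
the serde can do the filtering on the client side and republish into a
per-destination topic (JSON values, topic names, group id and the predicate
are all assumptions):

import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "events",                                  # shared source topic
    bootstrap_servers="localhost:9092",
    group_id="dest-a-filter",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    # only the client knows the message format, so the filter lives here
    if record.value.get("destination") == "a":
        producer.send("events-dest-a", value=record.value)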

Another option would be to implement your own filtering proxy with some
other API (say streaming gRPC) that has knowledge of your data, and run it
close to your brokers. Zero-copy is out of the question and you get the
full traffic from the brokers to the filtering proxy. We did this, but for a
multi-tenant use case where we did not want to expose Kafka externally but
still needed streaming.

/svante



Re: Backup and Restore of kafka broker with zookeeper

2019-05-22 Thread Svante Karlsson
You have to get a consistent zookeeper cluster before adding your Kafka
nodes - they will "mirror" the state in zookeeper.

So restore all the files before starting anything. Then start the zookeepers
and wait for the ensemble to stabilize. Finally, start the Kafka nodes (with
the data in place, of course).
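
If it helps, a small sanity check after the restore (using the kazoo package;
host names are placeholders) that the ensemble is reachable and the brokers
have re-registered before you point any clients at the cluster:

from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()                       # blocks until connected to the ensemble

# every restored broker id should show up again once the Kafka nodes start
broker_ids = zk.get_children("/brokers/ids")
print("registered brokers:", sorted(broker_ids))
zk.stop()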



On Wed, 22 May 2019 at 10:08, Srinivas, Kaushik (Nokia - IN/Bangalore) wrote:

> Hi All,
>
> We are trying to take a backup of the Kafka + zookeeper data and restore it
> for one of our application's use cases.
>
> We are taking the steps below:
> 1. Take a backup of the contents of the data and log directories for Kafka
> and zookeeper.
> 2. Delete the Kafka zookeepers and brokers.
> 3. Delete the data and log files.
> 4. Install Kafka and the zookeepers again with the same broker IDs.
> 5. Restore the data and log files to the same directories.
> // Observation here: old topics are not listed after the broker is up
> and running.
> 7. Restarted Kafka and zookeeper.
> // Observation: topics get listed, but the log files present in the broker
> log directory are empty.
> Consumers do not work.
>
> We see this working in a single-node Kafka cluster, but the same scenario
> does not work with a multi-node Kafka cluster.
> One observation with respect to partition leaders is that, after the
> restore and restart, the leaders of the partitions have changed.
>
> Is it because of this that the restore is not working, and do we have to
> reassign partitions and restore the old partition assignment for each topic?
> Is there any better or recommended way to back up and restore a
> Kafka + zookeeper cluster?
>
> Any information on this front would be very useful.
>
> Thanks in advance,
> Kaushik.
>
>


Re: Regarding : Store stream for infinite time

2018-01-23 Thread Svante Karlsson
Yes, it will store the last value for each key.
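
For reference, a compacted topic can be created with kafka-python's admin
client like this (topic name, partition count and replication factor are just
examples):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([NewTopic(
    name="entity-state",
    num_partitions=6,
    replication_factor=3,
    topic_configs={"cleanup.policy": "compact"},   # keep the last value per key
)])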

2018-01-23 18:30 GMT+01:00 Aman Rastogi :

> Hi All,
>
> We have a use case where we need to store a stream for an infinite time
> (given we have enough storage).
>
> We are planning to solve this with log compaction. If each message key is
> unique and log compaction is enabled, it will store the whole stream for an
> infinite time. I just wanted to check whether my assumption is correct and
> whether this is an appropriate way to solve it.
>
> Thanks in advance.
>
> Regards,
> Aman
>


Re: Using JMXMP to access Kafka metrics

2017-07-19 Thread Svante Karlsson
I've used Jolokia, which exposes JMX metrics without RMI (as JSON over
HTTP):
https://jolokia.org/

It integrates nicely with Telegraf (and InfluxDB).
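
Once the agent is attached you can read any Kafka MBean over plain HTTP, for
example (assuming the default Jolokia agent port 8778; the MBean and attribute
names are just an example):

import requests

url = ("http://broker-host:8778/jolokia/read/"
       "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec/OneMinuteRate")
print(requests.get(url).json()["value"])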

2017-07-19 20:47 GMT+02:00 Vijay Prakash <
vijay.prak...@microsoft.com.invalid>:

> Hey,
>
> Is there a way to use JMXMP instead of RMI to access Kafka metrics through
> JMX? I tried creating a JMXMP JMXConnector but the connect attempt just
> hangs forever.
>
> Thanks,
> Vijay
>


Re: zookeeper usage in kafka offset commit / fetch requests

2015-03-30 Thread svante karlsson
Just to close this,

I found out that the broker handles the request differently depending on
whether you specify API version 0 or 1.

If you use V1, the broker commits the offsets to the internal topic.
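
In other words, the behaviour is selected purely by the api_version field in
the request header (api_key 8 = OffsetCommit). A sketch of the header layout,
for illustration (the client_id is a placeholder):

import struct

def offset_commit_header(api_version, correlation_id, client_id="csi-kafka"):
    # request header: api_key INT16, api_version INT16,
    # correlation_id INT32, client_id STRING (INT16 length + bytes).
    # api_version 0 -> offsets stored in zookeeper,
    # api_version 1 -> offsets stored in the __consumer_offsets topic.
    cid = client_id.encode("utf-8")
    return struct.pack(">hhih", 8, api_version, correlation_id, len(cid)) + cid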

/svante


Re: C++ Client Library -- libkafka-asio

2015-03-23 Thread svante karlsson
@Ewen added license.txt (boost v1.0)

thanks
svante



2015-03-24 2:15 GMT+01:00 Ewen Cheslack-Postava :

> You don't get edit permission by default; you need to get one of the admins
> to give you permission.
>
> @Daniel, I've added libkafka-asio.
>
> @svante I started to add csi-kafka, but couldn't find a license?
>
>
> On Sun, Mar 22, 2015 at 8:29 AM, svante karlsson  wrote:
>
> > Cool, looks nice. I was looking for something similar a year ago. We also
> > ended up rolling our own. https://github.com/bitbouncer/csi-kafka
> >
> >
> > Have you got any performance figures?
> >
> > /svante
> >
> > 2015-03-22 14:29 GMT+01:00 Daniel Joos :
> >
> > > Hello there,
> > >
> > > I'm currently working on a C++ client library, implementing the Kafka
> > > protocol using Boost ASIO.
> > > You can find the source code and some examples on github:
> > > https://github.com/danieljoos/libkafka-asio
> > >
> > > I tried to add it to the "Clients" section of the Kafka wiki, but either
> > > I'm too blind to see the "Edit" button, or I just don't have enough
> > > permissions to edit the page ;-)
> > > In case you like the library, it would be very nice if someone with
> > > sufficient permissions for the wiki could add it there.
> > >
> > > Thanks.
> > > Best regards,
> > >
> > > Daniel
> > >
> > >
> >
>
>
>
> --
> Thanks,
> Ewen
>


zookeeper usage in kafka offset commit / fetch requests

2015-03-23 Thread svante karlsson
I'm using kafka 0.8.2.0

I'm working on a C++ client library and I'm adding consumer offset
management to the client. (https://github.com/bitbouncer/csi-kafka)

I know that the creation of zookeeper "paths" is not handled by the Kafka
broker, so I've manually created

/consumers/consumer_offset_sample/offsets

in zookeeper using a command line utility.
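
For reference, the kazoo equivalent of that manual step, and of reading the
committed value back afterwards (the zookeeper address is an assumption),
would be something like:

from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()
zk.ensure_path("/consumers/consumer_offset_sample/offsets")

# after the OffsetCommitRequest has been sent to the broker:
data, _ = zk.get("/consumers/consumer_offset_sample/offsets/perf-8-new/0")
print(data)        # the committed offset, stored as a plain string
zk.stop()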

After this I'm able to get consumer metadata from Kafka.

If I commit a consumer offset to an existing topic/partition
("perf-8-new/0"), I see the following paths in zookeeper:

/consumers/consumer_offset_sample
offsets
offsets/perf-8-new
offsets/perf-8-new/0
owners

I'm surprised as to why the committed values show up in zookeeper, since I
have no bindings to a zookeeper and the "offset.storage" property is a
consumer config. My initial understanding was that they were only written
to the __consumer_offsets topic.

Finally, if I change the previously committed value manually in zookeeper,

/consumers/consumer_offset_sample/offsets/perf-8-new/0  -> 42

then that's what I get back from my get_consumer_offset() as well - so it
seems zookeeper is involved in the offset commits/fetches from the broker's
point of view.

Is this the intended way or am I doing something wrong?

best regards
svante


Re: C++ Client Library -- libkafka-asio

2015-03-22 Thread svante karlsson
Cool, looks nice. I was looking for something similar a year ago. We also
ended up rolling our own. https://github.com/bitbouncer/csi-kafka


Have you got any performance figures?

/svante

2015-03-22 14:29 GMT+01:00 Daniel Joos :

> Hello there,
>
> I'm currently working on a C++ client library, implementing the Kafka
> protocol using Boost ASIO.
> You can find the source code and some examples on github:
> https://github.com/danieljoos/libkafka-asio
>
> I tried to add it to the "Clients" section of the Kafka wiki, but either
> I'm too blind to see the "Edit" button, or I just don't have enough
> permissions to edit the page ;-)
> In case you like the library, it would be very nice if someone with
> sufficient permissions for the wiki could add it there.
>
> Thanks.
> Best regards,
>
> Daniel
>
>


Re: decoding of responses with error_codes

2014-05-12 Thread svante karlsson
Thanks for the clarification.

-if (errorCode == ErrorMapping.NoError) {
-  coordinator.get.writeTo(buffer)

The removal of the "if" statement would fix the issue, but the patch
changes other things that I can't say much about, as I don't know Scala.
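
For anyone decoding this from a client, a small sketch of the v0 body
(error_code INT16, then node_id INT32, host STRING, port INT32) that stops
early on error, matching the current broker behaviour:

import struct

def decode_consumer_metadata_response(buf):
    (error_code,) = struct.unpack_from(">h", buf, 0)
    pos = 2
    if error_code != 0:
        # the broker only writes the coordinator fields on NoError,
        # so there is nothing more to decode here
        return {"error_code": error_code}
    (node_id,) = struct.unpack_from(">i", buf, pos); pos += 4
    (host_len,) = struct.unpack_from(">h", buf, pos); pos += 2
    host = buf[pos:pos + host_len].decode("utf-8"); pos += host_len
    (port,) = struct.unpack_from(">i", buf, pos)
    return {"error_code": error_code, "node_id": node_id,
            "host": host, "port": port}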

/svante

2014-05-12 17:25 GMT+02:00 Guozhang Wang :

> Hi Svante,
>
> This is indeed an issue with the protocol definition of
> ConsumerMetadataResponse. Do you think this issue is fixed in the following
> JIRA?
>
> https://issues.apache.org/jira/browse/KAFKA-1437
>
> Guozhang
>
>
> On Mon, May 12, 2014 at 1:10 AM, svante karlsson  wrote:
>
> > I'm writing (yet another) C++ binding for Kafka and I'm curious about the
> > encoding when error_code != 0.
> >
> > There seems to be a discrepancy as to how to decode messages in the
> > presence of errors.
> >
> > ConsumerMetadataResponse: error_code != 0 -> no more data should be
> > decoded.
> >
> > In all others we continue parsing the rest of the message. Is this
> > assumption correct?
> >
> > i.e.
> >
> > ProduceResponse:     offset should always be decoded
> >
> > FetchResponse:       highwater_mark_offset, message_set_size and the
> > corresponding message set should be decoded (most likely 0 size)
> >
> > OffsetResponse:      offsets array should be decoded (guessing 0 size or
> > NULL)
> >
> > MetadataResponse:    topic_data::error_code and partition_data::error_code;
> > the rest of the message should be decoded.
> >
> >
> > /svante
> >
>
>
>
> --
> -- Guozhang
>


decoding of responses with error_codes

2014-05-12 Thread svante karlsson
I'm writing (yet another) C++ binding for Kafka and I'm curious about the
encoding when error_code != 0.

There seems to be a discrepancy as to how to decode messages in the presence
of errors.

ConsumerMetadataResponse: error_code != 0 -> no more data should be decoded.

In all others we continue parsing the rest of the message. Is this
assumption correct?

i.e.

ProduceResponse:     offset should always be decoded

FetchResponse:       highwater_mark_offset, message_set_size and the
corresponding message set should be decoded (most likely 0 size)

OffsetResponse:      offsets array should be decoded (guessing 0 size or
NULL)

MetadataResponse:    topic_data::error_code and partition_data::error_code;
the rest of the message should be decoded.


/svante