I'm not sure what you mean by "not using stream topology". What does that
mean to you that you'd rather avoid?
You can indeed use KSQL to define streams and tables that process
data from a number of topics. However, I think you may have the
misimpression that KSQL is designed so you can
Hello,
I am quite new to KSQL, so apologies if I have misunderstood its concepts.
I have a list of topics that I want to search for data in. I am not using
stream processing, just plain topics with data retained for 14 days. All I
want to do is search for data in a SQL-like way as long as it's within
The kafka-avro-console-producer is only in Confluent Kafka, but I am using
Apache Kafka.
It seems the Apache Kafka kafka-console-producer is not able to send
Avro-serialized data to Kafka,
but kafka-console-consumer can read Avro-serialized
data.
I have tried the schema example from
https://docs.confluent.io/3.0.0/quickstart.html
--
Miguel Silvestre
On Fri, May 8, 2020 at 10:53 AM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
I can consume avro serialized data from kafka like this:
bin/kafka-console-consumer.sh --bootstrap-server xxx:9092 --topic xxx
--property print.key=true --formatter
io.confluent.kafka.formatter.AvroMessageFormatter --property
schema.registry.url=http://xxx:8088
It is possible to send Avro
Amin
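The Avro serializer is a client-side concern, so it can be used against plain
Apache Kafka brokers too. A minimal sketch, assuming the Confluent
kafka-avro-serializer jar is on the classpath and a Schema Registry is
reachable (addresses, topic, and schema are placeholders):

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "xxx:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's serializer registers/looks up the schema in the registry.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://xxx:8088");

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Example\"," +
            "\"fields\":[{\"name\":\"f1\",\"type\":\"string\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("f1", "value1");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("xxx", record));
        }
    }
}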
Hello,
I'm new to Kafka and I'd like to write data from Kafka to a CSV file on a Mac.
Please advise.
Thank you & kindest regards, Doaa.
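There isn't a Mac-specific tool for this; one common approach is a small
consumer that appends each record to a CSV file. A minimal sketch with the
Java client (topic, file name, and CSV layout are assumptions):

import java.io.FileWriter;
import java.io.PrintWriter;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaToCsv {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "csv-export");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             PrintWriter out = new PrintWriter(new FileWriter("out.csv", true))) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // One line per record: key,value (quote/escape as needed).
                    out.println(rec.key() + "," + rec.value());
                }
                out.flush();
            }
        }
    }
}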
One more thing to note:
You are looking at regular, base NATS. On its own, it is not a direct 1:1
comparison to Kafka because it lacks things like data retention, clustering,
and replication. Instead, you would want to compare it to NATS Streaming,
That's a 4.5-year-old benchmark, and it was run with a single broker node and
only 1 producer and 1 consumer, all running on a single MacBook Pro. Definitely
not the target production environment for Kafka.
-hans
> On Mar 21, 2019, at 11:43 AM, M. Manna wrote:
Hi All,
https://nats.io/about/
This shows a general comparison of sender/receiver throughputs for NATS and
other messaging systems, including our favourite Kafka.
It appears that Kafka, despite taking 2nd place, has a very low
throughput. My question is, where does Kafka win over NATS? Is it
We're trying to use Kafka Connect to pull down data from Kafka, but we're
having issues with the Avro deserialization.
When we attempt to consume data using the kafka-avro-console-consumer, we
can consume it, and deserialize it correctly. Our command is similar to
the following:
./kafka-avro
If it is centralized in one place, it could be better to not use
mirror maker and instead have duplication of the consumer.
So, something looking more like a star schema; let me try some ASCII art:

Main DC :                  Data storage/processing DC :
Producer --> Kafka ------> Consumer --> Data storage
                      /->
Backup DC :          /
Producer --> Kafka -/      Consumer

If you have an outage on the main, th
Purging will never guarantee that data does not get replicated. There will
always be a case (a purge error, etc.) where it is still replicated. You may
reduce the probability, but it will never be impossible.
Your application should be able to handle duplicated messages.
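A minimal sketch of handling duplicates on the consumer side, assuming
messages carry a unique ID in the key (class, topic, and addresses here are
made up for illustration):

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DedupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "dedup-example");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Remembers keys we have already processed. In production this set
        // would need to be bounded and persisted (e.g. in a local store or DB).
        Set<String> seen = new HashSet<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("replicated-topic"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    if (!seen.add(rec.key())) {
                        continue; // duplicate delivered by mirroring or retries: skip it
                    }
                    System.out.println(rec.value()); // process the record here
                }
            }
        }
    }
}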
Hello,
We have cross data center replication. Using Kafka mirror maker, we are
replicating data from our primary cluster to a backup cluster. The problem
arises when we start operating from the backup cluster, in case of a drill or
an actual outage. Data gathered at the backup cluster needs to be
reverse-replicated
'max.in.flight.requests.per.connection' to 1 to still
guarantee ordering, right?
Best,
Claudia
-----Original Message-----
From: Matthias J. Sax <matth...@confluent.io>
Sent: Tuesday, 15 May 2018 22:58
To: users@kafka.apache.org
Subject: Re: Exception stops data processing (Kafka Streams)
Claudia,
A leader change is a retryable error. What is your producer config for
`retries`? You might want to increase it so that the producer does not
throw the exception immediately but retries a couple of times -- you might
also want to adjust `retry.backoff.ms`, which sets the time to wait until
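A minimal sketch of how those producer settings could be passed to a Kafka
Streams app via the producer prefix (the values are illustrative, not
recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class RetryConfigExample {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Forward producer-level settings from Streams to its internal producer.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);
        props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRY_BACKOFF_MS_CONFIG), 500);
        // Keep ordering guarantees while retrying.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION), 1);
        return props;
    }
}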
Hey there,
I've got a few Kafka Streams services which run smoothly most of the time.
Sometimes, however, some of them get an exception "Abort sending since an error
caught with a previous record" (see below for a full example). The streams
service hitting this exception just stops its work
Hi guys,
I would like to recommend the following post, discussing and comparing
techniques for loading data from Kafka to BigQuery.
https://medium.com/myheritage-engineering/kafka-to-bigquery-load-a-guide-for-streaming-billions-of-daily-events-cbbf31f4b737
Feedback is welcome.
Ofir Sharony
I hope this answers your question.
Thanks.
--Vahid
From: karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date: 06/22/2017 11:14 PM
Subject: Re: Deleting/Purging data from Kafka topics (Kafka 0.10)
Hi Vahid,
here is the output of the GetOffsetShell co
> - with `--time -2` you should get test:0:105
>
> Could you please advise whether you're seeing a different behavior?
>
> Thanks.
> --Vahid
From: "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: users@kafka.apache.org
Date: 06/22/2017 06:43 PM
Subject: Re: Deleting/Purging data from Kafka topics (Kafka 0.10)
Hi Karan,
I think the issue is
From: karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date: 06/22/2017 06:09 PM
Subject: Re: Deleting/Purging data from Kafka topics (Kafka 0.10)
Hi Vahid,
somehow, the changes suggested don't seem to be taking effect, and I don't
see the data being purged from the topic.
Here are the steps I followed
> For example, if this broker config value is much higher, then the broker
> doesn't delete old logs regularly enough.
>
> --Vahid
From: karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date: 06/22/2017 12:27 PM
Subject: Deleting/Purging data from Kafka topics (Kafka 0.10)
Hi All -
How do I go about deleting data from Kafka topics? I've Kafka 0.10
installed.
I tried setting the parameter of the topic as shown below ->
$KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter
--topic mmtopic6 --config retention.ms=1000
I was expecting to have the d
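On clusters newer than the 0.10 discussed here (brokers and clients 1.1+),
records can also be deleted explicitly through the AdminClient rather than by
shrinking retention. A hedged sketch (address and offset are illustrative):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class PurgeExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Delete everything before offset 105 in partition 0 of mmtopic6.
            Map<TopicPartition, RecordsToDelete> toDelete = Collections.singletonMap(
                new TopicPartition("mmtopic6", 0), RecordsToDelete.beforeOffset(105L));
            admin.deleteRecords(toDelete).all().get();
        }
    }
}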
Hi,
I would like to add that I use kafka-connect and schema-registry version
`3.2.1-6`.
Best regards,
Mina
Hi.
Is there any way that I can get the data into a Kafka topic in JSON format?
The source that I ingest the data from has the data in JSON format;
however, when I look at the data in the Kafka topic, schema and payload fields
are added and the data is not in JSON format.
I want to avoid implementing
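The extra schema/payload wrapper is typically added by Kafka Connect's
JsonConverter when schemas are enabled. A sketch of the worker (or
connector-level) settings that turn the wrapper off, assuming the JSON
converter is what's in use here:

# Emit plain JSON without the schema/payload envelope.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false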
From: Mireia [mailto:mireya.miguel.alva...@alumnos.upm.es]
Sent: Thursday, June 1, 2017 2:51 PM
To: users@kafka.apache.org
Subject: Android app produces data in Kafka
Hi.
I am going to do a project at the university in order to finish my master's in
IoT.
I need to know if it is possible to connect an Android
I'd use option 2 (Kafka Connect).
Advantages of #2:
- The code is decoupled from the processing code and easier to refactor in
the future. (same as #4)
- The runtime/uptime/scalability of your Kafka Streams app (processing) is
decoupled from the runtime/uptime/scalability of the data ingestion
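For option 2, a sink connector is often just configuration. A hedged sketch,
assuming the Confluent JDBC sink connector is installed (topic, table, and
connection details are placeholders):

name=my-db-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=streams-output-topic
connection.url=jdbc:postgresql://dbhost:5432/mydb
connection.user=myuser
connection.password=mypassword
auto.create=true
insert.mode=upsert
pk.mode=record_key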
Thanks Eno,
Yes, I am aware of that. It indeed looks like a very useful feature.
The result of the processing in Kafka Streams is only a small amount of
data that is required by our service.
Currently it makes more sense for us to update the remote database where we
have more data than our
Hi Shimi,
Could you tell us more about your scenario? Kafka Streams uses embedded
databases (RocksDB) to store its state, so often you don't need to write
anything to an external database and you can query your stream's state directly
from Streams. Have a look at this blog if that matches your
Hi Everyone,
I was wondering about writing data to a remote database.
I see 4 possible options:
1. Read from a topic and write to the database.
2. Use Kafka Connect.
3. Write from anywhere in Kafka Streams.
4. Register a CachedStateStore FlushListener that will send a batch of
Try Presto https://prestodb.io. It may solve your problem.
On Sat, 4 Mar 2017, 03:18 Milind Vaidya, <kava...@gmail.com> wrote:
> I have a 6 broker kafka setup.
>
> I have a retention period of 48 hrs.
>
> To debug if certain data has reached kafka or not I am using co
Hi All,
Is it possible to update Kafka topic data based on changes in data sources
using Python?
<https://www.quora.com/unanswered/Is-it-possible-to-update-kafka-topic-data-based-on-changes-in-data-sources-using-python>
Hello sir,
I want to send Mailchimp data to a Kafka broker (topic) using the producer API.
Could you please help me?
Dear all gurus,
I'm new to Kafka and I'm going to connect real-time data streaming from
power system supervision and control devices to Kafka via different
communication protocols, for example Modbus, DNP, or IEC61850, and then on to
a Storm processing system.
I'm wondering how I can get these data via Kafka, and I don't know whether
that's supported or not.
Any suggestion and hint are warmly welcomed!
Regards,
Long Tian
Randall,
Great question. Ideally you wouldn't need this type of state since it
should really be available in the source system. In your case, it might
actually make sense to be able to grab that information from the DB itself,
although that will also have issues if, for example, there have been
Rather than leave this thread so open ended, perhaps I can narrow down to what
I think is the best approach. These accumulations are really just additional
information from the source that don’t get written to the normal topics.
Instead, each change to the accumulated state can be emitted as
I’m creating a custom Kafka Connect source connector, and I’m running into a
situation for which Kafka Connect doesn’t seem to provide a solution out of the
box. I thought I’d first post to the users list in case I’m just missing a
feature that’s already there.
My connector’s SourceTask
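Where the state does belong in Connect itself, the framework's offset storage
is the intended hook. A hedged sketch of a SourceTask reading back its last
committed offset on start (the partition and offset key names are made-up
illustrations):

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.connect.source.SourceTask;

public abstract class MySourceTask extends SourceTask {
    // Hypothetical partition key identifying this task's source.
    private static final Map<String, String> SOURCE_PARTITION =
        Collections.singletonMap("source", "my-database");

    private long lastPosition = 0L;

    @Override
    public void start(Map<String, String> props) {
        // Connect persists the sourceOffset of every committed SourceRecord;
        // read it back here to resume where the task left off.
        Map<String, Object> offset =
            context.offsetStorageReader().offset(SOURCE_PARTITION);
        if (offset != null) {
            lastPosition = (Long) offset.get("position");
        }
    }
}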
Hi,
I am trying to send a message to a Kafka producer using encryption and
authentication. After creating the key and everything successfully, while
passing the value through the console I am getting this error:
ERROR Error when sending message to topic test with key: null, value: 2
bytes with error:
What is your server config?
> On 9 Dec 2015, at 18:21, Ritesh Sinha wrote:
Hi Ritesh,
Your config on both sides looks fine. There may be something wrong with your
truststore, although you should see exceptions in either the client or server
log files if that is the case.
As you appear to be running locally, try creating the JKS files using the shell
script included
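For reference, client-side SSL settings usually look something like the
following (paths and passwords are placeholders; assumes SSL without SASL),
passed to the console producer via --producer.config:

security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
# Only needed if the broker requires client authentication:
ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234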
After editing my server.properties, I didn't restart Kafka. That was
causing the issue. Silly mistake. Thanks a lot, Ben, for your replies.
On Thu, Dec 10, 2015 at 2:27 AM, Ben Stopford wrote:
lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        List<String> collect = rdd.collect();
        for (String data : collect) {
            try {
                // save data in the log.txt file
                Path filePath = Paths.get("log.txt");
                if (!Files.exists(filePath)) {
                    Files.createFile(filePath);
                }
                Files.write(filePath, (data + System.lineSeparator()).getBytes(),
                        StandardOpenOption.APPEND);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return null;
    }
});
Good evening. Can someone suggest an existing framework that allows one to
reliably load data from Kafka into a relational database like Oracle in real
time?
Thanks so much in advance,
Vadim
Hi folks,
I am new to both Kafka and Storm, and I have a problem getting KafkaSpout to
read data from Kafka in our three-node environment with Kafka 0.8.1.1 and
Storm 0.9.3.
What is working:
- I have a Kafka producer (a Java application) that generates random strings to
a topic, and I was able to run
Hi Kishore,
You can use the Kafka Producer API for this. You can find sample code on the
Kafka quick start page: http://kafka.apache.org/07/quickstart.html
Regards,
Nitin Kumar Sharma.
On Wed, Dec 10, 2014 at 2:14 AM, kishore kumar akishore...@gmail.com wrote:
Hi kafkars,
I want to write Java code to ingest CSV files into Kafka. Any help?
Thanks,
Kishore.
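A minimal sketch of one way to do it with the modern Java producer (file name,
topic, and address are placeholders; error handling kept short):

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CsvIngest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader reader = Files.newBufferedReader(Paths.get("data.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // One CSV row per Kafka record; parse columns downstream.
                producer.send(new ProducerRecord<>("csv-topic", line));
            }
        }
    }
}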
will over time garbage collect all messages with the same key,
except for the most recent message.
Thanks,
Jun
On Wed, Nov 5, 2014 at 10:15 AM, Ivan Balashov ibalas...@gmail.com wrote:
Hi,
It looks like it is a general practice to avoid storing data in Kafka
keys. Some examples of this: Camus and Secor both do not use keys. Even
such a swiss-army tool as kafkacat doesn't seem to have the ability to
display keys (although I might be wrong). Also, the console producer does
not display
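For what it's worth, producing keyed messages from the Java client is
straightforward; a small sketch (topic, key, and value are illustrative).
Note that log compaction, as described above, only deduplicates records that
carry keys:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key ("user-42") determines the partition and is what a
            // compacted topic deduplicates on.
            producer.send(new ProducerRecord<>("keyed-topic", "user-42", "latest state"));
        }
    }
}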
/display/KAFKA/Consumer+Group+Example
As far as data going in, you are going to find the most breadth of
difference, because a (the) goal is to get all of your data into Kafka,
creating a unified log:
http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about
Dear experts,
I'm new to Kafka and am doing some study around overall real-time data
integration architecture. What are the common ways of pushing data into
Kafka? Does anyone use an ESB or other tools to feed various message streams
into Kafka in real time / an event-driven fashion?
Thanks.
-HQ
side Storm is
consuming with a max speed of 1100 messages per second. It means Storm is
consuming messages 4 times slower than Kafka is producing them.
We are running these systems in production and I am a bit worried about
data loss. Kafka is pushing 35 million in 2 hours and Storm is taking 7-8
hours
handle much more data than the kafka boundary. :)
Best,
Siyuan
On Thu, Jun 19, 2014 at 4:30 PM, Shaikh Ahmed rnsr.sha...@gmail.com wrote:
Hi All,
Thanks for your valuable comments.
Sure, I will give Samza and DataTorrent a try.
Meanwhile, I am sharing a screenshot of the Storm UI. Please have
Hi,
Daily we download 28 million messages, and monthly it goes up to
800+ million.
We want to process this amount of data through our Kafka and Storm cluster
and would like to store it in an HBase cluster.
We are targeting to process one month of data in one day. Is it possible?
We have set up our cluster thinking that we can process a million messages
per second, as mentioned on the web. Unfortunately, we have
Hi All,
I am using kafka 0.8.
My producer configurations are as follows:
kafka8.bytearray.producer.type=sync
kafka8.producer.batch.num.messages=100
kafka8.producer.topic.metadata.refresh.interval.ms=60
kafka8.producer.retry.backoff.ms=100
You will need to configure request.required.acks properly. See
http://kafka.apache.org/documentation.html#producerconfigs for details.
Thanks,
Jun
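As a hedged illustration of Jun's point, in the 0.8-era producer this sits
alongside the other kafka8.* properties above (the right value depends on your
durability needs):

# 0 = no ack, 1 = leader ack, -1 = wait for all in-sync replicas
request.required.acks=1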
No. The Kafka broker stores the binary data as-is. The binary data may be
compressed if compression is enabled at the producer.
Thanks,
Jun
On Wed, May 8, 2013 at 5:57 AM, Sybrandy, Casey
casey.sybra...@six3systems.com wrote:
All,
Does the Kafka broker Base64 encode the messages? We are
That's what I would have assumed. And no, we're not using compression.
Thanks.