Hi all,
Is there a way to include a timestamp with each record when using Kafka's REST
proxy? The documentation does not show any examples and when I tried to use a
"timestamp" field, I got an "unknown field" error in response.
Any help would be greatly appreciated.
Thanks,
Sachin
These ideas are specific to Samza and ymmv in how they apply to other
processing frameworks, but we use a couple of custom tools to keep tabs on
processing lag:
- one is a produce/consume timestamp comparison tool which writes message
production timestamps out to ZooKeeper on a per-part
I am not 100% sure what Burrow does, but I would assume that it compares
committed offsets to end offsets (similar to
`bin/kafka-consumer-groups.sh`). This is a "global" view over all
consumers in the group. Compared to the consumer metric, it might report
a higher lag as it relies on consumer commit
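For reference, that committed-offset vs. end-offset comparison can also be
scripted against the AdminClient. A minimal sketch (bootstrap address and
group id are placeholders, error handling omitted):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LagCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    try (AdminClient admin = AdminClient.create(props)) {
      // committed offsets for the group, as stored by the brokers
      Map<TopicPartition, OffsetAndMetadata> committed =
          admin.listConsumerGroupOffsets("my-group")
               .partitionsToOffsetAndMetadata().get();
      props.put("key.deserializer", ByteArrayDeserializer.class.getName());
      props.put("value.deserializer", ByteArrayDeserializer.class.getName());
      try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
        // log-end offsets for the same partitions
        Map<TopicPartition, Long> end = consumer.endOffsets(committed.keySet());
        committed.forEach((tp, om) ->
            System.out.printf("%s lag=%d%n", tp, end.get(tp) - om.offset()));
      }
    }
  }
}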
Hi Bill,
thank you for replying.
Yes, keys are all the same type (machine ID string).
Btw, your solution sounds great, but it'll only work if all the 3 streams
have the same number of partitions, right?
Otherwise there's no guarantee that all the data of the same machine (the
topic keys are the ma
By default, RocksDB is used. You can also change it to use an in-memory
store that is basically a HashMap.
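For example, switching a table from the default RocksDB store to the
in-memory one looks roughly like this (topic name, store name and serdes
are placeholders):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// materialize the table with an in-memory store instead of RocksDB
KTable<String, String> table = builder.table(
    "machine-data",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.<String, String>as(Stores.inMemoryKeyValueStore("machine-cache"))
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.String()));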
-Matthias
On 5/12/20 10:16 AM, Pushkar Deole wrote:
> Thanks Liam!
>
> On Tue, May 12, 2020, 15:12 Liam Clarke-Hutchinson <
> liam.cla...@adscale.co.nz> wrote:
>
>> Hi Pushkar,
>>
>> Glob
I guess you can just stop all servers, update the binaries (and
potentially configs), and afterward restart the servers.
Of course, you might want to stop all applications that connect to the
cluster first.
-Matthias
On 5/11/20 9:50 AM, Praveen wrote:
> Hi folks,
>
> I'd like to take downtime
Thanks Liam!
On Tue, May 12, 2020, 15:12 Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
> Hi Pushkar,
>
> GlobalKTables and KTables can have whatever data structure you like, if you
> provide the appropriate deserializers - for example, a Kafka Streams app I
> maintain stores model d
Thanks Liam. Essentially, it would be an internal topic that we would be
creating to use as a cache store by accessing the topic through a GlobalKTable,
so the problem you mentioned above for storing a HashMap may not apply there
On Tue, May 12, 2020, 21:25 Liam Clarke-Hutchinson <
liam.cla...@adscale
Hi Pushkar,
Just wanted to say, as someone with battle scars from ActiveMQ and Camel,
there are very many good reasons to avoid Java serialization on a messaging
system. What if you need to tail a topic from the console? What if your
testers want to access it in their pytests? Etc. And that's not even
Thanks Bill, my apologies I did not elaborate my use case.
In my use case, the data from Cassandra is pushed to Kafka and then we consume
from Kafka to Snowflake. Once we push the data to Snowflake, we do not want to
go back to the source (Cassandra) to pull the data again. There are occasions
I'm not sure that's feasible in this case, but I'll have a look!
Thanks,
Ben
-Original Message-
From: Liam Clarke-Hutchinson
Sent: 06 May 2020 19:47
To: users@kafka.apache.org
Subject: EXTERNAL: Re: Separate Kafka partitioning from key compaction
Could you deploy a Kafka Streams app tha
Hi Rajib,
Generally, it's best to let Kafka handle the offset management.
Under normal circumstances, when you restart a consumer, it will start
reading records from the last committed offset; there's no need for you to
manage that process yourself.
If you need to manually commit records vs. using au
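If you do end up committing manually, the usual shape is to disable auto
commit and call commitSync() once a polled batch has been processed. A rough
sketch (broker address, group id and topic are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "my-consumer-group");
props.put("enable.auto.commit", "false");               // take over offset commits
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
  consumer.subscribe(List.of("my-topic"));
  while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
      // ... process the record ...
    }
    consumer.commitSync();                               // commit only after processing
  }
}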
Hi Alessandro,
For merging the three streams, have you considered the `KStream.merge`
method?
If the values are of different types, you'll need to map them into a common
type first, but I think
something like this will work:
KStream mappedOne = originalStreamOne.mapValues(...);
KStream mappedTwo =
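For completeness, a fuller sketch of that idea; the common value type and the
toCommon(...) mapper are stand-ins for whatever your three streams share:

KStream<String, CommonEvent> mappedOne   = originalStreamOne.mapValues(v -> toCommon(v));
KStream<String, CommonEvent> mappedTwo   = originalStreamTwo.mapValues(v -> toCommon(v));
KStream<String, CommonEvent> mappedThree = originalStreamThree.mapValues(v -> toCommon(v));

// merge preserves relative order within each input stream, but not across them
KStream<String, CommonEvent> merged = mappedOne.merge(mappedTwo).merge(mappedThree);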
Hi Pushkar,
GlobalKTables and KTables can have whatever data structure you like, if you
provide the appropriate deserializers - for example, a Kafka Streams app I
maintain stores model data (exported to a topic per entity from Postgres
via Kafka Connect's JDBC Source) as a GlobalKTable of Jackson
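For what it's worth, a stripped-down version of that pattern looks something
like the following. It assumes a recent kafka-clients where Serializer and
Deserializer can be written as lambdas; the topic name is a placeholder and
error handling is simplified:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;

ObjectMapper mapper = new ObjectMapper();

// small Jackson-backed serde for the topic's values
Serde<JsonNode> jsonSerde = Serdes.<JsonNode>serdeFrom(
    (topic, node) -> {
      try { return mapper.writeValueAsBytes(node); }
      catch (Exception e) { throw new RuntimeException(e); }
    },
    (topic, bytes) -> {
      if (bytes == null) return null;   // tombstones
      try { return mapper.readTree(bytes); }
      catch (Exception e) { throw new RuntimeException(e); }
    });

StreamsBuilder builder = new StreamsBuilder();

// keyed lookups against this table are served from the locally materialized copy
GlobalKTable<String, JsonNode> models =
    builder.globalTable("entity-topic", Consumed.with(Serdes.String(), jsonSerde));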
Hello Confluent team,
Could you provide some information on what data structures are used
internally by GlobalKTable and KTables? The application that I am working
on has a requirement to read cached data from GlobalKTable on every
incoming event, so the reads from GlobalKTable need to be efficien
Hi Vishnu,
I'm no expert on the Connector ecosystem, but I'm not aware of any source
connector which does that for some arbitrary (i.e. configurable) HTTP
endpoint. I suppose that's due to the difficulty of making it configurable
over the space of all endpoint behaviour (e.g. HTTP methods, request
Thanks Tom,
I am receiving data from one REST endpoint and want to post that data from
the endpoint to a topic.
Is it possible, or is any other connector available for that?
On Tue, May 12, 2020, 13:24 Tom Bentley wrote:
> Hi Vishnu,
>
> These are probably a good place to start:
> 1. https://docs.confluent.io
Hi Vishnu,
These are probably a good place to start:
1. https://docs.confluent.io/current/connect/devguide.html
2.
https://www.confluent.io/blog/create-dynamic-kafka-connect-source-connectors/
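To give a feel for what the dev guide asks of you, the task side of such a
connector boils down to something like the sketch below. The HTTP handling
and config keys are made up for illustration; a real connector also needs a
SourceConnector class and a ConfigDef:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// hypothetical task that polls an HTTP endpoint and emits each response as a record
public class HttpSourceTask extends SourceTask {

  private String url;
  private String topic;

  @Override
  public void start(Map<String, String> props) {
    url = props.get("http.url");        // hypothetical config keys
    topic = props.get("kafka.topic");
  }

  @Override
  public List<SourceRecord> poll() throws InterruptedException {
    String body = fetch(url);           // HTTP GET elided; e.g. java.net.http.HttpClient
    if (body == null) {
      return Collections.emptyList();
    }
    return Collections.singletonList(new SourceRecord(
        Collections.singletonMap("endpoint", url),                    // source partition
        Collections.singletonMap("ts", System.currentTimeMillis()),   // source offset
        topic, Schema.STRING_SCHEMA, body));
  }

  @Override
  public void stop() { }

  @Override
  public String version() { return "0.1"; }

  private String fetch(String url) {
    return null; // placeholder for the actual HTTP call
  }
}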
Cheers,
Tom
On Tue, May 12, 2020 at 7:34 AM vishnu murali
wrote:
> Hi Guys,
>
> i am trying to crea
Hi friends,
I have a REST endpoint and data is arriving at that endpoint
continuously.
I need to send that data to a Kafka topic.
For the above scenario I need to solve it using a connector,
because I don't want to run another application to receive data from REST
and send it to Kafka.