Hello,
This isn't resource intensive at all; it merely changes a value in the
ZooKeeper instance and shares it across the cluster.

On Sun, Apr 12, 2020 at 05:56, KhajaAsmath Mohammed <mdkhajaasm...@gmail.com>
wrote:

>
> Thanks Senthil. This is helpful but I am worried about doing it with
> standalone process as our data is huge.
>
> Is there a way to do the same thing using Kafka Streams (KStream) and
> utilize cluster resources, instead of doing it with a standalone client
> process?
>
> Sent from my iPhone
>
> > On Apr 11, 2020, at 7:27 PM, SenthilKumar K <senthilec...@gmail.com>
> wrote:
> >
> > Hi, we can re-consume the data from a particular point using the
> > consumer.seek() and consumer.assign() APIs [1]. Please check out the
> > documentation.
> >
> > If you set a timestamp at the time of producing the records, you can
> > consume records starting from that particular timestamp [2].
> >
> >
> >
> https://kafka.apache.org/24/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
> > https://mapr.com/docs/61/MapR_Streams/example-TimeBasedConsumer.html
> >
> > --Senthil
> >> On Sat, 11 Apr 2020, 6:24 pm KhajaAsmath Mohammed, <
> mdkhajaasm...@gmail.com>
> >> wrote:
> >>
> >> Hi,
> >>
> >> We have lost some data during processing and would like to reprocess it.
> >> May I know the procedure to do this? I have the offset numbers that I
> >> need to process.
> >>
> >> Any suggestions would be really helpful.
> >>
> >> Thanks,
> >> Asmath
> >>
> >> Sent from my iPhone
>
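Senthil's suggestion above (assign a partition manually, then seek to a known
offset) could be sketched roughly as follows. This is only an illustration:
the broker address, topic name, partition number, consumer group, and the
start/end offsets are all placeholders, not values from this thread.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReprocessFromOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "reprocess-group");          // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manual assignment: no consumer-group rebalancing, so seek() is
            // valid immediately after assign().
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholders
            consumer.assign(Collections.singletonList(tp));

            consumer.seek(tp, 12345L); // known offset to restart from (placeholder)
            long endOffset = 23456L;   // stop once the lost range is covered (placeholder)

            boolean done = false;
            while (!done) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.offset() >= endOffset) {
                        done = true;
                        break;
                    }
                    // Reprocess the record here.
                    System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}
```

For the timestamp-based variant mentioned in [2], `consumer.offsetsForTimes(...)`
maps each partition's timestamp to the earliest offset with an equal or greater
timestamp, and the resulting offset can then be passed to `seek()` the same way.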


-- 
*Nicolas Carlot*

Lead dev
nicolas.car...@chronopost.fr

