Hello Boris,

I think the scenario you describe can still be achieved via a global state
store (with the high-level DSL, that means a GlobalKTable). All partitions
of the global state store's source topic are piped to every instance, and
all records are materialized into that store. Then you can do the
following (I'm assuming you are using the lower-level Processor API; with
the DSL the pattern is similar):

Topology.addGlobalStore("global-store-name", ..., "control-topic-name", ...)
              .addSource("source-name", "data-topic-name")
              .addProcessor("processor-name", MyProcessor, "source-name")

where MyProcessor can be implemented as:

process() {

    control-flag = .. // read from global store "global-store-name"

    switch (control-flag) {

       // based on the current control state, execute differently.
    }
}
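
To make the sketch above concrete, here is a minimal, hedged example against the Kafka Streams 1.0 Processor API. The topic names, store name, and the "control-key"/"PAUSE" control convention are illustrative assumptions, not part of any real setup:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class ControlledStream {

    // Data-path processor: consults the global store on every record.
    static class MyProcessor implements Processor<String, String> {
        private KeyValueStore<String, String> controlStore;

        @Override
        @SuppressWarnings("unchecked")
        public void init(final ProcessorContext context) {
            // The global store is readable from every task on every instance.
            controlStore = (KeyValueStore<String, String>)
                context.getStateStore("global-store-name");
        }

        @Override
        public void process(final String key, final String value) {
            // "control-key" and "PAUSE" are assumed conventions for this sketch.
            final String controlFlag = controlStore.get("control-key");
            if ("PAUSE".equals(controlFlag)) {
                return; // skip records while paused
            }
            // ... normal processing of the data record ...
        }

        @Override public void punctuate(final long timestamp) { }
        @Override public void close() { }
    }

    public static Topology build() {
        final StoreBuilder<KeyValueStore<String, String>> storeBuilder =
            Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("global-store-name"),
                    Serdes.String(), Serdes.String())
                // global stores must not have a changelog topic
                .withLoggingDisabled();

        final Topology topology = new Topology();
        topology.addGlobalStore(
            storeBuilder,
            "control-source",
            Serdes.String().deserializer(),
            Serdes.String().deserializer(),
            "control-topic-name",
            "control-processor",
            // state-update processor: materializes control records into the store
            () -> new Processor<String, String>() {
                private KeyValueStore<String, String> store;

                @Override
                @SuppressWarnings("unchecked")
                public void init(final ProcessorContext context) {
                    store = (KeyValueStore<String, String>)
                        context.getStateStore("global-store-name");
                }

                @Override
                public void process(final String key, final String value) {
                    store.put(key, value);
                }

                @Override public void punctuate(final long timestamp) { }
                @Override public void close() { }
            });
        topology.addSource("source-name", "data-topic-name");
        topology.addProcessor("processor-name", MyProcessor::new, "source-name");
        return topology;
    }
}
```

Note that only the data topic ("data-topic-name") is consumed with the application's group id and shared across instances; the control topic is read in full by every instance via the global store.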

------------

Behind the scenes, the global table is updated independently by a separate
thread, which uses a different consumer than the one the stream threads
use for processing; that consumer's group id is not specified by the user
and is handled by the Streams library itself.


Guozhang


On Mon, Nov 13, 2017 at 1:11 PM, Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:

> I was thinking about controlled stream use case, where one stream is data
> for processing, while the second one controls execution.
> If I want to scale this, I want to run multiple instances. In this case I
> want these instances to share data topic, but control topic should be
> delivered to all
> Instances.
> This means that I would like to control group IDs for streams individually
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> > On Nov 13, 2017, at 2:47 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> >
> > Boris,
> >
> > What's your use case scenarios that you'd prefer to set different
> > subscriber IDs for different streams?
> >
> >
> > Guozhang
> >
> >
> > On Mon, Nov 13, 2017 at 6:49 AM, Boris Lublinsky <
> > boris.lublin...@lightbend.com> wrote:
> >
> >> This seems like a very limiting implementation
> >>
> >>
> >> Boris Lublinsky
> >> FDP Architect
> >> boris.lublin...@lightbend.com
> >> https://www.lightbend.com/
> >>
> >>> On Nov 13, 2017, at 4:21 AM, Damian Guy <damian....@gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> The configurations apply to all streams consumed within the same
> streams
> >>> application. There is no way of overriding it per input stream.
> >>>
> >>> Thanks,
> >>> Damian
> >>>
> >>> On Mon, 13 Nov 2017 at 04:49 Boris Lublinsky <
> >> boris.lublin...@lightbend.com>
> >>> wrote:
> >>>
> >>>> I am writing Kafka Streams implementation (1.0.0), for which I have 2
> >>>> input streams.
> >>>> Is it possible to have different subscriber IDs for these 2 streams.
> >>>> I see only one place where subscriber’s ID can be specified:
> >>>>
> >>>> streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG,
> >>>> ApplicationKafkaParameters.DATA_GROUP);
> >>>> And it does not seem like either Topology or DSL APIs allow to
> overwrite
> >>>> it during Stream creation.
> >>>>
> >>>> Thanks for the help
> >>>>
> >>>> Boris Lublinsky
> >>>> FDP Architect
> >>>> boris.lublin...@lightbend.com
> >>>> https://www.lightbend.com/
> >>>>
> >>>>
> >>
> >>
> >
> >
> > --
> > -- Guozhang
>
>


-- 
-- Guozhang
