Thomas,

My responses are inline.

On Tue, Apr 25, 2017 at 8:42 AM, Thomas Weise <t...@apache.org> wrote:

> Pramod,
>
> Sounds like some sort of alternative "processing mode" that from engine
> perspective allows potentially inconsistent state when there is a pipeline
> failure. This is of course only something the user can decide.
>

Calling it an alternate processing mode is a good idea.


>
> Does the proposal assume that the operator state is immutable (or what is
> sometimes tagged with the stateless annotation)? For example an operator
> that has to load a large amount of state from another source before it can
> process the first tuple?
>

Operator state can change; the operator does not have to be stateless. Also,
stateless operators may not automatically fall into this category, because
our current definition of stateless denotes window-level statelessness, not
necessarily tuple-level statelessness.
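
To illustrate the distinction, here is a minimal sketch (a hypothetical
operator, not something from the prototype) that is stateless at the window
level but not at the tuple level: its state mutates with every tuple, yet it
is rebuilt entirely from the tuples of the current window, so replaying from
the window after the last fully processed window reproduces the same output
without restoring the instance from checkpoint.

import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.common.util.BaseOperator;

// Hypothetical example: per-window aggregation. The field "sum" changes
// with every tuple (so the operator is not tuple-level stateless), but it
// is derived only from the tuples of the current window, so replay from
// the last fully processed window yields the same emitted results.
public class WindowSumOperator extends BaseOperator
{
  private long sum;

  public final transient DefaultOutputPort<Long> output = new DefaultOutputPort<>();

  public final transient DefaultInputPort<Long> input = new DefaultInputPort<Long>()
  {
    @Override
    public void process(Long tuple)
    {
      sum += tuple; // state mutates per tuple
    }
  };

  @Override
  public void beginWindow(long windowId)
  {
    sum = 0; // state is rebuilt within each window, hence window-level stateless
  }

  @Override
  public void endWindow()
  {
    output.emit(sum);
  }
}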


>
> Also, it would be an optimization but not something that will help with SLA
> if the operator still needs to be recovered when its own container fails.
> It might help to clarify that and also why there is a need to recover in
> the batch use case (vs. reprocess).
>

Correct. As I mentioned in the last statement, if the container where the
operator itself is running goes down, then it is recovery from checkpoint
and business as usual. I may have misspoken about batch; I meant to say
applications where operators have large state, not necessarily batch. That
said, the use case we are dealing with is batch, and a full restart is not
practical as the run takes a long time.
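
To make the proposal concrete, below is a rough sketch of how an application
might opt an operator into this mode. The attribute name
RECOVER_WITHOUT_STATE_RESTORE is purely illustrative; it does not exist
today, and the actual mechanism (attribute, annotation, or something else)
is open for discussion.

import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;

public class LargeStateApplication implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // WindowSumOperator is the hypothetical operator from the earlier sketch.
    WindowSumOperator op = dag.addOperator("aggregate", new WindowSumOperator());

    // Hypothetical attribute (does not exist yet): on an upstream failure,
    // reuse the live operator instance and replay the stream from the window
    // after the last fully processed window, instead of restoring the
    // operator from its checkpoint. The default would remain today's behavior.
    dag.setAttribute(op, OperatorContext.RECOVER_WITHOUT_STATE_RESTORE, true);
  }
}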

Thanks,
Pramod


> Thanks,
> Thomas
>
>
>
>
> On Mon, Apr 24, 2017 at 8:57 AM, Pramod Immaneni <pra...@datatorrent.com>
> wrote:
>
> > In a failure scenario, when a container fails, it is redeployed along
> > with all the operators in it. The operators downstream of these
> > operators are also redeployed within their containers. The operators
> > are restored from their checkpoints and connect to the appropriate
> > point in the stream according to the processing mode. In at-least-once
> > mode, for example, the data is replayed from the same checkpoint.
> >
> > Restoring an operator's state from checkpoint can be a costly operation
> > depending on the size of the state. In some use cases, based on the
> > operator logic, when there is an upstream failure the operator state
> > left as is, i.e., not restored to the checkpoint, will still produce
> > the same results once the data is replayed from the last fully
> > processed window. This is true of some operators in batch use cases.
> > The operator state can remain the same as it was before the upstream
> > failure by reusing the same operator instance, with only the streams
> > and window reset to the window after the last fully processed window,
> > to guarantee at-least-once processing of tuples. If the container where
> > the operator itself is running goes down, it would of course need to be
> > restored from the checkpoint.
> >
> > I would like to propose adding the ability for a user to explicitly
> > identify operators to be of this type, along with the corresponding
> > functionality in the engine to handle their recovery in the way
> > described above: not restoring their state from checkpoint, reusing the
> > existing instance, and resetting the stream to the window after the
> > last fully processed window for the operator. When operators are not
> > identified to be of this type, the default behavior is what it is today
> > and nothing changes.
> >
> > I have done some prototyping on the engine side to ensure that this is
> > possible with our current code base without requiring a massive
> > overhaul, in particular the restoration of the operator instance within
> > the Node in the streaming container and the re-establishment of the
> > subscriber stream to a window in the buffer server that the publisher
> > (upstream) has not yet reached, since the publisher would be restarting
> > from checkpoint. I have been able to get it all working successfully.
> >
> > Thanks
> >
>