Responses in line. Glad you brought this up.
On Oct 11, 2016 2:16 PM, "Matt Franklin" wrote:
Dredging up the past here. After working with Streams for a couple of
years, I think things work fairly well, but still see a need for a more
reactive producer paradigm. Polling providers for data creates a
bottleneck in the production step. IMO, the runtime should be responsible
for queuing data
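The push model suggested above, with the runtime owning the queue, could be sketched roughly like this. `PushProvider` and `emit` are hypothetical names for illustration, not part of the Streams API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the runtime owns the queue; the provider pushes into it
// instead of being polled. Names here are hypothetical, not Streams API.
public class PushDemo {

    /** A provider that emits data when it has some, rather than waiting to be polled. */
    interface PushProvider<T> {
        void start(java.util.function.Consumer<T> emit);
    }

    public static void main(String[] args) {
        // Bounded queue: the runtime, not the provider, handles buffering.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(100);

        PushProvider<String> provider = emit -> {
            for (int i = 0; i < 3; i++) {
                emit.accept("datum-" + i);
            }
        };

        // The runtime wires the provider's emit callback to its own queue.
        provider.start(datum -> {
            try {
                queue.put(datum);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(queue.size()); // 3
    }
}
```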
:+1: Right now they have no way to talk to each other. The provider doesn't
know when it will be polled again, and the builder implementation has no idea
whether the provider is done providing.
> On Jun 12, 2014, at 8:51 AM, Matt Franklin wrote:
Do we have consensus on next steps? From what I can see, everyone agrees
that the addition of an isRunning method to the provider makes sense. I
will create a ticket and commit that change, but I encourage others to
continue discussing the next steps for improvement.
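The agreed-upon change could look something like this. A sketch only: the interface is abbreviated for illustration, and only `isRunning` is the method under discussion here:

```java
// Sketch of the proposed addition: a provider exposes whether it is still
// producing, so the runtime knows when to stop polling it.
// Interface abbreviated for illustration; not the full Streams API.
public class IsRunningDemo {

    interface StreamsProvider<T> {
        Iterable<T> readCurrent();
        boolean isRunning(); // the proposed addition
    }

    /** A finite provider that reports done after one batch. */
    static class OneBatchProvider implements StreamsProvider<String> {
        private boolean done = false;

        public Iterable<String> readCurrent() {
            done = true;
            return java.util.List.of("a", "b");
        }

        public boolean isRunning() {
            return !done;
        }
    }

    public static void main(String[] args) {
        StreamsProvider<String> provider = new OneBatchProvider();
        int batches = 0;
        // Runtime loop: poll until the provider says it is done.
        while (provider.isRunning()) {
            provider.readCurrent();
            batches++;
        }
        System.out.println(batches); // 1
    }
}
```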
On Thu, May 15, 2014 at
Hi all,
After working with the Streams project a bit, I have noticed some of the
same issues that Matt and Ryan have brought up. I think that Matt's idea
to implement two interfaces (Producer, Listener) would make a great
addition to the project. Not only would it increase efficiency but it
would
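The two-interface idea mentioned above could be sketched along these lines. The names `Producer` and `Listener` come from the thread, but the method signatures here are illustrative guesses, not a committed design:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two-interface idea from the thread: a Producer that is
// handed a callback, and a Listener that receives data as it arrives.
// Method names are illustrative, not a committed design.
public class ProducerListenerDemo {

    interface Listener<T> {
        void onDatum(T datum);
        void onComplete();
    }

    interface Producer<T> {
        void produce(Listener<T> listener);
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();

        Listener<String> listener = new Listener<String>() {
            public void onDatum(String datum) { received.add(datum); }
            public void onComplete() { received.add("<done>"); }
        };

        Producer<String> producer = l -> {
            l.onDatum("a");
            l.onDatum("b");
            l.onComplete(); // producer signals completion; the builder no longer has to guess
        };

        producer.produce(listener);
        System.out.println(received); // [a, b, <done>]
    }
}
```

Completion becomes an explicit event rather than something the runtime infers from repeated empty polls.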
My biggest concern with the proposed interfaces is that they won't
guarantee that all streams components will be able to run in the storm
runtime (which I know is not exactly working at the moment). Storm
guarantees the processing of every tuple that enters the system from a
spout. Therefore eve
Steve,
Thanks for the reply!
I think that both can be accommodated using different interfaces; maybe
there is room for two types of processor? While this paradigm might be
great for Pig and MR1, it falls short of what can be done with Storm. It
also falls short of many complicated ETL problems th
Ryan,
Thank you for your comments! I, however, must respectfully disagree. The
current pattern is very limiting. I agree that a provider should know if
it is not functioning correctly.
However, I would like to challenge how a user would re-use various providers
offered as contrib to streams and
On Tue, May 6, 2014 at 9:53 PM, Matthew Hager [W2O Digital]
wrote:
> Good Day!
>
> I would like to throw my two cents in on this if it pleases the
> community.
>
> Here are my thoughts based on implementations that I have written with
> streams to ensure timely, high yield execution. Personally
Fundamentally, processors as initially conceived do not fire events
autonomously or maintain state between messages. Changing that paradigm would
mean Pig/MR1 would no longer be capable of serving as a full-featured
processor runtime. Agreed, this is limiting, but only in terms of what
I think a processor should solely be responsible for processing data. I
think the interface describes exactly what a processor should do: take a
piece of data and produce output data. Having it do more than that
expands the functionality of a processor beyond its intent.
I do agree that bei
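The narrow contract described above, one datum in, zero or more data out, no state and no events, is roughly the following. Abbreviated for illustration; this is not the exact Streams signature:

```java
import java.util.List;

// Sketch of the narrow processor contract described above: take one datum,
// return zero or more output data, keep no state and fire no events.
// Abbreviated for illustration; not the exact Streams signature.
public class ProcessorDemo {

    interface StreamsProcessor<I, O> {
        List<O> process(I input);
    }

    public static void main(String[] args) {
        // A stateless processor is a pure function from input to outputs,
        // which is what keeps it runnable on batch runtimes like Pig/MR1.
        StreamsProcessor<String, String> upper = s -> List.of(s.toUpperCase());
        System.out.println(upper.process("hello")); // [HELLO]
    }
}
```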
Matt,
As always thanks for your feedback and mentorship as I work to contribute
to this project.
I feel the current processor pattern is extremely limiting given the
constraint of the return statement rather than the alternative of receive
-> process -> write. It seems that if we revise the
On Tue, May 6, 2014 at 10:53 PM, Matthew Hager [W2O Digital] <
mha...@w2odigital.com> wrote:
> StreamsResultSet - I actually found this to be quite useful paradigm. A
> queue prevents a buffer overflow, an iterator makes it fun and easy to
> read (I love iterators), and it is simple and succinc
Good Day!
I would like to throw my two cents in on this if it pleases the
community.
Here are my thoughts based on implementations that I have written with
streams to ensure timely, high yield execution. Personally, I had to
override much of the LocalStreamsBuilder to fit my use cases for many
On Tue, May 6, 2014 at 8:24 AM, Matt Franklin wrote:
> On Mon, May 5, 2014 at 1:15 PM, Steve Blackmon wrote:
On Mon, May 5, 2014 at 1:15 PM, Steve Blackmon wrote:
What I meant to say re #1 below is that batch-level metadata could be
useful for modules downstream of the StreamsProvider /
StreamsPersistReader, and the StreamsResultSet gives us a class to
which we can add new metadata in core as the project evolves, or
supplement on a per-module or per-implemen
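One way batch-level metadata could hang off a result-set class, as suggested above, is a simple metadata map alongside the data. Field names and keys here are illustrative guesses, not the actual API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the idea above: the result set is a natural place to carry
// batch-level metadata (e.g. a source cursor or batch size) downstream.
// Field names and keys are illustrative, not the actual API.
public class MetadataDemo {

    static class ResultBatch {
        final List<String> data;
        final Map<String, Object> metadata = new HashMap<>();

        ResultBatch(List<String> data) {
            this.data = data;
        }
    }

    public static void main(String[] args) {
        ResultBatch batch = new ResultBatch(List.of("a", "b"));
        batch.metadata.put("source.cursor", "abc123"); // hypothetical keys
        batch.metadata.put("batch.size", batch.data.size());

        // A downstream module can read the metadata without re-deriving it.
        System.out.println(batch.metadata.get("batch.size")); // 2
    }
}
```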
Comments on this in-line below.
On Thu, May 1, 2014 at 4:38 PM, Ryan Ebanks wrote:
The use and implementations of the StreamsProviders seem to have drifted
away from what it was originally designed for. I recommend that we change
the StreamsProvider interface and StreamsProvider task to reflect the
current usage patterns and to be more efficient.
Current Problems:
1.) newPerp