[
https://issues.apache.org/jira/browse/FLUME-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493481#comment-13493481
]
Roshan Naik commented on FLUME-1227:
------------------------------------
I agree Scribe's policy is suboptimal. It is better to prioritize the parent
channel whenever it has spare capacity while still maintaining order. To achieve
this I have a simple algorithm in mind...
The parent channel maintains a 'drain order' queue of signed numbers which
indicates, at any time, the order in which the items in it and its overflow
channel should be drained. For instance, the numbers [3, -2, 6, -1] in that
queue indicate the following drain order:
- drain 3 from self
- then drain 2 from overflow
- then drain 6 from self
- then drain 1 from overflow
The channel's put() will update its drain order queue (DOQ) as follows:
if (self has capacity) {
+ add event to self
+ if last element in DOQ is positive then increment it
+ else push +1 to DOQ
} else {
+ call put() on overflow
+ if last element in DOQ is negative then decrement it
+ else push -1 to DOQ
}
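To make the bookkeeping concrete, here is a minimal Java sketch of that put()
logic. The class and field names (SpillableSketch, memory, overflow, drainOrder)
are hypothetical, and a real channel would of course also have to deal with
transactions, locking and the overflow channel's own capacity:

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration only - not the actual Flume Channel API.
public class SpillableSketch<E> {
  private final int memoryCapacity;                       // capacity of "self"
  private final Deque<E> memory = new ArrayDeque<E>();    // in-memory events ("self")
  private final Deque<E> overflow = new ArrayDeque<E>();  // stand-in for the overflow channel
  private final Deque<Integer> drainOrder = new ArrayDeque<Integer>();  // the DOQ

  public SpillableSketch(int memoryCapacity) {
    this.memoryCapacity = memoryCapacity;
  }

  public void put(E event) {
    if (memory.size() < memoryCapacity) {
      memory.addLast(event);
      // extend the current "self" run, or start a new one
      if (!drainOrder.isEmpty() && drainOrder.peekLast() > 0) {
        drainOrder.addLast(drainOrder.pollLast() + 1);
      } else {
        drainOrder.addLast(1);
      }
    } else {
      overflow.addLast(event);  // real impl: overflowChannel.put(event)
      // extend the current "overflow" run, or start a new one
      if (!drainOrder.isEmpty() && drainOrder.peekLast() < 0) {
        drainOrder.addLast(drainOrder.pollLast() - 1);
      } else {
        drainOrder.addLast(-1);
      }
    }
  }
}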
I think the take() should be obvious.
Obviously, corner cases like an empty self and an empty overflow need to be
handled appropriately, but this is just capturing the idea.
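Continuing the same hypothetical sketch, a take() method on that class would
just consume the head of the DOQ to decide where the next event comes from
(again ignoring the empty-queue corner cases and transaction semantics):

  public E take() {
    if (drainOrder.isEmpty()) {
      return null;                      // nothing to drain
    }
    int head = drainOrder.pollFirst();  // how many to drain next, and from where
    E event;
    if (head > 0) {
      event = memory.pollFirst();       // drain from self
      head--;
    } else {
      event = overflow.pollFirst();     // real impl: overflowChannel.take()
      head++;
    }
    if (head != 0) {
      drainOrder.addFirst(head);        // still more to drain from the same side
    }
    return event;
  }

For example, after five put() calls where memory fills up after the first three
events, drainOrder holds [3, -2], and five take() calls return the events in
their original order: three from memory, then two from overflow.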
> Introduce some sort of SpillableChannel
> ---------------------------------------
>
> Key: FLUME-1227
> URL: https://issues.apache.org/jira/browse/FLUME-1227
> Project: Flume
> Issue Type: New Feature
> Components: Channel
> Reporter: Jarek Jarcec Cecho
>
> I would like to introduce a new channel that would behave similarly to Scribe
> (https://github.com/facebook/scribe). It would be something between the memory
> and file channels. Input events would be saved directly to memory (only)
> and would be served from there. In case the memory gets full, we
> would spill the events to file.
> Let me describe the use case behind this request. We have plenty of frontend
> servers that are generating events. We want to send all events to just a
> limited number of machines from where we would send the data to HDFS (some
> sort of staging layer). The reason for this second layer is our need to decouple
> event aggregation and frontend code onto separate machines. Using the memory
> channel is fully sufficient as we can survive the loss of some portion of the
> events. However, in order to sustain maintenance windows or networking issues,
> we would end up with a lot of memory assigned to those "staging"
> machines. The referenced "scribe" deals with this problem by implementing the
> following logic - events are saved in memory, similarly to our MemoryChannel.
> However, in case the memory gets full (because of maintenance, networking
> issues, ...) it will spill data to disk, where it will sit until
> everything starts working again.
> I would like to introduce a channel that would implement similar logic. Its
> durability guarantees would be the same as MemoryChannel's - in case someone
> removes the power cord, this channel would lose data. Based on the
> discussion in FLUME-1201, I would propose to keep the implementation
> completely independent of any other channel's internal code.
> Jarcec