So what do you do when you OOM?  A network traffic spike beyond a
particular threshold is exactly why you want a bounded queue, because it
gives you an opportunity to actually handle the spike and recover, even if
the recovery is just "drop messages I can't handle".

Backpressure doesn't make sense on an edge server handling traffic you
don't control, but spill-to-disk or discarding messages does.
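As a sketch of what that recovery can look like (in Go, to match Kevin's
bot below; `queue`, `enqueue`, and the spill writer are illustrative names,
not anyone's actual code): the channel's capacity is the bound, and a
non-blocking send picks the overflow policy.

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
    )

    // queue is the bounded channel; its capacity is the bound.
    var queue = make(chan string, 1024)

    // enqueue applies a deliberate overflow policy instead of growing
    // without bound: spill the message to disk if possible, else drop it.
    func enqueue(line string, spill io.Writer) {
        select {
        case queue <- line: // room available: the normal path
        default:
            // Bound hit: handle it and recover instead of OOMing.
            if _, err := fmt.Fprintln(spill, line); err != nil {
                log.Printf("spill failed, dropping message: %v", err)
            }
        }
    }

    func main() {
        spill, err := os.CreateTemp("", "overflow-*.log")
        if err != nil {
            log.Fatal(err)
        }
        defer spill.Close()
        enqueue("hello", spill)
        fmt.Println(<-queue)
    }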

Having a bound on your queue size and statically allocating a gigantic
channel buffer are orthogonal issues.  You can bound a linked list.
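A minimal sketch of that (a hypothetical BoundedQueue, not an existing
API): the backing store is a linked list, so nothing is preallocated up
front, but the bound still holds.

    package main

    import (
        "container/list"
        "fmt"
    )

    // BoundedQueue has a hard cap but allocates nodes one at a time, so
    // the bound costs no memory until the queue actually fills.
    type BoundedQueue struct {
        items *list.List
        limit int
    }

    func NewBoundedQueue(limit int) *BoundedQueue {
        return &BoundedQueue{items: list.New(), limit: limit}
    }

    // Push reports false when the bound is hit; the caller decides
    // whether to drop, spill, or block.
    func (q *BoundedQueue) Push(v string) bool {
        if q.items.Len() >= q.limit {
            return false
        }
        q.items.PushBack(v)
        return true
    }

    // Pop returns the oldest item, or false if the queue is empty.
    func (q *BoundedQueue) Pop() (string, bool) {
        front := q.items.Front()
        if front == nil {
            return "", false
        }
        return q.items.Remove(front).(string), true
    }

    func main() {
        q := NewBoundedQueue(2)
        fmt.Println(q.Push("a"), q.Push("b"), q.Push("c")) // true true false
        v, _ := q.Pop()
        fmt.Println(v) // a
    }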


On Thu, Dec 19, 2013 at 1:23 PM, Kevin Ballard <ke...@sb.org> wrote:

> Here’s an example of where I use an infinite queue.
>
> I have an IRC bot, written in Go. The incoming network traffic of this bot
> is handled in one goroutine, which parses each line into its components,
> and enqueues the result on a channel. The channel is very deliberately made
> infinite (via a separate goroutine that stores the infinite buffer in a
> local slice). The reason it’s infinite is that the bot needs to be
> resilient against the case where either the consumer unexpectedly blocks,
> or the network traffic spikes. The general assumption is that, under normal
> conditions, the consumer will always be able to keep up with the producer
> (as the producer is based on network traffic and not e.g. a tight CPU loop
> generating messages as fast as possible). Backpressure makes no sense here,
> as you cannot put backpressure on the network short of letting the socket
> buffer fill up, and letting the socket buffer fill up will cause the IRC
> network to disconnect you. So the overriding goal here is to prevent
> network disconnects, while assuming that the consumer will be able to catch
> up if it ever gets behind.
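(A sketch of that pattern, with hypothetical names rather than the bot's
actual code: a goroutine sits between two channels and keeps any overflow
in a local slice, so the producer side never blocks on a slow consumer.)

    package main

    import "fmt"

    // infiniteChannel couples two channels through a goroutine that
    // buffers pending values in a local slice, so sends on `in` never
    // block no matter how far behind the consumer of `out` falls.
    func infiniteChannel() (chan<- string, <-chan string) {
        in := make(chan string)
        out := make(chan string)
        go func() {
            var buf []string
            for {
                if len(buf) == 0 {
                    // Nothing buffered: block until the producer sends.
                    v, ok := <-in
                    if !ok {
                        close(out)
                        return
                    }
                    buf = append(buf, v)
                }
                select {
                case v, ok := <-in:
                    if !ok {
                        // Producer closed: flush the buffer, then exit.
                        for _, v := range buf {
                            out <- v
                        }
                        close(out)
                        return
                    }
                    buf = append(buf, v)
                case out <- buf[0]:
                    buf = buf[1:]
                }
            }
        }()
        return in, out
    }

    func main() {
        in, out := infiniteChannel()
        for i := 0; i < 5; i++ {
            in <- fmt.Sprintf("line %d", i) // no reader on out yet; still doesn't block
        }
        close(in)
        for line := range out {
            fmt.Println(line)
        }
    }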
>
> This particular use case very explicitly wants a dynamically-sized
> infinite channel. I suppose an absurdly large channel would be acceptable,
> because if the consumer ever gets e.g. 100,000 lines behind then it’s in
> trouble already, but I’d rather not have the memory overhead of a
> statically-allocated gigantic channel buffer.
>
> -Kevin
>
> On Dec 19, 2013, at 10:04 AM, Jason Fager <jfa...@gmail.com> wrote:
>
> Okay, parallelism, of course, and I'm sure others.  Bad use of the word
> 'only'.  The point is that if your consumers aren't keeping up with your
> producers, you're screwed anyway, and growing the queue indefinitely isn't
> a way to get around that.  Queues should grow only to serve specific
> purposes, and they should make it easy to apply backpressure when the
> assumptions behind those purposes go awry.
>
>
> On Thursday, December 19, 2013, Patrick Walton wrote:
>
>> On 12/19/13 6:31 AM, Jason Fager wrote:
>>
>>> I work on a system that handles 10s of billions of events per day, and
>>> we do a lot of queueing.  Big +1 on having bounded queues.  Unbounded
>>> in-memory queues aren't; they just have a bound you have no direct
>>> control over, one that blows up the world when it's hit.
>>>
>>> The only reason to have a queue size greater than 1 is to handle spikes
>>> in the producer, short outages in the consumer, or a bit of
>>> out-of-phaseness between producers and consumers.
>>>
>>
>> Well, also parallelism.
>>
>> Patrick
>>