IMO the best alternative for a non-blocking send on a bounded channel is
returning an Option.
If the send succeeds, it returns None.
If it can't send because the channel is full, it returns Some(message).
This lets the sender recover the message (important for movable objects)
and decide how to handle the failure (retry, fail, drop, etc.).
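
Something like this, as a rough sketch (the names are hypothetical and
it's written in current-Rust style; a real channel would use a proper
concurrent queue rather than a mutexed VecDeque):

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};

    // Hypothetical bounded sender: try_send hands the message back
    // instead of blocking or silently dropping it when the buffer is full.
    struct Sender<T> {
        queue: Arc<Mutex<VecDeque<T>>>,
        bound: usize,
    }

    impl<T> Sender<T> {
        /// None = sent; Some(msg) = channel full, message handed back.
        fn try_send(&self, msg: T) -> Option<T> {
            let mut q = self.queue.lock().unwrap();
            if q.len() >= self.bound {
                Some(msg) // caller decides: retry, fail, drop, ...
            } else {
                q.push_back(msg);
                None
            }
        }
    }

The caller then handles the full case explicitly, e.g.
if let Some(msg) = tx.try_send(msg) { /* retry, drop, ... */ }.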

Personally, I lean toward providing unbounded channels as the primitive and
implementing bounded channels on top of them OR providing both as
primitives.
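
For the layered option, here's a rough sketch of a bounded channel
sitting on top of an unbounded one; I'm using std::sync::mpsc as the
stand-in unbounded primitive, and the wrapper types are hypothetical.
A shared counter enforces the bound:

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::{mpsc, Arc};

    // Hypothetical bounded wrapper over an unbounded channel: the
    // counter reserves a slot before sending, and the receiver
    // releases it after each recv.
    struct BoundedSender<T> {
        inner: mpsc::Sender<T>,
        len: Arc<AtomicUsize>,
        bound: usize,
    }

    impl<T> BoundedSender<T> {
        fn try_send(&self, msg: T) -> Option<T> {
            // Reserve a slot; back out if that would exceed the bound.
            if self.len.fetch_add(1, Ordering::SeqCst) >= self.bound {
                self.len.fetch_sub(1, Ordering::SeqCst);
                return Some(msg);
            }
            match self.inner.send(msg) {
                Ok(()) => None,
                Err(mpsc::SendError(msg)) => {
                    // Receiver is gone: release the slot, hand it back.
                    self.len.fetch_sub(1, Ordering::SeqCst);
                    Some(msg)
                }
            }
        }
    }

    struct BoundedReceiver<T> {
        inner: mpsc::Receiver<T>,
        len: Arc<AtomicUsize>,
    }

    impl<T> BoundedReceiver<T> {
        fn recv(&self) -> Option<T> {
            let msg = self.inner.recv().ok()?;
            self.len.fetch_sub(1, Ordering::SeqCst); // free the slot
            Some(msg)
        }
    }

Because a slot is reserved before the send, racing senders can bump the
counter past the bound for a moment, but the queue itself never holds
more than the bound allows.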


On Fri, Dec 20, 2013 at 4:19 PM, Carter Schonwald <
carter.schonw...@gmail.com> wrote:

> actually, you're right, in Go they're fixed-size buffers
> http://golang.org/src/pkg/runtime/chan.c . I can understand (and agree!)
> that this is not a good default if a more dynamic data structure can work
> well.
>
> in Haskell / GHC, bounded channels are dynamically sized and merely have
> a max size that's enforced by the provided API, and I've been speaking
> with that sort of memory usage model in mind.
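>
> In current-Rust terms the memory difference looks roughly like this (a
> toy sketch; the numbers are made up):
>
>     use std::collections::VecDeque;
>
>     fn main() {
>         // Go-style fixed buffer: all 100,000 slots allocated up front.
>         let _fixed: Vec<Option<u64>> = vec![None; 100_000];
>
>         // GHC-style bound: an empty deque plus a cap enforced by the
>         // API; memory grows only with the actual queue depth.
>         let cap = 100_000;
>         let mut queue: VecDeque<u64> = VecDeque::new();
>         if queue.len() < cap {
>             queue.push_back(42);
>         }
>         println!("capacity actually allocated: {}", queue.capacity());
>     }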
>
>
> On Fri, Dec 20, 2013 at 4:15 PM, Carter Schonwald <
> carter.schonw...@gmail.com> wrote:
>
>> I'd be very, very surprised if bounded channels in Go don't dynamically
>> resize their queues and then atomically insert / remove elements while
>> checking the bound. I'd actually argue that not doing so would be a bug.
>>
>>
>> On Fri, Dec 20, 2013 at 4:09 PM, Kevin Ballard <ke...@sb.org> wrote:
>>
>>> I haven’t profiled it, but my belief is that under normal circumstances,
>>> messages come in slowly enough that the consumer is always idle and ready to
>>> process the next message as soon as it’s sent. However, I expect it does
>>> occasionally back up a bit, e.g. when I get a burst of traffic such as
>>> during a netsplit when I’m sent a large batch of “<user> has quit” or
>>> “<user> has joined” (when the netsplit is over). I don’t know how much the
>>> channel backs up at that point, probably not too much.
>>>
>>> For this particular use-case, a channel that’s bounded at e.g. 100,000
>>> elements would be indistinguishable from an infinite channel, as long as it
>>> still dynamically allocates (I don’t *think* Go channels dynamically
>>> allocate, which is why I can’t just use a 100,000-element channel for real).
>>>
>>> However, my overall point about large bounds being indistinguishable
>>> from infinite is that if your goal is to pick a bound large enough to
>>> appear infinite to the program, without actually risking OOM, then there’s
>>> no automated way to do this. Different environments have differing amounts
>>> of available resources, and there’s no good way to pick a bound that is
>>> sufficiently high but is definitively lower than the resource bounds. This
>>> is why I’m recommending that we have truly infinite channels, for users who
>>> don’t want to have to think about bounds (e.g. my irc program), as well as
>>> bounded channels, where the user has to explicitly pick a bound (with no
>>> “default” provided).
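>>>
>>> A small sketch of that two-flavor API, using std::sync::mpsc as it
>>> exists in current Rust (channel() is unbounded; sync_channel() takes
>>> an explicit bound with no default):
>>>
>>>     use std::sync::mpsc::{channel, sync_channel, TrySendError};
>>>
>>>     fn main() {
>>>         // Truly unbounded: send never blocks, the queue grows as needed.
>>>         let (tx, rx) = channel::<u32>();
>>>         tx.send(1).unwrap();
>>>         assert_eq!(rx.recv().unwrap(), 1);
>>>
>>>         // Explicitly bounded: the caller must pick the bound.
>>>         let (btx, _brx) = sync_channel::<u32>(2);
>>>         btx.try_send(1).unwrap();
>>>         btx.try_send(2).unwrap();
>>>         // The third try_send fails and hands the message back.
>>>         match btx.try_send(3) {
>>>             Err(TrySendError::Full(msg)) => println!("full, got {} back", msg),
>>>             _ => unreachable!(),
>>>         }
>>>     }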
>>>
>>> -Kevin
>>>
>>> On Dec 20, 2013, at 12:55 PM, Carter Schonwald <
>>> carter.schonw...@gmail.com> wrote:
>>>
>>> Kevin, what sort of applications and workloads are you speaking about?
>>> E.g. in your example IRC server, what's the typical workload when you've
>>> used it?
>>>
>>> cheers
>>> -Carter
>>>
>>>
>>> On Fri, Dec 20, 2013 at 12:54 PM, Kevin Ballard <ke...@sb.org> wrote:
>>>
>>>> On Dec 20, 2013, at 8:59 AM, Carter Schonwald <
>>>> carter.schonw...@gmail.com> wrote:
>>>>
>>>> agreed! Applications that lack explicit logic for handling heavy
>>>> workloads (i.e. producers outpacing consumers for a sustained period) are
>>>> the most common culprit behind unresponsive desktop applications that
>>>> become completely unusable.
>>>>
>>>>
>>>> That’s a pretty strong claim, and one I would have to disagree with
>>>> quite strongly. Every time I’ve sampled an unresponsive application, I
>>>> don’t think I’ve *ever* seen a backtrace that suggests a producer
>>>> outpacing a consumer.
>>>>
>>>> -Kevin
>>>>
>>>> relatedly: wouldn't bounded but programmatically growable channels
>>>> also make it trivial to provide an "unbounded"-style channel abstraction?
>>>> (not that I'm advocating that, merely that it seems like it would turn the
>>>> unbounded channel abstraction into an equivalent one that is
>>>> resource-usage aware)
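>>>>
>>>> A sketch of that idea (hypothetical type; the bound and the queue
>>>> share one lock so the bound can be raised safely at runtime), where
>>>> growing on full turns the bounded send into an "unbounded" one:
>>>>
>>>>     use std::collections::VecDeque;
>>>>     use std::sync::{Arc, Mutex};
>>>>
>>>>     struct GrowableChannel<T> {
>>>>         inner: Arc<Mutex<(VecDeque<T>, usize)>>, // (queue, bound)
>>>>     }
>>>>
>>>>     impl<T> GrowableChannel<T> {
>>>>         fn new(bound: usize) -> Self {
>>>>             GrowableChannel {
>>>>                 inner: Arc::new(Mutex::new((VecDeque::new(), bound))),
>>>>             }
>>>>         }
>>>>
>>>>         // Bounded behavior: hand the message back when full.
>>>>         fn try_send(&self, msg: T) -> Option<T> {
>>>>             let mut g = self.inner.lock().unwrap();
>>>>             if g.0.len() >= g.1 {
>>>>                 Some(msg)
>>>>             } else {
>>>>                 g.0.push_back(msg);
>>>>                 None
>>>>             }
>>>>         }
>>>>
>>>>         // "Unbounded" behavior recovered by growing the bound on
>>>>         // full; resource-aware code could log or throttle here.
>>>>         fn send_growing(&self, msg: T) {
>>>>             let mut g = self.inner.lock().unwrap();
>>>>             if g.0.len() >= g.1 {
>>>>                 g.1 += 1;
>>>>             }
>>>>             g.0.push_back(msg);
>>>>         }
>>>>     }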
>>>>
>>>>
>>>>> On Fri, Dec 20, 2013 at 8:52 AM, György Andrasek <jur...@gmail.com> wrote:
>>>>
>>>>> On 12/19/2013 11:13 PM, Tony Arcieri wrote:
>>>>>
>>>>>> So I think that entire line of reasoning is a red herring. People
>>>>>> writing toy programs that never have their channels fill beyond a small
>>>>>> number of messages won't care either way.
>>>>>>
>>>>>> However, overloaded programs + queues bounded by system resources are a
>>>>>> production outage waiting to happen. What's really important here is
>>>>>> providing a means of backpressure so overloaded Rust programs don't grow
>>>>>> until they consume system resources and OOM.
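>>>>>>
>>>>>> A minimal sketch of that backpressure, assuming a blocking send on
>>>>>> an explicitly bounded channel (shown with current std::sync::mpsc):
>>>>>>
>>>>>>     use std::sync::mpsc::sync_channel;
>>>>>>     use std::thread;
>>>>>>     use std::time::Duration;
>>>>>>
>>>>>>     fn main() {
>>>>>>         // Bound of 4: the producer is paused by send() whenever
>>>>>>         // the consumer falls 4 messages behind, instead of letting
>>>>>>         // the queue (and memory use) grow without limit.
>>>>>>         let (tx, rx) = sync_channel::<u64>(4);
>>>>>>         let producer = thread::spawn(move || {
>>>>>>             for i in 0..100 {
>>>>>>                 tx.send(i).unwrap(); // blocks while the buffer is full
>>>>>>             }
>>>>>>         });
>>>>>>         for msg in rx {
>>>>>>             thread::sleep(Duration::from_millis(1)); // slow consumer
>>>>>>             let _ = msg;
>>>>>>         }
>>>>>>         producer.join().unwrap();
>>>>>>     }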
>>>>>>
>>>>>
>>>>> While I disagree with the notion that all programs which don't have
>>>>> their bottlenecks right here are "toys", we should definitely strive for
>>>>> the invariant that task failure does not cause independent tasks to fail.
>>>>>
>>>>> Also, OOM is not free. If you manage to go OOM on a desktop, you'll
>>>>> get a *very* unhappy user, regardless of their expectations wrt your
>>>>> memory usage. Linux with a spinning disk and swap, for example, will
>>>>> degrade to the point where they'll reboot before the OOM killer kicks in.
>>>>>
>>>>> Can we PLEASE not do that *by default*?
>>>>>
>>>>
>>>
>>>
>>
>