By the way, the latest code at https://play.golang.org/p/QTkSJAOPtE isn't 
great, because once Close() is called, new tasks will be blocked until 
existing tasks finish.

That is, once Lock() is called in Close(), subsequent RLock() calls are 
blocked. At least, this is documented in an old version of the docs (1.5.2):

> Lock locks rw for writing. If the lock is already locked for reading or 
> writing, Lock blocks until the lock is available. To ensure that the lock 
> eventually becomes available, a blocked Lock call excludes new readers from 
> acquiring the lock.


However, the current docs don't indicate 
that: https://godoc.org/sync#RWMutex.Lock
But the code shows that it still does: 
https://github.com/golang/go/blob/master/src/sync/rwmutex.go#L80

So if the in-flight tasks take a long time, many new goroutines may be stuck 
waiting on the lock before they are finally rejected.
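
Here's a quick standalone sketch (not the code from this thread) that shows 
the behavior; the sleeps are only there to force the ordering:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var mu sync.RWMutex

        mu.RLock() // an existing "task" is holding a read lock

        go func() {
            mu.Lock() // Close(): blocks until the existing reader is done
            fmt.Println("writer acquired the lock")
            mu.Unlock()
        }()
        time.Sleep(100 * time.Millisecond) // let Lock() start blocking

        newReader := make(chan struct{})
        go func() {
            mu.RLock() // a "new task": queued behind the pending writer
            mu.RUnlock()
            close(newReader)
        }()

        select {
        case <-newReader:
            fmt.Println("new reader got in ahead of the pending writer")
        case <-time.After(100 * time.Millisecond):
            fmt.Println("new reader is blocked behind the pending Lock()")
        }

        mu.RUnlock() // the existing task finishes; writer, then reader, proceed
        <-newReader
    }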

- Augusto

On Tuesday, September 13, 2016 at 5:09:59 PM UTC-7, aro...@gmail.com wrote:
>
> The way to test it is something like this:  
> https://play.golang.org/p/8hN9q9ibIc
>
> Specifically, have the handler push to start & finish channels. In the 
> monitoring goroutine, you can record events for start, finish, & close (and 
> anything else you want to check, maybe Queue?).  After the call to Close() 
> completes, you can check through the event history.  Depending on how you 
> set up your test, you could see, for example, that nothing starts after the 
> CLOSED event.
>
> The test that I made is a little weak in that it tries to verify that 
> exactly N are started, which means that, to avoid races, I don't try to submit 
> to-be-rejected requests until after the channel is closed.  Perhaps it 
> would be better to also have a separate test that continuously submits 
> requests (e.g. until 100 have failed) and then checks the event log to make 
> sure that none start after close and that all started before close are 
> finished before close returns.
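>
> For concreteness, here is a rough sketch of that event-log pattern (the 
> event names are arbitrary, and NewHandler/Handle/Close stand in for the 
> code under test, so treat it as an outline rather than working code):
>
>     func TestCloseWaits(t *testing.T) {
>         events := make(chan string, 1000)
>
>         h := NewHandler(func() {
>             events <- "START"
>             time.Sleep(time.Millisecond) // stand-in for real work
>             events <- "FINISH"
>         })
>
>         for i := 0; i < 10; i++ {
>             h.Handle()
>         }
>         h.Close()
>         events <- "CLOSED"
>
>         // Drain whatever was recorded and check the ordering.
>         var log []string
>         for len(events) > 0 {
>             log = append(log, <-events)
>         }
>         closed := false
>         for _, ev := range log {
>             if ev == "CLOSED" {
>                 closed = true
>             } else if closed && ev == "START" {
>                 t.Fatal("a task started after Close() returned")
>             }
>         }
>     }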
>
> I think that satisfies your testing criteria, although it doesn't 100% 
> verify that everything is finished before Close returns -- it's possible 
> that you get lucky and the entire machine freezes on return from Close() 
> and then the queued goroutines flush before the call to read the event log. 
>  If you really want to make that testable, you can put a hook in Close() 
> that sends a signal before it returns.
>
> - Augusto
>
> On Tuesday, September 13, 2016 at 2:06:43 PM UTC-7, Evan Digby wrote:
>>
>> Hi John,
>>
>> Thank you!
>>
>> The h.closed channel, if checked properly (after the RLock), prevents the 
>> race condition that Augusto pointed out a few posts back.
>>
>> I fixed my implementation here: https://play.golang.org/p/QTkSJAOPtE
>>
>> Thanks again,
>>
>> Evan
>>
>> On Tue, 13 Sep 2016 at 14:04 John Souvestre <jo...@souvestre.com> wrote:
>>
>>> OK.  Give me a minute to add that.  I just wanted to make sure I was 
>>> headed in the right direction.
>>>
>>>  
>>>
>>> Note:  In looking at your original code I didn’t see any way that the 
>>> error could happen, so I ignored that case.  Given this, there was no need 
>>> for the h.closed channel.
>>>
>>>  
>>>
>>> Back in a few.
>>>
>>>  
>>>
>>> John
>>>
>>>     John Souvestre - New Orleans LA
>>>
>>>  
>>>
>>> *From:* Evan Digby [mailto:evan...@gmail.com] 
>>> *Sent:* 2016 September 13, Tue 15:59
>>> *To:* John Souvestre; golang-nuts
>>>
>>>
>>> *Subject:* Re: [go-nuts] Having difficulty testing this "cleanly"
>>>
>>>  
>>>
>>> Hi John,
>>>
>>>  
>>>
>>> What you've posted is a valid way to implement the handler, but not a 
>>> way to validate it.
>>>
>>>  
>>>
>>> The implementation in the example isn't the problem. It's how to 
>>> validate the implementation with a test.
>>>
>>>  
>>>
>>> If we add a WaitGroup.Wait inside the handler then the test is not valid 
>>> because it will wait until they're done. If the test does the waiting, then 
>>> we aren't validating that the implementation itself does the waiting. 
>>>
>>>  
>>>
>>> I'm trying to find a clean way to validate that the waiting is done by 
>>> the Close call.
>>>
>>>  
>>>
>>> Thanks again for your effort in this!
>>>
>>>  
>>>
>>> Evan
>>>
>>>  
>>>
>>> On Tue, 13 Sep 2016 at 13:52 John Souvestre <jo...@souvestre.com> wrote:
>>>
>>> Hi Evan.
>>>
>>>  
>>>
>>> I still don’t quite understand exactly what you are shooting for.  I 
>>> tried to reimplement what you posted originally.  Check out 
>>> https://play.golang.org/p/koUJYCKFpa.  Does this come close 
>>> functionally?
>>>
>>>  
>>>
>>> John
>>>
>>>     John Souvestre - New Orleans LA
>>>
>>>  
>>>
>>> *From:* golan...@googlegroups.com [mailto:golan...@googlegroups.com] *On 
>>> Behalf Of *Evan Digby
>>> *Sent:* 2016 September 13, Tue 15:32
>>> *To:* golang-nuts
>>> *Cc:* aro...@gmail.com
>>> *Subject:* Re: [go-nuts] Having difficulty testing this "cleanly"
>>>
>>>  
>>>
>>> Hi John/Egon/Augusto,
>>>
>>>  
>>>
>>> I should point out that all we need to guarantee (barring abnormal 
>>> termination of course) is that once a task starts processing, it finishes. 
>>> Partially processed messages are bad, but http requests that don't result 
>>> in a message being processed at all are okay.
>>>
>>>  
>>>
>>> We don't need to guarantee that every Accept in the HTTP 
>>> server results in a processed message. We handle this on the OPS side by 
>>> ensuring we stop sending requests to that instance before terminating the 
>>> process. We just want to make sure, at that point, that the messages which 
>>> did make it to the handler are flushed.
>>>
>>>  
>>>
>>> So the case where:
>>>
>>>  
>>>
>>> h.Handle(...)  <-- gets past the closed channel check, calls go ..., but 
>>> the goroutine doesn't execute yet.
>>> h.Close() <-- closes the close channel, Locks and Unlocks, returns.
>>> ...now the goroutine executes and acquires the read lock.
>>>
>>>  
>>>
>>> We actually don't care if "Handle" completes in this example. We only 
>>> care that, if our task handler starts processing a message, it completes 
>>> the processing.
>>>
>>>  
>>>
>>> Thanks again,
>>>
>>> Evan
>>>
>>>
>>> On Tuesday, 13 September 2016 13:24:03 UTC-7, aro...@gmail.com wrote:
>>>
>>> The mutex approach is fundamentally broken because you can't guarantee 
>>> that the tasks are all started (and have a read-lock acquired) before you 
>>> call close.
>>>
>>>  
>>>
>>> Consider:
>>>
>>> h.Handle(...)  <-- gets past the closed channel check, calls go ..., 
>>> but the goroutine doesn't execute yet.
>>> h.Close() <-- closes the close channel, Locks and Unlocks, returns.
>>> ...now the goroutine executes and acquires the read lock.
>>>
>>>  
>>>
>>> So really, if you can't control the Handle() function, you need two 
>>> WaitGroups:  one to verify that all goroutines have started before shutting 
>>> down the task handler and a second one for all goroutines to have 
>>> finished.  However, it's tricky if we don't know the real use case.
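>>>
>>> Very roughly, the two-WaitGroup idea could look something like this (names 
>>> and numbers are made up; it's only meant to illustrate the mechanism):
>>>
>>>     package main
>>>
>>>     import (
>>>         "fmt"
>>>         "sync"
>>>         "time"
>>>     )
>>>
>>>     func main() {
>>>         // "started" lets the caller know every goroutine is actually
>>>         // running before shutdown begins; "finished" waits for them all.
>>>         var started, finished sync.WaitGroup
>>>
>>>         const n = 10
>>>         started.Add(n)
>>>         finished.Add(n)
>>>
>>>         for i := 0; i < n; i++ {
>>>             go func(id int) {
>>>                 started.Done() // announce "I'm running"
>>>                 defer finished.Done()
>>>                 time.Sleep(10 * time.Millisecond) // stand-in for the task
>>>                 fmt.Println("task", id, "done")
>>>             }(i)
>>>         }
>>>
>>>         started.Wait()  // every goroutine has started; safe to shut down
>>>         finished.Wait() // every goroutine has finished
>>>         fmt.Println("all tasks accounted for")
>>>     }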
>>>
>>>
>>> Sounds like you are trying to do graceful http shutdown.  Have you 
>>> looked at other libraries that do that?  If you don't have a way to account 
>>> for the window between when Handle(..) is called and when its goroutine 
>>> starts, you might always miss a task that was submitted around the time 
>>> Close() was called.
>>>
>>>  
>>>
>>> - Augusto
>>>
>>>
>>> On Tuesday, September 13, 2016 at 12:50:50 PM UTC-7, Evan Digby wrote:
>>>
>>> Hi Aroman,
>>>
>>>  
>>>
>>> Your approach using the WaitGroup is definitely better in this toy 
>>> example. The reason I didn't use a WaitGroup is that the non-toy 
>>> example wraps the HTTP server handler. I have no way to inject an 
>>> "Add" before the goroutine is created, since that's handled by Go's HTTP 
>>> server, short of re-implementing the accept->handle loop on top of the listener. 
>>>
>>>  
>>>
>>> Apologies for not giving the full context in the example.  
>>>
>>>  
>>>
>>> I'm not sure how it could block an outstanding task, since the closed 
>>> channel is closed before the Lock(); no additional calls to RLock will 
>>> be made at that point, and the Lock will just wait until all of the RLocks 
>>> are released.
>>>
>>>  
>>>
>>> Regarding your testing strategy, I do like it better than any of my 
>>> current strategies; however, there is still a chance that a task could 
>>> complete between lines 90 and 91:
>>>
>>>  
>>>
>>>             h.Close()
>>>
>>>             events <- ALL_TASKS_FINISHED
>>>
>>>  
>>>
>>> So this doesn't solve the racy-ness I'm concerned about unless you put 
>>> an arbitrary sleep in the handlers, which I'm trying to avoid. 
>>>
>>>
>>> On Tuesday, 13 September 2016 12:34:17 UTC-7, aro...@gmail.com wrote:
>>>
>>> The WaitGroup is better than the lock approach, since the lock approach 
>>> could block an outstanding task.  The key to using WaitGroups is to call 
>>> Add() outside of the goroutines that will call Done():
>>>
>>>  
>>>
>>> https://play.golang.org/p/QVWoy8fCmI
>>>
>>> On Tuesday, September 13, 2016 at 12:19:16 PM UTC-7, Evan Digby wrote:
>>>
>>> Hi John,
>>>
>>>  
>>>
>>> Thanks for the reply. I've tried many incarnations that include 
>>> WaitGroups; however, none seem to achieve the desired result. 
>>>
>>>  
>>>
>>> If I add a WaitGroup with a deferred Done() in the handler, and then wait 
>>> after the Close(), then the test itself implements the requirement and won't 
>>> protect against future refactors. There's no way to test that a WaitGroup is 
>>> done without waiting for it, and even if there were, it would be racy because 
>>> tasks could complete between the Close() and the WaitGroup Wait call. If I 
>>> wrapped the Wait and the Done in goroutines to see which one happened 
>>> first, that's also racy. 
>>>
>>>  
>>>
>>> If you have something else in mind can you elaborate on how it would 
>>> help in this case?
>>>
>>>  
>>>
>>> Thanks again!
>>>
>>>  
>>>
>>> Evan
>>>
>>>
>>> On Tuesday, 13 September 2016 12:01:29 UTC-7, John Souvestre wrote:
>>>
>>> Have you considered using a sync.WaitGroup?
>>>
>>>  
>>>
>>> John
>>>
>>>     John Souvestre - New Orleans LA
>>>
>>>  
>>>
>>> *From:* golan...@googlegroups.com [mailto:golan...@googlegroups.com] *On 
>>> Behalf Of *Evan Digby
>>> *Sent:* 2016 September 13, Tue 13:56
>>> *To:* golang-nuts
>>> *Subject:* [go-nuts] Having difficulty testing this "cleanly"
>>>
>>>  
>>>
>>> Has anyone come across a good, non-racy way to ensure that N tasks 
>>> are guaranteed to be completed after a function is called? Essentially I 
>>> have a “Close” function that must be guaranteed to block until all tasks 
>>> are finished. Achieving this was pretty simple: wrap each task in an RLock, 
>>> and then a Lock on close. 
>>>
>>>  
>>>
>>> Example: https://play.golang.org/p/7lhBPUhkUE
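>>>
>>> (Roughly, the pattern being described here, as reconstructed from the 
>>> thread rather than copied from the linked playground code:)
>>>
>>>     package handler
>>>
>>>     import "sync"
>>>
>>>     type Handler struct {
>>>         mu     sync.RWMutex
>>>         closed chan struct{}
>>>     }
>>>
>>>     func NewHandler() *Handler {
>>>         return &Handler{closed: make(chan struct{})}
>>>     }
>>>
>>>     func (h *Handler) Handle(task func()) {
>>>         select {
>>>         case <-h.closed:
>>>             return // already closing: reject immediately
>>>         default:
>>>         }
>>>         go func() {
>>>             h.mu.RLock() // each running task holds a read lock
>>>             defer h.mu.RUnlock()
>>>             select {
>>>             case <-h.closed: // re-check after taking the lock
>>>                 return
>>>             default:
>>>             }
>>>             task()
>>>         }()
>>>     }
>>>
>>>     // Close must only be called once in this sketch.
>>>     func (h *Handler) Close() {
>>>         close(h.closed) // stop accepting new work
>>>         h.mu.Lock()     // blocks until every in-flight RLock is released
>>>         h.mu.Unlock()
>>>     }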
>>>
>>>  
>>>
>>> Now I want to write a solid test, for posterity, to guarantee that Close 
>>> meets the requirement that all tasks must finish first. In that example, 
>>> try commenting out the RLock/RUnlock on lines 25/26. You'll see that it no 
>>> longer outputs many, if any, lines. I'm trying to prevent that from 
>>> happening in the future via some cowboy refactor!
>>>
>>>  
>>>
>>> All of the ways I can come up with involve Sleeping or launching more 
>>> tasks than I _think_ can be finished in time--obviously not good!
>>>
>>>  
>>>
>>> I feel like I must be missing some obvious way to test this and I'll end 
>>> up feeling silly once someone replies with the solution. I'm okay with that!
>>>
>>
