Out of curiosity, can anyone show an example of how add_callback can be 
used to interrupt the server-side process? I have the same problem as the 
OP in my application -- the server side can run for a very long time, and 
if the client times out, I need the server to cancel immediately. I've 
tried a variety of techniques, but I cannot get the callback function to 
stop the server-side call.
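
For reference, here's roughly what I've been trying (a simplified sketch; 
the method name and do_work are stand-ins for our real code):

    def LongCall(self, request, context):
        # add_callback fires when the RPC terminates for any reason,
        # including the client disconnecting or timing out.
        context.add_callback(lambda: print("client went away"))
        # The callback does run, but nothing in it actually interrupts
        # do_work once it has started running.
        return do_work(request)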

On Tuesday, December 18, 2018 at 12:51:23 PM UTC-8, [email protected] wrote:
>
> Ah; thanks--we're having to use subprocess.Popen in a few cases anyway.  
> I'll try that and see what we can do.  Thanks for the note on "grpc within 
> grpc"; that may simplify some things too.
>
> On Tuesday, December 18, 2018 at 1:07:00 PM UTC-6, Eric Gribkoff wrote:
>>
>>
>>
>> On Tue, Dec 18, 2018 at 10:45 AM <[email protected]> wrote:
>>
>>> Thanks, Eric.  That makes some degree of sense, although there are a few 
>>> cases we still won't be able to deal with, I suspect (and we may have 
>>> trouble later anyway... in some cases our server program has to shell out 
>>> to run a separate program, and if that runs into the fork trouble and can't 
>>> be supported by gRPC we may be stuck with a very clunky REST 
>>> implementation).
>>>
>>>
>> Sorry, I should have been more precise in my earlier response: you are 
>> fine to use fork+exec (e.g., subprocess.Popen) to run a separate program in 
>> a new shell. (Caveat: we had a bug 
>> <https://github.com/grpc/grpc/issues/17093> that may cause problems even 
>> with fork+exec when using Python3. The fix is now merged and will be in the 
>> next release; our nightly builds will also include the fix ~tomorrow if you 
>> are hitting this issue). The issues on the server-side with fork arise when 
>> using libraries that fork and, rather than exec'ing a new program, continue 
>> to run the original program in the child process, e.g., Python's 
>> multiprocessing module.
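>>
>> For instance, something along these lines should be fine (a sketch; the 
>> command line here is made up):
>>
>>     import subprocess
>>
>>     # fork+exec: the child immediately execs a new program, so the
>>     # parent's gRPC threads and locks never run in the child.
>>     proc = subprocess.Popen(["/usr/bin/some-tool", "--flag", "value"])
>>     proc.wait()
>>     # proc.terminate() / proc.kill() are available if you need to stop
>>     # the tool early (e.g., when the client goes away).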
>>
>>  
>>
>>> Hmm, quite a pickle.  I can see I'll be playing with a bunch of toy 
>>> problems for a bit before even considering doing a migration to gRPC.  
>>> Most disagreeable, but we'll see what we get.
>>>
>>> Can grpc client stubs be used from within grpc servicers?  (imagining 
>>> fracturing this whole thing into microservices even if that doesn't solve 
>>> this particular problem).
>>>
>>
>> Absolutely, and that's an intended/common usage.
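>>
>> For example (a minimal sketch; the generated modules, names, and address 
>> are placeholders):
>>
>>     import grpc
>>
>>     class MyServicer(my_pb2_grpc.MyServiceServicer):
>>         def __init__(self):
>>             # One channel, created up front and reused across RPCs.
>>             self._channel = grpc.insecure_channel("other-host:50051")
>>             self._stub = other_pb2_grpc.OtherServiceStub(self._channel)
>>
>>         def DoThing(self, request, context):
>>             # The handler acts as a gRPC client to another service.
>>             helper_reply = self._stub.Helper(other_pb2.HelperRequest())
>>             return my_pb2.DoThingReply(result=helper_reply.value)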
>>
>> Thanks,
>>
>> Eric
>>  
>>
>>>
>>> On Tuesday, December 18, 2018 at 12:32:15 PM UTC-6, Eric Gribkoff wrote:
>>>>
>>>>
>>>>
>>>> On Tue, Dec 18, 2018 at 10:17 AM <[email protected]> wrote:
>>>>
>>>>> Hmm; I'm having some luck looking at the context, which quite happily 
>>>>> changes from is_active() to not is_active() the instant I kill the 
>>>>> waiting client.  So I thought I'd proceed with something like
>>>>>
>>>>> while not my_future.done():
>>>>>   if not context.is_active():
>>>>>     my_future.cancel()
>>>>>     break
>>>>>   time.sleep(0.1)  # avoid a busy-wait
>>>>>
>>>>>
>>>> Consider using add_callback 
>>>> <https://grpc.io/grpc/python/grpc.html#grpc.RpcContext.add_callback> on 
>>>> the RpcContext instead, so you don't have to poll.
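>>>> For instance, context.add_callback(my_future.cancel) would replace the 
>>>> polling loop entirely (with the caveat you note below that cancel() only 
>>>> helps if the task hasn't started running yet).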
>>>>  
>>>>
>>>>> Terminating the worker thread/process is actually vexing me though!  I 
>>>>> tried having a ThreadPoolExecutor give me a future for the worker task, 
>>>>> but it turns out you can't really cancel a future from a thread (you can 
>>>>> only cancel it if it hasn't started running; once it's started, it still 
>>>>> runs to completion).  So I've tried having a separate ProcessPoolExecutor 
>>>>> (maybe processes can be killed?), but that's not going so well either, as 
>>>>> attempts to use it to generate futures result in some odd "Failed 
>>>>> accept4: Invalid Argument" errors which I can't quite work through.
>>>>>
>>>>>
>>>> ProcessPoolExecutor will fork subprocesses, and gRPC servers (and many 
>>>> other multi-threaded libraries) are not compatible with this. There is 
>>>> some discussion around this in https://github.com/grpc/grpc/issues/16001. 
>>>> You could pre-fork (fork before creating the gRPC server), but I don't 
>>>> think this will help with your goal of cancelling long-running jobs. It's 
>>>> difficult to cleanly kill subprocesses, as they may be in the middle of an 
>>>> operation that you would really like to clean up gracefully.
>>>>  
>>>>
>>>>> Most confusing.  I wonder if I'll need to subclass grpc.server or if 
>>>>> my servicer can manually run a secondary process or some such.  
>>>>>
>>>>> Still, it's surprising to me that this isn't a solved problem built into 
>>>>> gRPC.  I feel like I'm missing something really obvious.
>>>>>
>>>>>
>>>> I wouldn't consider cancelling long-running jobs spawned by your server 
>>>> part of the functionality that gRPC is intended to provide - this is a 
>>>> task that can come up regardless of what server protocol you are using, 
>>>> and will often arise even in non-server applications. A standard approach 
>>>> in a multi-threaded environment would be setting a cancel boolean 
>>>> (e.g., in your gRPC servicer implementation) that your task (the 
>>>> long-running subroutine) periodically checks in order to exit early. This 
>>>> should be compatible with ThreadPoolExecutor.
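>>>>
>>>> A sketch of that pattern, using a threading.Event as the thread-safe 
>>>> form of that boolean (all names here are placeholders; the loop in 
>>>> long_task stands in for your real subroutine):
>>>>
>>>>     import threading
>>>>
>>>>     def long_task(request, cancelled):
>>>>         for item in work_items(request):  # placeholder for the real work
>>>>             if cancelled.is_set():
>>>>                 return None  # exit early; clean up here if needed
>>>>             process(item)
>>>>
>>>>     def LongCall(self, request, context):
>>>>         cancelled = threading.Event()
>>>>         # Flip the flag as soon as the RPC terminates (e.g., the
>>>>         # client disconnects or cancels).
>>>>         context.add_callback(cancelled.set)
>>>>         return make_response(long_task(request, cancelled))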
>>>>
>>>> Thanks,
>>>>
>>>> Eric
>>>>  
>>>>
>>>>> On Monday, December 17, 2018 at 1:35:41 PM UTC-6, robert engels wrote:
>>>>>>
>>>>>> You don’t have to - just use the future as described. If the stream is 
>>>>>> cancelled by the client, you can cancel the future; if the future 
>>>>>> completes, you send the result back in the stream (if any). You don’t 
>>>>>> have to keep sending messages as long as keep-alive is on.
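>>>>>>
>>>>>> Roughly like this (a sketch only; executor and do_work are 
>>>>>> placeholders):
>>>>>>
>>>>>>     from concurrent import futures
>>>>>>
>>>>>>     def LongStreamingCall(self, request, context):
>>>>>>         future = executor.submit(do_work, request)
>>>>>>         while True:
>>>>>>             if not context.is_active():
>>>>>>                 future.cancel()  # client cancelled or disconnected
>>>>>>                 return
>>>>>>             try:
>>>>>>                 # Wake periodically so we can re-check the context.
>>>>>>                 result = future.result(timeout=1.0)
>>>>>>                 yield result  # send the final result on the stream
>>>>>>                 return
>>>>>>             except futures.TimeoutError:
>>>>>>                 continue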
>>>>>>
>>>>>> On Dec 17, 2018, at 1:32 PM, [email protected] wrote:
>>>>>>
>>>>>> Good idea, but the problem I have with this (if I understand you 
>>>>>> right) is that some of the server tasks are just these big monolithic 
>>>>>> calls that sit there doing CPU-intensive work (sometimes in a 
>>>>>> third-party library; it's not trivial to change them to stream back 
>>>>>> progress reports or anything).  
>>>>>>
>>>>>> So it feels like some way of running them in a separate thread, with an 
>>>>>> overseer method able to kill them if the client disconnects, is the way 
>>>>>> to go.  We're already using a ThreadPoolExecutor to run worker threads, 
>>>>>> so I feel like there's something that can be done on that side... it 
>>>>>> just seems like this ought to be a Really Common Problem, so I'm 
>>>>>> surprised it isn't either directly addressed or at least commonly 
>>>>>> answered.
>>>>>>
>>>>>> On Monday, December 17, 2018 at 1:27:39 PM UTC-6, robert engels wrote:
>>>>>>>
>>>>>>> You can do this if you use the streaming protocol - as far as I know, 
>>>>>>> that is the only way that provides any facilities for determining when 
>>>>>>> a “client disconnects”.
>>>>>>>
>>>>>>> On Dec 17, 2018, at 1:24 PM, [email protected] wrote:
>>>>>>>
>>>>>>> I'm sure it's been answered before but I've searched for quite a 
>>>>>>> while and not found anything, so apologies:
>>>>>>>
>>>>>>> We're using Python... we've got server tasks that can last quite a 
>>>>>>> while (minutes) and chew up lots of CPU.  Right now we're using REST, 
>>>>>>> and when/if the client disconnects before return, the task keeps 
>>>>>>> running on the server side.  This is unfortunate; it's costly (since 
>>>>>>> the server may be using for-pay remote services, leaving the task 
>>>>>>> running could cost the client) and vulnerable (a malicious client 
>>>>>>> could just start and immediately disconnect hundreds of tasks and lock 
>>>>>>> the server up for quite a while).
>>>>>>>
>>>>>>> I was hoping that a move to gRPC, in addition to solving other 
>>>>>>> problems, would provide a clean way to deal with this.  But it's not 
>>>>>>> immediately obvious how to do so.  I could see maybe manually starting 
>>>>>>> a thread/Future for the worker process and iterating/sleeping until 
>>>>>>> either the context is invalid or the thread/future returns, but I feel 
>>>>>>> like that's manually hacking something that probably exists and I'm 
>>>>>>> not understanding.  Maybe some sort of server interceptor?
>>>>>>>
>>>>>>> How would it be best to handle this?  I'd like to handle both very 
>>>>>>> long unary calls and streaming calls in the same manner.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Vic