Hmm; I'm having some luck watching the context, which happily flips from 
is_active() returning True to False the instant I kill the waiting client.  
So I thought I'd proceed with something like

while not my_future.done():
  if not context.is_active():
    my_future.cancel()
    break
  time.sleep(0.5)  # don't busy-wait while polling the context

Terminating the worker thread/process is actually vexing me, though!  I 
tried using a ThreadPoolExecutor to give me a future for the worker task, 
but it turns out you can't really cancel a future that's backed by a thread 
(you can only cancel it if it hasn't started running; once it's started, it 
still runs to completion).  So I've tried having a separate 
ProcessPoolExecutor (maybe processes can be killed?) but that's not going so 
well either, as attempts to use it to generate futures result in some odd 
"Failed accept4: Invalid Argument" errors which I can't quite work through.

Most confusing.  I wonder if I'll need to subclass grpc.server or if my 
servicer can manually run a secondary process or some such.  
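
If I do go the secondary-process route, I'm picturing something roughly like 
the sketch below - heavy_task, do_expensive_thing and the my_pb2 / 
my_pb2_grpc names are just placeholders, and I haven't yet checked whether a 
bare multiprocessing.Process trips over the same accept4 problem the 
ProcessPoolExecutor did:

import multiprocessing
import time

def heavy_task(request, result_queue):
  # placeholder for the long-running, CPU-bound work
  result_queue.put(do_expensive_thing(request))

class MyServicer(my_pb2_grpc.MyServiceServicer):
  def LongCall(self, request, context):
    result_queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=heavy_task,
                                     args=(request, result_queue))
    worker.start()
    while worker.is_alive():
      if not context.is_active():  # client went away
        worker.terminate()         # unlike a thread, a process can be killed
        worker.join()
        return my_pb2.MyReply()    # reply is discarded; the client is gone
      time.sleep(0.5)
    worker.join()
    return result_queue.get()      # worker finished normally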

Still, it's surprising to me that this isn't a solved problem built into 
gRPC.  I feel like I'm missing something really obvious.

On Monday, December 17, 2018 at 1:35:41 PM UTC-6, robert engels wrote:
>
> You don’t have to - just use the future as described: if the stream is 
> cancelled by the client, you can cancel the future; if the future 
> completes, you send the result back in the stream (if any).  You don’t 
> have to keep sending messages as long as the keep-alive is on.
>
> On Dec 17, 2018, at 1:32 PM, [email protected] wrote:
>
> Good idea, but the problem I have with this (if I understand you right) is 
> that some of the server tasks are just these big monolithic calls that sit 
> there doing CPU-intensive work (sometimes in a third-party library; it's 
> not trivial to change them to stream back progress reports or anything).  
>
> So it feels like some way of running them in a separate thread and having 
> an overseer method able to kill them if the client disconnects is the way 
> to go.  We're already using a ThreadPoolExecutor to run worker threads so I 
> feel like there's something that can be done on that side... just seems 
> like this ought to be a Really Common Problem, so I'm surprised it isn't 
> either directly addressed or at least commonly answered.
>
> On Monday, December 17, 2018 at 1:27:39 PM UTC-6, robert engels wrote:
>>
>> You can do this if you use the streaming protocol - that is the only way 
>> I know of to get any indication that the client has disconnected.
>>
>> On Dec 17, 2018, at 1:24 PM, [email protected] wrote:
>>
>> I'm sure it's been answered before but I've searched for quite a while 
>> and not found anything, so apologies:
>>
>> We're using Python... we've got server tasks that can last quite a while 
>> (minutes) and chew up lots of CPU.  Right now we're using REST, and when/if 
>> the client disconnects before the call returns, the task keeps running on 
>> the server side.  This is unfortunate; it's costly (since the server may be 
>> using for-pay remote services, leaving the task running could cost the 
>> client) and vulnerable (a malicious client could just start and immediately 
>> disconnect hundreds of tasks and lock the server up for quite a while).
>>
>> I was hoping that a move to gRPC, in addition to solving other problems, 
>> would provide a clean way to deal with this.  But it's not immediately 
>> obvious how to do so.  I could see maybe manually starting a thread/Future 
>> for the worker task and looping with a sleep until either the context is 
>> no longer active or the thread/future returns, but I feel like that's 
>> manually hacking around something that probably already exists and I'm not 
>> understanding.  Maybe some sort of server interceptor?
>>
>> How would it be best to handle this?  I'd like to handle both very long 
>> unary calls and streaming calls in the same manner.
>>
>> Cheers,
>> Vic
>>
>>
