From your code snippet I assume you're using C++. I'm not familiar with the
Thrift C++ library, but I can share how we handled this in the Go library:

1. In TSocket.IsOpen and TSSLSocket.IsOpen, we added a connectivity check
on supported platforms (basically all unix) that returns false if we detect
that the other end has already closed the connection (src
<https://github.com/apache/thrift/blob/15cc0c4da218a375cadc67e541a99fdc6c5083f2/lib/go/thrift/socket_unix_conn.go#L41>
); a rough sketch of the idea follows after this list.
2. In the compiler-generated processor's Process function, we added a
background goroutine (a lightweight thread, in other languages' terms) that
checks connectivity periodically and propagates the result via a context
passed into the handler functions (src in the thrift compiler
<https://github.com/apache/thrift/blob/15cc0c4da218a375cadc67e541a99fdc6c5083f2/compiler/cpp/src/thrift/generate/t_go_generator.cc#L2992-L3041>
).
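
To illustrate item 1, here is a rough, simplified sketch of the idea (not
the actual library code linked above): peek one byte from the socket
without blocking; if the read returns 0 bytes and no error, the peer has
already sent FIN and closed the connection.

package sketch

import (
	"net"
	"syscall"
)

// peerClosed reports whether the other end of the TCP connection appears
// to have closed it already. It does a non-blocking MSG_PEEK read on the
// raw fd, so it only works on unix-like platforms.
func peerClosed(conn *net.TCPConn) bool {
	raw, err := conn.SyscallConn()
	if err != nil {
		return false
	}
	closed := false
	raw.Read(func(fd uintptr) bool {
		buf := make([]byte, 1)
		n, _, err := syscall.Recvfrom(int(fd), buf, syscall.MSG_PEEK|syscall.MSG_DONTWAIT)
		// 0 bytes read with no error means the peer closed the connection;
		// EAGAIN just means there is no data available yet, which is fine.
		if n == 0 && err == nil {
			closed = true
		}
		return true // run once, don't block waiting for readiness
	})
	return closed
}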

This way the service handler function (and everything downstream it calls)
can check the context passed in and abandon the request if it's no longer
needed.
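
On the handler side it could look roughly like this (a minimal sketch:
DoStuff, the task type, and the queue are made up for illustration; ctx is
the context the generated code cancels once it notices the client is gone):

package sketch

import "context"

type task struct {
	ctx       context.Context
	parameter string
	result    chan string
}

type Handler struct {
	queue chan *task
}

// DoStuff queues the work, then waits for either the result or the context
// being canceled (client gone / deadline hit), in which case the request is
// abandoned.
func (h *Handler) DoStuff(ctx context.Context, parameter string) (string, error) {
	t := &task{ctx: ctx, parameter: parameter, result: make(chan string, 1)}
	h.queue <- t
	select {
	case r := <-t.result:
		return r, nil
	case <-ctx.Done():
		// The worker consuming the queue can also check t.ctx before
		// starting the task and skip it if it was already canceled.
		return "", ctx.Err()
	}
}

The generated Process function is what actually cancels that context; the
handler and the queue worker just have to cooperate and check it.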

On Mon, Apr 4, 2022 at 12:54 AM Julien Greard <jgre...@e-vitech.com> wrote:

> Hello,
>
> I have the following architecture in my code:
>
>   * 1 thrift Server
>   * 1 thrift MultiplexedProcessor
>   * several Services & associated processors/handlers
>   * N thrift clients (several by processor/handler)
>   * 1 queue which contains the tasks asked by the clients
>
> What I *really* want is to be able to cancel a task when a client crashes.
>
> Let me explain:
>
> Let's say I have the following service handler method :
>
> // inside the thrift handler
> // this method is currently instantiated once and called by every client
> void do_stuff(const std::string& parameter) {
>     auto task = make_task(parameter);
>     auto future_result = task.get_future();
>     add_to_queue(task);
>     // wait here until the task is over
>     auto status = future_result.wait_for(timeout);
>     if (status != std::future_status::timeout) {
>         return future_result.get();
>     }
> }
>
> If the client crashes/disconnects, there is no need to process the task
> anymore, so I would like to cancel it (remove it from my queue) asap.
>
> To do so, I had the following ideas:
>
> 1/ Using the *setServerEventHandler* method of my Thrift server to be
> notified when a client disconnects (deleteContext method). It works very
> well, but I am not able to know which request was made by which specific
> client: within my handler I do not have access to the client info, and
> I can't use (or didn't manage to use) the void* context created by the
> createContext method of my ServerEventHandler.
>
> 2/ Using the constructor & destructor of my handler object to link 1
> client to 1 handler. Then when the client disconnects, I only have to
> cancel every task it asked for in the handler destructor. Currently I
> have N clients and 1 handler, so it doesn't work. I figured I could use
> a TMultiplexedProcessorFactory, but there is no such class in Thrift.
> TMultiplexedProcessor only lets me register a Processor (with a single
> handler), whereas I'd like to register a Factory which would create one
> handler instance per client. I could implement my own
> TMultiplexedProcessorFactory, but I think there might be a good reason
> this doesn't exist yet.
>
> Any idea of what I should do ?
>
> I could post this at https://issues.apache.org/ but it's not exactly an
> issue with Thrift (great lib btw), it's more of a question.
>
> Thanks in advance, don't hesitate to ask for more details
>
>
> This post is a duplicate of a StackOverflow question I asked last Friday:
> https://stackoverflow.com/questions/71707726/using-a-processor-factory-with-tmultiplexedprocessor-in-thrift
>
>
> Thanks in advance,
>
>
> Julien Greard
>
> Dev at Evitech - France
>
