[
https://issues.apache.org/jira/browse/AXIS2C-1328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Robert Lazarski resolved AXIS2C-1328.
-------------------------------------
Resolution: Implemented
The underlying performance concern (efficient handling of multiple concurrent
client connections without thread-per-socket overhead) has been addressed by
the HTTP/2 transport implementation. HTTP/2 multiplexing provides a modern,
standards-based solution that supersedes the 2009 proposal for Reactor/Proactor
patterns or I/O completion ports.
> Implementing new transport to optimize Axis2/C when used in a multi-threading
> client environment.
> -------------------------------------------------------------------------------------------------
>
> Key: AXIS2C-1328
> URL: https://issues.apache.org/jira/browse/AXIS2C-1328
> Project: Axis2-C
> Issue Type: Improvement
> Components: core/transport
> Reporter: Damitha N.M. Kumarage
> Priority: Minor
>
> I would like to highlight the following discussion from the mailing list thread [1].
> Patric:
> The asynchronous call implementation of axis is based on creating new threads
> that just wait on a response. Threads are a rather expensive resource to use
> for just waiting on an IO completion (and then performing some small task).
> It might be better that the waiting on all outstanding IO is done by one
> single thread. The work after the IO completed can then be done by either
> that thread, or a (small and static) thread pool. That way, no threads have
> to be created / deleted on the fly, no more than one thread is waiting on
> IO, and no large number of threads will exist when a lot of asynchronous calls
> exist in parallel. My axis knowledge is not enough to see solutions for this
> within axis right now.
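> As a rough illustration of this idea, a single thread can wait on every
> outstanding socket with select() and hand each readable socket to a small
> callback (which could in turn be queued on a static worker pool). This is a
> minimal sketch in plain POSIX C, not Axis2/C transport code:
>
>     #include <sys/select.h>
>
>     /* One thread waits on all outstanding sockets; when a socket becomes
>      * readable, the small post-IO task is handed to a callback, which a
>      * static worker pool could execute. */
>     static void
>     wait_loop(const int *socks, int n, void (*on_ready)(int fd))
>     {
>         for (;;)
>         {
>             fd_set readable;
>             int i, maxfd = -1;
>
>             FD_ZERO(&readable);
>             for (i = 0; i < n; i++)
>             {
>                 FD_SET(socks[i], &readable);
>                 if (socks[i] > maxfd)
>                     maxfd = socks[i];
>             }
>
>             /* A single blocking call covers every pending response. */
>             if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0)
>                 break;
>
>             for (i = 0; i < n; i++)
>                 if (FD_ISSET(socks[i], &readable))
>                     on_ready(socks[i]);  /* parse response, run user callback */
>         }
>     }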
> Carl:
> My reason for responding, though, is really to comment on this phrase: "Threads
> are a rather expensive resource to use for just waiting on an IO completion".
> It may be my lack of understanding, but I am pretty
> sure that -- at least in the win32 tcp/ip stack -- once your thread goes into
> asynchronous communication on a socket, you do not see it again until there
> is some result. This means if there is a timeout your
> thread is inactive for a long time. How can one thread wait on more than one
> asynchronous communication? I admit this would be a far better solution,
> however from my understanding of winsock2 it is not possible.
> Seen this way, one thread per socket communication is maybe expensive in
> resources, but it is the only way to ensure your main thread continues to
> operate in a timely fashion.
> Patric:
> With the fd_set in winsock and the select() function, you can wait on at
> most 64 sockets at once (in the current implementation). With I/O Completion
> Ports you can use one thread for an infinite number of ports (though a pool
> of threads might be a good idea if the number of sockets grows large). This
> is also used by the well-known Boost (C++) library. Mechanisms like these
> would be a much better implementation, but I think they don't fit well in the
> modular (transport) design of axis, since they require knowledge about
> the lower-level transport at a higher level.
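> A minimal sketch of the I/O completion port mechanism described here, in
> plain Win32 C (illustrative only, not Axis2/C code): sockets are associated
> with one completion port, and a single thread, or a small pool, blocks in
> GetQueuedCompletionStatus() waiting for completions on any of them.
>
>     #include <winsock2.h>
>     #include <windows.h>
>
>     /* One completion port, one waiting thread, any number of sockets. */
>     static DWORD WINAPI
>     iocp_wait_thread(LPVOID arg)
>     {
>         HANDLE iocp = (HANDLE)arg;
>         DWORD bytes;
>         ULONG_PTR key;      /* per-socket context given at association time */
>         OVERLAPPED *ov;
>
>         /* Every completed overlapped WSARecv()/WSASend() on any associated
>          * socket wakes this single thread exactly once. */
>         while (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
>         {
>             /* dispatch (key, ov, bytes) to a small static worker pool here */
>         }
>         return 0;
>     }
>
>     /* Associate a connected socket with the shared completion port. */
>     static void
>     iocp_watch(HANDLE iocp, SOCKET s, ULONG_PTR key)
>     {
>         CreateIoCompletionPort((HANDLE)s, iocp, key, 0);
>     }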
> Carl:
> That's very interesting. I'm curious as to more of the details of how
> this functions... If you have one thread waiting on 12 sockets and want to
> make a new call, can this thread begin the next call, or does a second thread
> open the socket and pass the job of waiting on it to the first thread?
> I think we would all agree that your use case would benefit from adding this
> capability to Axis2/C. You mention a potential conflict with the modular
> design of Axis; there is also the idea that making such a powerful feature
> accessible to the average programmer using Axis could be a challenge. Maybe
> the solution would be to add a new communication mode instead of changing all
> asynchronous communication to one-thread-multi-socket. I wish I understood
> the Axis2/C architecture
> more fully because this would be an interesting area to contribute to.
> Patric:
> But one implementation might be to add another 'configuration structure'
> (like the allocator and thread pool) for socket IO and make that responsible
> for all IO. That implementation can then decide to use one or multiple
> threads for IO. It can use call-backs to signal the completion (or failure or
> timeout) of the IO. The async calls can then be implemented as writing data
> (by the new io struct) and exiting that start-call. Finished. Nothing more to
> do. No extra thread, nothing. Then, when finished, the call-back can be used
> to parse the result and call the user call-back for the result. The io struct
> (module) should probably use a (real!) thread pool for this to prevent one
> time-consuming call from blocking other calls. But a simple implementation might
> do for the 'average' user. This pattern mimics the I/O completion port / boost
> interface, so users of axis can easily use these for their async IO.
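> A rough sketch of what such a 'configuration structure' could look like
> follows; the names are purely hypothetical and do not exist anywhere in the
> Axis2/C code base:
>
>     /* Hypothetical IO module interface, in the spirit of axutil_allocator_t
>      * and axutil_thread_pool_t.  The implementation behind it decides whether
>      * one thread or a pool services the sockets and fires the callbacks. */
>
>     typedef void (*axis2_io_on_complete_fn)(void *ctx, const char *data,
>                                             int len);
>     typedef void (*axis2_io_on_error_fn)(void *ctx, int error_code);
>
>     typedef struct axis2_io_ops
>     {
>         /* Queue the write and return immediately; completion, failure or
>          * timeout is signalled later through the callbacks. */
>         int (*write_async)(void *io_impl, int socket,
>                            const char *buf, int len,
>                            axis2_io_on_complete_fn on_complete,
>                            axis2_io_on_error_fn on_error, void *ctx);
>     } axis2_io_ops_t;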
> ------
> Looking at the above discussion, I suggest implementing a new transport for
> Axis2/C using the Boost.Asio library [2] or some other good library that would
> do the job. Boost.Asio is based on the Reactor/Proactor I/O design patterns
> [3]. However, this would require some changes to Axis2/C engine internals.
> Note: At the start of the discussion Patric says "The asynchronous call
> implementation of axis is based on creating new threads that just wait on a
> response". However, I think this is true only if the user does not make the
> axis2_options_set_use_separate_listener(options, env, AXIS2_TRUE);
> call. If it is called, a listener manager thread is started which
> listens for incoming responses. New threads are created only to process the
> intercepted responses.
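> For reference, a dual-channel non-blocking invocation is set up roughly as in
> the non-blocking dual client sample; this is a sketch from memory and exact
> signatures may differ between versions:
>
>     #include <axis2_svc_client.h>
>     #include <axis2_options.h>
>     #include <axis2_callback.h>
>
>     /* Non-blocking invoke over two channels: the listener manager's thread
>      * receives the response and the callback is run to process it. */
>     static void
>     invoke_dual_channel(const axutil_env_t *env,
>                         axis2_svc_client_t *svc_client,
>                         axis2_endpoint_ref_t *endpoint_ref,
>                         axiom_node_t *payload,
>                         axis2_callback_t *callback)
>     {
>         axis2_options_t *options = axis2_options_create(env);
>
>         axis2_options_set_to(options, env, endpoint_ref);
>         axis2_options_set_use_separate_listener(options, env, AXIS2_TRUE);
>
>         axis2_svc_client_set_options(svc_client, env, options);
>         axis2_svc_client_send_receive_non_blocking(svc_client, env, payload,
>                                                    callback);
>     }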
> [1] http://marc.info/?t=122899247800001&r=1&w=2
> [2] http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/overview.html
> [3] http://www.artima.com/articles/io_design_patterns.html