ok, two things will help you:
one way methods, using tcp or tls transports, are guaranteed delivery, and delivered in order (unless you use the @AsyncReceiver annotation). while you don't know in a transactional sense that each one executed without failure, you do know that it was at least received if a follow-on two way method successfully executes.
any one way message failure does generate a response, but you cannot easily correlate it with the original send. the responses for failed one way messages are delivered to the _sessionNotify method.
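for example (a rough sketch only; the service and the names ImplTransferClient, BaseTransferClient, sendChunk, and flush are hypothetical stand-ins for whatever your IDL generates):

// catch notifications about failed one way messages by overriding
// _sessionNotify in your generated impl class
public class ImplTransferClient extends BaseTransferClient
{
    @Override
    public void _sessionNotify( Object event ) throws Exception
    {
        // failed one way sends show up here; you can't easily match the event
        // back to a particular send, but you know something went wrong
        System.err.println( "session event: " + event );
    }
}

// caller side: a burst of one way messages followed by a two way call
for (byte[] chunk : chunks)
    server.sendChunk( chunk );   // one way: ordered, not individually acked
server.flush();                  // two way: when this returns, everything sent
                                 // before it was at least received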
you can also use asynchronous send with regular (action) messages, and thus double-, triple-, whatever-buffer your messages. this has been very effective in tests i've done.
check the generated code for your project. here's something from our RemoteClusterSvcServer:
public final PlayerSessionInfo play(String gameId, Object userId, String publicAddr, Boolean isPlay)
{
    return _async._end_play( _async._begin_play(gameId, userId, publicAddr, isPlay) );
}
you can call _async._begin_play to send your request, saving the returned mailbox, and then later call _async._end_play to complete the action and receive any response (or just the confirmation that no exception was thrown). you can even register with the returned mailbox to receive notification via callback when the action is done.
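for instance, a sketch of double buffering with those async stubs (the Request type, the loop, and the queue depth of 8 are illustrative; Mailbox is whatever type _begin_play returns in your generated code, imported as in that code):

// keep several play() requests in flight instead of waiting one at a time
// (imports: java.util.ArrayDeque, java.util.Deque)
Deque<Mailbox> pending = new ArrayDeque<Mailbox>();
final int maxPending = 8;   // tune this to your latency and throughput

for (Request r : requests)
{
    pending.addLast( server._async._begin_play( r.gameId, r.userId, r.publicAddr, true ) );

    // once the pipeline is full, complete the oldest request before sending more
    if (pending.size() >= maxPending)
        server._async._end_play( pending.removeFirst() );
}

// drain whatever is still outstanding
while (!pending.isEmpty())
    server._async._end_play( pending.removeFirst() );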
let me know if you need more info or if this is enough to get you going...
scott out
On 12/14/2010 8:03 AM, Nicolae Mihalache wrote:
Hello and thanks for the answer.
My problem is not chunking the data, which I can easily do. The problem is how to efficiently send data at the maximum available throughput. If I use normal methods (called actions, as I learned in the meantime), I will be very limited by latency. For example, the Europe-US round trip is >100 ms, which limits the speed to 10 messages/sec.
If I use oneway methods (called events, as I also learned today), there is no guaranteed order of delivery. At least, that's what I thought based on my previous CORBA experience (in CORBA one has to set up a special single-threaded POA to guarantee ordered delivery, and even that has problems). Reading more through the etch mailing lists, I found out that oneway methods are actually delivered in order. Even better, if there is a problem with the delivery there is a notification mechanism. So it actually seems to work the way I want (but I haven't tested it yet).
It's a pity that the documentation isn't better when it comes to threads and stuff.
Reading the mailing lists helps a lot, so I started reading them all. I have now reached the thread from September, "Future of etch"...
I'll come back with more questions after I do some tests.
nicolae
On Tue, Dec 14, 2010 at 2:21 PM, scott comer <[email protected]> wrote:
hi Nicolae!
if the data is easily chunked by you, such as a very large byte array, then the standard methods work just fine. for example, to transfer an image, sound, video, etc., you could easily implement an OutputStream to buffer up data and transmit it in chunks via etch, reassemble it on the other side, etc.
the problem comes when you have objects with complicated structure, such as rows from a db table. can you write 100 rows? 1000? depends upon the data in each row.
the big message problem has no easy solution, but one that works might be this: create a virtual stream of data by using the etch binary encoding to encode your large data structure, then chop the stream up, transmit the chunks, and reassemble on the other side. you have to do the work yourself, but it isn't hard work and could serve as a basis for a real solution for the big message problem.
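a very rough sketch of the OutputStream idea (TransferServer, sendChunk, and transferDone are hypothetical names for methods you would declare in your own IDL; sendChunk as a one way event, transferDone as a regular two way method):

import java.io.IOException;
import java.io.OutputStream;

// hypothetical remote interface as it might be generated from your IDL
interface TransferServer
{
    void sendChunk( byte[] data );   // declared one way in the IDL
    void transferDone();             // regular two way method
}

public class EtchChunkedOutputStream extends OutputStream
{
    private final TransferServer server;
    private final byte[] buf = new byte[64 * 1024];
    private int len;

    public EtchChunkedOutputStream( TransferServer server )
    {
        this.server = server;
    }

    @Override
    public void write( int b ) throws IOException
    {
        buf[len++] = (byte) b;
        if (len == buf.length)
            flush();
    }

    @Override
    public void flush() throws IOException
    {
        if (len > 0)
        {
            byte[] chunk = new byte[len];
            System.arraycopy( buf, 0, chunk, 0, len );
            server.sendChunk( chunk );   // one way: ordered, pipelined by etch
            len = 0;
        }
    }

    @Override
    public void close() throws IOException
    {
        flush();
        server.transferDone();           // two way: returning confirms all chunks arrived
    }
}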
can you say some more about your application?
scott out
On 12/14/2010 3:57 AM, Nicolae Mihalache wrote:
Hello,
I'm considering possibilities for replacing CORBA in an application and I found the etch project. It looks nice and seems to satisfy all my needs except for the Big Message Problem, as described here:
http://incubator.apache.org/etch/big-message-problem.html
What would be nice is the ability to stream messages, a bit like @oneway but with guaranteed order and the possibility to receive an acknowledgement if a message has generated an exception. The functionality would be somewhat similar to standard TCP sockets: one pushes the data as fast as possible, and the write blocks if the network or the reader cannot sustain the throughput.
From the API point of view it would look something like:

start transfer
while (not finished):
    id = push_data(new_chunk)
end transfer
--> at this point all data is guaranteed to have been delivered
If an exception is caught, the transfer is interrupted and one can get the id of the message that generated the exception.
Strangely enough, this functionality, useful for file or big-data transfer, is missing from all the RPC frameworks I've checked so far. It can be emulated somehow with asynchronous calls, but one has to manually tune the number of back buffers depending on the network throughput and latency.
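(As a back-of-the-envelope sizing with purely illustrative numbers: the number of requests to keep in flight is roughly the target message rate times the round-trip time.)

double rttSeconds = 0.100;       // e.g. the Europe-US round trip mentioned above
int targetMsgsPerSec = 1000;     // desired throughput
int inFlight = (int) Math.ceil( targetMsgsPerSec * rttSeconds );   // about 100 back buffers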
Do you plan to implement such a thing in etch? Or would you accept such a feature?
nicolae