Hello Nicolae, 

Oneway messages are delivered over the same TCP transport as twoway messages
and are therefore delivered in order. The difference is that no acknowledgement
(or return value) is sent back after the message has been processed on the
other side.
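
To make the difference concrete, here is a rough Java sketch; DataSink stands
in for an Etch-generated remote proxy and the method names are made up for
illustration, not taken from any real service:

// Rough sketch only: DataSink stands in for an Etch-generated remote proxy;
// the method names are made up for illustration.
interface DataSink {
    // twoway in the IDL: the caller blocks until the reply (or exception)
    // comes back, so every call costs at least one network round trip
    int writeRows(byte[] batch);

    // declared @Oneway in the IDL: no reply message is sent, but it still
    // travels over the same TCP connection and therefore arrives in order
    void pushChunk(byte[] chunk);
}

class CallStyles {
    static void demo(DataSink sink, byte[] batch, byte[] chunk) {
        int rows = sink.writeRows(batch); // waits for the answer
        sink.pushChunk(chunk);            // returns as soon as it is queued
        System.out.println("rows written: " + rows);
    }
}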

This should work well in your scenario. There is a significant performance
gain when issuing oneway calls in quick succession compared to twoway
messages; I have seen improvements of up to a factor of 10. I am pretty sure
you will get close to the maximum available bandwidth when using oneways for
your scenario.
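
Something like the following rough Java sketch is what I have in mind for your
transfer. All the types and method names here are made up for illustration:
pushChunk would be declared @Oneway in the IDL and endTransfer as a normal
twoway message.

import java.io.IOException;
import java.io.InputStream;

// Made-up proxy interface: pushChunk would be @Oneway in the IDL,
// endTransfer a normal twoway message.
interface ChunkSink {
    void beginTransfer(String transferId);
    void pushChunk(byte[] data, int length); // @Oneway: no reply, stays in order
    void endTransfer(String transferId);     // twoway: blocks until answered
}

class BulkSender {
    // With a >100 ms round trip, twoway calls top out below 10 per second,
    // so e.g. 32 KB chunks would give well under 1 MB/s no matter how fat
    // the pipe is. Oneway calls keep the TCP connection filled instead.
    static final int CHUNK_SIZE = 32 * 1024;

    static void send(ChunkSink sink, InputStream in, String transferId)
            throws IOException {
        sink.beginTransfer(transferId);
        byte[] buf = new byte[CHUNK_SIZE];
        int n;
        while ((n = in.read(buf)) > 0) {
            // copy the buffer in case the stub hands the array off asynchronously
            sink.pushChunk(java.util.Arrays.copyOf(buf, n), n);
        }
        // One twoway call at the end acts as a barrier: assuming the receiver
        // handles the messages of a connection in order, its reply tells us
        // that all chunks sent before it have been processed.
        sink.endTransfer(transferId);
    }
}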

Please tell us about your experiences!

Regards, 
Holger 

> -----Original Message-----
> From: Nicolae Mihalache [mailto:[email protected]]
> Sent: Tuesday, 14 December 2010 15:03
> To: [email protected]
> Subject: Re: Big data transfer using etch
> 
> Hello and thanks for the answer.
> 
> My problem is not chunking the data; that I can easily do. The
> problem is how to send data efficiently at the maximum available
> throughput. If I use normal methods (called actions, as I learned in
> the meantime), I will be severely limited by latency. For example, the
> Europe-US round trip is >100 ms, which limits the rate to fewer than 10
> messages/sec.
> 
> I had assumed that oneway methods (called events, as I also learned
> today) come with no guaranteed order of delivery; that's what I
> expected based on my previous CORBA experience (in CORBA one has to set
> up a special single-threaded POA to guarantee ordered delivery, and
> even that has problems). Reading through the etch mailing lists, I
> found out that oneway methods are actually delivered in order, and even
> better, if there is a problem with the delivery there is a notification
> mechanism. So it actually seems to work the way I want (but I haven't
> tested it yet).
> 
> It's a pity that the documentation isn't better when it comes to
> threading and related details.
> Reading the mailing lists helps a lot, so I started reading them all.
> I have now reached the September thread "Future of etch"...
> 
> I'll come back with more questions after I do some tests.
> 
> nicolae
> 
> On Tue, Dec 14, 2010 at 2:21 PM, scott comer <[email protected]> wrote:
> > hi Nicolae!
> >
> > if the data is easily chunked by you, such as a very large byte
> > array, then the standard methods work just fine. for example, to
> > transfer an image, sound, video, etc. you could easily implement an
> > OutputStream to buffer up data and transmit it in chunks via etch,
> > then reassemble it on the other side.
> >
> > the problem comes when you have objects with complicated structure,
> > such as rows from a db table. can you write 100 rows? 1000? it
> > depends upon the data in each row.
> >
> > the big message problem has no easy solution, but one that works
> > might be this:
> >
> > create a virtual stream of data by using the etch binary encoding to
> > code your large data structure, then chop the stream up and transmit
> > the chunks, and reassemble on the other side. you have to do the work
> > yourself, but it isn't hard work and could serve as a basis for a
> > real solution to the big message problem.
> >
> > can you say some more about your application?
> >
> > scott out
> >
> > On 12/14/2010 3:57 AM, Nicolae Mihalache wrote:
> >>
> >> Hello,
> >>
> >> I'm considering the possibilities to replace CORBA in some
> >> application and I found the etch project.
> >> It looks nice and seems to satisfy all my needs except the Big
> >> Message Problem as described here:
> >> http://incubator.apache.org/etch/big-message-problem.html
> >>
> >> What would be nice is the ability to stream messages, a bit like
> >> @oneway but with guaranteed order and the possibility to receive an
> >> acknowledgement if a message has generated an exception.
> >> The functionality would be somewhat similar to standard TCP sockets:
> >> one pushes the data as fast as possible and the write blocks if
> >> the network or the reader cannot sustain the throughput.
> >> From the API point of view it would look like:
> >> start transfer
> >> while(not finished):
> >>   id=push_data(new_chunk)
> >> end transfer
> >> -->  at this point all data is guaranteed to have been delivered
> >> If an exception is caught, the transfer is interrupted and one can
> >> get the id of the message that generated the exception.
> >>
> >>
> >> Strangely enough, this functionality, useful for file or big data
> >> transfer, is missing from all the RPC frameworks I've checked so far.
> >>
> >> It can be emulated somehow with asynchronous calls, but one has to
> >> manually tune the number of back buffers depending on the network
> >> throughput and latency.
> >>
> >> Do you plan to implement such a thing in etch? Or to accept such a
> >> feature?
> >>
> >>
> >> nicolae
> >
> >
