Hi.
I'm currently creating a transport module for the TIPC protocol for 0MQ.
When doing benchmarks vs TCP, I've noticed some odd behavior for larger buffer
sizes (above 4096 B).
Graph comparing TIPC/TCP throughput (bundled perf test): http://imgur.com/fwA4gDV
Rather than posting all benchmark
I'd advise against trying to use identities to make this failover work.
Okay.
You're going to be fighting 0MQ's autoreconnect. Instead, provide
authentication in the protocol itself and design the router app to do the
failover itself.
I don't understand what you mean when you say design
So, I now realize that the bw figures are actually in _megabit_...
and shortly after sending, obviously, I found the problem.
As i posted on IRC some moments ago:
[13:33] <haze_> it's an overflow on this line
[13:33] <haze_> throughput = (unsigned long)((double) message_count / (double) elapsed *
On Mon, Feb 25, 2013 at 1:24 PM, Trevor Bernard
trevor.bern...@gmail.com wrote:
The pattern I'm trying to implement is a reliable pipeline. I want to
be able to guarantee that what I send from the front endpoint won't be
lost on its way to the back endpoint.
OK, the simplest design is to
I've sent a pull request with a patch to fix this.
https://github.com/zeromq/libzmq/pull/522
-Pieter
On Mon, Feb 25, 2013 at 1:32 PM, Erik Hugne erik.hu...@ericsson.com wrote:
So, I now realize that the bw figures are actually in _megabit_...
and shortly after sending, obviously, I found the
Meant to write:
should be at least:
throughput = (unsigned long)((double) message_count / (double) elapsed * (double)(1024*1024));
On Mon, Feb 25, 2013 at 5:39 AM, A. Mark gougol...@gmail.com wrote:
Hi,
I've done some extensive benchmarks using local_thr and remote and tcp in
the last
Would you share the basic hardware and setup of the 10GE you are testing on?
On Mon, Feb 25, 2013 at 5:41 AM, A. Mark gougol...@gmail.com wrote:
Meant to write:
should be at least:
throughput = (unsigned long)((double) message_count / (double) elapsed * (double)(1024*1024));
On Mon,
On Mon, Feb 25, 2013 at 2:39 PM, A. Mark gougol...@gmail.com wrote:
I've done some extensive benchmarks using local_thr and remote and tcp in
the last couple of months and found some odd things. This line is slightly
wrong for the throughput calculation (apart from the overflow):
...
Nice
Thanks Pieter, I'd missed the contributing page... I will submit a pull
request shortly.
Mark
On Mon, Feb 25, 2013 at 6:15 AM, Pieter Hintjens p...@imatix.com wrote:
On Mon, Feb 25, 2013 at 2:39 PM, A. Mark gougol...@gmail.com wrote:
I've done some extensive benchmarks using local_thr and
On Mon, Feb 25, 2013 at 4:23 PM, Trevor Bernard
trevor.bern...@gmail.com wrote:
All I want to do is send work upstream reliably and have the
ventilator be notified by the sink through an acknowledgement if the
batch was successful or not so I can take appropriate action.
How would you
On Mon, Feb 25, 2013 at 4:50 PM, Trevor Bernard
trevor.bern...@gmail.com wrote:
A reliable pipeline was one of the patterns I'd meant to build but
didn't get around to. Sorry about that.
I can help you flesh this out.
I'd start by taking the existing pipeline example and adding random
The identity problem is traced to file zhelpers.h.
The s_dump method does not initialize the variable on line 127 to zero:
int64_t more; // Multipart detection
Setting this to zero fixed the behavior so that the C++ and C examples for
identity work the same way.
I have a case where I want to have minimum latency, thus I would like that
the queue length is zero at all times.
If more messages arrive, I would like to drop them. The selection of the
messages dropped will be based on a priority value.
How can I guarantee that the queue will always be near