Hi Mateusz,

I suppose you could make this work.  Though I can imagine scenarios where
things get difficult, like if a method decides to hold onto a request buffer
for a really long time.  I would be curious to know what sort of performance
improvement we could expect.  Might be worth doing a back-of-the-envelope
calculation to see if it's worth it.
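
For what it's worth, here is the kind of back-of-the-envelope I had in
mind, sketched in Python.  The per-syscall cost and the request rate are
assumptions picked for illustration, not measurements:

```python
# Back-of-the-envelope for the syscall savings.  Both constants are
# assumptions, not measurements.
SYSCALL_COST_US = 1.5        # assumed round-trip cost of one syscall, microseconds
REQUESTS_PER_SEC = 100_000   # assumed heavy small-message load

reads_now = 2 * REQUESTS_PER_SEC       # today: header read + body read per request
reads_combined = 1 * REQUESTS_PER_SEC  # proposed: body + next header in one read

saved_us = (reads_now - reads_combined) * SYSCALL_COST_US
cpu_fraction_saved = saved_us / 1_000_000  # fraction of one core-second saved

print(f"read() calls saved per second: {reads_now - reads_combined}")
print(f"CPU time saved: {saved_us / 1000:.0f} ms/s (~{cpu_fraction_saved:.0%} of one core)")
```

Under those assumed numbers the combined read frees up about 15% of one
core at 100k requests/sec, which would be enough to justify measuring it
for real.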

- Doug
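
P.S. To make the extra-header read from the quoted discussion below a bit
more concrete, here is a toy Python simulation.  The in-memory Stream
stands in for a socket, and the 8-byte header is just a stand-in for the
real 36-byte one; this is a sketch, not the actual AsyncComm code:

```python
import struct

HEADER_SIZE = 8  # toy header: 4-byte magic + 4-byte payload length

class Stream:
    """In-memory stand-in for a socket; counts read() calls."""
    def __init__(self, data):
        self.data, self.pos, self.reads = data, 0, 0
    def read(self, n):
        self.reads += 1
        chunk = self.data[self.pos:self.pos + n]
        self.pos += len(chunk)
        return chunk

def make_messages(payloads):
    """Lay out messages as [header][body][header][body]..."""
    return b"".join(struct.pack("!4sI", b"HYPT", len(p)) + p for p in payloads)

def parse_naive(stream, count):
    """Two read() calls per message: one for the header, one for the body."""
    out = []
    for _ in range(count):
        magic, length = struct.unpack("!4sI", stream.read(HEADER_SIZE))
        out.append(stream.read(length))
    return out

def parse_combined(stream, count):
    """Read the body plus the *next* header in a single read() call."""
    out = []
    header = stream.read(HEADER_SIZE)  # only the first header needs its own read
    for _ in range(count):
        magic, length = struct.unpack("!4sI", header)
        chunk = stream.read(length + HEADER_SIZE)
        out.append(chunk[:length])
        header = chunk[length:]        # pre-read header for the next message
    return out

payloads = [b"a" * 10, b"b" * 20, b"c" * 30, b"d" * 40]
s1, s2 = Stream(make_messages(payloads)), Stream(make_messages(payloads))
assert parse_naive(s1, 4) == parse_combined(s2, 4) == payloads
print(s1.reads, s2.reads)  # prints: 8 5
```

The naive parser issues two read() calls per message; the combined parser
issues one per message plus a single initial header read, which is where
the roughly 2x reduction under heavy load comes from.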

On Wed, Apr 28, 2010 at 2:34 PM, Mateusz Berezecki <[email protected]> wrote:

> Hi Doug,
>
> I really like the idea of reading the extra header!
>
> I agree that an extra copy is very expensive and I too expect that the
> associated overhead would cancel the gains of reduced number of system
> calls.
>
> My previous email lacked an explanation of how to deliver buffers to the
> application without copying. One idea I have been thinking about is to
> have a set of fixed-size buffers and load messages into them. Once packets
> are laid out in a buffer, the application would be handed pointers to the
> packets within it. The buffers would be recycled as soon as the
> application is done handling the requests. The downsides of this approach
> are dangling pointers, buffer management, etc., but I do not want to
> consider them for now in this e-mail. The upsides are fewer memory
> allocations and fewer system calls.
>
> Mateusz
>
>
> On Wed, Apr 28, 2010 at 2:11 PM, Doug Judd <[email protected]> wrote:
> > Hi Mateusz,
> > Good observation.  The one problem I see with your proposal is that the
> > extra copy from the circular buffer into a buffer that can be delivered
> > up into the application would be too expensive.  I suspect it would
> > dwarf any savings you got by reducing the number of system calls.  One
> > idea I've been meaning to implement is to have the comm layer read an
> > extra 36 bytes (the header size) when reading the payload so that the
> > next header gets read along with it.  This would cut the number of read
> > requests in half under heavy load.
> > - Doug
> > On Wed, Apr 28, 2010 at 1:46 PM, Mateusz Berezecki <[email protected]>
> > wrote:
> >>
> >> Hi,
> >>
> >> I've been tinkering with AsyncComm and realized that the performance
> >> of message reading is suboptimal.
> >>
> >> The current scheme works like this:
> >>
> >> 1. the reactor calls the data dispatcher's handle_event
> >> 2. the message handler tries to read the header from the socket
> >> (sometimes using partial reads) via read()
> >> 3. once the header is read, the dispatcher tries to read the message
> >> body (possibly using partial reads) via read()
> >>
> >> There is also a memory allocation for every packet, to make room for
> >> the packet body.
> >>
> >> The performance degradation of this scheme starts becoming visible
> >> under heavy load (i.e., a high volume of small messages being processed).
> >>
> >> The degradation is mostly due to excessive system calls and the
> >> userspace/kernelspace context switches that are inseparably linked to
> >> them. It's also worth noting that Linux does not use the fast SYSCALL
> >> instructions on x86 CPUs inside the kernel, but a generic
> >> interrupt-driven mechanism, unless the kernel is explicitly configured
> >> for a CPU type of Pentium 4 or better. Configuring the kernel to use
> >> SYSCALL/SYSRET is somewhat faster, but if the number of system calls
> >> is large the gain is still negligible.
> >>
> >> Do you think it would be better to read as much data as possible into
> >> a fixed cyclic buffer, process messages directly from that buffer to
> >> avoid excessive memory allocations, and allocate event objects from a
> >> pool allocator before sending them for dispatch?
> >>
> >> What do you think?
> >>
> >> Mateusz
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> >> Groups "Hypertable Development" group.
> >> To post to this group, send email to [email protected].
> >> To unsubscribe from this group, send email to
> >> [email protected].
> >> For more options, visit this group at
> >> http://groups.google.com/group/hypertable-dev?hl=en.
> >>
> >
>
>
