On Fri, Oct 26, 2012 at 2:36 PM, Evan Huus <eapa...@gmail.com> wrote:

> On Fri, Oct 26, 2012 at 12:14 PM, Sébastien Tandel
> <sebastien.tan...@gmail.com> wrote:
> >
> >
> > On Fri, Oct 26, 2012 at 1:58 PM, Evan Huus <eapa...@gmail.com> wrote:
> >>
> >> On Fri, Oct 26, 2012 at 11:40 AM, Graham Bloice
> >> <graham.blo...@trihedral.com> wrote:
> >> >
> >> > On 26 October 2012 14:44, Evan Huus <eapa...@gmail.com> wrote:
> >> >>
> >> >> On Fri, Oct 26, 2012 at 9:29 AM, Sébastien Tandel
> >> >> <sebastien.tan...@gmail.com> wrote:
> >> >> >
> >> >> >
> >> >> > On Wed, Oct 24, 2012 at 11:13 AM, Evan Huus <eapa...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> On Wed, Oct 24, 2012 at 8:10 AM, Sébastien Tandel
> >> >> >> <sebastien.tan...@gmail.com> wrote:
> >> >> >> >
> >> >> >> >
> >> >> >> > On Wed, Oct 24, 2012 at 1:10 AM, Guy Harris <g...@alum.mit.edu> wrote:
> >> >> >> >>
> >> >> >> >> On Oct 18, 2012, at 6:01 PM, Evan Huus <eapa...@gmail.com> wrote:
> >> >> >> >>
> >> >> >> >> > I have linked a tarball [2] containing the following files:
> >> >> >> >> > - wmem_allocator.h - the definition of the allocator interface
> >> >> >> >> > - wmem_allocator_glib.* - a simple implementation of the allocator
> >> >> >> >> >   interface backed by g_malloc and a singly-linked list.
> >> >> >> >>
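
For concreteness, such an interface plausibly looks something like the
vtable-of-function-pointers sketch below; the names and signatures are
assumptions for illustration, not the actual wmem API:

    #include <stddef.h>

    /* One allocator "object": a vtable of operations plus opaque
     * per-implementation state, so that backends (g_malloc list,
     * block allocator, slab, ...) are interchangeable. */
    typedef struct _wmem_allocator_t {
        void *(*alloc)(void *private_data, size_t size);
        void  (*free_all)(void *private_data);
        void  *private_data;
    } wmem_allocator_t;

    /* Callers always go through the interface, never a backend. */
    void *
    wmem_alloc(wmem_allocator_t *allocator, size_t size)
    {
        return allocator->alloc(allocator->private_data, size);
    }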
> >> >> >> >> Presumably an implementation of the allocator could, instead of
> >> >> >> >> calling a lower-level memory allocator (malloc(), g_malloc(), etc.)
> >> >> >> >> for each allocation call, allocate larger chunks and parcel out
> >> >> >> >> memory from the larger chunks (as the current emem allocator does),
> >> >> >> >> if that ends up saving enough CPU, by making fewer allocate and free
> >> >> >> >> calls to the underlying memory allocator, so as to make it worth
> >> >> >> >> whatever wasted memory we have at the ends of chunks?
> >> >> >> >>
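
A minimal sketch of that chunked scheme, assuming emem-style pool
semantics (no per-allocation free; the whole pool is released at once)
and a fixed chunk size; the names here are illustrative only:

    #include <glib.h>

    #define CHUNK_SIZE (256 * 1024)

    typedef struct _chunk_t {
        struct _chunk_t *next;    /* singly-linked list of chunks    */
        size_t           offset;  /* first unused byte in this chunk */
        guint8           data[CHUNK_SIZE];
    } chunk_t;

    /* Parcel 'size' bytes out of the newest chunk, calling down to
     * g_malloc only when the chunk is full (assumes size <= CHUNK_SIZE).
     * The unused tail of each chunk is the wasted memory Guy mentions. */
    static void *
    chunk_alloc(chunk_t **head, size_t size)
    {
        chunk_t *cur = *head;
        void    *ret;

        size = (size + 7) & ~(size_t)7;  /* keep results 8-byte aligned */
        if (cur == NULL || cur->offset + size > CHUNK_SIZE) {
            cur = g_new0(chunk_t, 1);    /* fewer, larger backend calls */
            cur->next = *head;
            *head = cur;
        }
        ret = cur->data + cur->offset;
        cur->offset += size;
        return ret;
    }

    /* Freeing the whole pool is a single walk of the chunk list. */
    static void
    chunk_free_all(chunk_t **head)
    {
        while (*head != NULL) {
            chunk_t *next = (*head)->next;
            g_free(*head);
            *head = next;
        }
    }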
> >> >> >> >
> >> >> >> > One step further: instead of mempools, I think Wireshark could
> >> >> >> > benefit greatly from implementing slabs (a slab allocator). Slabs
> >> >> >> > were initially designed for the kernel, with several advantages
> >> >> >> > over traditional allocators in terms of the resources needed to
> >> >> >> > allocate (CPU), (external / internal) fragmentation, and also
> >> >> >> > cache friendliness (which most traditional allocators don't care
> >> >> >> > about). I've attached some slides with a high-level description
> >> >> >> > of slabs.
> >> >> >> >
> >> >> >> > Since then, another paper has been written showing some
> >> >> >> > improvements and what it took to write a slab allocator for
> >> >> >> > user-space (libumem). There is another well-known example out
> >> >> >> > there, called memcached, that implements its own version (and
> >> >> >> > could be a good initial point for a Wireshark implementation,
> >> >> >> > who knows? :))
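
The core of the trick is small enough to sketch, assuming fixed-size
objects and an intrusive free list (illustrative only; this is not
memcached's or the kernel's actual code):

    #include <glib.h>

    #define SLOTS_PER_SLAB 1024

    /* One cache per object size: carve each slab into equal-sized
     * slots and thread the free slots onto a list, so alloc and free
     * are O(1) pointer swaps and all objects of a type sit together
     * in memory (the cache-friendliness part). */
    typedef struct _slab_cache_t {
        size_t  obj_size;   /* must be >= sizeof(void *)                */
        void   *free_list;  /* next free slot, linked through the slots */
    } slab_cache_t;

    static void *
    slab_alloc(slab_cache_t *cache)
    {
        void *ret;

        if (cache->free_list == NULL) {
            /* Grow by one slab: a single g_malloc, then chain slots.
             * (Real code would also track slabs for teardown.) */
            guint8 *slab = g_malloc(cache->obj_size * SLOTS_PER_SLAB);
            size_t  i;
            for (i = 0; i < SLOTS_PER_SLAB; i++) {
                void **slot = (void **)(slab + i * cache->obj_size);
                *slot = cache->free_list;
                cache->free_list = slot;
            }
        }
        ret = cache->free_list;
        cache->free_list = *(void **)ret;
        return ret;
    }

    /* Returning an object is just pushing it back onto the free list. */
    static void
    slab_free(slab_cache_t *cache, void *obj)
    {
        *(void **)obj = cache->free_list;
        cache->free_list = obj;
    }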
> >> >> >>
> >> >> >> If I understand correctly, a slab allocator provides the most
> >> >> >> benefit when you have to alloc/free a large number of the same
> >> >> >> type of object,
> >> >> >
> >> >> > You're right, that's where a slab allocator is most efficient.
> >> >> > That said, the second paper shows it can also be efficient for
> >> >> > general-purpose allocation based on size rather than on a
> >> >> > specific structure.
> >> >> >
> >> >> >> but I don't know if this is necessarily the case in Wireshark.
> >> >> >> There are probably places where it would be useful, but I can't
> >> >> >> think of any off the top of my head. TVBs maybe? I know emem is
> >> >> >> currently used all over the place for all sorts of different
> >> >> >> objects...
> >> >> >
> >> >> > I guess the most obvious example would be emem_tree
> >> >> > (emem_tree_node), which is used over and over while dissecting. :)
> >> >> > There are indeed a bunch of different objects allocated with emem.
> >> >> > Also, it might be used to allocate memory for some fragments.
> >> >>
> >> >> Ah, yes, the various emem data structures (tree, stack, etc.) would
> >> >> likely benefit from slab allocators. Converting them to use slabs
> >> >> would be something to do while porting them from emem to wmem.
> >> >>
> >> >> > Since your interface seems to allow it, we could create several
> >> >> > slab types: one for each specific structure that is allocated very
> >> >> > frequently (emem_tree_node?), others for packets/fragments with
> >> >> > some tuned slab sizes, and another with some generic sizes.
> >> >>
> >> >> That seems reasonable, presumably with some shared slab code doing
> >> >> the type-agnostic heavy lifting. I'll have to give a bit of thought
> >> >> to what the interface for that would be like - if you already have
> >> >> an interface in mind, please share :)
> >> >>
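
Nothing concrete yet, but something along these lines might work
(hypothetical names, just to make the shape of the idea visible):

    /* Shared, type-agnostic slab engine; one cache per hot structure,
     * plus a few generic size classes for everything else. */
    typedef struct _wmem_slab_t wmem_slab_t;

    wmem_slab_t *wmem_slab_new(size_t obj_size);
    void        *wmem_slab_alloc(wmem_slab_t *slab);
    void         wmem_slab_free(wmem_slab_t *slab, void *obj);

    /* e.g., inside the tree code's init path: */
    wmem_slab_t      *node_slab = wmem_slab_new(sizeof(emem_tree_node_t));
    emem_tree_node_t *node      = wmem_slab_alloc(node_slab);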
> >> >
> >> > Are the slab allocators mentioned "homegrown" or provided by the
> >> > host OS? If the latter, what platforms are they available on?
> >>
> >> Homegrown on top of malloc/g_malloc/mmap, I believe. A slab allocator
> >> is (or was) used internally in the Linux and Solaris kernels, but has
> >> never been exposed to userspace to my knowledge.
> >
> >
> > It's indeed not exposed to users; it's used internally as a "kernel
> > object cache allocator". But memcached has a user-space implementation
> > that could probably be leveraged for Wireshark.
>
> I took a quick look, and I think it would be significant overkill for
> our needs. It also directly references pthreads a lot, which isn't
> available to us on Windows.
>
too bad ... :(


> It might be useful as a reference implementation, but I don't think
> it's worth using directly.
>
indeed

another source of inspiration: https://github.com/gburd/libumem


