On Fri, Jul 11, 2014 at 4:08 PM, Evan Huus <eapa...@gmail.com> wrote:

> The biggest win, I think, would be if we can avoid calling free_chain at
> all because tvbs are always allocated in the right scope and so get freed
> automatically. I think this would involve touching every place that creates
> new tvbs backed with glib memory though...
>
> I will try and think about this and review your patches sometime this
> weekend.
>

Balint's concerns temporarily aside (this may end up being moot if we
decide not to use wmem here at all, but I wanted to discuss the
architecture anyway), my current understanding is the following:

1. Most current uses of TVBs fall naturally into one of the existing wmem
scopes. The main tvb (edt->tvb) has the same effective scope as
pinfo->pool. Any TVBs generated by subsets, decryptions, decompressions,
etc. likewise have pinfo scope. Reassembled TVBs have file scope.

2. The exception to the above is the intermediate TVBs used by reassembly
when not all fragments have been received. It isn't clear to me that these
have a defined scope at all.

3. TVB chaining is a convenient way to track all the various subsets etc.
created during dissection, so we can simply free the parent and have all the
others cleaned up as well (a short sketch follows below).
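
To illustrate point 3, here is a minimal sketch of the chaining pattern as it
exists today; tvb_new_real_data(), tvb_new_subset_remaining() and
tvb_free_chain() are the real tvbuff calls, while the wrapper function and its
arguments are purely illustrative:

#include <glib.h>
#include <epan/tvbuff.h>

/* Create a glib-backed parent tvb plus a chained subset; freeing the parent
 * releases every chained child as well. */
static void
chain_example(const guint8 *data, guint len)
{
    tvbuff_t *tvb = tvb_new_real_data(data, len, len);  /* parent, glib-backed data */
    tvbuff_t *sub = tvb_new_subset_remaining(tvb, 8);   /* chained to the parent    */

    (void)sub;            /* ... dissection would work on the subset here ...       */

    tvb_free_chain(tvb);  /* frees tvb, sub, and any other chained children         */
}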

Architecturally, the approach that seems simplest to me (and, I believe, is
likely to be the most efficient) would be to get rid of tvb chaining entirely:
allocate tvbuffs (and their backing data, where appropriate) in the correct
wmem scope, and let wmem clean them up at the appropriate point. For
intermediate reassembly buffers, use scope NULL and free them manually, as
we do now.
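
To make that concrete, here is a rough sketch of what a scope-aware
constructor could look like. tvb_new_scoped() and the two-field struct are
hypothetical (just to show the shape, not the real tvbuff layout); wmem_new()
and the wmem scopes are the existing wmem API:

#include <glib.h>
#include <epan/wmem/wmem.h>

typedef struct _tvbuff_sketch {
    const guint8 *real_data;   /* backing data; if owned, also scope-allocated */
    guint         length;
    /* ... the rest of the tvbuff fields, minus all chaining pointers ...      */
} tvbuff_sketch_t;

static tvbuff_sketch_t *
tvb_new_scoped(wmem_allocator_t *scope, const guint8 *data, guint length)
{
    /* Allocated directly in the caller's scope (packet, file or epan), so it
     * is reclaimed when wmem frees that scope -- no chain, no explicit
     * tvb_free().  Intermediate reassembly buffers would instead stay on the
     * current manual-free path, as proposed above. */
    tvbuff_sketch_t *tvb = wmem_new(scope, tvbuff_sketch_t);

    tvb->real_data = data;
    tvb->length    = length;
    return tvb;
}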

This approach avoids keeping any additional free-lists. It greatly
simplifies the tvbuff code and API by removing all need for chaining and
the various acrobatics that accompany it. It ends up using pinfo-scope for
the vast majority of tvbs in the normal path, which Jakub's ver2 benchmark
showed to be a measurable (if small) speed-up. It lets (most) TVBs be
collected by wmem's free_all, which I expect will be another measurable (if
small) speed-up over manually freeing each one.
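
For the free_all point, a toy illustration of the bulk-free behaviour;
wmem_allocator_new(), wmem_alloc(), wmem_free_all() and
wmem_destroy_allocator() are the existing wmem API, and the numbers are
arbitrary:

#include <glib.h>
#include <epan/wmem/wmem.h>

static void
free_all_example(void)
{
    wmem_allocator_t *pool = wmem_allocator_new(WMEM_ALLOCATOR_BLOCK);
    int i;

    /* Stand-ins for the tvbs (and other objects) created while dissecting
     * one packet. */
    for (i = 0; i < 1000; i++)
        (void)wmem_alloc(pool, 64);

    wmem_free_all(pool);            /* one bulk release instead of 1000 frees */
    wmem_destroy_allocator(pool);
}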

Thoughts?


> On Fri, Jul 11, 2014 at 2:58 AM, Jakub Zawadzki <darkjames...@darkjames.pl> wrote:
>
>> Hi,
>>
>> On Sat, Jun 21, 2014 at 10:12:48PM +0200, Jakub Zawadzki wrote:
>> > While we're on the topic of optimizing 'slower' [de]allocations in common functions:
>> >
>> > - tvb allocation/deallocation (2.5%, or 3.4% when no filtering)
>> >
>> >    243,931,671  *  ???:tvb_new [/tmp/wireshark/epan/.libs/libwireshark.so.0.0.0]
>> >    202,052,290  >   ???:g_slice_alloc (2463493x) [/usr/lib64/libglib-2.0.so.0.3600.4]
>> >
>> >    291,765,126  *  ???:tvb_free_chain [/tmp/wireshark/epan/.libs/libwireshark.so.0.0.0]
>> >    256,390,635  >   ???:g_slice_free1 (2435843x) [/usr/lib64/libglib-2.0.so.0.3600.4]
>>
>> > This, or next week I'll try to do tvb.
>>
>> ... or maybe this week:
>>
>> ver0 | 18,055,719,820 (-----------) | Base version 96f0585268f1cc4e820767c4038c10ed4915c12a
>> ver1 | 18,185,185,838 (0.6% slower) | Change tvb allocator g_slice_* to wmem with file scope
>> ver2 | 17,809,433,204 (1.4% faster) | Change tvb allocator g_slice_* to wmem with file/packet scope
>> ver3 | 17,812,128,887 (1.3% faster) | Change tvb allocator g_slice_* to simple object allocator with epan scope
>> ver4 | 17,704,132,561 (2.0% faster) | Change tvb allocator g_slice_* to simple object allocator with file scope
>>
>> I have uploaded patches & profiler outputs to:
>> http://www.wireshark.org/~darkjames/tvb-opt-allocator/
>>
>> Please review, and check what version is OK to be applied.
>>
>>
>> P.S.: I might find some time to do ver5 with a slab allocator, but it'll
>> look like the object allocator API with epan scope.
>>
>> Cheers,
>> Jakub.
>>
>
>