On Fri, Oct 11, 2013 at 12:37 PM, Evan Huus wrote:
On Sat, Oct 12, 2013 at 11:46 AM, Anders Broman wrote:
>> Just looking at performance in general as I got reports that top of trunk
>> was slower than 1.8.
>> Thinking about it fast filtering is more attractive as long as loading isn't
>> to slo
On Fri, Oct 11, 2013 at 12:41 AM, Anders Broman wrote:
> In the particular case I'm looking at there is mostly no match in the
> heuristics tables except false positives;
> the same is true for many of the uint table lookups too, as there is RTP sent
> from a tool simulating many
> users with many I
On Sat, Oct 12, 2013 at 6:31 PM, Jakub Zawadzki wrote:
> On Sat, Oct 12, 2013 at 05:08:04PM -0400, Evan Huus wrote:
>> On Sat, Oct 12, 2013 at 12:29 PM, Evan Huus wrote:
>> > Now I'm wondering how much of this could be alleviated somehow by a more
>> > efficient tree representation...
>>
>> The a
On Sat, Oct 12, 2013 at 05:08:04PM -0400, Evan Huus wrote:
> On Sat, Oct 12, 2013 at 12:29 PM, Evan Huus wrote:
> > Now I'm wondering how much of this could be alleviated somehow by a more
> > efficient tree representation...
>
> The answer is apparently lots :)
We've already had similar benchma
On Sat, Oct 12, 2013 at 12:29 PM, Evan Huus wrote:
> On Sat, Oct 12, 2013 at 11:46 AM, Anders Broman wrote:
>> Just looking at performance in general as I got reports that top of trunk
>> was slower than 1.8.
>> Thinking about it fast filtering is more attractive as long as loading isn't
>> to sl
On Sat, Oct 12, 2013 at 11:46 AM, Anders Broman wrote:
> Just looking at performance in general as I got reports that top of trunk
> was slower than 1.8.
> Thinking about it, fast filtering is more attractive as long as loading isn't
> too slow, I suppose.
> It's quite annoying to wait 2 minutes for
Evan Huus wrote on 2013-10-11 22:45:
On Fri, Oct 11, 2013 at 12:37 PM, Evan Huus wrote:
On Fri, Oct 11, 2013 at 11:14 AM, Anders Broman wrote:
Not really, as the RTP dissector is weak and off by default, and I'm only
interested in performance improvements at this point.
But it brings up a question
On Fri, Oct 11, 2013 at 12:37 PM, Evan Huus wrote:
> On Fri, Oct 11, 2013 at 11:14 AM, Anders Broman wrote:
>> Not really, as the RTP dissector is weak and off by default, and I'm only
>> interested in performance improvements at this point.
>> But it brings up a question; some of the heuristic
On Fri, Oct 11, 2013 at 11:14 AM, Anders Broman wrote:
> Not really, as the RTP dissector is weak and off by default, and I'm only
> interested in performance improvements at this point.
> But it brings up a question; some of the heuristic dissectors are for
> "unusual" protocols and not perfect a
-----Original Message-----
From: wireshark-dev-boun...@wireshark.org
[mailto:wireshark-dev-boun...@wireshark.org] On Behalf Of Evan Huus
Sent: 11 October 2013 16:37
To: Developer support list for Wireshark
Subject: Re: [Wireshark-dev] Idea for faster dissection on second pass
On Fri, Oct 11
On 10/11/13 10:37, Evan Huus wrote:
On Fri, Oct 11, 2013 at 9:22 AM, Jeff Morriss wrote:
On 10/10/13 18:22, Evan Huus wrote:
It might be simpler and almost as efficient to have
recently-successful heuristic dissectors bubble nearer to the top of
the list so they are tried sooner. Port/convers
On 10/11/13 00:41, Anders Broman wrote:
Evan Huus wrote on 2013-10-11 01:51:
On Thu, Oct 10, 2013 at 6:22 PM, Evan Huus wrote:
It might be simpler and almost as efficient to have
recently-successful heuristic dissectors bubble nearer to the top of
the list so they are tried sooner. Port/conversat
On Friday, October 11, 2013 at 09:22 -0400, Jeff Morriss wrote:
> On 10/10/13 18:22, Evan Huus wrote:
> > It might be simpler and almost as efficient to have
> > recently-successful heuristic dissectors bubble nearer to the top of
> > the list so they are tried sooner. Port/conversation lookups a
On Fri, Oct 11, 2013 at 12:41 AM, Anders Broman wrote:
> Evan Huus wrote on 2013-10-11 01:51:
>
> On Thu, Oct 10, 2013 at 6:22 PM, Evan Huus wrote:
>
> It might be simpler and almost as efficient to have
> recently-successful heuristic dissectors bubble nearer to the top of
> the list so they are tr
On Fri, Oct 11, 2013 at 9:22 AM, Jeff Morriss wrote:
> On 10/10/13 18:22, Evan Huus wrote:
>>
>> It might be simpler and almost as efficient to have
>> recently-successful heuristic dissectors bubble nearer to the top of
>> the list so they are tried sooner. Port/conversation lookups are
>> hash-t
-----Original Message-----
From: wireshark-dev-boun...@wireshark.org
[mailto:wireshark-dev-boun...@wireshark.org] On Behalf Of Jeff Morriss
Sent: 11 October 2013 15:23
To: Developer support list for Wireshark
Subject: Re: [Wireshark-dev] Idea for faster dissection on second pass
On 10/10/13
On 10/10/13 18:22, Evan Huus wrote:
It might be simpler and almost as efficient to have
recently-successful heuristic dissectors bubble nearer to the top of
the list so they are tried sooner. Port/conversation lookups are
hash-tables for the most part and likely won't be made noticeably
faster by
> On Oct 10, 2013, at 10:20 PM, ronnie sahlberg wrote:
>
> That would be a good addition, but I always tried to do something like:
> as soon as the heuristic dissector found a match, it would
> explicitly register itself as the dissector for the conversation.
> Perhaps we can make somet
That would be a good addition, but I always tried to do something like:
as soon as the heuristic dissector found a match, it would
explicitly register itself as the dissector for the conversation.
Perhaps we can make something like that automatic?
Similarly to the current discussion some di
On Thu, Oct 10, 2013 at 6:22 PM, Evan Huus wrote:
> It might be simpler and almost as efficient to have
> recently-successful heuristic dissectors bubble nearer to the top of
> the list so they are tried sooner. Port/conversation lookups are
> hash-tables for the most part and likely won't be made
It might be simpler and almost as efficient to have
recently-successful heuristic dissectors bubble nearer to the top of
the list so they are tried sooner. Port/conversation lookups are
hash-tables for the most part and likely won't be made noticeably
faster by caching.
Evan
On Thu, Oct 10, 2013
On Oct 10, 2013, at 10:22 PM, Anders Broman wrote:
> Hi,
> If, in the UDP/TCP/(SCTP?) dissectors, we saved the next dissector on the first pass
> in, say, per-packet data, we could avoid
> repeated calls to heuristic dissectors and port/conversation lookups, making
> the second pass faster.
> Does anyone
Hi,
If, in the UDP/TCP/(SCTP?) dissectors, we saved the next dissector on the
first pass in, say, per-packet data, we could avoid
repeated calls to heuristic dissectors and port/conversation lookups,
making the second pass faster.
Does anyone see any pitfalls with this idea?
I can think of two ways of im