On Mon, Jul 21, 2014 at 2:22 AM, Anders Broman <a.broma...@gmail.com> wrote:
> Den 21 jul 2014 02:34 skrev "Evan Huus" <eapa...@gmail.com>:
>> On Sun, Jul 20, 2014 at 8:25 PM, Guy Harris <g...@alum.mit.edu> wrote:
>>> On Jul 20, 2014, at 5:04 PM, Evan Huus <eapa...@gmail.com> wrote:
>>>
>>>> I don't really get this - it happens inconsistently that the
>>>> "fast" allocator takes longer to run than the "block" allocator.
>>>> The fast allocator does much less work, and runs substantially
>>>> faster than the block allocator everywhere I've tested it.
>>>>
>>>> I don't know what GLib's timing mechanism is like, but is it
>>>> possible the underlying machine is busy, so the wall-clock time
>>>> is varying a lot?
>>>>
>>>> I'm tempted to just disable the test, but that feels wrong.
>>>
>>> Is there some reason not to test CPU time instead?
>>
>> I don't know how to do that cross-platform. GLib's timer is very
>> convenient except for the fact that it measures wall time and not
>> CPU time. Also, generally, wall time is the relevant metric as long
>> as it is consistent; if I write code that takes very little CPU but
>> is slow because it causes many page faults, I want the timer to
>> reflect that, because the actual user experience will reflect it
>> (wall time more accurately reflects UX time).
>
> Why not use both where available? Then we can compare the results.

I looked at it briefly, and it quickly turned into a mess of makefile
changes and ifdefs to detect the necessary headers and call the
appropriate functions. It doesn't seem worth the trouble.

>>> Yes, if, for example, the fast allocator ends up causing more page
>>> faults than the block allocator, even if it takes less CPU time,
>>> and if CPU time spent servicing the page fault doesn't get charged
>>> to your process or gets drowned out by I/O to service the page
>>> fault, a wall-clock time test might tell you something - or it
>>> might just reflect changes, during the time when the tests run, in
>>> memory pressure.
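For anyone following along: the GLib timer in question is GTimer,
which counts wall-clock time only. A minimal sketch of the pattern
(the benchmark function here is a hypothetical stand-in, not the real
test code):

#include <glib.h>

/* Hypothetical stand-in for the real allocator benchmark. */
static void
run_allocator_benchmark(void)
{
    /* ... allocate and free a lot of memory ... */
}

int
main(void)
{
    GTimer *timer = g_timer_new();  /* starts counting immediately */

    run_allocator_benchmark();

    g_timer_stop(timer);
    /* g_timer_elapsed() reports wall-clock seconds, not CPU time */
    g_printerr("elapsed: %f s\n", g_timer_elapsed(timer, NULL));
    g_timer_destroy(timer);
    return 0;
}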
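And for the archives, this is roughly the shape the per-platform
CPU-time version was taking before I stopped - an untested sketch
assuming getrusage() on POSIX and GetProcessTimes() on Windows; the
makefile/configure checks needed to detect the right headers and pick
the right variant are the part that got messy:

#ifdef _WIN32
#include <windows.h>
#else
#include <sys/time.h>
#include <sys/resource.h>
#endif

/* CPU time (user + system) consumed by this process, in seconds, or
 * -1.0 on failure. Sketch only; not tested on either platform. */
static double
get_cpu_seconds(void)
{
#ifdef _WIN32
    FILETIME creation_time, exit_time, kernel_time, user_time;
    ULARGE_INTEGER k, u;

    if (!GetProcessTimes(GetCurrentProcess(), &creation_time,
                         &exit_time, &kernel_time, &user_time))
        return -1.0;
    k.LowPart  = kernel_time.dwLowDateTime;
    k.HighPart = kernel_time.dwHighDateTime;
    u.LowPart  = user_time.dwLowDateTime;
    u.HighPart = user_time.dwHighDateTime;
    return (k.QuadPart + u.QuadPart) / 1e7;  /* 100ns units */
#else
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1.0;
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) +
           (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
#endif
}

Timing this alongside g_timer_elapsed() over the same workload would
give the wall-versus-CPU comparison Anders suggested, if someone
thinks it's worth the build-system churn.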
___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <wireshark-dev@wireshark.org>
Archives:    http://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://wireshark.org/mailman/options/wireshark-dev
             mailto:wireshark-dev-requ...@wireshark.org?subject=unsubscribe