On Fri, 21 Jan 2005 02:54:22 +0100, Marcin Krol <[EMAIL PROTECTED]> wrote:

| WL> 3)  the results are correct and metakit's behavior *is*
| WL>      not very predictable.
|
| WL> perhaps someone would be eager to falsify at least the
| WL> last hypothesis.
|
| I have a scenario 4) for you:
|
| You may get hit by the fact that in current
| implementation Python does not release ANY object
| smaller than 256 bytes (huh?!). The very same thing hit
| me while running 50 consecutive tests patched by Brian -
| they were becoming slower and slower, the memory usage
| was constantly growing, finally system complained about
| having to increase VM size.
|
| Check your memory usage while running this thing. In my
| case running the very same test in a loop several times
| was enough to slow it down by more than 10%, but that
| had nothing to do with MK, it's Python throwing in more
| and more memory at it. After 16 iterations I got 250%
| increase in time! As you might guess it was VM wheezing.
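the growth described above is easy to watch for directly. here is a minimal sketch (not the original test code; it assumes a Unix-like system, since the stdlib `resource` module is used, and the workload shown is just a hypothetical stand-in that churns small objects):

```python
import gc
import resource
import time

def profile_iterations(workload, n_iter=16):
    """Run `workload` repeatedly, recording wall time and peak RSS
    after each iteration, to expose slowdown and memory growth
    across otherwise identical runs."""
    results = []
    for i in range(n_iter):
        gc.collect()                     # rule out pending garbage
        t0 = time.time()
        workload()
        elapsed = time.time() - t0
        # ru_maxrss is the process's peak resident set size
        # (kilobytes on Linux, bytes on some BSDs)
        rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        results.append((i, elapsed, rss))
    return results

if __name__ == "__main__":
    def workload():
        # hypothetical stand-in: allocate many small objects
        data = [{"k": j} for j in range(100000)]
        del data
    for i, elapsed, rss in profile_iterations(workload, n_iter=4):
        print("iter %d: %.4fs, peak rss %d" % (i, elapsed, rss))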


thanks for pointing this out to me. python's memory management details are surely a deep and dark matter. i have rewritten my tests a bit: one outer process now does the steering and opens subshells in random order; the unpickling of a test data set and the writing to an mk storage take place in those subshells. some results:
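the outer driver can be sketched roughly like this (a hypothetical reconstruction, not my actual script; the child command is a placeholder for the real test that unpickles the data set and writes to the mk storage):

```python
import itertools
import random
import subprocess
import sys

def run_in_random_order(n_factors=4):
    """Run every parameter combination in its own subprocess,
    in random order, so one run's memory growth cannot bleed
    into the next measurement."""
    combos = list(itertools.product((0, 1), repeat=n_factors))
    random.shuffle(combos)
    executed = []
    for combo in combos:
        # placeholder child command; the real driver would invoke the
        # hypothetical test script, e.g.
        # [sys.executable, "mktest.py"] + [str(f) for f in combo]
        subprocess.call([sys.executable, "-c", "pass"])
        executed.append(combo)
    return executed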


Profile for 10000 entries
TOTAL : 21.1410
0 0 1 1: 0.6010 ****************************
0 1 1 1: 0.6410 ******************************
0 0 0 1: 0.6410 ******************************
0 1 0 1: 0.7310 **********************************
0 1 1 0: 1.0510 *************************************************
0 0 0 0: 1.0610 **************************************************
0 0 1 0: 1.0710 **************************************************
0 1 0 0: 1.0710 **************************************************
a b x t

Profile for 100000 entries
TOTAL : 191.5750
0 1 0 1: 9.1230 ************************************
0 1 1 1: 9.1430 ************************************
0 0 1 1: 9.8140 ***************************************
0 0 0 1: 10.4050 *****************************************
0 0 0 0: 10.6450 ******************************************
0 0 1 0: 11.1560 ********************************************
0 1 1 0: 11.1960 ********************************************
0 1 0 0: 12.6180 **************************************************
a b x t


Profile for 100000 entries
TOTAL : 183.6840
0 1 1 1: 8.5620 **************************************
0 1 0 1: 8.8230 ***************************************
0 0 0 1: 9.3240 *****************************************
0 0 1 1: 9.3840 *****************************************
0 1 0 0: 10.4650 **********************************************
0 1 1 0: 10.6060 ***********************************************
0 0 0 0: 10.8060 ************************************************
0 0 1 0: 11.3370 **************************************************
a b x t


the meanings of the abxt parameters have remained unchanged:

    a0  --  use slice assignment (see code)
    a1  --  use append with loop (see code)

    b0  --  do not use blocked view
    b1  --  use blocked view

    x0  --  use normal commit mode
    x1  --  use extend commit mode

    t1  --  use tuples (see code)
    t0  --  use dictionaries (see code)
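since the test code itself is not included here, a minimal stand-in sketch of the a and t factors, using a plain Python list in place of a metakit view (metakit views support append() and slice assignment in a similar way, but the calls below are illustrative, not the original code):

```python
def fill_by_slice(view, rows):
    # a0: one slice assignment pushes all rows in a single operation
    view[len(view):] = rows

def fill_by_append(view, rows):
    # a1: append each row individually inside a Python loop
    for row in rows:
        view.append(row)

# the t factor only changes how each row is represented:
rows_t = [(i, "name%d" % i) for i in range(5)]        # t1: tuples
rows_d = [{"id": i, "name": "name%d" % i}
          for i in range(5)]                          # t0: dictionaries
```

both fill the target identically; the slice-assignment form avoids the per-row attribute lookup and call overhead of the loop, which is presumably part of why a and t matter at all.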

the behavior is much more understandable now, with factor t showing a
clear tendency to come out near the top. the differences between the
best and worst cases have flattened out considerably, however, and i am
still unsure where that comes from.






--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/

_____________________________________________
Metakit mailing list - [email protected]
http://www.equi4.com/mailman/listinfo/metakit
