Virgil Dupras <[EMAIL PROTECTED]> added the comment:

I had a 54 MB hotshot profile lying around, and it does indeed take a very
long time to load. So I ran a profiling session of
hotshot.stats.load(MY_BIG_FILE) with the Python interpreter and stdlib of
r61515; here are the results (the resulting prof file is 27 MB):

         96541166 function calls in 299.936 CPU seconds

   Ordered by: cumulative time
   List reduced from 30 to 15 due to restriction <15>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  299.936  299.936 stats.py:11(load)
        1   95.089   95.089  299.936  299.936 stats.py:22(load)
 27583167   51.183    0.000   61.815    0.000 log.py:95(next)
 13791583   59.314    0.000   59.314    0.000 profile.py:328(trace_dispatch_return)
 13791583   35.014    0.000   42.910    0.000 stats.py:54(new_frame)
 13791584   40.807    0.000   40.807    0.000 profile.py:295(trace_dispatch_call)
 13791583   10.632    0.000   10.632    0.000 log.py:138(_decode_location)
 13791583    7.897    0.000    7.897    0.000 stats.py:87(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:73(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:95(init)
        1    0.000    0.000    0.000    0.000 pstats.py:138(get_top_level_stats)
        1    0.000    0.000    0.000    0.000 log.py:24(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:117(load_stats)
        1    0.000    0.000    0.000    0.000 profile.py:436(create_stats)
        1    0.000    0.000    0.000    0.000 profile.py:440(snapshot_stats)


         96541166 function calls in 299.936 CPU seconds

   Ordered by: internal time
   List reduced from 30 to 20 due to restriction <20>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   95.089   95.089  299.936  299.936 stats.py:22(load)
 13791583   59.314    0.000   59.314    0.000 profile.py:328(trace_dispatch_return)
 27583167   51.183    0.000   61.815    0.000 log.py:95(next)
 13791584   40.807    0.000   40.807    0.000 profile.py:295(trace_dispatch_call)
 13791583   35.014    0.000   42.910    0.000 stats.py:54(new_frame)
 13791583   10.632    0.000   10.632    0.000 log.py:138(_decode_location)
 13791583    7.897    0.000    7.897    0.000 stats.py:87(__init__)
        1    0.000    0.000    0.000    0.000 log.py:24(__init__)
        1    0.000    0.000    0.000    0.000 profile.py:440(snapshot_stats)
        1    0.000    0.000    0.000    0.000 pstats.py:138(get_top_level_stats)
       30    0.000    0.000    0.000    0.000 pstats.py:484(func_std_string)
        5    0.000    0.000    0.000    0.000 posixpath.py:306(normpath)
        1    0.000    0.000  299.936  299.936 stats.py:11(load)
       24    0.000    0.000    0.000    0.000 stats.py:80(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:117(load_stats)
        1    0.000    0.000    0.000    0.000 profile.py:402(simulate_call)
        1    0.000    0.000    0.000    0.000 profile.py:118(_get_time_resource)
        1    0.000    0.000    0.000    0.000 pstats.py:73(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:95(init)
        1    0.000    0.000    0.000    0.000 profile.py:166(__init__)

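For reference, a session like the one above can be sketched roughly as
follows. This is a minimal cProfile-based sketch (the actual run used the
Python 2 profile machinery on hotshot output; `load_big_profile` and its
workload are placeholders standing in for hotshot.stats.load(MY_BIG_FILE)):

```python
import cProfile
import io
import pstats

def load_big_profile():
    # Placeholder workload standing in for hotshot.stats.load(MY_BIG_FILE).
    return sum(i * i for i in range(200000))

profiler = cProfile.Profile()
profiler.enable()
load_big_profile()
profiler.disable()

buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(15)  # top 15 by cumulative time
stats.sort_stats("time").print_stats(20)        # top 20 by internal time
report = buf.getvalue()
print(report)
```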

So the bulk of the time seems to be taken by the sheer number of
trace_dispatch_return and trace_dispatch_call calls. I took a quick look,
but I'm not sure there is anything I can do to make it faster. If no one
else has an idea, I suggest just closing the ticket.
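To put rough numbers on that, dividing the reported tottime by the call
counts from the table above gives the per-event overhead:

```python
# Back-of-the-envelope per-call cost, using the figures from the profile above.
returns = 13791583          # trace_dispatch_return calls
return_tottime = 59.314     # seconds spent in trace_dispatch_return
calls = 13791584            # trace_dispatch_call calls
call_tottime = 40.807       # seconds spent in trace_dispatch_call

per_return_us = return_tottime / returns * 1e6
per_call_us = call_tottime / calls * 1e6
print(f"trace_dispatch_return: {per_return_us:.1f} us/call")
print(f"trace_dispatch_call:   {per_call_us:.1f} us/call")
# Each dispatch takes only a few microseconds; the cost comes from the
# number of events (fixed by the log size), not from any single slow call.
```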

----------
nosy: +vdupras

____________________________________
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue984219>
____________________________________