pancake wrote:
> > Do you mean from testing the aligned-read adjustment, or from the size
> > of the buffer ("chunksize")?  Because I've realised that the way I did
> > the realignment testing was probably broken; for the different
> > chunksizes, the system times for repeated resizes were as follows:
> > 
> > (sizes of files and number of repeats varied from box to box according
> > to the available disk)
> > 
> > insert (r+1) -- sys time in seconds
> > 
> > chunksize   Core2        P4      Arm       P1
> > 
> > 0x80000 20.057    19.9061   0.85     10.5859 
> > 0x40000 17.762    16.5460   0.6488    9.5203 
> > 0x20000 16.783    12.9803   0.6063    9.0292 
> > 0x10000 17.001    12.3071   0.5876    8.6795 
> > 0x8000  17.512    12.6925   0.5893    8.3565 
> > 0x4000  18.402    13.6256   0.6090    8.5908 
> > 0x2000  19.773    15.524    0.6582    9.2976 
> > 0x1000  23.098    19.0384   0.7778   10.2119 
> > 
> 
> strange. the lower the better? is this in time?

Sorry, I didn't introduce that bit clearly.  The left-hand column is the
buffer size being tested, i.e. the value placed after

ut64 chunksize = 

in the source.  And yes, the other columns are times: the "sys" field
reported by time(1), averaged over 10-20 repeats.

I haven't included the figures for fiddling with the alignment, because
I was doing that wrongly...

> usually the compiler generates stack aligned variables in the stack frame. 
> for heap, this depends on libc. glib and gstreamer have some helpers to do 
> this and speed up memory copies.

Ah, what I was doing was trying to ensure that the file-read offsets
were always aligned, in the hope that reading whole (filesystem) blocks
at a time might give better performance.  In practice it was slightly
better in some tests and a lot worse in others.

Glyn
_______________________________________________
radare mailing list
[email protected]
http://lists.nopcode.org/listinfo.cgi/radare-nopcode.org