> > (This is based on the idea that a full set of mipmaps packs 
> perfectly to take
> > up two times the size of the base texture).  That's also 
> not true for all
> > architectures...
> 
> Ok, that explains a bit.  However, in some circumstances we may
> lose a level.  The mipmaps don't double the size, they only
> increase it by 1/3.  Then there are architectures like MGA that
> can't use all 11 mipmaps.

Okay, I was being odd.
(This always happens when I am thinking with my stomach...)


But now I will try to go into depth and arrive at the final formula of
  (4/3 * Lvl_size)
for the maximum number of texels in the total amount of data
stored in a square level stack. Further, I will point out the
good and bad sides of the mentioned bit-shift formula and the
alternative method of looping. In the end I will point out
reverse calculation methods, and specific implementations
that might finally break, or at least complicate, the overall
systematics of pyramid stack calculations.


graphically:

n=2  n=1 n=0
#### ##  #
#### ##
####
####

mathematically:
  level width = 2^n
  level size = (2^n)*(2^n) = 2^(2*n)
  size increase ratio per level increment = 4:
    (2^(n+1))*(2^(n+1)) / ((2^n)*(2^n))
    = (2^(n+1-n))*(2^(n+1-n))
    = (2^1)*(2^1)
    = 2*2 = 4

summarized size of all levels = sum(x=0, x=n, 2^(2*x))
                              = sum(x=0, x=n, 4^x)
                              = (4^(n+1) - 1) / 3
(the standard closed form for a geometric series with ratio 4)
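The closed form of that geometric series is (4^(n+1) - 1)/3. A quick Python check of it (my own sketch, not anything from the driver code):

```python
# Total texel count of a square power-of-two mipmap stack whose base
# level is 2^n texels wide: sum of 4^x for x = 0..n.
def stack_size(n):
    return sum(4 ** x for x in range(n + 1))

def closed_form(n):
    # geometric series with ratio 4: (4^(n+1) - 1) / 3
    return (4 ** (n + 1) - 1) // 3

for n in range(16):
    assert stack_size(n) == closed_form(n)
    # the total stays strictly below 4/3 of the base level size
    assert 3 * stack_size(n) < 4 * 4 ** n

print(stack_size(10))  # 1398101 texels for a 1024x1024 base level
```

Note that the value for n=10 comes out as 1398101, matching Ian's number below.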

As the per-level increase ratio is significantly bigger than 2,
the sum of all levels up to n-1 will never be bigger than the
size of level n itself. So the assumption in the code suggestion
with the formula
   2*(current level size)
is okay for now, but it is not totally accurate as it stands.
Just let me go on to make that really evident.

Lookup table:
Lvl_n   width       size   size (hex)   accumulated size (hex)   acc/sz
  0         1          1          1            1                 1.0
  1         2          4          4            5                 1.25
  2         4         16         10           15                 1.3125
  3         8         64         40           55                 1.328125
  4        16        256        100          155                 1.33203125
  5        32       1024        400          555
  6        64       4096       1000         1555
  7       128      16384       4000         5555                 ...
  8       256      65536     1 0000       1 5555
  9       512     262144     4 0000       5 5555
 10      1024    1048576    10 0000      15 5555
 11      2048    4194304    40 0000      55 5555
 12      4096   16777216   100 0000     155 5555                 1.333333313465
...

The +300% (i.e. 4x) increase ratio for the size itself is
(hopefully) made nicely visible above.

The accumulated size vs. Lvl_size approaches 4/3 from below:
  accum size / size < 4/3

So you are still on the safe side if you estimate like this:
  max accum size = size * 4/3 = size * 1.33333333333

Paranoid people would just add a "+1" after the division
(or, much better, a "+2" before the division), so that any
possible fraction of the division (which integer arithmetic
would discard) gets compensated by rounding the result up to
the next higher value.

A final stack size calculation for a given level depth could look like this:
  max accum size = (size * 4 + 2)/3
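To make the rounding argument concrete, here is a small Python sanity check (my own sketch) that the integer formula (size*4 + 2)/3 never under-estimates the exact stack total for power-of-four base sizes:

```python
def exact_total(size):
    # size is the base level texel count, assumed to be a power of 4;
    # walk the chain down, quartering the texel count each level
    total, lvl = 0, size
    while lvl:
        total += lvl
        lvl //= 4
    return total

def safe_estimate(size):
    # the "+2 before the division" variant, rounding up in integer math
    return (size * 4 + 2) // 3

for n in range(16):
    size = 4 ** n
    assert safe_estimate(size) >= exact_total(size)
    # for power-of-4 sizes the estimate overshoots by exactly one texel
    assert safe_estimate(size) - exact_total(size) == 1
```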


but look what Ian R. said for Lvl_n=10:
  It would require 
     (0x55555555 >> (32 - (2 * 11))) = 1398101 
  available texels.

or, more generically:
  It would require 
     (0x55555555 >> (32 - (2 * (n+1))))
  available texels.
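The reason the shift trick works: 0x55555555 is the 32-bit pattern 0101...0101, one set bit per level, so shifting it right by 32 - 2*(n+1) leaves exactly the base-4 "repunit" (4^(n+1) - 1)/3. A quick Python verification (my own sketch):

```python
# 0x55555555 shifted right keeps one set bit for each remaining level,
# which is exactly the exact stack total (4^(n+1) - 1) / 3.
def shift_total(n):
    return 0x55555555 >> (32 - 2 * (n + 1))

for n in range(16):
    assert shift_total(n) == (4 ** (n + 1) - 1) // 3

print(shift_total(10))  # 1398101, as Ian computed for Lvl_n=10
```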

With this formula, Lvl_n would be limited to a maximum of n=15,
meaning an amount of 1,431,655,765 texels in the total
texture level stack. Not really of current practical meaning,
but if the 4/3 factor holds in general, why should we limit ourselves?

The problem with such formulas is that they only cover
a few cases. Textures where both widths are powers of two
but of different power values (i.e. non-square) would require
introducing more complexity into such bit-shifting shortcuts.

Why not use the empirical counting-up method instead?
It would only count up for some 10 or 12 cycles on real-world
systems until the available memory is used up.
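Such a counting loop might look like this (a Python sketch of the idea; real driver code would do the same in C). Unlike the shift formula, it handles non-square power-of-two textures for free:

```python
# Empirical counting loop: walk the mip chain down to 1x1,
# summing the texel count of each level.
def mipmap_total(width, height):
    total = 0
    while True:
        total += width * height
        if width == 1 and height == 1:
            return total
        # halve each edge, clamping at 1 so non-square chains finish
        width = max(width // 2, 1)
        height = max(height // 2, 1)

print(mipmap_total(1024, 1024))  # 1398101, same as the shift formula
print(mipmap_total(2048, 1024))  # 2796203 -- non-square, no extra magic
```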

If you don't like looping, just multiply the memory amount
by a factor of 3/4, take the square root of it, and you have
it the reverse way. sqrt calculation isn't that lengthy
any more on current ALUs. If a power of two lies between the
edge widths of a rectangular texture level stack, then divide
by that power first to get the measure of the smaller edge width.
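The reverse calculation could be sketched like this (my own illustration; the name max_base_width and the aspect_log2 parameter are hypothetical, not existing driver API):

```python
import math

# Reverse calculation: given a texel budget, the base level may use at
# most ~3/4 of it, so its width is the largest power of two whose
# square fits into budget * 3/4.  For a non-square texture with aspect
# ratio 2^p (width = height * 2^p), divide the base-level budget by
# 2^p before taking the root to get the short edge -- note the 4/3
# bound is only approximate for non-square stacks.
def max_base_width(texel_budget, aspect_log2=0):
    base_budget = texel_budget * 3 // 4
    short_edge = int(math.isqrt(base_budget >> aspect_log2))
    # round down to a power of two
    return 1 << (short_edge.bit_length() - 1)

# 1398102 texels is just enough for a full 1024x1024 stack (1398101)
print(max_base_width(1398102))  # 1024
```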

Think of this: there are concepts out there where the texture
size does not grow endlessly; instead it is clipped to a specific
maximum value, whilst the contained image data is bound to a
specific viewpoint that is expected to be approached. So even huge
image databases (like earth textures) could be used with hardware
texturing units whilst there is no need for huge texture storage.
Working this into some looping algorithm would be much easier than
into an already complex formula, because the clipping effects could
be tuned for each axis and its width separately. Think of wide-screen
video as a rendering goal: it's not square, and therefore a
non-square texture might give nicer results if the texture
orientation towards the screen is taken into consideration.

Regards, AlexS.


_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
