The current mach64 (mesa 4.x) driver doesn't seem to be doing the z depth 
test. After ensuring that the mach64's z control register was being set 
properly, I realized that the vertex buffers had z on a [0,1] scale 
while the primitive drawing functions expected it in a [0,0xffff] range.
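To make the mismatch concrete, here is a small sketch of the scaling the primitive functions implicitly expect (the function name is illustrative, not from the actual driver):

```c
#include <assert.h>

/* Hypothetical illustration of the scale mismatch: the vertex buffer
 * carries a normalized z in [0,1], while the mach64 primitive drawing
 * functions expect z in the integer range [0,0xffff]. */
static unsigned z_to_mach64(float z_unit)
{
    /* scale normalized depth up to the 16-bit range the chip expects */
    return (unsigned)(z_unit * 0xffff);
}
```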

The previous mach64 (mesa 3.x) driver defined the coord setup as

#define COORD                                                           \
do {                                                                    \
    GLfloat *win = VB->Win.data[i];                                      \
    v->v.x =   win[0] + xoffset;                                         \
    v->v.y = - win[1] + yoffset;                                         \
    v->v.z =   win[2];                                                   \
    v->v.rhw = win[3];                                                   \
} while (0)

while, for example, the R128 driver defined it as

#define COORD                                                           \
do {                                                                    \
    GLfloat *win = VB->Win.data[i];                                      \
    v->v.x =               win[0] + xoffset;                             \
    v->v.y =             - win[1] + yoffset;                             \
    v->v.z = depth_scale * win[2];                                       \
    v->v.rhw = v->v.rhw2 = win[3];                                       \
} while (0)

So I removed the 'depth_scale' factor from the calculation of hw_viewport 
in mach64CalcViewport, and everything worked fine.
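A rough sketch of the change, assuming the usual DRI driver convention of a hardware viewport matrix whose z row is scaled by depth_scale (the variable names here are modeled on that convention and may not match the actual mach64 code):

```c
/* Hedged sketch of the fix described above.  Core Mesa hands the driver
 * a window z already scaled to [0, MaxDepth]; multiplying by
 * depth_scale = 1/MaxDepth (as other drivers do) collapses it back to
 * [0,1].  Dropping that factor leaves z in the [0,0xffff] range the
 * mach64 primitive functions expect. */
#define MACH64_DEPTH_SCALE (1.0f / 0xffff)   /* illustrative value */

static void calc_hw_viewport_z(const float *win_z, /* {scale, offset} in [0,MaxDepth] */
                               float *hw_z_scale, float *hw_z_offset)
{
    /* before (R128-style): *hw_z_scale = MACH64_DEPTH_SCALE * win_z[0]; */
    /* after the fix: pass the full-range depth straight through */
    *hw_z_scale  = win_z[0];
    *hw_z_offset = win_z[1];
}
```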

But I still don't understand the relationship between *CalcViewport 
and the viewport calculations made in _mesa_set_viewport. In 
_mesa_set_viewport, for instance, there is a comment that says "This is 
really driver-specific and should be maintained elsewhere if at all.". It 
seems that _mesa_set_viewport scales depth to [0,MaxDepth], but the 
*CalcViewport functions in most DRI drivers "undo" this scaling, 
rescaling back to [0,1].
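The round trip I think is happening can be written out as arithmetic (MaxDepth = 0xffff for a 16-bit depth buffer is my assumption here; the function names are illustrative):

```c
#include <assert.h>
#include <math.h>

/* Assumed round trip: core Mesa maps NDC z in [-1,1] to window z in
 * [0, MaxDepth]; a driver depth_scale of 1/MaxDepth then undoes that,
 * yielding z back in [0,1]. */
#define MAX_DEPTH 0xffff   /* illustrative: 16-bit depth buffer */

/* what _mesa_set_viewport's z scale/offset would produce */
static float window_z(float ndc_z)
{
    return (ndc_z + 1.0f) * 0.5f * MAX_DEPTH;
}

/* what a driver applying depth_scale ends up with */
static float rescaled_z(float ndc_z)
{
    const float depth_scale = 1.0f / MAX_DEPTH;
    return depth_scale * window_z(ndc_z);
}
```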

My question is: why do the other DRI drivers do this (don't their chips 
expect depth in integer format as well?), and in what depth scale should 
the vertex buffers be, after all?

Understanding this is important because the current mach64 triangle 
setup engine can specify z values in 16.1 fixed-point format, but only 
the 16-bit integer part is being used, and I would like to implement 
that as well.
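For reference, a conversion into a 16.1 format would look something like this (a sketch under the assumption that 16.1 means 16 integer bits plus one fractional bit, i.e. the depth expressed in half-units):

```c
#include <assert.h>

/* Hypothetical conversion of a full-range depth into the mach64's 16.1
 * fixed-point format: the value is encoded in half-units, so the low
 * bit carries the half-step and the upper 16 bits the integer part. */
static unsigned z_to_16_1(float z)   /* z in [0, 0xffff] */
{
    return (unsigned)(z * 2.0f + 0.5f);   /* round to the nearest half */
}
```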

Regards,

José Fonseca

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
