2010/3/6  <randrianas...@gmail.com>:
> In a message dated Friday 05 March 2010 23:49:03, Stephane Marchesin wrote:
>> 2010/3/5  <randrianas...@gmail.com>:
>> > In a message dated Friday 05 March 2010 16:55:20, Jesse Barnes wrote:
>> >> make realclean and configure and make again.
>> >
>> > Removing all the new functions and reverting to mainstream Mesa works
>> > OK, even without realclean ... So it is purely my fault as a coder. But
>> > what exactly did I forget? Initializing current_primitive = -1 on context
>> > creation? State management in nouveau has changed since the first dri1
>> > attempt ....
>> >
>> > As far as I understand this code, you plug in not just a few line
>> > functions but two big function tables filled with many function
>> > pointers. This way one can plug in much more optimized render functions
>> > (for different OpenGL cases), if the hardware permits it. So you plug
>> > only one "chooser" function into Mesa's TNL module, and keep the
>> > current primitive type somewhere in your context data .....
>> >
>> > Ideally, various fixups (like polygonOffset) will be layered on top of
>> > this, making nv04_render.c a bit hard to read ...
>> >
>> > Thanks Jesse and Francisco for the time you spent reading my mails
>> > and trying to figure out what my fault was ....
>>
>> As discussed on irc, and for posterity:
>>
>> 00:35 < marcheu> you have a 16 or 8 (with multitexture) vertex cache
>> 00:35 < marcheu> these numbers you see (0xFEDCBA) are not magic
>> 00:35 < marcheu> these are the index of the vertices we want to emit
>> 00:36 < marcheu> so FEDCBA emits vertices 15,14,13,12,11 and 10
>> 00:36 < marcheu> but that means you have to actually place data at
>> these locations
>> 00:37 < marcheu> which means that if you want to do a single-pass
>> emission you have to place the first vertex at spot 10
>> 00:37 < marcheu> so basically the layout of the nv4 3D object is like that:
>> 00:37 < marcheu> - vertex 0
>> 00:37 < marcheu> - vertex 1
>> 00:37 < marcheu> ...
>> 00:37 < marcheu> - vertex 15
>> 00:38 < marcheu> - vertex indices that we want fired
>> 00:38 < marcheu> so if you want to do a 1-swoop submission, you have
>> to start from 10 or so
>> 00:38 < marcheu> that also means you _place_ vertex data at the right
>> spot. right now you place it at 0
>> 00:39 < marcheu> also you reserve one too few places on the fifo (6
>> instead of 7 on line 206)
>> 00:39 < marcheu> because you need one spot for the FEDCBA
>> 00:39 < marcheu> so you need 3 things:
>> 00:39 < marcheu> - start emitting at the right place, which probably
>> means an extra argument to BEGIN_PRIMITIVE to tell which place
>> 00:40 < marcheu> - increase the size of the BEGIN_PRIMITIVE
>> 00:40 < marcheu> - that was only two things :)
>>
>> Stephane
>
> The new patch is here, tested with trivial/tri. DANGEROUS: it may lock up
> your machine hard with anything else!
>
> A strange thing about the code: in the function swtnl_render_triangles_verts
> it went on and on, causing a segfault, until I added
>
> if (count == 3) {
>         swtnl_triangle(ctx, i + 0, i + 1, i + 2);
>         return;
> }
>
>
> This was found under gdb:
>
> =======
>
> swtnl_render_triangles_verts (ctx=0x950ca88, start=0, count=3, flags=52)
>    at nv04_render.c:285
> 285             for(i=start;i<count-5;i+=6)
> (gdb) print i
> $1 = 18
>
> =======
>
> Please, someone, enlighten me on this small C secret: what was wrong in
> this for() loop?
>

count is GLuint, i.e. unsigned. If count < 5, count - 5 wraps around to
roughly 4 billion due to unsigned underflow, so the loop condition stays
true far past the end of the data. Changing the check to i + 5 < count
should make it work.

Mike.
