https://bugs.freedesktop.org/show_bug.cgi?id=83570

--- Comment #9 from Roland Scheidegger <srol...@vmware.com> ---
(In reply to comment #8)
> for the llvm version I was going to return bld->undef. I figure I might as
> well change tgsi_exec.c version to 0xffffffff for consistency...but I don't
> have strong feelings either way.

I'm not sure what your code is going to look like, but I can't really see a way
to return bld->undef, since you will be required to replace the input vector
with something sensible (similar to udiv_emit_cpu()) (*). That said, I guess
using the same code as udiv_emit_cpu() but omitting the final "or" would also
work - in which case the result of the idiv for x / 0 would be x / -1, why
not...
I'm not really sure it makes sense to return 0xffffffff just for consistency:
for uint div this value sort of makes sense, but for signed div it does not -
MAX_INT or MIN_INT would probably make more sense. But since d3d10 doesn't
support it and glsl doesn't care, I think whatever is cheapest is ok (another
cheap option would be to use an "andnot" instead of the "or" udiv does, in
which case you'd get zero - MIN_INT/MAX_INT are going to be slightly more
complicated).
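To make the options concrete, here is a minimal scalar sketch of the mask
trick being discussed (the real udiv_emit_cpu() does this on SoA vectors via
lp_build_* helpers; the function names safe_udiv/safe_udiv_andnot/safe_idiv
below are hypothetical, for illustration only):

```c
#include <stdint.h>

/* udiv_emit_cpu()-style: force the divisor to all-ones when it is zero so
 * the division is safe, then "or" the mask into the result so that
 * x / 0 yields 0xffffffff. */
static uint32_t safe_udiv(uint32_t a, uint32_t b)
{
   uint32_t div_mask = (b == 0) ? 0xffffffffu : 0u;
   uint32_t divided = a / (b | div_mask);   /* divisor is never 0 here */
   return divided | div_mask;               /* forces ~0 when b == 0 */
}

/* The "andnot" variant: same safe division, but clear the result instead,
 * so x / 0 yields 0. */
static uint32_t safe_udiv_andnot(uint32_t a, uint32_t b)
{
   uint32_t div_mask = (b == 0) ? 0xffffffffu : 0u;
   uint32_t divided = a / (b | div_mask);
   return divided & ~div_mask;              /* forces 0 when b == 0 */
}

/* Signed idiv with the final "or" omitted: the all-ones mask is -1 as a
 * signed divisor, so x / 0 effectively computes x / -1 == -x.
 * (Caveat not addressed here: INT32_MIN / -1 overflows in C.) */
static int32_t safe_idiv(int32_t a, int32_t b)
{
   int32_t div_mask = (b == 0) ? -1 : 0;
   return a / (b | div_mask);
}
```

The scalar versions make it easy to see why omitting the "or" is cheapest for
idiv: the zero-divisor case falls out as x / -1 with no extra select needed.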

(*) Because sse doesn't have int div, llvm actually breaks the vector div down
into ordinary scalar int divs anyway (and yes, it's bound to be very slow),
thus breaking it down manually and doing per-element selection wouldn't really
be too bad. Still wouldn't really make sense imho, however.

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev