Use uint64_t instead of int64_t in the calculation,
as the result is uint64_t.
Signed-off-by: Axel Davy
---
This corrects the small mistake from
218459771a1801d7ad20dd340ac35a50f2b5b81a
as reported by Emil.
src/gallium/auxiliary/os/os_misc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Shouldn't the size argument itself be size_t instead of uint64_t?
- Chuck
On Tue, Oct 11, 2016 at 12:59 PM, Axel Davy wrote:
> Use uint64_t instead of int64_t in the calculation,
> as the result is uint64_t.
>
> Signed-off-by: Axel Davy
> ---
> This corrects the small mistake from
> 21845977
size_t could be uint32_t when compiling for 32 bits.
However, that would lead to overflow, which was the initial reason for patch
218459771a1801d7ad20dd340ac35a50f2b5b81a.
On 11/10/2016 19:43, Chuck Atkins wrote:
Shouldn't the size argument itself be size_t instead of uint64_t?
- Chuck
Specifically, the point is that size_t is for CPU sizes, while this code
is about GPU sizes. The patch is
Reviewed-by: Nicolai Hähnle
On 11.10.2016 19:57, Axel Davy wrote:
size_t could be uint32_t when compiling for 32 bits.
However, that would lead to overflow, which was the initial reason for the patch.