On 14/10/16 15:36, Brian Paul wrote:
On 10/13/2016 09:38 PM, srol...@vmware.com wrote:
From: Roland Scheidegger <srol...@vmware.com>

Compilation to actual machine code can easily take as much time as the
optimization passes on the IR, if not more, so print that time out too.
---
  src/gallium/auxiliary/gallivm/lp_bld_init.c | 11 +++++++++++
  1 file changed, 11 insertions(+)

diff --git a/src/gallium/auxiliary/gallivm/lp_bld_init.c b/src/gallium/auxiliary/gallivm/lp_bld_init.c
index 7114cde..d1b2369 100644
--- a/src/gallium/auxiliary/gallivm/lp_bld_init.c
+++ b/src/gallium/auxiliary/gallivm/lp_bld_init.c
@@ -659,13 +659,24 @@ gallivm_jit_function(struct gallivm_state *gallivm,
  {
     void *code;
     func_pointer jit_func;
+   int64_t time_begin = 0;

I think we might want to put MAYBE_UNUSED on that decl so there's not an
unused var warning in a non-debug build.
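I.e. something like this (sketch only; MAYBE_UNUSED being Mesa's wrapper
around __attribute__((unused)) on GCC/Clang):

   /* MAYBE_UNUSED tells the compiler the variable may legitimately
    * go unreferenced, silencing -Wunused-variable. */
   MAYBE_UNUSED int64_t time_begin = 0;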

I think it might be OK because Roland is using `if` instead of `#if`. That said, some compilers are too smart for their own good.
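To spell out the distinction (rough sketch, not from the patch; the
DEBUG guard is just a hypothetical example):

   /* Runtime check: time_begin is referenced in every build, so
    * -Wunused-variable stays quiet even when GALLIVM_DEBUG_PERF
    * is never set at runtime. */
   if (gallivm_debug & GALLIVM_DEBUG_PERF)
      time_begin = os_time_get();

   /* Hypothetical compile-time variant: in a non-debug build the
    * preprocessor drops the reference entirely, so the unused
    * variable warning would fire. */
   #if defined(DEBUG)
      time_begin = os_time_get();
   #endif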



     assert(gallivm->compiled);
     assert(gallivm->engine);

+   if (gallivm_debug & GALLIVM_DEBUG_PERF)
+      time_begin = os_time_get();
+
     code = LLVMGetPointerToGlobal(gallivm->engine, func);
     assert(code);
     jit_func = pointer_to_func(code);

+   if (gallivm_debug & GALLIVM_DEBUG_PERF) {
+      int64_t time_end = os_time_get();
+      int time_msec = (int)((time_end - time_begin) / 1000);
+      debug_printf("   jitting func %s took %d msec\n",
+                   LLVMGetValueName(func), time_msec);
+   }
+
     return jit_func;
  }
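As an aside, the timing pattern in isolation looks like this (rough
sketch only; do_work() is a hypothetical stand-in for the JIT step, and
os_time_get() from gallium's os/os_time.h returns microseconds):

   #include "os/os_time.h"
   #include "util/u_debug.h"

   /* Sketch: time a hypothetical do_work() call the same way the
    * patch times LLVMGetPointerToGlobal(). */
   static void
   time_do_work(void)
   {
      int64_t t0 = os_time_get();   /* wall clock, in microseconds */
      do_work();                    /* hypothetical workload */
      int64_t t1 = os_time_get();
      /* divide the 64-bit delta first, then narrow to int */
      debug_printf("do_work took %d msec\n", (int)((t1 - t0) / 1000));
   }

In the patch proper the printout is gated behind GALLIVM_DEBUG_PERF,
i.e. it only shows up when running with GALLIVM_DEBUG=perf (if I
remember the option name right).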


Looks OK otherwise.

Reviewed-by: Brian Paul <bri...@vmware.com>

Reviewed-by: Jose Fonseca <jfons...@vmware.com>

Jose
