The technique of putting functions into a separate section to identify them during stack traversal breaks when optimizations are enabled. This is because gcc may optimize some function calls into jumps (sibling-call optimization) instead of actual call instructions. No new stack frame is created for such "calls", so when the stack walker is invoked from such a function it cannot tell that it is executing VM native method code.
The solution to this problem is to maintain a per-thread stack structure storing the function address and the ESP value just before each VM native call. These values are pushed onto that stack before every VM native call and popped after the call. The ESP value is used to calculate the frame pointer of the called function; it is not stored directly because code generated at call sites should be as fast as possible. A similar mechanism is used for JNI native calls: another per-thread stack structure is maintained to keep track of JNI method calls (further referred to as 'stack'). JNI method calls are more complicated than VM native calls because we cannot assume that the frame pointer is valid in JNI methods. This can happen, for example, when a native library has been compiled with the -fomit-frame-pointer option. Before a JNI method is called, the following information is pushed onto the special stack: the caller's frame pointer (EBP), the call site address (EIP), and a struct vm_method pointer for the JNI method. The caller's frame pointer and call site address allow the stack walker to skip to the caller of the JNI method. The pointer to struct vm_method is needed to obtain the native function address for the JNI method when a stack trace element is created for the JNI call; in general we do not know a JNI function's address at compilation time because JNI functions are dynamically linked. Because of the lack of a frame pointer in JNI methods we must also keep track of transitions from JNI back to the VM, which happen when JNI interface functions are called. Each JNI stack entry has a special field, 'jni_interface_frame', which holds the frame pointer of the JNI interface function that was called. This allows the stack walker to detect when the transition should be made using the information in the jni stack entry: when jni_interface_frame is hit, we skip to the JNI method's caller frame. The field's value is undefined/invalid when no JNI interface function is currently being called for the given JNI method call.
This behavior is correct because stack walker is called only from VM which can be reached from JNI only by calling some JNI interface function. Per-thread stacks for VM and JNI native calls have limited size. Therefore we must check for stack overflow. The check is done by polling a preallocated guard memory region. For overflow offsets memory pages are hidden and SIGSEGV handler throws StackOverflowError when such access happens. This makes stack overflow check require only a single test instruction. Signed-off-by: Tomek Grabiec <tgrab...@gmail.com> --- arch/x86/include/arch/call.h | 35 ++++++ arch/x86/insn-selector.brg | 122 ++++++++++++++++++++++- include/jit/exception.h | 17 ++-- include/vm/preload.h | 2 + include/vm/stack-trace.h | 104 ++++++++++++++++++- test/arch-x86/Makefile | 3 + test/include/arch/stack-frame.h | 4 + test/vm/preload-stub.c | 19 ++++ vm/call.c | 14 +++ vm/jni-interface.c | 90 ++++++++++++---- vm/preload.c | 9 ++ vm/signal.c | 21 ++++ vm/stack-trace.c | 218 +++++++++++++++++++++++++++++++++++++-- 13 files changed, 614 insertions(+), 44 deletions(-) diff --git a/arch/x86/include/arch/call.h b/arch/x86/include/arch/call.h index 18dbebb..bc9e9ec 100644 --- a/arch/x86/include/arch/call.h +++ b/arch/x86/include/arch/call.h @@ -24,6 +24,41 @@ : "%ecx", "%edi", "cc" \ ); \ } + +/** + * This calls a VM native function with call arguments copied from + * @args array. The array contains @args_count elements of machine + * word size. The @target must be a pointer to a VM function. Call + * result will be stored in @result. 
+ */ +#define vm_native_call(target, args, args_count, result) { \ + __asm__ volatile ( \ + "movl %%ebx, %%ecx \n" \ + "shl $2, %%ebx \n" \ + "subl %%ebx, %%esp \n" \ + "movl %%esp, %%edi \n" \ + "cld \n" \ + "rep movsd \n" \ + "movl %%ebx, %%esi \n" \ + \ + "pushl %%esp \n" \ + "pushl %3 \n" \ + "call vm_enter_vm_native \n" \ + "addl $8, %%esp \n" \ + "test %%eax, %%eax \n" \ + "jnz 1f \n" \ + \ + "call * -8(%%esp)\n" \ + "movl %%eax, %0 \n" \ + \ + "call vm_leave_vm_native \n" \ + \ + "1: addl %%esi, %%esp \n" \ + : "=r" (result) \ + : "b" (args_count), "S"(args), "r"(target) \ + : "%ecx", "%edi", "cc" \ + ); \ + } #else #error NOT IMPLEMENTED #endif diff --git a/arch/x86/insn-selector.brg b/arch/x86/insn-selector.brg index 87bf8f9..d54a86c 100644 --- a/arch/x86/insn-selector.brg +++ b/arch/x86/insn-selector.brg @@ -32,6 +32,7 @@ #include <vm/field.h> #include <vm/method.h> #include <vm/object.h> +#include <vm/stack-trace.h> #define MBCGEN_TYPE struct basic_block #define MBCOST_DATA struct basic_block @@ -2018,6 +2019,118 @@ emulate_op_64(struct _MBState *state, struct basic_block *s, select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, edx, state->reg2)); } +static void select_jni_call(struct basic_block *s, struct tree_node *tree, + struct insn *call_insn, struct vm_method *method) +{ + struct var_info *offset_reg; + struct var_info *frame_reg; + unsigned long offset_tls; + unsigned long tr_addr; + unsigned long guard; + unsigned long field; + + frame_reg = get_fixed_var(s->b_parent, REG_EBP); + offset_reg = get_var(s->b_parent, J_REFERENCE); + + offset_tls = get_thread_local_offset(&jni_stack_offset); + select_insn(s, tree, + memdisp_reg_insn(INSN_MOV_THREAD_LOCAL_MEMDISP_REG, + offset_tls, offset_reg)); + + tr_addr = get_thread_local_offset(&jni_stack); + + /* Check for stack overflow */ + guard = (unsigned long) jni_stack_offset_guard; + select_insn(s, tree, membase_reg_insn(INSN_TEST_MEMBASE_REG, + offset_reg, guard, offset_reg)); + + /* Set ->caller_frame 
*/ + field = tr_addr + offsetof(struct jni_stack_entry, caller_frame); + select_insn(s, tree, reg_membase_insn(INSN_MOV_REG_THREAD_LOCAL_MEMBASE, + frame_reg, offset_reg, field)); + + /* Set ->call_site_addr */ + field = tr_addr + offsetof(struct jni_stack_entry, call_site_addr); + select_insn(s, tree, + membase_insn(INSN_MOV_IP_THREAD_LOCAL_MEMBASE, + offset_reg, field)); + + /* Set ->method */ + field = tr_addr + offsetof(struct jni_stack_entry, method); + select_insn(s, tree, imm_membase_insn(INSN_MOV_IMM_THREAD_LOCAL_MEMBASE, + (unsigned long) method, offset_reg, field)); + + /* Advance jni_stack_offset */ + select_insn(s, tree, + imm_reg_insn(INSN_ADD_IMM_REG, sizeof(struct jni_stack_entry), + offset_reg)); + select_insn(s, tree, + reg_memdisp_insn(INSN_MOV_REG_THREAD_LOCAL_MEMDISP, + offset_reg, offset_tls)); + + select_insn(s, tree, call_insn); + + /* Restore jni_stack_ffset (pop) */ + select_insn(s, tree, + imm_reg_insn(INSN_SUB_IMM_REG, sizeof(struct jni_stack_entry), + offset_reg)); + select_insn(s, tree, + reg_memdisp_insn(INSN_MOV_REG_THREAD_LOCAL_MEMDISP, + offset_reg, offset_tls)); +} + +static void select_vm_native_call(struct basic_block *s, struct tree_node *tree, + struct insn *call_insn, void *target) +{ + struct var_info *offset_reg; + struct var_info *sp_reg; + unsigned long offset_tls; + unsigned long tr_addr; + unsigned long guard; + unsigned long field; + + sp_reg = get_fixed_var(s->b_parent, REG_ESP); + offset_reg = get_var(s->b_parent, J_REFERENCE); + + offset_tls = get_thread_local_offset(&vm_native_stack_offset); + select_insn(s, tree, + memdisp_reg_insn(INSN_MOV_THREAD_LOCAL_MEMDISP_REG, + offset_tls, offset_reg)); + + tr_addr = get_thread_local_offset(&vm_native_stack); + + /* Check for stack overflow */ + guard = (unsigned long) vm_native_stack_offset_guard; + select_insn(s, tree, membase_reg_insn(INSN_TEST_MEMBASE_REG, + offset_reg, guard, offset_reg)); + + /* Set ->stack_ptr */ + field = tr_addr + offsetof(struct 
vm_native_stack_entry, stack_ptr); + select_insn(s, tree, reg_membase_insn(INSN_MOV_REG_THREAD_LOCAL_MEMBASE, + sp_reg, offset_reg, field)); + + /* Set ->target */ + field = tr_addr + offsetof(struct vm_native_stack_entry, target); + select_insn(s, tree, imm_membase_insn(INSN_MOV_IMM_THREAD_LOCAL_MEMBASE, + (unsigned long) target, offset_reg, field)); + + /* Advance vm_native_stack_offset */ + select_insn(s, tree, imm_reg_insn(INSN_ADD_IMM_REG, + sizeof(struct vm_native_stack_entry), offset_reg)); + select_insn(s, tree, + reg_memdisp_insn(INSN_MOV_REG_THREAD_LOCAL_MEMDISP, + offset_reg, offset_tls)); + + select_insn(s, tree, call_insn); + + /* Restore vm_native_stack_offset (pop) */ + select_insn(s, tree, imm_reg_insn(INSN_SUB_IMM_REG, + sizeof(struct vm_native_stack_entry), offset_reg)); + select_insn(s, tree, + reg_memdisp_insn(INSN_MOV_REG_THREAD_LOCAL_MEMDISP, + offset_reg, offset_tls)); +} + static void invoke(struct basic_block *s, struct tree_node *tree, struct compilation_unit *cu, struct vm_method *method) { bool is_compiled; @@ -2036,7 +2149,14 @@ static void invoke(struct basic_block *s, struct tree_node *tree, struct compila target = vm_method_trampoline_ptr(method); call_insn = rel_insn(INSN_CALL_REL, (unsigned long) target); - select_insn(s, tree, call_insn); + + if (vm_method_is_jni(method)) + select_jni_call(s, tree, call_insn, method); + else if (vm_method_is_vm_native(method)) + select_vm_native_call(s, tree, call_insn, + vm_method_native_ptr(method)); + else + select_insn(s, tree, call_insn); if (!is_compiled) { struct fixup_site *fixup; diff --git a/include/jit/exception.h b/include/jit/exception.h index ea5f7e6..68a1da1 100644 --- a/include/jit/exception.h +++ b/include/jit/exception.h @@ -12,6 +12,7 @@ #include "vm/die.h" #include "vm/method.h" +#include "vm/stack-trace.h" #include "vm/vm.h" struct cafebabe_code_attribute_exception; @@ -97,16 +98,18 @@ static inline struct vm_object *exception_occurred(void) void *eh; \ \ native_ptr = 
__builtin_return_address(0) - 1; \ - if (is_native((unsigned long)native_ptr)) \ - die("must not be called from not-JIT code"); \ + if (!is_native((unsigned long)native_ptr)) { \ + frame = __builtin_frame_address(1); \ \ - frame = __builtin_frame_address(1); \ + if (vm_native_stack_get_frame() == frame) \ + vm_leave_vm_native(); \ \ - cu = jit_lookup_cu((unsigned long)native_ptr); \ - eh = throw_exception_from(cu, frame, native_ptr); \ + cu = jit_lookup_cu((unsigned long)native_ptr); \ + eh = throw_exception_from(cu, frame, native_ptr); \ \ - __override_return_address(eh); \ - __cleanup_args(args_size); \ + __override_return_address(eh); \ + __cleanup_args(args_size); \ + } \ }) #endif /* JATO_JIT_EXCEPTION_H */ diff --git a/include/vm/preload.h b/include/vm/preload.h index 584ae9f..f480408 100644 --- a/include/vm/preload.h +++ b/include/vm/preload.h @@ -23,6 +23,7 @@ extern struct vm_class *vm_java_lang_RuntimeException; extern struct vm_class *vm_java_lang_ExceptionInInitializerError; extern struct vm_class *vm_java_lang_NoSuchFieldError; extern struct vm_class *vm_java_lang_NoSuchMethodError; +extern struct vm_class *vm_java_lang_StackOverflowError; extern struct vm_class *vm_boolean_class; extern struct vm_class *vm_char_class; extern struct vm_class *vm_float_class; @@ -44,6 +45,7 @@ extern struct vm_method *vm_java_lang_Throwable_initCause; extern struct vm_method *vm_java_lang_Throwable_getCause; extern struct vm_method *vm_java_lang_Throwable_toString; extern struct vm_method *vm_java_lang_Throwable_getStackTrace; +extern struct vm_method *vm_java_lang_Throwable_setStackTrace; extern struct vm_method *vm_java_lang_StackTraceElement_getFileName; extern struct vm_method *vm_java_lang_StackTraceElement_getClassName; extern struct vm_method *vm_java_lang_StackTraceElement_getMethodName; diff --git a/include/vm/stack-trace.h b/include/vm/stack-trace.h index e3daf1d..504c450 100644 --- a/include/vm/stack-trace.h +++ b/include/vm/stack-trace.h @@ -9,26 +9,121 @@ 
#include <stdbool.h> +struct compilation_unit; +struct vm_method; +struct vm_class; + +struct jni_stack_entry { + void *caller_frame; + unsigned long call_site_addr; + + /* We don't know the address of JNI callee at compilation time + * so code generated for JNI call site stores a pointer to + * vm_method from which we obtain target pointer at the time + * when stack is traversed. */ + struct vm_method *method; + + /* This field is filled in on entry to any JNI interface + * function. Having this field filled in allows the + * stackwalker to skip whole JNI call. When JNI interface + * fuction returns to JNI caller nothing is done because this + * structure can be accessed only when we're in VM which can + * happen only after some JNI interface function was + * called. */ + void *jni_interface_frame; +} __attribute__((packed)); + +struct vm_native_stack_entry { + void *stack_ptr; + void *target; +} __attribute__((packed)); + +#define VM_NATIVE_STACK_SIZE 256 +#define JNI_STACK_SIZE 1024 + +extern void *vm_native_stack_offset_guard; +extern void *vm_native_stack_badoffset; +extern void *jni_stack_offset_guard; +extern void *jni_stack_badoffset; + +extern __thread struct jni_stack_entry jni_stack[JNI_STACK_SIZE]; +extern __thread unsigned long jni_stack_offset; +extern __thread struct vm_native_stack_entry vm_native_stack[VM_NATIVE_STACK_SIZE]; +extern __thread unsigned long vm_native_stack_offset; + +int vm_enter_jni(void *caller_frame, unsigned long call_site_addr, + struct vm_method *method); +int vm_enter_vm_native(void *target, void *stack_ptr); +void vm_leave_jni(void); +void vm_leave_vm_native(void); + +static inline int jni_stack_index(void) +{ + return jni_stack_offset / sizeof(struct jni_stack_entry); +} + +static inline int vm_native_stack_index(void) +{ + return vm_native_stack_offset / + sizeof(struct vm_native_stack_entry); +} + +static inline void *vm_native_stack_get_frame(void) +{ + if (vm_native_stack_offset == 0) + return NULL; + + return 
vm_native_stack[vm_native_stack_index() - 1].stack_ptr - + sizeof(struct native_stack_frame); +} + +/* + * This is defined as a macro because we must assure that + * __builtin_frame_address() returns the macro user's frame. Compiler + * optimizations might optimize some function calls so that the target + * function runs in the caller's frame. We want to avoid this situation. + */ +#define vm_enter_jni_interface() { \ + jni_stack[jni_stack_index() - 1].jni_interface_frame = \ + __builtin_frame_address(0); \ + } + + /* * Points to a native stack frame that is considered as bottom-most * for given thread. */ extern __thread struct native_stack_frame *bottom_stack_frame; +enum stack_trace_elem_type { + STACK_TRACE_ELEM_TYPE_JIT, + STACK_TRACE_ELEM_TYPE_JNI, + STACK_TRACE_ELEM_TYPE_VM_NATIVE, + + STACK_TRACE_ELEM_TYPE_OTHER, + STACK_TRACE_ELEM_TYPE_TRAMPOLINE, +}; + struct stack_trace_elem { /* Holds instruction address of this stack trace element. */ unsigned long addr; + enum stack_trace_elem_type type; + + int vm_native_stack_index; + int jni_stack_index; + /* * If true then @frame has format of struct native_stack_frame * and struct jit_stack_frame otherwise. */ bool is_native; - /* If true then frame belongs to a trampoline */ - bool is_trampoline; - - /* Points to a stack frame of this stack trace element. */ + /* + * Points to a stack frame of this stack trace element. Note + * that for type == STACK_TRACE_ELEM_TYPE_JNI value of @frame + * is undefined. 
+ */ void *frame; }; @@ -44,5 +139,6 @@ struct vm_object * __vm_native native_vmthrowable_get_stack_trace(struct vm_obje bool called_from_jit_trampoline(struct native_stack_frame *frame); void vm_print_exception(struct vm_object *exception); +struct vm_object *vm_alloc_stack_overflow_error(void); #endif /* JATO_VM_STACK_TRACE_H */ diff --git a/test/arch-x86/Makefile b/test/arch-x86/Makefile index b44f29f..a272946 100644 --- a/test/arch-x86/Makefile +++ b/test/arch-x86/Makefile @@ -53,14 +53,17 @@ OBJS = \ ../../lib/list.o \ ../../lib/radix-tree.o \ ../../lib/string.o \ + ../../vm/call.o \ ../../vm/class.o \ ../../vm/die.o \ ../../vm/field.o \ ../../vm/guard-page.o \ ../../vm/itable.o \ + ../../vm/jni-interface.o \ ../../vm/method.o \ ../../vm/object.o \ ../../vm/stack.o \ + ../../vm/stack-trace.o \ ../../vm/static.o \ ../../vm/types.o \ ../../vm/utf8.o \ diff --git a/test/include/arch/stack-frame.h b/test/include/arch/stack-frame.h index 8441d3f..be65acb 100644 --- a/test/include/arch/stack-frame.h +++ b/test/include/arch/stack-frame.h @@ -7,4 +7,8 @@ struct jit_stack_frame { unsigned long return_address; }; +struct native_stack_frame { + unsigned long return_address; +}; + #endif /* MMIX_STACK_FRAME_H */ diff --git a/test/vm/preload-stub.c b/test/vm/preload-stub.c index ea53d09..0fb7714 100644 --- a/test/vm/preload-stub.c +++ b/test/vm/preload-stub.c @@ -10,7 +10,13 @@ struct vm_class *vm_java_lang_String; struct vm_field *vm_java_lang_String_offset; struct vm_field *vm_java_lang_String_count; struct vm_field *vm_java_lang_String_value; +struct vm_field *vm_java_lang_Throwable_detailMessage; +struct vm_field *vm_java_lang_VMThrowable_vmdata; +struct vm_class *vm_java_lang_Throwable; +struct vm_class *vm_java_lang_VMThrowable; +struct vm_class *vm_java_lang_StackTraceElement; +struct vm_class *vm_array_of_java_lang_StackTraceElement; struct vm_class *vm_java_lang_Error; struct vm_class *vm_java_lang_ArithmeticException; struct vm_class 
*vm_java_lang_NullPointerException; @@ -22,5 +28,18 @@ struct vm_class *vm_java_lang_RuntimeException; struct vm_class *vm_java_lang_ExceptionInInitializerError; struct vm_class *vm_java_lang_NegativeArraySizeException; struct vm_class *vm_java_lang_ClassCastException; +struct vm_class *vm_java_lang_NoSuchFieldError; +struct vm_class *vm_java_lang_NoSuchMethodError; +struct vm_class *vm_java_lang_StackOverflowError; struct vm_method *vm_java_lang_Throwable_initCause; +struct vm_method *vm_java_lang_Throwable_getCause; +struct vm_method *vm_java_lang_Throwable_toString; +struct vm_method *vm_java_lang_Throwable_getStackTrace; +struct vm_method *vm_java_lang_Throwable_setStackTrace; +struct vm_method *vm_java_lang_StackTraceElement_getFileName; +struct vm_method *vm_java_lang_StackTraceElement_getClassName; +struct vm_method *vm_java_lang_StackTraceElement_getMethodName; +struct vm_method *vm_java_lang_StackTraceElement_getLineNumber; +struct vm_method *vm_java_lang_StackTraceElement_isNativeMethod; +struct vm_method *vm_java_lang_StackTraceElement_equals; diff --git a/vm/call.c b/vm/call.c index cd8d7e6..09ac7d1 100644 --- a/vm/call.c +++ b/vm/call.c @@ -44,8 +44,22 @@ vm_call_method_a(struct vm_method *method, unsigned long *args) void *target; target = vm_method_call_ptr(method); + + if (vm_method_is_vm_native(method)) { + vm_native_call(target, args, method->args_count, + result); + return result; + } + + if (vm_method_is_jni(method)) + if (vm_enter_jni(__builtin_frame_address(0), 0, method)) + return 0; + native_call(target, args, method->args_count, result); + if (vm_method_is_jni(method)) + vm_leave_jni(); + return result; } diff --git a/vm/jni-interface.c b/vm/jni-interface.c index 7614200..a09fe34 100644 --- a/vm/jni-interface.c +++ b/vm/jni-interface.c @@ -52,10 +52,13 @@ if (!vm_object_is_instance_of((x), vm_java_lang_Class)) \ return NULL; -static jclass vm_jni_find_class(struct vm_jni_env *env, const char *name) +static jclass +vm_jni_find_class(struct 
vm_jni_env *env, const char *name) { struct vm_class *class; + vm_enter_jni_interface(); + class = classloader_load(name); if (!class) { signal_new_exception(vm_java_lang_NoClassDefFoundError, @@ -70,12 +73,15 @@ static jclass vm_jni_find_class(struct vm_jni_env *env, const char *name) return class->object; } -static jmethodID vm_jni_get_method_id(struct vm_jni_env *env, jclass clazz, - const char *name, const char *sig) +static jmethodID +vm_jni_get_method_id(struct vm_jni_env *env, jclass clazz, const char *name, + const char *sig) { struct vm_method *mb; struct vm_class *class; + vm_enter_jni_interface(); + check_null(clazz); check_class_object(clazz); @@ -96,12 +102,15 @@ static jmethodID vm_jni_get_method_id(struct vm_jni_env *env, jclass clazz, return mb; } -static jfieldID vm_jni_get_field_id(struct vm_jni_env *env, jclass clazz, - const char *name, const char *sig) +static jfieldID +vm_jni_get_field_id(struct vm_jni_env *env, jclass clazz, const char *name, + const char *sig) { struct vm_field *fb; struct vm_class *class; + vm_enter_jni_interface(); + check_null(clazz); check_class_object(clazz); @@ -122,12 +131,15 @@ static jfieldID vm_jni_get_field_id(struct vm_jni_env *env, jclass clazz, return fb; } -static jmethodID vm_jni_get_static_method_id(struct vm_jni_env *env, - jclass clazz, const char *name, const char *sig) +static jmethodID +vm_jni_get_static_method_id(struct vm_jni_env *env, jclass clazz, + const char *name, const char *sig) { struct vm_method *mb; struct vm_class *class; + vm_enter_jni_interface(); + check_null(clazz); class = vm_class_get_class_from_class_object(clazz); @@ -147,11 +159,14 @@ static jmethodID vm_jni_get_static_method_id(struct vm_jni_env *env, return mb; } -static const jbyte* vm_jni_get_string_utf_chars(struct vm_jni_env *env, jobject string, - jboolean *is_copy) +static const jbyte* +vm_jni_get_string_utf_chars(struct vm_jni_env *env, jobject string, + jboolean *is_copy) { jbyte *array; + vm_enter_jni_interface(); + if 
(!string) return NULL; @@ -165,14 +180,20 @@ static const jbyte* vm_jni_get_string_utf_chars(struct vm_jni_env *env, jobject return array; } -static void vm_release_string_utf_chars(struct vm_jni_env *env, jobject string, - const char *utf) +static void +vm_release_string_utf_chars(struct vm_jni_env *env, jobject string, + const char *utf) { + vm_enter_jni_interface(); + free((char *)utf); } -static jint vm_jni_throw(struct vm_jni_env *env, jthrowable exception) +static jint +vm_jni_throw(struct vm_jni_env *env, jthrowable exception) { + vm_enter_jni_interface(); + if (!vm_object_is_instance_of(exception, vm_java_lang_Throwable)) return -1; @@ -180,11 +201,13 @@ static jint vm_jni_throw(struct vm_jni_env *env, jthrowable exception) return 0; } -static jint vm_jni_throw_new(struct vm_jni_env *env, jclass clazz, - const char *message) +static jint +vm_jni_throw_new(struct vm_jni_env *env, jclass clazz, const char *message) { struct vm_class *class; + vm_enter_jni_interface(); + if (!clazz) return -1; @@ -198,48 +221,66 @@ static jint vm_jni_throw_new(struct vm_jni_env *env, jclass clazz, static jthrowable vm_jni_exception_occurred(struct vm_jni_env *env) { + vm_enter_jni_interface(); + return exception_occurred(); } static void vm_jni_exception_describe(struct vm_jni_env *env) { + vm_enter_jni_interface(); + if (exception_occurred()) vm_print_exception(exception_occurred()); } static void vm_jni_exception_clear(struct vm_jni_env *env) { + vm_enter_jni_interface(); + clear_exception(); } -static void vm_jni_fatal_error(struct vm_jni_env *env, const char *msg) +static void +vm_jni_fatal_error(struct vm_jni_env *env, const char *msg) { + vm_enter_jni_interface(); + die("%s", msg); } -static void vm_jni_call_static_void_method(struct vm_jni_env *env, jclass clazz, - jmethodID methodID, ...) +static void +vm_jni_call_static_void_method(struct vm_jni_env *env, jclass clazz, + jmethodID methodID, ...) 
{ va_list args; + vm_enter_jni_interface(); + va_start(args, methodID); vm_call_method_v(methodID, args); va_end(args); } +extern void print_trace(void); + static void vm_jni_call_static_void_method_v(struct vm_jni_env *env, jclass clazz, jmethodID methodID, va_list args) { + vm_enter_jni_interface(); vm_call_method_v(methodID, args); } -static jobject vm_jni_call_static_object_method(struct vm_jni_env *env, - jclass clazz, jmethodID methodID, ...) +static jobject +vm_jni_call_static_object_method(struct vm_jni_env *env, jclass clazz, + jmethodID methodID, ...) { jobject result; va_list args; + vm_enter_jni_interface(); + va_start(args, methodID); result = (jobject) vm_call_method_v(methodID, args); va_end(args); @@ -251,15 +292,20 @@ static jobject vm_jni_call_static_object_method_v(struct vm_jni_env *env, jclass clazz, jmethodID methodID, va_list args) { + vm_enter_jni_interface(); + return (jobject) vm_call_method_v(methodID, args); } -static jbyte vm_jni_call_static_byte_method(struct vm_jni_env *env, - jclass clazz, jmethodID methodID, ...) +static jbyte +vm_jni_call_static_byte_method(struct vm_jni_env *env, jclass clazz, + jmethodID methodID, ...) 
{ jbyte result; va_list args; + vm_enter_jni_interface(); + va_start(args, methodID); result = (jbyte) vm_call_method_v(methodID, args); va_end(args); @@ -271,6 +317,8 @@ static jbyte vm_jni_call_static_byte_method_v(struct vm_jni_env *env, jclass clazz, jmethodID methodID, va_list args) { + vm_enter_jni_interface(); + return (jbyte) vm_call_method_v(methodID, args); } diff --git a/vm/preload.c b/vm/preload.c index 4d2c5f6..5b767e0 100644 --- a/vm/preload.c +++ b/vm/preload.c @@ -59,6 +59,7 @@ struct vm_class *vm_java_lang_RuntimeException; struct vm_class *vm_java_lang_ExceptionInInitializerError; struct vm_class *vm_java_lang_NoSuchFieldError; struct vm_class *vm_java_lang_NoSuchMethodError; +struct vm_class *vm_java_lang_StackOverflowError; struct vm_class *vm_boolean_class; struct vm_class *vm_char_class; struct vm_class *vm_float_class; @@ -91,6 +92,7 @@ static const struct preload_entry preload_entries[] = { { "java/lang/UnsatisfiedLinkError", &vm_java_lang_UnsatisfiedLinkError }, { "java/lang/NoSuchFieldError", &vm_java_lang_NoSuchFieldError }, { "java/lang/NoSuchMethodError", &vm_java_lang_NoSuchMethodError }, + { "java/lang/StackOverflowError", &vm_java_lang_StackOverflowError }, }; static const struct preload_entry primitive_preload_entries[] = { @@ -139,6 +141,7 @@ struct vm_method *vm_java_lang_Throwable_initCause; struct vm_method *vm_java_lang_Throwable_getCause; struct vm_method *vm_java_lang_Throwable_toString; struct vm_method *vm_java_lang_Throwable_getStackTrace; +struct vm_method *vm_java_lang_Throwable_setStackTrace; struct vm_method *vm_java_lang_StackTraceElement_getFileName; struct vm_method *vm_java_lang_StackTraceElement_getClassName; struct vm_method *vm_java_lang_StackTraceElement_getMethodName; @@ -173,6 +176,12 @@ static const struct method_preload_entry method_preload_entries[] = { }, { &vm_java_lang_Throwable, + "setStackTrace", + "([Ljava/lang/StackTraceElement;)V", + &vm_java_lang_Throwable_setStackTrace, + }, + { + 
&vm_java_lang_Throwable, "toString", "()Ljava/lang/String;", &vm_java_lang_Throwable_toString, diff --git a/vm/signal.c b/vm/signal.c index cba8460..85d69a4 100644 --- a/vm/signal.c +++ b/vm/signal.c @@ -29,6 +29,8 @@ #include "vm/preload.h" #include "vm/backtrace.h" #include "vm/signal.h" +#include "vm/stack-trace.h" +#include "vm/call.h" #include "vm/class.h" #include "vm/object.h" #include "vm/jni.h" @@ -75,6 +77,19 @@ static unsigned long throw_null_pointer_exception(unsigned long src_addr) return throw_from_signal_bh(src_addr); } +static unsigned long throw_stack_overflow_error(unsigned long src_addr) +{ + struct vm_object *obj; + + obj = vm_alloc_stack_overflow_error(); + if (!obj) + error("failed to allocate instance of StackOverflowError."); + + signal_exception(obj); + + return throw_from_signal_bh(src_addr); +} + static void sigfpe_handler(int sig, siginfo_t *si, void *ctx) { if (signal_from_native(ctx)) @@ -137,6 +152,12 @@ static void sigsegv_handler(int sig, siginfo_t *si, void *ctx) return; } + if (si->si_addr == jni_stack_badoffset || + si->si_addr == vm_native_stack_badoffset) { + install_signal_bh(ctx, throw_stack_overflow_error); + return; + } + exit: vm_jni_check_trap(si->si_addr); diff --git a/vm/stack-trace.c b/vm/stack-trace.c index 4e8b05b..fe8e418 100644 --- a/vm/stack-trace.c +++ b/vm/stack-trace.c @@ -24,14 +24,17 @@ * Please refer to the file LICENSE for details. 
*/ +#include "vm/call.h" #include "vm/class.h" #include "vm/classloader.h" +#include "vm/guard-page.h" +#include "vm/jni.h" #include "vm/object.h" #include "vm/method.h" #include "vm/natives.h" #include "vm/object.h" -#include "vm/stack-trace.h" #include "vm/preload.h" +#include "vm/stack-trace.h" #include "vm/system.h" #include "jit/bc-offset-mapping.h" @@ -42,6 +45,16 @@ #include <malloc.h> #include <stdio.h> +void *vm_native_stack_offset_guard; +void *vm_native_stack_badoffset; +void *jni_stack_offset_guard; +void *jni_stack_badoffset; + +__thread struct jni_stack_entry jni_stack[JNI_STACK_SIZE]; +__thread unsigned long jni_stack_offset; +__thread struct vm_native_stack_entry vm_native_stack[VM_NATIVE_STACK_SIZE]; +__thread unsigned long vm_native_stack_offset; + __thread struct native_stack_frame *bottom_stack_frame; typedef void (*ste_init_fn)(struct vm_object *, struct vm_object *, int, @@ -59,6 +72,23 @@ void init_stack_trace_printing(void) struct vm_method *throwable_tostring_mb; struct vm_method *throwable_stacktracestring_mb; + vm_native_stack_offset = 0; + jni_stack_offset = 0; + + /* Initialize JNI and VM native stacks' offset guards */ + unsigned long valid_size; + + valid_size = VM_NATIVE_STACK_SIZE * + sizeof(struct vm_native_stack_entry); + vm_native_stack_offset_guard = alloc_offset_guard(valid_size, 1); + vm_native_stack_badoffset = + valid_size + vm_native_stack_offset_guard; + + valid_size = JNI_STACK_SIZE * sizeof(struct jni_stack_entry); + jni_stack_offset_guard = alloc_offset_guard(valid_size, 1); + jni_stack_badoffset = valid_size + jni_stack_offset_guard; + + /* Preload methods */ ste_init_mb = vm_class_get_method_recursive( vm_java_lang_StackTraceElement, "<init>", @@ -83,11 +113,85 @@ void init_stack_trace_printing(void) error("initialization failed"); } +static bool jni_stack_is_full(void) +{ + return jni_stack_index() == JNI_STACK_SIZE; +} + +static bool vm_native_stack_is_full(void) +{ + return vm_native_stack_index() == 
VM_NATIVE_STACK_SIZE; +} + +static inline struct jni_stack_entry *new_jni_stack_entry(void) +{ + struct jni_stack_entry *tr = (void*)jni_stack + jni_stack_offset; + + jni_stack_offset += sizeof(struct jni_stack_entry); + return tr; +} + +static inline struct vm_native_stack_entry *new_vm_native_stack_entry(void) +{ + struct vm_native_stack_entry *tr = (void*)vm_native_stack + + vm_native_stack_offset; + + vm_native_stack_offset += sizeof(struct vm_native_stack_entry); + return tr; +} + +int vm_enter_jni(void *caller_frame, unsigned long call_site_addr, + struct vm_method *method) +{ + if (jni_stack_is_full()) { + struct vm_object *e = vm_alloc_stack_overflow_error(); + if (!e) + error("failed to allocate exception"); + + signal_exception(e); + return -1; + } + + struct jni_stack_entry *tr = new_jni_stack_entry(); + + tr->caller_frame = caller_frame; + tr->call_site_addr = call_site_addr; + tr->method = method; + return 0; +} + +int vm_enter_vm_native(void *target, void *stack_ptr) +{ + if (vm_native_stack_is_full()) { + struct vm_object *e = vm_alloc_stack_overflow_error(); + if (!e) + error("failed to allocate exception"); + + signal_exception(e); + return -1; + } + + struct vm_native_stack_entry *tr = new_vm_native_stack_entry(); + + tr->stack_ptr = stack_ptr; + tr->target = target; + return 0; +} + +void vm_leave_jni() +{ + jni_stack_offset -= sizeof(struct jni_stack_entry); +} + +void vm_leave_vm_native() +{ + vm_native_stack_offset -= sizeof(struct vm_native_stack_entry); +} + /** - * get_caller_stack_trace_elem - makes @elem to point to the stack - * trace element corresponding to the caller of given element. + * get_caller_stack_trace_elem - sets @elem to the previous element. * - * Returns 0 on success and -1 when bottom of stack trace reached. + * Returns 0 on success and -1 when bottom of stack is reached. 
*/ static int get_caller_stack_trace_elem(struct stack_trace_elem *elem) { @@ -95,6 +199,41 @@ static int get_caller_stack_trace_elem(struct stack_trace_elem *elem) unsigned long ret_addr; void *new_frame; + /* If previous element was a JNI call then we move to the JNI + * caller's frame. We use the JNI stack_entry info to get the + * frame because we don't trust JNI methods's frame + * pointers. */ + if (elem->type == STACK_TRACE_ELEM_TYPE_JNI) { + struct jni_stack_entry *tr = + &jni_stack[elem->jni_stack_index--]; + + new_frame = tr->caller_frame; + new_addr = tr->call_site_addr; + goto out; + } + + /* Check if we hit the JNI interface frame */ + if (elem->jni_stack_index >= 0) { + struct jni_stack_entry *tr = + &jni_stack[elem->jni_stack_index]; + + if (tr->jni_interface_frame == elem->frame) { + elem->type = STACK_TRACE_ELEM_TYPE_JNI; + elem->is_native = false; + + /* + * We don't need to lock the compilation_unit + * because when JNI method is present in stack + * trace it means that it has been resolved + * and ->native_ptr can not change after that. 
+ */ + elem->addr = (unsigned long) + tr->method->compilation_unit->native_ptr; + elem->frame = NULL; + return 0; + } + } + if (elem->is_native) { struct native_stack_frame *frame; @@ -121,9 +260,39 @@ static int get_caller_stack_trace_elem(struct stack_trace_elem *elem) if (new_frame == bottom_stack_frame) return -1; - elem->is_trampoline = elem->is_native && - called_from_jit_trampoline(elem->frame); - elem->is_native = is_native(new_addr) || elem->is_trampoline; + out: + /* Check if we hit the VM native caller frame */ + if (elem->vm_native_stack_index >= 0) { + struct vm_native_stack_entry *tr = + &vm_native_stack[elem->vm_native_stack_index]; + + if (tr->stack_ptr - sizeof(struct native_stack_frame) + == new_frame) + { + elem->type = STACK_TRACE_ELEM_TYPE_VM_NATIVE; + elem->is_native = true; + new_addr = (unsigned long) tr->target; + --elem->vm_native_stack_index; + + goto out2; + } + } + + /* Check if previous elemement was called from JIT trampoline. */ + if (elem->is_native && called_from_jit_trampoline(elem->frame)) { + elem->type = STACK_TRACE_ELEM_TYPE_TRAMPOLINE; + elem->is_native = true; + goto out2; + } + + elem->is_native = is_native(new_addr); + + if (elem->is_native) + elem->type = STACK_TRACE_ELEM_TYPE_OTHER; + else + elem->type = STACK_TRACE_ELEM_TYPE_JIT; + + out2: elem->addr = new_addr; elem->frame = new_frame; @@ -139,9 +308,8 @@ static int get_caller_stack_trace_elem(struct stack_trace_elem *elem) */ int get_prev_stack_trace_elem(struct stack_trace_elem *elem) { - while (get_caller_stack_trace_elem(elem) == 0) { - if (is_vm_native(elem->addr) || - !(elem->is_trampoline || elem->is_native)) + while (get_caller_stack_trace_elem(elem) == 0) { + if (elem->type < STACK_TRACE_ELEM_TYPE_OTHER) return 0; } @@ -157,10 +325,13 @@ int get_prev_stack_trace_elem(struct stack_trace_elem *elem) int init_stack_trace_elem(struct stack_trace_elem *elem) { elem->is_native = true; - elem->is_trampoline = false; + elem->type = STACK_TRACE_ELEM_TYPE_OTHER; 
elem->addr = (unsigned long)&init_stack_trace_elem; elem->frame = __builtin_frame_address(0); + elem->vm_native_stack_index = vm_native_stack_index() - 1; + elem->jni_stack_index = jni_stack_index() - 1; + return get_prev_stack_trace_elem(elem); } @@ -631,3 +802,28 @@ error: vm_print_exception_description(exception); } + +/** + * Creates an instance of StackOverflowError. We create exception + * object and fill stack trace in manually because throwable + * constructor calls fillInStackTrace which can cause StackOverflowError + * when VM native stack is full. + */ +struct vm_object *vm_alloc_stack_overflow_error(void) + +{ struct vm_object *stacktrace; + struct vm_object *obj; + + obj = vm_object_alloc(vm_java_lang_StackOverflowError); + if (!obj) { + NOT_IMPLEMENTED; + return NULL; + } + + stacktrace = get_stack_trace(); + if (stacktrace) + vm_call_method(vm_java_lang_Throwable_setStackTrace, obj, + stacktrace); + + return obj; +} -- 1.6.0.6 ------------------------------------------------------------------------------ _______________________________________________ Jatovm-devel mailing list Jatovm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/jatovm-devel