[PATCH 13/14] jit: optimize register use and block position calculation in regalloc

2009-08-30 Thread Tomek Grabiec
With multiple ranges per interval, calculating interval intersection is
expensive. We do not have to check for intersection between the current
interval and intervals whose registers have an incompatible type,
because that information is never used: pick_register() will not
consider those registers. For example, there is no need to check at
which position the XMM0 register becomes available when we are
allocating a general purpose register.
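
A condensed sketch of the idea, based on the hunks below (the list being
walked and the enclosing loop are only illustrative; the point is the
early continue that skips the intersection test entirely):

	/* Skip intervals whose register cannot hold values of the current
	 * interval's type: pick_register() will never hand out such a
	 * register, so computing its use/block position is wasted work. */
	list_for_each_entry(it, inactive, interval_node) {
		if (!reg_supports_type(it->reg, current->var_info->vm_type))
			continue;	/* e.g. skip XMM0 when allocating a GPR */

		if (intervals_intersect(it, current)) {
			pos = next_use_pos(it, interval_start(current));
			set_use_pos(use_pos, it->reg, pos);
		}
	}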

Signed-off-by: Tomek Grabiec 
---
 jit/linear-scan.c |9 +
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/jit/linear-scan.c b/jit/linear-scan.c
index 187eb49..c824104 100644
--- a/jit/linear-scan.c
+++ b/jit/linear-scan.c
@@ -210,6 +210,9 @@ static void allocate_blocked_reg(struct live_interval 
*current,
if (it->fixed_reg)
continue;
 
+   if (!reg_supports_type(it->reg, current->var_info->vm_type))
+   continue;
+
if (intervals_intersect(it, current)) {
pos = next_use_pos(it, interval_start(current));
set_use_pos(use_pos, it->reg, pos);
@@ -227,6 +230,9 @@ static void allocate_blocked_reg(struct live_interval 
*current,
if (!it->fixed_reg)
continue;
 
+   if (!reg_supports_type(it->reg, current->var_info->vm_type))
+   continue;
+
if (intervals_intersect(it, current)) {
unsigned long pos;
 
@@ -277,6 +283,9 @@ static void try_to_allocate_free_reg(struct live_interval 
*current,
}
 
list_for_each_entry(it, inactive, interval_node) {
+   if (!reg_supports_type(it->reg, current->var_info->vm_type))
+   continue;
+
if (intervals_intersect(it, current)) {
unsigned long pos;
 
-- 
1.6.3.3




[PATCH 12/14] jit: do not put fixed reg intervals for ESP and EBP on the inactive list

2009-08-30 Thread Tomek Grabiec
Those registers are not considered for allocation and their register
numbers are >= NR_REGISTERS. Letting such fixed intervals into the
register allocator can cause memory corruption because the use position
arrays only have NR_REGISTERS entries.
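
Condensed from the hunks below, to show why the bound matters (use_pos
is one of the per-register arrays in allocate_blocked_reg()):

	static void set_use_pos(unsigned long *use_pos, enum machine_reg reg,
				unsigned long pos)
	{
		/* use_pos[] has exactly NR_REGISTERS entries, indexed by
		 * machine register number.  A fixed interval for ESP or EBP
		 * carries a number outside that range and would write past
		 * the end of the array. */
		assert(reg < NR_REGISTERS);
		set_free_pos(use_pos, reg, pos);
	}

	/* ...and in allocate_registers(): only fixed intervals for
	 * allocatable registers go onto the inactive list. */
	if (var->interval->fixed_reg) {
		if (var->interval->reg < NR_REGISTERS)
			list_add(&var->interval->interval_node, &inactive);
	}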

Signed-off-by: Tomek Grabiec 
---
 jit/linear-scan.c |9 ++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/jit/linear-scan.c b/jit/linear-scan.c
index 018daaa..187eb49 100644
--- a/jit/linear-scan.c
+++ b/jit/linear-scan.c
@@ -75,6 +75,7 @@ static void set_use_pos(unsigned long *use_pos, enum 
machine_reg reg,
/*
 * This function does the same as set_free_pos so we call this directly
 */
+   assert(reg < NR_REGISTERS);
set_free_pos(use_pos, reg, pos);
 }
 
@@ -84,6 +85,7 @@ static void set_block_pos(unsigned long *block_pos, unsigned 
long *use_pos,
/*
 * This function does the same as set_free_pos so we call this directly
 */
+   assert(reg < NR_REGISTERS);
set_free_pos(block_pos, reg, pos);
set_free_pos(use_pos, reg, pos);
 }
@@ -345,9 +347,10 @@ int allocate_registers(struct compilation_unit *cu)
 
var->interval->current_range = 
interval_first_range(var->interval);
 
-   if (var->interval->fixed_reg)
-   list_add(&var->interval->interval_node, &inactive);
-   else
+   if (var->interval->fixed_reg) {
+   if (var->interval->reg < NR_REGISTERS)
+   list_add(&var->interval->interval_node, 
&inactive);
+   } else
pqueue_insert(unhandled, var->interval);
}
 
-- 
1.6.3.3




[PATCH 10/14] jit: print variable types in regalloc trace

2009-08-30 Thread Tomek Grabiec

Signed-off-by: Tomek Grabiec 
---
 jit/trace-jit.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/jit/trace-jit.c b/jit/trace-jit.c
index 58ecaa4..915571d 100644
--- a/jit/trace-jit.c
+++ b/jit/trace-jit.c
@@ -360,6 +360,7 @@ void trace_regalloc(struct compilation_unit *cu)
 interval_end(interval));
 
trace_printf("\t%s", reg_name(interval->reg));
+   trace_printf("\t%-11s", get_vm_type_name(var->vm_type));
trace_printf("\t%s", interval->fixed_reg ? "fixed\t" : 
"non-fixed");
if (interval->need_spill) {
unsigned long ndx = -1;
-- 
1.6.3.3




[PATCH 11/14] jit: fix spilling of 64-bit registers.

2009-08-30 Thread Tomek Grabiec
This cleans up the handling of 64-bit stack slots and fixes the buggy
spilling code. Previously we always allocated a 32-bit spill slot
regardless of the register's type, which caused memory corruption.
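
A minimal sketch of the intended behaviour; the struct and helper names
used here are assumptions for illustration only, while the real patch
threads the slot size through jit/stack-slot.c, jit/spill-reload.c and
the x86 emitters (note the new slot_offset_64() below):

	/* 64-bit values need a two-word spill slot; spilling a J_LONG or
	 * J_DOUBLE into a 32-bit slot overwrites whatever lives next to
	 * it in the stack frame. */
	static struct stack_slot *
	spill_slot_for(struct stack_frame *frame, enum vm_type type)
	{
		if (type == J_LONG || type == J_DOUBLE)
			return get_spill_slot_64(frame);	/* assumed helper */

		return get_spill_slot_32(frame);		/* assumed helper */
	}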

Signed-off-by: Tomek Grabiec 
---
 arch/mmix/include/arch/instruction.h |   15 +-
 arch/mmix/instruction.c  |   12 +
 arch/x86/emit-code.c |4 +-
 arch/x86/include/arch/instruction.h  |   19 +--
 arch/x86/include/arch/stack-frame.h  |1 +
 arch/x86/insn-selector.brg   |   57 +--
 arch/x86/instruction.c   |   85 ++
 arch/x86/stack-frame.c   |   15 --
 include/jit/stack-slot.h |4 ++
 include/vm/types.h   |7 +++
 jit/spill-reload.c   |   42 +++--
 jit/stack-slot.c |   10 
 12 files changed, 168 insertions(+), 103 deletions(-)

diff --git a/arch/mmix/include/arch/instruction.h 
b/arch/mmix/include/arch/instruction.h
index 2fe1686..6846eb3 100644
--- a/arch/mmix/include/arch/instruction.h
+++ b/arch/mmix/include/arch/instruction.h
@@ -78,6 +78,9 @@ struct insn *ld_insn(enum insn_type, struct stack_slot *, 
struct var_info *);
  * instructions.
  */
 
+int insert_copy_slot_32_insns(struct stack_slot *, struct stack_slot *, struct 
list_head *, unsigned long);
+int insert_copy_slot_64_insns(struct stack_slot *, struct stack_slot *, struct 
list_head *, unsigned long);
+
 static inline struct insn *
 spill_insn(struct var_info *var, struct stack_slot *slot)
 {
@@ -91,18 +94,6 @@ reload_insn(struct stack_slot *slot, struct var_info *var)
 }
 
 static inline struct insn *
-push_slot_insn(struct stack_slot *slot)
-{
-   return NULL;
-}
-
-static inline struct insn *
-pop_slot_insn(struct stack_slot *slot)
-{
-   return NULL;
-}
-
-static inline struct insn *
 exception_spill_insn(struct stack_slot *slot)
 {
return NULL;
diff --git a/arch/mmix/instruction.c b/arch/mmix/instruction.c
index f79a2ca..baa4262 100644
--- a/arch/mmix/instruction.c
+++ b/arch/mmix/instruction.c
@@ -122,3 +122,15 @@ struct insn *ld_insn(enum insn_type insn_type, struct 
stack_slot *slot, struct v
}
return insn;
 }
+
+int insert_copy_slot_32_insns(struct stack_slot *from, struct stack_slot *to,
+ struct list_head *add_before, unsigned long 
bc_offset)
+{
+   return 0;
+}
+
+int insert_copy_slot_64_insns(struct stack_slot *from, struct stack_slot *to,
+ struct list_head *add_before, unsigned long 
bc_offset)
+{
+   return 0;
+}
diff --git a/arch/x86/emit-code.c b/arch/x86/emit-code.c
index a0b85e0..2253202 100644
--- a/arch/x86/emit-code.c
+++ b/arch/x86/emit-code.c
@@ -734,7 +734,7 @@ emit_mov_64_memlocal_xmm(struct buffer *buf, struct operand 
*src, struct operand
unsigned long disp;
 
dest_reg = mach_reg(&dest->reg);
-   disp = slot_offset(src->slot);
+   disp = slot_offset_64(src->slot);
 
emit(buf, 0xf2);
emit(buf, 0x0f);
@@ -905,7 +905,7 @@ static void emit_mov_64_xmm_memlocal(struct buffer *buf, 
struct operand *src,
unsigned long disp;
int mod;
 
-   disp = slot_offset(dest->slot);
+   disp = slot_offset_64(dest->slot);
 
if (is_imm_8(disp))
mod = 0x01;
diff --git a/arch/x86/include/arch/instruction.h 
b/arch/x86/include/arch/instruction.h
index c33bafa..063e857 100644
--- a/arch/x86/include/arch/instruction.h
+++ b/arch/x86/include/arch/instruction.h
@@ -240,6 +240,9 @@ struct insn *membase_insn(enum insn_type, struct var_info 
*, long);
  * instructions.
  */
 
+int insert_copy_slot_32_insns(struct stack_slot *, struct stack_slot *, struct 
list_head *, unsigned long);
+int insert_copy_slot_64_insns(struct stack_slot *, struct stack_slot *, struct 
list_head *, unsigned long);
+
 static inline struct insn *
 spill_insn(struct var_info *var, struct stack_slot *slot)
 {
@@ -282,22 +285,6 @@ reload_insn(struct stack_slot *slot, struct var_info *var)
return memlocal_reg_insn(insn_type, slot, var);
 }
 
-static inline struct insn *
-push_slot_insn(struct stack_slot *from)
-{
-   assert(from != NULL);
-
-   return memlocal_insn(INSN_PUSH_MEMLOCAL, from);
-}
-
-static inline struct insn *
-pop_slot_insn(struct stack_slot *to)
-{
-   assert(to != NULL);
-
-   return memlocal_insn(INSN_POP_MEMLOCAL, to);
-}
-
 static inline struct insn *jump_insn(struct basic_block *bb)
 {
return branch_insn(INSN_JMP_BRANCH, bb);
diff --git a/arch/x86/include/arch/stack-frame.h 
b/arch/x86/include/arch/stack-frame.h
index b0b42a2..bf69b27 100644
--- a/arch/x86/include/arch/stack-frame.h
+++ b/arch/x86/include/arch/stack-frame.h
@@ -43,6 +43,7 @@ struct jit_stack_frame {
 
 unsigned long frame_local_offset(struct vm_method *, struct expression *);
 unsigned long slot_offset(struct stack_slot *slot);
+unsigned long slot_offset_64(struct stack_slot *slot);

[PATCH 14/14] x86: remove unconditional saving and restoring of XMM registers

2009-08-30 Thread Tomek Grabiec
We no longer need to do this because the following bug has been fixed:
http://jato.lighthouseapp.com/projects/29055/tickets/5-sse-registers-are-saved-and-registered-unconditionally

Signed-off-by: Tomek Grabiec 
---
 arch/x86/emit-code.c|   83 ---
 arch/x86/include/arch/stack-frame.h |1 -
 2 files changed, 0 insertions(+), 84 deletions(-)

diff --git a/arch/x86/emit-code.c b/arch/x86/emit-code.c
index 2253202..92044fd 100644
--- a/arch/x86/emit-code.c
+++ b/arch/x86/emit-code.c
@@ -64,13 +64,6 @@ static void emit_indirect_jump_reg(struct buffer *buf, enum 
machine_reg reg);
 static void emit_exception_test(struct buffer *buf, enum machine_reg reg);
 static void emit_restore_regs(struct buffer *buf);
 
-static void __emit_mov_xmm_membase(struct buffer *buf, enum machine_reg src,
-  enum machine_reg base, unsigned long offs);
-static void __emit_mov_membase_xmm(struct buffer *buf, enum machine_reg base, 
unsigned long offs, enum machine_reg dst);
-static void __emit_mov_64_xmm_membase(struct buffer *buf, enum machine_reg src,
-  enum machine_reg base, unsigned long offs);
-static void __emit_mov_64_membase_xmm(struct buffer *buf, enum machine_reg 
base, unsigned long offs, enum machine_reg dst);
-
 /
  * Common code emitters *
  /
@@ -1009,27 +1002,6 @@ void emit_prolog(struct buffer *buf, unsigned long 
nr_locals)
__emit_push_reg(buf, MACH_REG_ESI);
__emit_push_reg(buf, MACH_REG_EBX);
 
-   __emit_sub_imm_reg(buf, 8 * 8, MACH_REG_ESP);
-   if (cpu_has(X86_FEATURE_SSE2)) {
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM0, MACH_REG_ESP, 0);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM1, MACH_REG_ESP, 8);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM2, MACH_REG_ESP, 16);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM3, MACH_REG_ESP, 24);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM4, MACH_REG_ESP, 32);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM5, MACH_REG_ESP, 40);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM6, MACH_REG_ESP, 48);
-   __emit_mov_64_xmm_membase(buf, MACH_REG_XMM7, MACH_REG_ESP, 56);
-   } else {
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM0, MACH_REG_ESP, 0);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM1, MACH_REG_ESP, 8);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM2, MACH_REG_ESP, 16);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM3, MACH_REG_ESP, 24);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM4, MACH_REG_ESP, 32);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM5, MACH_REG_ESP, 40);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM6, MACH_REG_ESP, 48);
-   __emit_mov_xmm_membase(buf, MACH_REG_XMM7, MACH_REG_ESP, 56);
-   }
-
__emit_push_reg(buf, MACH_REG_EBP);
__emit_mov_reg_reg(buf, MACH_REG_ESP, MACH_REG_EBP);
 
@@ -1076,27 +1048,6 @@ static void emit_push_imm(struct buffer *buf, struct 
operand *operand)
 
 static void emit_restore_regs(struct buffer *buf)
 {
-   if (cpu_has(X86_FEATURE_SSE2)) {
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 0, MACH_REG_XMM0);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 8, MACH_REG_XMM1);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 16, MACH_REG_XMM2);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 24, MACH_REG_XMM3);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 32, MACH_REG_XMM4);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 40, MACH_REG_XMM5);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 48, MACH_REG_XMM6);
-   __emit_mov_64_membase_xmm(buf, MACH_REG_ESP, 56, MACH_REG_XMM7);
-   } else {
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 0, MACH_REG_XMM0);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 8, MACH_REG_XMM1);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 16, MACH_REG_XMM2);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 24, MACH_REG_XMM3);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 32, MACH_REG_XMM4);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 40, MACH_REG_XMM5);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 48, MACH_REG_XMM6);
-   __emit_mov_membase_xmm(buf, MACH_REG_ESP, 56, MACH_REG_XMM7);
-   }
-   __emit_add_imm_reg(buf, 8 * 8, MACH_REG_ESP);
-
__emit_pop_reg(buf, MACH_REG_EBX);
__emit_pop_reg(buf, MACH_REG_ESI);
__emit_pop_reg(buf, MACH_REG_EDI);
@@ -1631,40 +1582,6 @@ static void emit_mov_memindex_xmm(struct buffer *buf, 
struct operand *src,
emit(buf, encode_sib(src->shift, encode_reg(&src->index_reg), 
encode_reg(&src->base_reg)));
 }
 
-static void __emit_mov_xmm_membase(st

[PATCH 06/14] jit: implement precise live range calculation

2009-08-30 Thread Tomek Grabiec
The live range of each variable is now calculated precisely, as
described in section 5.6.3 "Build Intervals" of Wimmer's master's
thesis "Linear Scan Register Allocation".

This patch reduces register allocator pressure by generating shorter,
more precise live ranges and therefore reduces the number of interval
spills.

It also introduces a distinction between even and odd use positions:
even use positions represent instruction inputs and odd positions
represent instruction outputs. This allows for better register
utilization. Example:

mov r1, r2
add r2, r3

after allocation:

mov ebx, ebx  ; this can be optimized out in the future
add ebx, ebx
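
A minimal sketch of how the two kinds of use position relate to an
instruction's LIR position (assuming insn->lir_pos still holds the even
base position assigned in compute_insn_positions(); the helper names
are illustrative):

	/* Inputs are read at the even position, outputs are written at
	 * the following odd position.  An interval ending at an input
	 * position and one starting at the matching output position do
	 * not overlap, so they may share a physical register. */
	static unsigned long input_pos(struct insn *insn)
	{
		return insn->lir_pos;
	}

	static unsigned long output_pos(struct insn *insn)
	{
		return insn->lir_pos + 1;
	}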

Signed-off-by: Tomek Grabiec 
---
 arch/x86/include/arch/instruction.h |5 --
 arch/x86/instruction.c  |5 ++
 arch/x86/use-def.c  |   28 +++
 include/jit/instruction.h   |6 +++
 include/jit/use-position.h  |6 +++
 include/jit/vars.h  |   15 ++
 jit/interval.c  |   38 +--
 jit/linear-scan.c   |   21 +++--
 jit/liveness.c  |   44 +++---
 jit/spill-reload.c  |   85 --
 test/jit/liveness-test.c|   30 ++--
 test/jit/spill-reload-test.c|   14 +++---
 12 files changed, 209 insertions(+), 88 deletions(-)

diff --git a/arch/x86/include/arch/instruction.h 
b/arch/x86/include/arch/instruction.h
index 5e04d92..be321de 100644
--- a/arch/x86/include/arch/instruction.h
+++ b/arch/x86/include/arch/instruction.h
@@ -214,11 +214,6 @@ struct insn {
 
 void insn_sanity_check(void);
 
-static inline unsigned long lir_position(struct use_position *reg)
-{
-   return reg->insn->lir_pos;
-}
-
 struct insn *insn(enum insn_type);
 struct insn *memlocal_reg_insn(enum insn_type, struct stack_slot *, struct 
var_info *);
 struct insn *membase_reg_insn(enum insn_type, struct var_info *, long, struct 
var_info *);
diff --git a/arch/x86/instruction.c b/arch/x86/instruction.c
index 8213e8b..0b1e145 100644
--- a/arch/x86/instruction.c
+++ b/arch/x86/instruction.c
@@ -107,6 +107,7 @@ static void init_membase_operand(struct insn *insn, 
unsigned long idx,
operand->disp = disp;
 
init_register(&operand->base_reg, insn, base_reg->interval);
+   operand->base_reg.kind = USE_KIND_INPUT;
 }
 
 static void init_memdisp_operand(struct insn *insn, unsigned long idx,
@@ -131,6 +132,9 @@ static void init_memindex_operand(struct insn *insn, 
unsigned long idx,
 
init_register(&operand->base_reg, insn, base_reg->interval);
init_register(&operand->index_reg, insn, index_reg->interval);
+
+   operand->base_reg.kind  = USE_KIND_INPUT;
+   operand->index_reg.kind = USE_KIND_INPUT;
 }
 
 static void init_memlocal_operand(struct insn *insn, unsigned long idx,
@@ -152,6 +156,7 @@ static void init_reg_operand(struct insn *insn, unsigned 
long idx,
operand->type = OPERAND_REG;
 
init_register(&operand->reg, insn, reg->interval);
+   operand->reg.kind = insn_operand_use_kind(insn, idx);
 }
 
 static void init_rel_operand(struct insn *insn, unsigned long idx,
diff --git a/arch/x86/use-def.c b/arch/x86/use-def.c
index 59e1f2a..0730a07 100644
--- a/arch/x86/use-def.c
+++ b/arch/x86/use-def.c
@@ -248,3 +248,31 @@ int insn_uses(struct insn *insn, struct var_info **uses)
 
return nr;
 }
+
+int insn_operand_use_kind(struct insn *insn, int idx)
+{
+   struct insn_info *info;
+   int use_mask;
+   int def_mask;
+   int kind_mask;
+
+   info = get_info(insn);
+
+   if (idx == 0) {
+   use_mask = USE_SRC;
+   def_mask = DEF_SRC;
+   } else {
+   assert(idx == 1);
+   use_mask = USE_DST;
+   def_mask = DEF_DST;
+   }
+
+   kind_mask = 0;
+   if (info->flags & use_mask)
+   kind_mask |= USE_KIND_INPUT;
+
+   if (info->flags & def_mask)
+   kind_mask |= USE_KIND_OUTPUT;
+
+   return kind_mask;
+}
diff --git a/include/jit/instruction.h b/include/jit/instruction.h
index cc303fe..d360c82 100644
--- a/include/jit/instruction.h
+++ b/include/jit/instruction.h
@@ -9,11 +9,17 @@ static inline struct insn *next_insn(struct insn *insn)
return list_entry(insn->insn_list_node.next, struct insn, 
insn_list_node);
 }
 
+static inline struct insn *prev_insn(struct insn *insn)
+{
+   return list_entry(insn->insn_list_node.prev, struct insn, 
insn_list_node);
+}
+
 struct insn *alloc_insn(enum insn_type);
 void free_insn(struct insn *);
 
 int insn_defs(struct compilation_unit *, struct insn *, struct var_info **);
 int insn_uses(struct insn *, struct var_info **);
+int insn_operand_use_kind(struct insn *, int);
 
 #define for_each_insn(insn, insn_list) list_for_each_entry(insn, insn_list, 
insn_list_node)
 
diff --git a/include/jit/use-position.h b/include/jit/use-position.h
index ee968d0..c2f215a 100644
--- a/includ

[PATCH 09/14] jit: ensure that spill variable has the same vm_type as original variable.

2009-08-30 Thread Tomek Grabiec
This is a bug fix. The bug caused floating point variables to be
spilled as if they were general purpose registers, which led to
corruption of the general purpose registers.
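
The fix, condensed from the hunks below: the spill/reload
pseudo-variable inherits the interval's vm_type, and spill-reload.c
asserts the invariant before emitting the spill, so spill_insn() can
pick an XMM store for floating point values instead of a general
purpose register store:

	/* in alloc_interval(): */
	interval->spill_reload_reg.vm_type = var->vm_type;

	/* later, in spill_interval(): */
	assert(interval->spill_reload_reg.vm_type == interval->var_info->vm_type);
	spill = spill_insn(&interval->spill_reload_reg, slot);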

Signed-off-by: Tomek Grabiec 
---
 jit/compilation-unit.c |4 ++--
 jit/interval.c |1 +
 jit/spill-reload.c |1 +
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/jit/compilation-unit.c b/jit/compilation-unit.c
index 44b5c46..0dd4415 100644
--- a/jit/compilation-unit.c
+++ b/jit/compilation-unit.c
@@ -55,10 +55,10 @@ do_get_var(struct compilation_unit *cu, enum vm_type 
vm_type)
 
ret->vreg = cu->nr_vregs++;
ret->next = cu->var_infos;
-   ret->interval = alloc_interval(ret);
-
ret->vm_type = vm_type;
 
+   ret->interval = alloc_interval(ret);
+
cu->var_infos = ret;
   out:
return ret;
diff --git a/jit/interval.c b/jit/interval.c
index 23703a1..8eb7d32 100644
--- a/jit/interval.c
+++ b/jit/interval.c
@@ -102,6 +102,7 @@ struct live_interval *alloc_interval(struct var_info *var)
interval->reg = MACH_REG_UNASSIGNED;
interval->fixed_reg = false;
interval->spill_reload_reg.interval = interval;
+   interval->spill_reload_reg.vm_type = var->vm_type;
INIT_LIST_HEAD(&interval->interval_node);
INIT_LIST_HEAD(&interval->use_positions);
INIT_LIST_HEAD(&interval->range_list);
diff --git a/jit/spill-reload.c b/jit/spill-reload.c
index 622966b..70a5aa9 100644
--- a/jit/spill-reload.c
+++ b/jit/spill-reload.c
@@ -141,6 +141,7 @@ spill_interval(struct live_interval *interval,
if (!slot)
return NULL;
 
+   assert(interval->spill_reload_reg.vm_type == 
interval->var_info->vm_type);
spill = spill_insn(&interval->spill_reload_reg, slot);
if (!spill)
return NULL;
-- 
1.6.3.3




[PATCH 08/14] jit: force spill of intervals containing caller saved registers before calls.

2009-08-30 Thread Tomek Grabiec
This fixes the following bug:
http://jato.lighthouseapp.com/projects/29055/tickets/7-ebcdx-are-unavailable-for-allocation-after-some-call-instructions
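
A minimal sketch of the approach, assuming a hypothetical hook that
runs while scanning instructions; the two helpers marked "assumed" do
not appear in the hunks below and are named only for illustration:

	/* The callee may clobber EAX, ECX, EDX and the XMM registers, so
	 * any interval currently assigned to one of them must be spilled
	 * across the call. */
	if (insn_is_call(insn)) {
		for (int i = 0; i < NR_CALLER_SAVE_REGS; i++) {
			struct live_interval *it;

			it = live_interval_for(caller_save_regs[i]);  /* assumed */
			if (it)
				spill_across_call(it, insn);          /* assumed */
		}
	}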

Signed-off-by: Tomek Grabiec 
---
 arch/mmix/include/arch/instruction.h |5 +
 arch/mmix/include/arch/registers.h   |3 +++
 arch/mmix/register.c |2 ++
 arch/x86/include/arch/instruction.h  |   11 +++
 arch/x86/include/arch/registers_32.h |3 +++
 arch/x86/registers_32.c  |   16 
 arch/x86/use-def.c   |   22 ++
 jit/liveness.c   |   15 ---
 8 files changed, 54 insertions(+), 23 deletions(-)

diff --git a/arch/mmix/include/arch/instruction.h 
b/arch/mmix/include/arch/instruction.h
index 27dc801..2fe1686 100644
--- a/arch/mmix/include/arch/instruction.h
+++ b/arch/mmix/include/arch/instruction.h
@@ -118,4 +118,9 @@ static inline const char *reg_name(enum machine_reg reg)
return "";
 }
 
+static inline bool insn_is_call(struct insn *insn)
+{
+   return false;
+}
+
 #endif /* __ARCH_INSTRUCTION_H */
diff --git a/arch/mmix/include/arch/registers.h 
b/arch/mmix/include/arch/registers.h
index 8faa73f..05d71bd 100644
--- a/arch/mmix/include/arch/registers.h
+++ b/arch/mmix/include/arch/registers.h
@@ -20,6 +20,9 @@ enum machine_reg {
MACH_REG_UNASSIGNED = INT_MAX,
 };
 
+#define NR_CALLER_SAVE_REGS 0
+extern enum machine_reg caller_save_regs[NR_CALLER_SAVE_REGS];
+
 static inline bool reg_supports_type(enum machine_reg reg, enum vm_type type)
 {
return true;
diff --git a/arch/mmix/register.c b/arch/mmix/register.c
index 1cde43d..8aa587b 100644
--- a/arch/mmix/register.c
+++ b/arch/mmix/register.c
@@ -1,2 +1,4 @@
 #include "arch/registers.h"
 #include "jit/vars.h"
+
+enum machine_reg caller_save_regs[NR_CALLER_SAVE_REGS] = {};
diff --git a/arch/x86/include/arch/instruction.h 
b/arch/x86/include/arch/instruction.h
index be321de..c33bafa 100644
--- a/arch/x86/include/arch/instruction.h
+++ b/arch/x86/include/arch/instruction.h
@@ -323,4 +323,15 @@ static inline bool insn_is_branch(struct insn *insn)
}
 }
 
+static inline bool insn_is_call(struct insn *insn)
+{
+   switch (insn->type) {
+   case INSN_CALL_REG:
+   case INSN_CALL_REL:
+   return true;
+   default:
+   return false;
+   }
+}
+
 #endif
diff --git a/arch/x86/include/arch/registers_32.h 
b/arch/x86/include/arch/registers_32.h
index ac6e308..30fa29d 100644
--- a/arch/x86/include/arch/registers_32.h
+++ b/arch/x86/include/arch/registers_32.h
@@ -48,6 +48,9 @@ enum machine_reg {
 
 #define GPR_VM_TYPEJ_INT
 
+#define NR_CALLER_SAVE_REGS 11
+extern enum machine_reg caller_save_regs[NR_CALLER_SAVE_REGS];
+
 const char *reg_name(enum machine_reg reg);
 
 bool reg_supports_type(enum machine_reg reg, enum vm_type type);
diff --git a/arch/x86/registers_32.c b/arch/x86/registers_32.c
index 5d88f1c..ce3c476 100644
--- a/arch/x86/registers_32.c
+++ b/arch/x86/registers_32.c
@@ -26,10 +26,26 @@
 
 #include "arch/registers.h"
 #include "jit/vars.h"
+#include "vm/system.h"
 
 #include 
 #include 
 
+enum machine_reg caller_save_regs[NR_CALLER_SAVE_REGS] = {
+   MACH_REG_EAX,
+   MACH_REG_ECX,
+   MACH_REG_EDX,
+
+   MACH_REG_XMM0,
+   MACH_REG_XMM1,
+   MACH_REG_XMM2,
+   MACH_REG_XMM3,
+   MACH_REG_XMM4,
+   MACH_REG_XMM5,
+   MACH_REG_XMM6,
+   MACH_REG_XMM7
+};
+
 static const char *register_names[] = {
[MACH_REG_EAX] = "EAX",
[MACH_REG_ECX] = "ECX",
diff --git a/arch/x86/use-def.c b/arch/x86/use-def.c
index 0730a07..1653195 100644
--- a/arch/x86/use-def.c
+++ b/arch/x86/use-def.c
@@ -22,7 +22,6 @@ enum {
USE_NONE= 512,
USE_SRC = 1024,
USE_FP  = 2048, /* frame pointer */
-   DEF_CALLER_SAVED= 4096,
 
 #ifdef CONFIG_X86_32
DEF_EAX = DEF_xAX,
@@ -50,8 +49,8 @@ static struct insn_info insn_infos[] = {
DECLARE_INFO(INSN_ADD_REG_REG, USE_SRC | USE_DST | DEF_DST),
DECLARE_INFO(INSN_AND_MEMBASE_REG, USE_SRC | USE_DST | DEF_DST),
DECLARE_INFO(INSN_AND_REG_REG, USE_SRC | USE_DST | DEF_DST),
-   DECLARE_INFO(INSN_CALL_REG, USE_SRC | DEF_CALLER_SAVED),
-   DECLARE_INFO(INSN_CALL_REL, USE_NONE | DEF_CALLER_SAVED),
+   DECLARE_INFO(INSN_CALL_REG, USE_SRC | DEF_NONE),
+   DECLARE_INFO(INSN_CALL_REL, USE_NONE | DEF_NONE),
DECLARE_INFO(INSN_CLTD_REG_REG, USE_SRC | DEF_SRC | DEF_DST),
DECLARE_INFO(INSN_CMP_IMM_REG, USE_DST),
DECLARE_INFO(INSN_CMP_MEMBASE_REG, USE_SRC | USE_DST),
@@ -188,25 +187,8 @@ static struct mach_reg_def checkregs[] = {
{ MACH_REG_xAX, DEF_xAX },
{ MACH_REG_xCX, DEF_xCX },
{ MACH_REG_xDX, DEF_xDX },
-
-#ifdef CONFIG_X86_32
-   { MACH_REG_EAX, DEF_CALLER_SAVED },
-   { MACH_REG_ECX, DEF_CALLER_SAVED },
-   { MACH_REG_EDX, DEF_CALLER_SAVED },
-#

[PATCH 04/14] jit: introduce multiple live ranges per interval.

2009-08-30 Thread Tomek Grabiec
This is needed for precise modeling of live ranges.

Signed-off-by: Tomek Grabiec 
---
 include/jit/vars.h  |   83 +---
 jit/interval.c  |  228 +-
 jit/linear-scan.c   |   51 ++
 jit/liveness.c  |   24 +++--
 jit/spill-reload.c  |   12 +-
 jit/trace-jit.c |   26 +++--
 test/jit/linear-scan-test.c |   18 +---
 test/jit/live-range-test.c  |   50 ++
 test/jit/liveness-test.c|8 +-
 9 files changed, 421 insertions(+), 79 deletions(-)

diff --git a/include/jit/vars.h b/include/jit/vars.h
index 6afb16b..f00c5f9 100644
--- a/include/jit/vars.h
+++ b/include/jit/vars.h
@@ -10,18 +10,9 @@
 
 struct live_range {
unsigned long start, end;   /* end is exclusive */
+   struct list_head range_list_node;
 };
 
-static inline unsigned long range_last_insn_pos(struct live_range *range)
-{
-   return (range->end - 1) & ~1;
-}
-
-static inline unsigned long range_first_insn_pos(struct live_range *range)
-{
-   return range->start & ~1;
-}
-
 static inline bool in_range(struct live_range *range, unsigned long offset)
 {
return (offset >= range->start) && (offset < range->end);
@@ -69,8 +60,21 @@ struct live_interval {
/* Parent variable of this interval.  */
struct var_info *var_info;
 
-   /* Live range of this interval.  */
-   struct live_range range;
+   /* Live ranges of this interval. List of not overlaping and
+  not adjacent ranges sorted in ascending order. */
+   struct list_head range_list;
+
+   /*
+* Points to a range from range_list which should be
+* considered as interval's starting range in operations:
+* intervals_intersect(), interval_intersection_start(),
+* interval_range_at(). It's used to speedup register
+* allocation. Intervals can have a lot of live ranges. Linear
+* scan algorithm goes through intervals in ascending order by
+* interval start. We can take advantage of this and don't
+* browse ranges past current position in some operations.
+*/
+   struct live_range *current_range;
 
/* Linked list of child intervals.  */
struct live_interval *next_child, *prev_child;
@@ -118,11 +122,66 @@ mark_need_reload(struct live_interval *it, struct 
live_interval *parent)
it->spill_parent = parent;
 }
 
+static inline struct live_range *node_to_range(struct list_head *node)
+{
+   return list_entry(node, struct live_range, range_list_node);
+}
+
+static inline struct live_range *
+next_range(struct list_head *list, struct live_range *range)
+{
+   if (range->range_list_node.next == list)
+   return NULL;
+
+   return list_entry(range->range_list_node.next, struct live_range,
+ range_list_node);
+}
+
+static inline unsigned long interval_start(struct live_interval *it)
+{
+   assert(!list_is_empty(&it->range_list));
+   return node_to_range(it->range_list.next)->start;
+}
+
+static inline unsigned long interval_end(struct live_interval *it)
+{
+   assert(!list_is_empty(&it->range_list));
+   return node_to_range(it->range_list.prev)->end;
+}
+
+static inline unsigned long interval_last_insn_pos(struct live_interval *it)
+{
+   return (interval_end(it) - 1) & ~1ul;
+}
+
+static inline unsigned long interval_first_insn_pos(struct live_interval *it)
+{
+   return interval_start(it) & ~1ul;
+}
+
+static inline bool interval_is_empty(struct live_interval *it)
+{
+   return list_is_empty(&it->range_list);
+}
+
+static inline struct live_range *interval_first_range(struct live_interval *it)
+{
+   assert(!interval_is_empty(it));
+   return list_first_entry(&it->range_list, struct live_range,
+   range_list_node);
+}
+
 struct live_interval *alloc_interval(struct var_info *);
 void free_interval(struct live_interval *);
 struct live_interval *split_interval_at(struct live_interval *, unsigned long 
pos);
 unsigned long next_use_pos(struct live_interval *, unsigned long);
 struct live_interval *vreg_start_interval(struct compilation_unit *, unsigned 
long);
 struct live_interval *interval_child_at(struct live_interval *, unsigned long);
+bool intervals_intersect(struct live_interval *, struct live_interval *);
+unsigned long interval_intersection_start(struct live_interval *, struct 
live_interval *);
+bool interval_covers(struct live_interval *, unsigned long);
+int interval_add_range(struct live_interval *, unsigned long, unsigned long);
+struct live_range *interval_range_at(struct live_interval *, unsigned long);
+void interval_update_current_range(struct live_interval *, unsigned long);
 
 #endif /* __JIT_VARS_H */
diff --git a/jit/interval.c b/jit/interval.c
index 9ad9d97..9e22c0c 100644
--- a/jit/interval.c
+++ b/jit/interval.c
@@ -35,6 +35,65 @@
 #include 
 #include 
 
+static struct live_range *a

[PATCH 05/14] jit: move arch independent stuff from arch/instruction.h to jit/instruction.h

2009-08-30 Thread Tomek Grabiec

Signed-off-by: Tomek Grabiec 
---
 arch/mmix/include/arch/instruction.h |8 
 arch/x86/include/arch/instruction.h  |8 
 arch/x86/instruction.c   |3 +--
 arch/x86/use-def.c   |2 +-
 include/jit/instruction.h|   10 ++
 jit/basic-block.c|3 +--
 jit/bc-offset-mapping.c  |3 +--
 jit/compilation-unit.c   |2 +-
 jit/emit.c   |5 ++---
 jit/liveness.c   |2 +-
 jit/trace-jit.c  |1 +
 test/jit/compilation-unit-test.c |2 +-
 12 files changed, 20 insertions(+), 29 deletions(-)

diff --git a/arch/mmix/include/arch/instruction.h 
b/arch/mmix/include/arch/instruction.h
index d12bf83..27dc801 100644
--- a/arch/mmix/include/arch/instruction.h
+++ b/arch/mmix/include/arch/instruction.h
@@ -113,17 +113,9 @@ static inline bool insn_is_branch(struct insn *insn)
return insn->type == INSN_JMP;
 }
 
-struct insn *alloc_insn(enum insn_type);
-void free_insn(struct insn *);
-
-int insn_defs(struct compilation_unit *, struct insn *, struct var_info **);
-int insn_uses(struct insn *, struct var_info **);
-
 static inline const char *reg_name(enum machine_reg reg)
 {
return "";
 }
 
-#define for_each_insn(insn, insn_list) list_for_each_entry(insn, insn_list, 
insn_list_node)
-
 #endif /* __ARCH_INSTRUCTION_H */
diff --git a/arch/x86/include/arch/instruction.h 
b/arch/x86/include/arch/instruction.h
index 962dc0a..5e04d92 100644
--- a/arch/x86/include/arch/instruction.h
+++ b/arch/x86/include/arch/instruction.h
@@ -328,12 +328,4 @@ static inline bool insn_is_branch(struct insn *insn)
}
 }
 
-struct insn *alloc_insn(enum insn_type);
-void free_insn(struct insn *);
-
-int insn_defs(struct compilation_unit *, struct insn *, struct var_info **);
-int insn_uses(struct insn *, struct var_info **);
-
-#define for_each_insn(insn, insn_list) list_for_each_entry(insn, insn_list, 
insn_list_node)
-
 #endif
diff --git a/arch/x86/instruction.c b/arch/x86/instruction.c
index c8e1044..8213e8b 100644
--- a/arch/x86/instruction.c
+++ b/arch/x86/instruction.c
@@ -25,8 +25,7 @@
  */
 
 #include "jit/bc-offset-mapping.h"
-
-#include "arch/instruction.h"
+#include "jit/instruction.h"
 
 #include 
 #include 
diff --git a/arch/x86/use-def.c b/arch/x86/use-def.c
index 9f76d13..59e1f2a 100644
--- a/arch/x86/use-def.c
+++ b/arch/x86/use-def.c
@@ -6,7 +6,7 @@
  */
 
 #include "jit/compilation-unit.h"
-#include "arch/instruction.h"
+#include "jit/instruction.h"
 #include "jit/vars.h"
 
 enum {
diff --git a/include/jit/instruction.h b/include/jit/instruction.h
index 376e278..cc303fe 100644
--- a/include/jit/instruction.h
+++ b/include/jit/instruction.h
@@ -9,4 +9,14 @@ static inline struct insn *next_insn(struct insn *insn)
return list_entry(insn->insn_list_node.next, struct insn, 
insn_list_node);
 }
 
+struct insn *alloc_insn(enum insn_type);
+void free_insn(struct insn *);
+
+int insn_defs(struct compilation_unit *, struct insn *, struct var_info **);
+int insn_uses(struct insn *, struct var_info **);
+
+#define for_each_insn(insn, insn_list) list_for_each_entry(insn, insn_list, 
insn_list_node)
+
+#define for_each_insn_reverse(insn, insn_list) 
list_for_each_entry_reverse(insn, insn_list, insn_list_node)
+
 #endif /* JATO_JIT_INSTRUCTION_H */
diff --git a/jit/basic-block.c b/jit/basic-block.c
index bcb2866..e19ee7e 100644
--- a/jit/basic-block.c
+++ b/jit/basic-block.c
@@ -9,10 +9,9 @@
 
 #include "jit/compilation-unit.h"
 #include "jit/basic-block.h"
+#include "jit/instruction.h"
 #include "jit/statement.h"
 
-#include "arch/instruction.h"
-
 #include "vm/die.h"
 
 #include 
diff --git a/jit/bc-offset-mapping.c b/jit/bc-offset-mapping.c
index ac92221..7db42f9 100644
--- a/jit/bc-offset-mapping.c
+++ b/jit/bc-offset-mapping.c
@@ -31,8 +31,7 @@
 #include "jit/bc-offset-mapping.h"
 #include "jit/statement.h"
 #include "jit/expression.h"
-
-#include "arch/instruction.h"
+#include "jit/instruction.h"
 
 #include "lib/buffer.h"
 
diff --git a/jit/compilation-unit.c b/jit/compilation-unit.c
index 956d8b8..44b5c46 100644
--- a/jit/compilation-unit.c
+++ b/jit/compilation-unit.c
@@ -23,11 +23,11 @@
  *
  * Please refer to the file LICENSE for details.
  */
-#include "arch/instruction.h"
 #include "arch/registers.h"
 
 #include "jit/basic-block.h"
 #include "jit/compilation-unit.h"
+#include "jit/instruction.h"
 #include "jit/stack-slot.h"
 #include "jit/statement.h"
 #include "jit/vars.h"
diff --git a/jit/emit.c b/jit/emit.c
index a65f35f..ee5c4d9 100644
--- a/jit/emit.c
+++ b/jit/emit.c
@@ -16,14 +16,13 @@
 
 #include "jit/compilation-unit.h"
 #include "jit/basic-block.h"
+#include "jit/compiler.h"
 #include "jit/emit-code.h"
 #include "jit/exception.h"
-#include "jit/compiler.h"
+#include "jit/instruction.h"
 #include "jit/statement.h"
 #include "jit/text.h"
 
-#include "arch/instruction.h"
-
 #include 
 #include 
 #incl

[PATCH 07/14] x86: ensure fixed-reg variables are not returned as rule results

2009-08-30 Thread Tomek Grabiec
Fixed-reg variables should never be used outside a rule. When they are
returned as rule results, they can reach another rule as an input
register. If that rule uses a fixed register for the same machine
register, a conflict occurs that is not detected, and the result is
incorrect.

Let's consider a rule:
reg:	OP_DIV(reg, reg)
{
	...
	select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->left->reg1, eax));
	select_insn(s, tree, reg_reg_insn(INSN_CLTD_REG_REG, eax, edx));
	select_insn(s, tree, reg_reg_insn(INSN_DIV_REG_REG, state->right->reg1, eax));
	select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, result));
}

It uses fixed variables for EAX and EDX. If another rule passes a fixed
variable for EAX as OP_DIV's right input, the result will be incorrect
because the content of EAX is overwritten.

This example also shows that spilling fixed intervals would not solve
the problem. The instruction INSN_DIV_REG_REG has two inputs:
state->right->reg1 and eax. Suppose state->right->reg1 is a fixed
variable for EAX too; we would then have two fixed intervals with the
same use position. Such a conflict cannot be resolved by spilling fixed
intervals: it requires reloading one of the intervals into another
machine register, which can only be done for regular registers.

The conclusion is that we should use fixed-reg variables only to
prepare and save registers around special instructions. Fixed-reg
variables should not be used in place of regular virtual registers.
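
The fix, condensed from the insn-selector.brg hunks below: the rule
result becomes a fresh virtual register, and the fixed EAX variable is
copied into it just before the rule returns, so no fixed-reg variable
escapes the rule:

	reg:	OP_DIV(reg, reg) 1
	{
		struct var_info *eax, *edx, *result;

		edx = get_fixed_var(s->b_parent, MACH_REG_xDX);
		eax = get_fixed_var(s->b_parent, MACH_REG_xAX);

		result = get_var(s->b_parent, J_INT);	/* regular vreg */
		state->reg1 = result;

		select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->left->reg1, eax));
		select_insn(s, tree, reg_reg_insn(INSN_CLTD_REG_REG, eax, edx));
		select_insn(s, tree, reg_reg_insn(INSN_DIV_REG_REG, state->right->reg1, eax));
		select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, result));
	}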

Signed-off-by: Tomek Grabiec 
---
 arch/x86/insn-selector.brg |  134 ++--
 1 files changed, 79 insertions(+), 55 deletions(-)

diff --git a/arch/x86/insn-selector.brg b/arch/x86/insn-selector.brg
index 8522667..c4f2ccf 100644
--- a/arch/x86/insn-selector.brg
+++ b/arch/x86/insn-selector.brg
@@ -367,14 +367,16 @@ freg: OP_FSUB(freg, freg) 1
 
 reg:   OP_MUL(reg, EXPR_LOCAL) 1
 {
-   struct var_info *eax;
+   struct var_info *eax, *result;
 
+   result = get_var(s->b_parent, J_INT);
eax = get_fixed_var(s->b_parent, MACH_REG_xAX);
 
-   state->reg1 = eax;
+   state->reg1 = result;
 
select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->left->reg1, 
eax));
__binop_reg_local(state, s, tree, INSN_MUL_MEMBASE_EAX, eax, 0);
+   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, result));
 }
 
 reg:   OP_MUL(reg, reg) 1
@@ -406,7 +408,7 @@ reg:OP_MUL_64(reg, reg) 1
eax = get_fixed_var(s->b_parent, MACH_REG_xAX);
edx = get_fixed_var(s->b_parent, MACH_REG_xDX);
 
-   state->reg1 = eax;
+   state->reg1 = get_var(s->b_parent, J_INT);
state->reg2 = get_var(s->b_parent, J_INT);
 
tmp1 = get_var(s->b_parent, J_INT);
@@ -418,28 +420,39 @@ reg:  OP_MUL_64(reg, reg) 1
 
select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->right->reg1, 
eax));
select_insn(s, tree, reg_reg_insn(INSN_MUL_REG_EAX, state->left->reg1, 
eax));
+   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, state->reg1));
 
select_insn(s, tree, reg_reg_insn(INSN_ADD_REG_REG, edx, state->reg2));
 }
 
 reg:   OP_DIV(reg, EXPR_LOCAL) 1
 {
+   struct var_info *eax;
+
div_reg_local(state, s, tree);
+
+   eax = get_fixed_var(s->b_parent, MACH_REG_xAX);
+   state->reg1 = get_var(s->b_parent, J_INT);
+
+   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, state->reg1));
 }
 
 reg:   OP_DIV(reg, reg) 1
 {
+   struct var_info *eax;
struct var_info *edx;
struct var_info *result;
 
edx = get_fixed_var(s->b_parent, MACH_REG_xDX);
-   result = get_fixed_var(s->b_parent, MACH_REG_xAX);
+   eax = get_fixed_var(s->b_parent, MACH_REG_xAX);
 
+   result = get_var(s->b_parent, J_INT);
state->reg1 = result;
 
-   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->left->reg1, 
result));
-   select_insn(s, tree, reg_reg_insn(INSN_CLTD_REG_REG, result, edx));
-   select_insn(s, tree, reg_reg_insn(INSN_DIV_REG_REG, state->right->reg1, 
result));
+   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, state->left->reg1, 
eax));
+   select_insn(s, tree, reg_reg_insn(INSN_CLTD_REG_REG, eax, edx));
+   select_insn(s, tree, reg_reg_insn(INSN_DIV_REG_REG, state->right->reg1, 
eax));
+   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, eax, result));
 }
 
 freg:  OP_DDIV(freg, freg) 1
@@ -464,29 +477,32 @@ reg:  OP_DIV_64(reg, reg) 1
 
 reg:   OP_REM(reg, EXPR_LOCAL) 1
 {
-   struct var_info *result, *remainder;
+   struct var_info *edx;
 
div_reg_local(state, s, tree);
 
-   result = get_fixed_var(s->b_parent, MACH_REG_xAX);
-   remainder = get_fixed_var(s->b_parent, MACH_REG_xDX);
+   edx = get_fixed_var(s->b_parent, MACH_REG_xDX);
+   state->reg1 = get_var(s->b_parent, J_INT);
 
-   select_insn(s, tree, reg_reg_insn(INSN_MOV_REG_REG, remainder, result));

[PATCH 03/14] jit: cleanup interval spilling

2009-08-30 Thread Tomek Grabiec

Signed-off-by: Tomek Grabiec 
---
 jit/linear-scan.c |   86 +++--
 1 files changed, 37 insertions(+), 49 deletions(-)

diff --git a/jit/linear-scan.c b/jit/linear-scan.c
index 5538bc7..8baa914 100644
--- a/jit/linear-scan.c
+++ b/jit/linear-scan.c
@@ -110,35 +110,40 @@ static enum machine_reg pick_register(unsigned long 
*free_until_pos, enum vm_typ
return ret;
 }
 
+static void spill_interval(struct live_interval *it, unsigned long pos,
+  struct pqueue *unhandled)
+{
+   struct live_interval *new;
+
+   new = split_interval_at(it, pos);
+   if (has_use_positions(new)) {
+   unsigned long next_pos = next_use_pos(new, 0);
+
+   /* Trim interval if it does not start with a use position. */
+   if (next_pos > new->range.start)
+   new = split_interval_at(new, next_pos);
+
+   it->need_spill = true;
+   mark_need_reload(new, it);
+   pqueue_insert(unhandled, new);
+   }
+}
+
 static void __spill_interval_intersecting(struct live_interval *current,
  enum machine_reg reg,
  struct live_interval *it,
  struct pqueue *unhandled)
 {
-   struct live_interval *new;
-   unsigned long next_pos;
-
if (it->reg != reg)
return;
 
if (!ranges_intersect(&it->range, ¤t->range))
return;
 
-   new = split_interval_at(it, current->range.start);
-   it->need_spill = true;
-
-   next_pos = next_use_pos(new, new->range.start);
-
-   if (next_pos == LONG_MAX)
-   return;
-
-   new = split_interval_at(new, next_pos);
-
-   if (!has_use_positions(new))
+   if (current->range.start == it->range.start)
return;
 
-   mark_need_reload(new, it);
-   pqueue_insert(unhandled, new);
+   spill_interval(it, current->range.start, unhandled);
 }
 
 static void spill_all_intervals_intersecting(struct live_interval *current,
@@ -165,7 +170,7 @@ static void allocate_blocked_reg(struct live_interval 
*current,
 struct pqueue *unhandled)
 {
unsigned long use_pos[NR_REGISTERS], block_pos[NR_REGISTERS];
-   struct live_interval *it, *new;
+   struct live_interval *it;
int i;
enum machine_reg reg;
 
@@ -224,26 +229,16 @@ static void allocate_blocked_reg(struct live_interval 
*current,
 * so it is best to spill current itself
 */
pos = next_use_pos(current, current->range.start);
-   new = split_interval_at(current, pos);
-
-   if (has_use_positions(new)) {
-   mark_need_reload(new, current);
-   pqueue_insert(unhandled, new);
-   }
-
-   current->need_spill = 1;
-   } else if (block_pos[reg] >= current->range.end) {
-   /* Spilling made a register free for the whole current */
-   current->reg = reg;
-   spill_all_intervals_intersecting(current, reg, active,
-inactive, unhandled);
+   spill_interval(current, pos, unhandled);
} else {
-   new = split_interval_at(current, block_pos[reg]);
+   /*
+* Register is available for whole or some part of interval
+*/
+   current->reg = reg;
 
-   if (has_use_positions(new))
-   pqueue_insert(unhandled, new);
+   if (block_pos[reg] < current->range.end)
+   spill_interval(current, block_pos[reg], unhandled);
 
-   current->reg = reg;
spill_all_intervals_intersecting(current, reg, active,
 inactive, unhandled);
}
@@ -255,7 +250,7 @@ static void try_to_allocate_free_reg(struct live_interval 
*current,
 struct pqueue *unhandled)
 {
unsigned long free_until_pos[NR_REGISTERS];
-   struct live_interval *it, *new;
+   struct live_interval *it;
enum machine_reg reg;
int i;
 
@@ -292,16 +287,8 @@ static void try_to_allocate_free_reg(struct live_interval 
*current,
/*
 * Register available for the first part of the interval.
 */
-   new = split_interval_at(current, free_until_pos[reg]);
-
-   if (has_use_positions(new)) {
-   new = split_interval_at(new, next_use_pos(new, 0));
-   mark_need_reload(new, current);
-   pqueue_insert(unhandled, new);
-   }
-
+   spill_interval(current, free_until_pos[reg], unhandled);
current->reg = reg;
-   

[PATCH 01/14] jit: add missing trace_flush() to trace_return_value()

2009-08-30 Thread Tomek Grabiec

Signed-off-by: Tomek Grabiec 
---
 jit/trace-jit.c |4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/jit/trace-jit.c b/jit/trace-jit.c
index 0246c3b..1bde5e1 100644
--- a/jit/trace-jit.c
+++ b/jit/trace-jit.c
@@ -811,8 +811,10 @@ void trace_return_value(struct vm_method *vmm, unsigned 
long long value)
 
trace_printf("trace return: %s.%s%s\n", vmm->class->name, vmm->name,
 vmm->type);
-   if (type == J_VOID)
+   if (type == J_VOID) {
+   trace_flush();
return;
+   }
 
trace_printf("%12s: ", get_vm_type_name(type));
print_arg(type,(unsigned long *)  &value, &dummy);
-- 
1.6.3.3




[PATCH 02/14] jit: assign two LIR positions for each instruction.

2009-08-30 Thread Tomek Grabiec
We will need this to optimize register allocation. Every LIR
instruction is assigned two consecutive positions, one even and one
odd. Even interval use positions correspond to instruction inputs and
odd positions correspond to instruction outputs. This distinction
allows allocating the same physical register to adjacent intervals
where the first ends at an instruction's input and the second starts at
its output. Further advantages are described in "Linear Scan Register
Allocation for the Java HotSpot Client Compiler" by C. Wimmer.

This is a preliminary patch: all use positions are still even.
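
Condensed from the compilation-unit.c hunk below (the loop header here
is illustrative, only the body is visible in the hunk):
compute_insn_positions() now advances the LIR position by two per
instruction, so the even number is the instruction's input position and
the following odd number is reserved for its output:

	for_each_insn(insn, &bb->insn_list) {
		/* pos (even) is the input position of this instruction;
		 * pos + 1 (odd) is its output position. */
		radix_tree_insert(cu->lir_insn_map, pos, insn);

		pos += 2;	/* was: ++pos */
	}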

Signed-off-by: Tomek Grabiec 
---
 include/jit/compilation-unit.h   |2 +
 include/jit/vars.h   |   10 +
 jit/compilation-unit.c   |4 ++-
 jit/liveness.c   |4 +-
 jit/spill-reload.c   |   16 +++--
 jit/trace-jit.c  |   68 +++--
 test/jit/compilation-unit-test.c |2 +-
 test/jit/liveness-test.c |   22 ++--
 8 files changed, 77 insertions(+), 51 deletions(-)

diff --git a/include/jit/compilation-unit.h b/include/jit/compilation-unit.h
index f6fb0e9..4114bce 100644
--- a/include/jit/compilation-unit.h
+++ b/include/jit/compilation-unit.h
@@ -85,6 +85,8 @@ struct compilation_unit {
 */
struct radix_tree *lir_insn_map;
 
+   unsigned long last_insn;
+
/*
 * This maps machine-code offset (of gc safepoint) to gc map
 */
diff --git a/include/jit/vars.h b/include/jit/vars.h
index 177c283..6afb16b 100644
--- a/include/jit/vars.h
+++ b/include/jit/vars.h
@@ -12,6 +12,16 @@ struct live_range {
unsigned long start, end;   /* end is exclusive */
 };
 
+static inline unsigned long range_last_insn_pos(struct live_range *range)
+{
+   return (range->end - 1) & ~1;
+}
+
+static inline unsigned long range_first_insn_pos(struct live_range *range)
+{
+   return range->start & ~1;
+}
+
 static inline bool in_range(struct live_range *range, unsigned long offset)
 {
return (offset >= range->start) && (offset < range->end);
diff --git a/jit/compilation-unit.c b/jit/compilation-unit.c
index cf349b8..956d8b8 100644
--- a/jit/compilation-unit.c
+++ b/jit/compilation-unit.c
@@ -259,9 +259,11 @@ void compute_insn_positions(struct compilation_unit *cu)
 
radix_tree_insert(cu->lir_insn_map, pos, insn);
 
-   ++pos;
+   pos += 2;
}
 
bb->end_insn = pos;
}
+
+   cu->last_insn = pos;
 }
diff --git a/jit/liveness.c b/jit/liveness.c
index cc82933..3e0f586 100644
--- a/jit/liveness.c
+++ b/jit/liveness.c
@@ -142,8 +142,8 @@ static void __analyze_use_def(struct basic_block *bb, 
struct insn *insn)
 */
if (!test_bit(bb->def_set->bits, var->vreg))
set_bit(bb->use_set->bits, var->vreg);
-   }   
-   
+   }
+
nr_defs = insn_defs(bb->b_parent, insn, defs);
for (i = 0; i < nr_defs; i++) {
struct var_info *var = defs[i];
diff --git a/jit/spill-reload.c b/jit/spill-reload.c
index af50046..5964682 100644
--- a/jit/spill-reload.c
+++ b/jit/spill-reload.c
@@ -50,7 +50,7 @@ static struct insn *first_insn(struct compilation_unit *cu, 
struct live_interval
 {
struct insn *ret;
 
-   ret = radix_tree_lookup(cu->lir_insn_map, interval->range.start);
+   ret = radix_tree_lookup(cu->lir_insn_map, 
range_first_insn_pos(&interval->range));
assert(ret != NULL);
 
return ret;
@@ -60,7 +60,7 @@ static struct insn *last_insn(struct compilation_unit *cu, 
struct live_interval
 {
struct insn *ret;
 
-   ret = radix_tree_lookup(cu->lir_insn_map, interval->range.end - 1);
+   ret = radix_tree_lookup(cu->lir_insn_map, 
range_last_insn_pos(&interval->range));
assert(ret != NULL);
 
return ret;
@@ -86,7 +86,7 @@ static struct list_head *bb_last_spill_node(struct 
basic_block *bb)
if (bb->end_insn == bb->start_insn)
return &bb->insn_list;
 
-   last = radix_tree_lookup(bb->b_parent->lir_insn_map, bb->end_insn - 1);
+   last = radix_tree_lookup(bb->b_parent->lir_insn_map, bb->end_insn - 2);
assert(last);
 
if (insn_is_branch(last))
@@ -190,6 +190,16 @@ static int __insert_spill_reload_insn(struct live_interval 
*interval, struct com
goto out;
 
if (interval->need_reload) {
+   /*
+* Intervals which start with a DEF position (odd
+* numbers) should not be reloaded. One reason for
+* this is that they do not have to because register
+* content is overriden. Another reason is that we
+* can't insert a reload instruction in the middle of
+* instruction.
+*/
+   assert((interval->range.start & 1) == 0);

[penberg/jato] bee749: vm: add trace_flush() to itable tracing and skip e...

2009-08-30 Thread noreply
Branch: refs/heads/master
Home:   http://github.com/penberg/jato

Commit: bee7499e91d3e12a8d17aad990bad356a44532f1

http://github.com/penberg/jato/commit/bee7499e91d3e12a8d17aad990bad356a44532f1
Author: Vegard Nossum 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M vm/itable.c

Log Message:
---
vm: add trace_flush() to itable tracing and skip empty tables

Signed-off-by: Vegard Nossum 
Signed-off-by: Pekka Enberg 


Commit: aa336c6e0991b889600c0b4a10e5311c49b0b310

http://github.com/penberg/jato/commit/aa336c6e0991b889600c0b4a10e5311c49b0b310
Author: Vegard Nossum 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M arch/x86/emit-code.c
  A arch/x86/include/arch/itable.h
  M vm/itable.c

Log Message:
---
vm: move itable_resolver_stub_error() to arch/x86

This method was always arch-specific, and now we need to call it from
emit-code.c anyway.

Signed-off-by: Vegard Nossum 
Signed-off-by: Pekka Enberg 


Commit: fb66529adef82352f5b8266adcf5c7ed824797b3

http://github.com/penberg/jato/commit/fb66529adef82352f5b8266adcf5c7ed824797b3
Author: Vegard Nossum 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M arch/x86/emit-code.c

Log Message:
---
x86: add debug check to emit_itable_bsearch()

This check ensures that we actually found the method we were searching
for. (If we didn't, we die.)

Signed-off-by: Vegard Nossum 
Signed-off-by: Pekka Enberg 





[PATCH 3/3] x86: add debug check to emit_itable_bsearch()

2009-08-30 Thread Vegard Nossum
This check ensures that we actually found the method we were searching
for. (If we didn't, we die.)

Signed-off-by: Vegard Nossum 
---
 arch/x86/emit-code.c |   56 +++--
 1 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/arch/x86/emit-code.c b/arch/x86/emit-code.c
index 695b588..d3e7907 100644
--- a/arch/x86/emit-code.c
+++ b/arch/x86/emit-code.c
@@ -2056,29 +2056,53 @@ static void emit_itable_bsearch(struct buffer *buf,
 
/* No point in emitting the "cmp" if we're not going to test
 * anything */
-   if (b - a >= 1)
+   if (b - a >= 1) {
__emit_cmp_imm_reg(buf, (long) table[m]->i_method, 
MACH_REG_EAX);
 
-   if (m - a > 0) {
-   /* open-coded "jb" */
-   emit(buf, 0x0f);
-   emit(buf, 0x82);
+   if (m - a > 0) {
+   /* open-coded "jb" */
+   emit(buf, 0x0f);
+   emit(buf, 0x82);
 
-   /* placeholder address */
-   jb_addr = buffer_current(buf);
-   emit_imm32(buf, 0);
-   }
+   /* placeholder address */
+   jb_addr = buffer_current(buf);
+   emit_imm32(buf, 0);
+   }
 
-   if (b - m > 0) {
-   /* open-coded "ja" */
-   emit(buf, 0x0f);
-   emit(buf, 0x87);
+   if (b - m > 0) {
+   /* open-coded "ja" */
+   emit(buf, 0x0f);
+   emit(buf, 0x87);
 
-   /* placeholder address */
-   ja_addr = buffer_current(buf);
-   emit_imm32(buf, 0);
+   /* placeholder address */
+   ja_addr = buffer_current(buf);
+   emit_imm32(buf, 0);
+   }
}
 
+#ifndef NDEBUG
+   /* Make sure what we wanted is what we got;
+*
+* cmp i_method, %eax
+* je .okay
+* jmp itable_resolver_stub_error
+* .okay:
+*
+*/
+   __emit_cmp_imm_reg(buf, (long) table[m]->i_method, MACH_REG_EAX);
+
+   /* open-coded "je" */
+   emit(buf, 0x0f);
+   emit(buf, 0x84);
+
+   uint8_t *je_addr = buffer_current(buf);
+   emit_imm32(buf, 0);
+
+   __emit_jmp(buf, (unsigned long) &itable_resolver_stub_error);
+
+   fixup_branch_target(je_addr, buffer_current(buf));
+#endif
+
__emit_add_imm_reg(buf, 4 * table[m]->c_method->virtual_index, 
MACH_REG_ECX);
emit_really_indirect_jump_reg(buf, MACH_REG_ECX);
 
-- 
1.6.0.4




[PATCH 2/3] vm: move itable_resolver_stub_error() to arch/x86

2009-08-30 Thread Vegard Nossum
This method was always arch-specific, and now we need to call it from
emit-code.c anyway.

Signed-off-by: Vegard Nossum 
---
 arch/x86/emit-code.c   |   20 
 arch/x86/include/arch/itable.h |   10 ++
 vm/itable.c|   22 ++
 3 files changed, 32 insertions(+), 20 deletions(-)
 create mode 100644 arch/x86/include/arch/itable.h

diff --git a/arch/x86/emit-code.c b/arch/x86/emit-code.c
index a0b85e0..695b588 100644
--- a/arch/x86/emit-code.c
+++ b/arch/x86/emit-code.c
@@ -21,11 +21,13 @@
 #include "lib/list.h"
 #include "lib/buffer.h"
 
+#include "vm/backtrace.h"
 #include "vm/method.h"
 #include "vm/object.h"
 
 #include "arch/init.h"
 #include "arch/instruction.h"
+#include "arch/itable.h"
 #include "arch/memory.h"
 #include "arch/stack-frame.h"
 #include "arch/thread.h"
@@ -2023,6 +2025,24 @@ void emit_jni_trampoline(struct buffer *buf, struct 
vm_method *vmm,
jit_text_unlock();
 }
 
+/* The regparm(1) makes GCC get the first argument from %ecx and the rest
+ * from the stack. This is convenient, because we use %ecx for passing the
+ * hidden "method" parameter. Interfaces are invoked on objects, so we also
+ * always get the object in the first stack parameter. */
+void __attribute__((regparm(1)))
+itable_resolver_stub_error(struct vm_method *method, struct vm_object *obj)
+{
+   fprintf(stderr, "itable resolver stub error!\n");
+   fprintf(stderr, "invokeinterface called on method %s.%s%s "
+   "(itable index %d)\n",
+   method->class->name, method->name, method->type,
+   method->itable_index);
+   fprintf(stderr, "object class %s\n", obj->class->name);
+
+   print_trace();
+   abort();
+}
+
 /* Note: a < b, always */
 static void emit_itable_bsearch(struct buffer *buf,
struct itable_entry **table, unsigned int a, unsigned int b)
diff --git a/arch/x86/include/arch/itable.h b/arch/x86/include/arch/itable.h
new file mode 100644
index 000..7824cf2
--- /dev/null
+++ b/arch/x86/include/arch/itable.h
@@ -0,0 +1,10 @@
+#ifndef ARCH_X86_ITABLE_H
+#define ARCH_X86_ITABLE_H
+
+struct vm_method;
+struct vm_object;
+
+void __attribute__((regparm(1)))
+itable_resolver_stub_error(struct vm_method *method, struct vm_object *obj);
+
+#endif
diff --git a/vm/itable.c b/vm/itable.c
index bf38d6b..88b80d0 100644
--- a/vm/itable.c
+++ b/vm/itable.c
@@ -39,6 +39,8 @@
 #include "vm/method.h"
 #include "vm/trace.h"
 
+#include "arch/itable.h"
+
 bool opt_trace_itable;
 
 static uint32_t itable_hash_string(const char *str)
@@ -119,26 +121,6 @@ static int itable_add_entries(struct vm_class *vmc, struct 
list_head *itable)
return 0;
 }
 
-/* The regparm(1) makes GCC get the first argument from %ecx and the rest
- * from the stack. This is convenient, because we use %ecx for passing the
- * hidden "method" parameter. Interfaces are invoked on objects, so we also
- * always get the object in the first stack parameter.
- *
- * XXX: This is arch-specific (x86_32) code, should do something else here. */
-static void __attribute__((regparm(1)))
-itable_resolver_stub_error(struct vm_method *method, struct vm_object *obj)
-{
-   fprintf(stderr, "itable resolver stub error!\n");
-   fprintf(stderr, "invokeinterface called on method %s.%s%s "
-   "(itable index %d)\n",
-   method->class->name, method->name, method->type,
-   method->itable_index);
-   fprintf(stderr, "object class %s\n", obj->class->name);
-
-   print_trace();
-   abort();
-}
-
 static int itable_entry_compare(const void *a, const void *b)
 {
const struct itable_entry *ae = *(const struct itable_entry **) a;
-- 
1.6.0.4
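
As an aside for readers unfamiliar with regparm, here is a minimal standalone
sketch of the convention the moved stub depends on, assuming a 32-bit x86 GCC
build; the function and variable names below are illustrative only and are not
part of the jato tree:

#include <stdio.h>

/* regparm(1) makes GCC pass the first argument in a register rather than
 * on the stack; any further arguments still arrive on the stack. The patch
 * comment above explains how jato's stub uses this to receive the hidden
 * "method" parameter alongside the object from the stack. */
static void __attribute__((regparm(1)))
report(int hidden, const char *msg)
{
        printf("hidden=%d, msg=%s\n", hidden, msg);
}

int main(void)
{
        report(42, "first argument passed in a register");
        return 0;
}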




[PATCH 1/3] vm: add trace_flush() to itable tracing and skip empty tables

2009-08-30 Thread Vegard Nossum
Signed-off-by: Vegard Nossum 
---
 vm/itable.c |   15 +++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/vm/itable.c b/vm/itable.c
index 6f93457..bf38d6b 100644
--- a/vm/itable.c
+++ b/vm/itable.c
@@ -189,6 +189,18 @@ static void *itable_create_conflict_resolver(struct vm_class *vmc,
 
 static void trace_itable(struct vm_class *vmc, struct list_head *itable)
 {
+   bool empty = true;
+   for (unsigned int i = 0; i < VM_ITABLE_SIZE; ++i) {
+   if (list_is_empty(&itable[i]))
+   continue;
+
+   empty = false;
+   break;
+   }
+
+   if (empty)
+   return;
+
trace_printf("trace itable (duplicates included): %s\n", vmc->name);
 
for (unsigned int i = 0; i < VM_ITABLE_SIZE; ++i) {
@@ -212,6 +224,9 @@ static void trace_itable(struct vm_class *vmc, struct list_head *itable)
c_vmm->class->name);
}
}
+
+   trace_printf("\n");
+   trace_flush();
 }
 
 int vm_itable_setup(struct vm_class *vmc)
-- 
1.6.0.4
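
The emptiness test in the first hunk could equally live in a small helper.
Here is a self-contained sketch of that idea, where itable_is_empty(), the
trimmed-down struct list_head, and the placeholder VM_ITABLE_SIZE value are
assumptions for illustration rather than jato's actual definitions:

#include <stdbool.h>

#define VM_ITABLE_SIZE 32	/* placeholder; jato defines its own size */

struct list_head {
        struct list_head *next, *prev;
};

/* Circular doubly-linked list: a list is empty when it points to itself. */
static bool list_is_empty(const struct list_head *head)
{
        return head->next == head;
}

/* Hypothetical helper: true if every bucket of the itable is empty. */
static bool itable_is_empty(const struct list_head *itable)
{
        for (unsigned int i = 0; i < VM_ITABLE_SIZE; ++i)
                if (!list_is_empty(&itable[i]))
                        return false;

        return true;
}

With a helper like this, the early return in trace_itable() collapses to a
single "if (itable_is_empty(itable)) return;".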




[penberg/jato] c66149: vm: fix itable construction

2009-08-30 Thread noreply
Branch: refs/heads/master
Home:   http://github.com/penberg/jato

Commit: c66149ad16958c85d822fcdbcc2ef49bdfcd6741

http://github.com/penberg/jato/commit/c66149ad16958c85d822fcdbcc2ef49bdfcd6741
Author: Vegard Nossum 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M regression/jvm/InvokeinterfaceTest.java
  M vm/itable.c

Log Message:
---
vm: fix itable construction

We need to add the inherited, synthetic methods to the itable as well.

Reported-by: Pekka Enberg 
Signed-off-by: Vegard Nossum 
Signed-off-by: Pekka Enberg 





[penberg/jato] 2489e6: jit: Fix pc_map_for_each_reverse()

2009-08-30 Thread noreply
Branch: refs/heads/master
Home:   http://github.com/penberg/jato

Commit: 2489e650b4b5a60bf0e43da85c181411a9a9dd41

http://github.com/penberg/jato/commit/2489e650b4b5a60bf0e43da85c181411a9a9dd41
Author: Pekka Enberg 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M include/jit/pc-map.h

Log Message:
---
jit: Fix pc_map_for_each_reverse()

The pc_map_for_each_reverse macro iterates backwards, so we need to
decrement the value pointer. This fixes the following SIGSEGV in the DaCapo
hsqldb benchmark:

  SIGSEGV at EIP 0806c837 while accessing memory address 0ea2a000.
  Registers:
   eax:    ebx: 0ea1c160   ecx: 0001   edx: 0ea2a000
   esi: bfa72478   edi: 006b   ebp: bfa723c4   esp: bfa723bc
  Native and Java stack trace:
   [<0806c837>] native : pc_map_get_max_lesser_than+28 (/home/penberg/src/jato/jit/pc-map.c:200)
   [<0806c42c>] native :
   [<08063144>] native : compile+38 (/home/penberg/src/jato/jit/compiler.c:59)
   [<08069c55>] native :
   [] trampoline : dacapo/hsqldb/PseudoJDBCBench.(PseudoJDBCBench.java:216)
   [] jit: dacapo/hsqldb/PseudoJDBCBench.main(PseudoJDBCBench.java:208)
   [] jit: dacapo/hsqldb/HsqldbHarness.iterate(HsqldbHarness.java:19)
   [] jit: dacapo/Benchmark.run(Benchmark.java:126)
   [] jit: dacapo/TestHarness.runBenchmark(TestHarness.java:302)
   [] jit: dacapo/TestHarness.main(TestHarness.java:242)
   [] jit: Harness.main(Harness.java:5)
   [<08071f06>] native : do_main_class+146 (/home/penberg/src/jato/vm/jato.c:1234)
   [<0807227d>] native : 
   [] native : 
   [<08054b80>] native : 

Acked-by: Tomek Grabiec 
Signed-off-by: Pekka Enberg 
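
The bug class is easy to reproduce outside of jato. The sketch below is a
generic reverse-iteration macro, not the actual pc-map API; it only
illustrates that the cursor has to move downwards on every step:

#include <stdio.h>

/* Visit indices count-1 down to 0. The essential detail, as in the
 * pc_map_for_each_reverse fix, is that the cursor is decremented; an
 * increment here would run past the end of the array. */
#define for_each_reverse(i, count) \
        for ((i) = (count); (i)-- > 0; )

int main(void)
{
        unsigned long map[] = { 10, 20, 30, 40 };
        size_t i;

        for_each_reverse(i, sizeof(map) / sizeof(map[0]))
                printf("map[%zu] = %lu\n", i, map[i]);

        return 0;
}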





[penberg/jato] 16681c: jni: Implement SetStaticField JNI functions

2009-08-30 Thread noreply
Branch: refs/heads/master
Home:   http://github.com/penberg/jato

Commit: 16681ce414d39d00740510c7d4d8bc67776a6496

http://github.com/penberg/jato/commit/16681ce414d39d00740510c7d4d8bc67776a6496
Author: Pekka Enberg 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M vm/jni-interface.c

Log Message:
---
jni: Implement SetStaticField JNI functions

Signed-off-by: Pekka Enberg 
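
For context, the sketch below shows these functions from the caller's side,
i.e. how JNI code would exercise one of the new SetStatic<Type>Field entry
points; the class and field names are examples only:

#include <jni.h>

/* Set a hypothetical static int field "counter" on the given class to 42,
 * going through GetStaticFieldID and SetStaticIntField from the JNIEnv
 * function table. */
static void set_counter(JNIEnv *env, jclass clazz)
{
        jfieldID fid;

        fid = (*env)->GetStaticFieldID(env, clazz, "counter", "I");
        if (fid == NULL)
                return;	/* field not found; an exception is pending */

        (*env)->SetStaticIntField(env, clazz, fid, 42);
}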





[penberg/jato] d77e21: jit: remove unused basic block sorting code

2009-08-30 Thread noreply
Branch: refs/heads/master
Home:   http://github.com/penberg/jato

Commit: d77e21cb9412e2b391e52dfeefd3cf6dedeeb9c7

http://github.com/penberg/jato/commit/d77e21cb9412e2b391e52dfeefd3cf6dedeeb9c7
Author: Tomek Grabiec 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M include/jit/compilation-unit.h
  M jit/compilation-unit.c

Log Message:
---
jit: remove unused basic block sorting code

Signed-off-by: Tomek Grabiec 
Signed-off-by: Pekka Enberg 


Commit: 0e5c0071c485b4d1c5ffe0edeb797e4456bd01cc

http://github.com/penberg/jato/commit/0e5c0071c485b4d1c5ffe0edeb797e4456bd01cc
Author: Tomek Grabiec 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M jit/compilation-unit.c
  M jit/liveness.c

Log Message:
---
jit: remove redundant compute_boundaries()

The values of .start_insn and .end_insn can be computed in 
compute_insn_positions().

Signed-off-by: Tomek Grabiec 
Signed-off-by: Pekka Enberg 
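
A sketch of the folded-together pass, using simplified stand-in types rather
than jato's real struct basic_block and struct insn; the point is only that
the block boundaries fall out of the same loop that numbers the instructions:

#include <stddef.h>

/* Simplified stand-ins for illustration only. */
struct insn {
        unsigned long lir_pos;
};

struct basic_block {
        struct insn *insns;
        size_t nr_insns;
        unsigned long start_insn;	/* position of the first instruction */
        unsigned long end_insn;		/* position just past the last one */
};

/* One pass: assign a position to every instruction and record each block's
 * [start_insn, end_insn) range along the way, making a separate boundary
 * computation pass unnecessary. */
static void compute_insn_positions(struct basic_block *bbs, size_t nr_bbs)
{
        unsigned long pos = 0;

        for (size_t i = 0; i < nr_bbs; i++) {
                struct basic_block *bb = &bbs[i];

                bb->start_insn = pos;
                for (size_t j = 0; j < bb->nr_insns; j++)
                        bb->insns[j].lir_pos = pos++;
                bb->end_insn = pos;
        }
}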


Commit: 3036abb96716a190e2886898bb8c25a3632690cb

http://github.com/penberg/jato/commit/3036abb96716a190e2886898bb8c25a3632690cb
Author: Tomek Grabiec 
Date:   2009-08-30 (Sun, 30 Aug 2009)

Changed paths:
  M arch/x86/insn-selector.brg

Log Message:
---
x86: fix wrong argument cleanup count for EXPR_ANEWARRAY

The selected code was adding 4 bytes too much to ESP, which led to
memory corruption.

Signed-off-by: Tomek Grabiec 
Signed-off-by: Pekka Enberg 
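
The invariant behind the fix can be shown with a toy bookkeeping model. This
is not jato's instruction selector; it only illustrates that the post-call ESP
adjustment has to match the pushed argument bytes exactly:

#include <assert.h>

/* Toy model: each 32-bit argument push adds 4 bytes; after the call, the
 * caller must pop exactly that many bytes. Cleaning up even one extra word
 * leaves ESP 4 bytes too high, discarding data the surrounding code still
 * expects to find on the stack. */
struct callsite {
        unsigned int pushed_bytes;
        unsigned int cleanup_bytes;
};

static void push_arg(struct callsite *cs)
{
        cs->pushed_bytes += 4;
}

int main(void)
{
        struct callsite cs = { 0, 0 };

        push_arg(&cs);	/* first argument */
        push_arg(&cs);	/* second argument */

        cs.cleanup_bytes = cs.pushed_bytes;	/* emit "add $8, %esp", not $12 */

        assert(cs.cleanup_bytes == cs.pushed_bytes);
        return 0;
}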


