On 10/10/2012 03:49, Richard Henderson wrote:
On 10/09/2012 05:37 AM, Yeongkyoon Lee wrote:
+#if defined(CONFIG_QEMU_LDST_OPTIMIZATION) && defined(CONFIG_SOFTMMU)
+ /* Initialize qemu_ld/st labels to assist code generation at the end
+ of TB for TLB miss cases */
+ s->qemu_ldst_labels = tcg_malloc(sizeof(TCGLabelQemuLdst) *
+ TCG_MAX_QEMU_LDST);
+ s->nb_qemu_ldst_labels = 0;
+#endif
I said before that I wasn't fond of this sort of "constant" dynamic allocation,
regardless of what the surrounding code does. You could clean those up too,
as a separate patch...
I can change the dynamic allocation to a static one as you suggested; however,
one concern is that we might waste memory in non-TCG environments,
such as KVM mode.
What's your opinion on this?
+#if defined(CONFIG_QEMU_LDST_OPTIMIZATION) && defined(CONFIG_SOFTMMU)
+ /* Generate slow paths of qemu_ld/st IRs which call MMU helpers at
+ the end of block */
+ tcg_out_qemu_ldst_slow_path(s);
+#endif
This interface is so close to "tcg_out_ldst_and_constant_pools(s)" that
I don't think the function should be specific to ldst. Just call it
tcg_out_tb_finalize or something.
That looks good.
I'll refactor the function names later.
+/* Macros/structures for qemu_ld/st IR code optimization:
+ TCG_MAX_QEMU_LDST is defined the same as OPC_BUF_SIZE in exec-all.h. */
+#define TCG_MAX_QEMU_LDST 640
+#define HL_LDST_SHIFT 4
+#define HL_LDST_MASK (1 << HL_LDST_SHIFT)
+#define HL_ST_MASK HL_LDST_MASK
+#define HL_OPC_MASK (HL_LDST_MASK - 1)
+#define IS_QEMU_LD_LABEL(L) (!((L)->opc_ext & HL_LDST_MASK))
+#define IS_QEMU_ST_LABEL(L) ((L)->opc_ext & HL_LDST_MASK)
+
+typedef struct TCGLabelQemuLdst {
+ int opc_ext; /* | 27bit(reserved) | 1bit(ld/st) | 4bit(opc) | */
Any good reason to use all these masks when the compiler can do it
for you with bitfields?
No, it's just my coding style.
There should be no compiler problems, though, and bitfields would look
cleaner, so I'll switch to bitfields later.
r~