Hello!
I’m currently modifying the RISC-V backend for a manycore processor where each
core is connected over a network. Each core has a local scratchpad memory, but
can also read and write other cores’ scratchpads. I’d like to add an attribute
that gives the optimizer a hint about which loads will be remote and will
therefore have higher latency than local ones.
Concretely, I was envisioning something like:
int foo(__attribute__((remote(5))) int *a, int *b)
{
    return *b + *a;
}
I’ve already added the attribute, along with checks to ensure that it’s only
applied to pointer types, etc. In the TARGET_RTX_COSTS hook, I then check
if a MEM expression has the attribute applied to it, and if so return the
appropriate cost. (Code here:
https://github.com/save-buffer/riscv-gcc/blob/656bf7960899d95ba3358f90a0e04f5c0a964c14/gcc/config/riscv/riscv.c#L1628)
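To make that concrete, the check has roughly the following shape. This is a
simplified sketch rather than the exact code at the link: in particular, where
the attribute actually ends up after my handler runs and how the latency
argument is read back out of it are glossed over, and the hard-coded 5 is just
a placeholder.

/* Inside riscv_rtx_costs: if this MEM was derived from an object whose
   type carries the "remote" attribute, report a higher cost.  */
if (MEM_P (x) && MEM_EXPR (x) != NULL_TREE)
  {
    tree type = TREE_TYPE (MEM_EXPR (x));
    if (type != NULL_TREE
        && lookup_attribute ("remote", TYPE_ATTRIBUTES (type)))
      {
        /* Placeholder cost; the appropriate value would come from the
           attribute's argument (e.g. 5 in the examples above).  */
        *total = COSTS_N_INSNS (5);
        return true;
      }
  }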
Unfortunately, even in the simple example above the cost doesn’t appear to get
applied (verified with the -dP flag). However, in the following example, the
load does get the cost applied to it, but the store to b does not.
void bar(__attribute__((remote(5))) int *a, int *b)
{
    if (*a > 5)
        *a = 10;
    *b = *a;
}
I was wondering whether this is the right way to approach this problem, and
also why the cost sometimes gets applied and sometimes not.
Thank you!
Sasha Krassovsky