Hi Jakub,

I forgot to mention that all the patches are against mid-July trunk; I was hoping there would be no conflicts. Anyway, thanks for letting me know. If there are conflicts with my other patches, please tell me and I'll post an updated version later.

All your other concerns are valid, and I'll try to address them in the future. I didn't like hashing addresses either, and I was surprised to see no regressions.


Dimitris



This patch isn't against the trunk, where p->offset and p->size aren't rtxes
anymore, but HOST_WIDE_INTs.  Furthermore, it is a bad idea to hash
the p->expr address itself; hashing on what p->expr points to makes no
sense in that case.  And p->offset and p->size should be ignored
when the corresponding *_known_p fields are false.  So, if you really think
using iterative_hash_object is a win, it should be something like:
 mem_attrs q = *p;
 q.expr = NULL;
 if (!q.offset_known_p) q.offset = 0;
 if (!q.size_known_p) q.size = 0;
 return iterative_hash_object (q, iterative_hash_expr (p->expr, 0));
(or better yet avoid q.expr = NULL and instead start hashing from the next
field after expr).  Hashing the struct padding might not be a good idea
either.
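A self-contained sketch of the field-by-field alternative hinted at above. The struct layout, helper names, and the FNV-1a mixer are assumptions made for illustration only; GCC's real mem_attrs and iterative_hash_object differ. The point is that hashing named fields one at a time keeps compiler-inserted padding out of the hash, and forces offset/size to 0 when the *_known_p flags are false:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for GCC's mem_attrs: field names follow the
   thread, but the types and layout here are invented for illustration.  */
struct mem_attrs_sketch
{
  void *expr;                   /* hashed separately in the real code */
  int64_t offset;               /* HOST_WIDE_INT on trunk */
  int64_t size;                 /* likewise */
  unsigned char offset_known_p;
  unsigned char size_known_p;
  /* compiler-inserted padding typically follows here */
};

/* Simple FNV-1a byte mixer standing in for iterative_hash_object's
   mixing; 'h' threads the running hash, like an iterative hash seed.  */
static uint64_t
mix_bytes (const void *p, size_t n, uint64_t h)
{
  const unsigned char *b = (const unsigned char *) p;
  for (size_t i = 0; i < n; i++)
    h = (h ^ b[i]) * 1099511628211ULL;
  return h;
}

/* Hash field by field: padding bytes never enter the hash, and
   offset/size contribute 0 when the corresponding flag is false.  */
static uint64_t
hash_mem_attrs (const struct mem_attrs_sketch *p, uint64_t h)
{
  int64_t off = p->offset_known_p ? p->offset : 0;
  int64_t sz = p->size_known_p ? p->size : 0;
  h = mix_bytes (&off, sizeof off, h);
  h = mix_bytes (&sz, sizeof sz, h);
  h = mix_bytes (&p->offset_known_p, sizeof p->offset_known_p, h);
  h = mix_bytes (&p->size_known_p, sizeof p->size_known_p, h);
  return h;
}
```

Hashing a struct copy as raw bytes would pull whatever garbage sits in the padding into the result, so two logically equal mem_attrs could hash differently; enumerating the fields explicitly (or starting the byte hash at the field after expr, as suggested above) avoids that.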

        Jakub
