[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

Richard Biener changed:

           What    |Removed   |Added
   ----------------------------------
        Resolution |---       |FIXED
            Status |ASSIGNED  |RESOLVED

--- Comment #19 from Richard Biener ---
Fixed.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #18 from CVS Commits ---
The master branch has been updated by Richard Biener:

https://gcc.gnu.org/g:6180f5c8d6d1dc7b6634c41a46f0f8f5ca2e5b9d

commit r12-5499-g6180f5c8d6d1dc7b6634c41a46f0f8f5ca2e5b9d
Author: Richard Biener
Date:   Wed Nov 24 09:08:44 2021 +0100

    tree-optimization/103168 - Improve VN of pure function calls

    This improves value-numbering of calls that read memory: calls to
    const functions with aggregate arguments and calls to pure functions,
    where the latter include const functions we demoted to pure for fear
    of interposing with a less optimized version.  Note that for pure
    functions we do not handle functions that access global memory.

    2021-11-24  Richard Biener
                Jan Hubicka

            PR tree-optimization/103168
            * ipa-modref.h (struct modref_summary): Add load_accesses.
            * ipa-modref.c (modref_summary::finalize): Initialize
            load_accesses.
            * tree-ssa-sccvn.c (visit_reference_op_call): Use modref
            info to walk the virtual use->def chain to CSE const/pure
            function calls possibly reading from memory.
            * g++.dg/tree-ssa/pr103168.C: New testcase.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #17 from Jan Hubicka ---
Created attachment 51853
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=51853&action=edit
Updated patch

Patch attached.  Indeed the oracle ICEs on ref=base=NULL.  I also checked
that during a cc1plus build we eliminate 187 calls.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #16 from Richard Biener ---
(In reply to Jan Hubicka from comment #13)
> Concerning comment #10, the problem was that the loop walking all accesses
> was missing the loads->every_base check.  This is used to represent that we
> track no useful info about loads performed at all.
>
> Anyway, if I read the code correctly, it does nothing useful if the access
> tree contains any access for which we cannot construct a ref, and thus one
> can simply check global_memory_access and not care about
> every_base/every_ref/every_access since these must all be false.
>
> I simplified the walk a bit and added code pre-computing the number of
> accesses in the tree into the summary.
>
> What we can also do, when hitting an access for which we cannot construct a
> ref, or when hitting every_ref/every_access, is to construct a ref with
> base_alias_set/ref_alias_set as given by the access tree but with base=NULL,
> offset=0 and size=max_size=-1.  This should still let the basic TBAA oracle
> disambiguate.

I don't think we support base = NULL in the oracle (ref = NULL is supported
though).

Can you attach your patch instead of a cut&paste so I can take it from there?
Thanks.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168 --- Comment #15 from hubicka at kam dot mff.cuni.cz --- The patch passed testing on x86_64-linux.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #14 from hubicka at kam dot mff.cuni.cz ---
This is a bit modified patch I am testing.  I added pre-computation of the
number of accesses, enabled the path for const functions (in case they have
a memory operand), initialized alias sets and clarified the logic around
every_* and global_memory_access.

        PR tree-optimization/103168
        * ipa-modref.h (struct modref_summary): Add load_accesses.
        * ipa-modref.c (modref_summary::finalize): Initialize load_accesses.
        * tree-ssa-sccvn.c (visit_reference_op_call): Use modref info
        to walk the virtual use->def chain to CSE pure function calls.
        * g++.dg/tree-ssa/pr103168.C: New testcase.

diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
index 4f9323165ea..595eb6e0d8f 100644
--- a/gcc/ipa-modref.c
+++ b/gcc/ipa-modref.c
@@ -725,6 +727,23 @@ modref_summary::finalize (tree fun)
	      break;
	    }
	}
+  if (loads->every_base)
+    load_accesses = 1;
+  else
+    {
+      load_accesses = 0;
+      for (auto base_node : loads->bases)
+	{
+	  if (base_node->every_ref)
+	    load_accesses++;
+	  else
+	    for (auto ref_node : base_node->refs)
+	      if (ref_node->every_access)
+		load_accesses++;
+	      else
+		load_accesses += ref_node->accesses->length ();
+	}
+    }
 }

 /* Get function summary for FUNC if it exists, return NULL otherwise.  */
diff --git a/gcc/ipa-modref.h b/gcc/ipa-modref.h
index f868eb6de07..a7937d74945 100644
--- a/gcc/ipa-modref.h
+++ b/gcc/ipa-modref.h
@@ -53,6 +53,8 @@ struct GTY(()) modref_summary

   /* Flags computed by finalize method.  */

+  /* Total number of accesses in loads tree.  */
+  unsigned int load_accesses;
   /* global_memory_read is not set for functions calling functions
      with !binds_to_current_def which, after interposition, may read global
      memory but do nothing useful with it (except for crashing if some
diff --git a/gcc/testsuite/g++.dg/tree-ssa/pr103168.C b/gcc/testsuite/g++.dg/tree-ssa/pr103168.C
new file mode 100644
index 000..82924a3e3ce
--- /dev/null
+++ b/gcc/testsuite/g++.dg/tree-ssa/pr103168.C
@@ -0,0 +1,24 @@
+// { dg-do compile }
+// { dg-options "-O2 -fdump-tree-fre1-details" }
+
+struct a
+{
+  int a;
+  static __attribute__ ((noinline))
+  int ret (int v) {return v;}
+
+  __attribute__ ((noinline))
+  int inca () {return a++;}
+};
+
+int
+test()
+{
+  struct a av;
+  av.a=1;
+  int val = av.ret (0) + av.inca();
+  av.a=2;
+  return val + av.ret(0) + av.inca();
+}
+
+/* { dg-final { scan-tree-dump-times "Replaced a::ret" 1 "fre1" } } */
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 149674e6a16..719f5184654 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -71,6 +71,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-loop-niter.h"
 #include "builtins.h"
 #include "fold-const-call.h"
+#include "ipa-modref-tree.h"
+#include "ipa-modref.h"
 #include "tree-ssa-sccvn.h"

 /* This algorithm is based on the SCC algorithm presented by Keith
@@ -5084,12 +5086,118 @@ visit_reference_op_call (tree lhs, gcall *stmt)
   struct vn_reference_s vr1;
   vn_reference_t vnresult = NULL;
   tree vdef = gimple_vdef (stmt);
+  modref_summary *summary;

   /* Non-ssa lhs is handled in copy_reference_ops_from_call.  */
   if (lhs && TREE_CODE (lhs) != SSA_NAME)
     lhs = NULL_TREE;

   vn_reference_lookup_call (stmt, &vnresult, &vr1);
+
+  /* If the lookup did not succeed for pure functions try to use
+     modref info to find a candidate to CSE to.  */
+  const int accesses_limit = 8;
+  if (!vnresult
+      && !vdef
+      && lhs
+      && gimple_vuse (stmt)
+      && (((summary = get_modref_function_summary (stmt, NULL))
+	   && !summary->global_memory_read
+	   && summary->load_accesses < accesses_limit)
+	  || gimple_call_flags (stmt) & ECF_CONST))
+    {
+      /* First search if we can do something useful and build a
+	 vector of all loads we have to check.  */
+      bool unknown_memory_access = false;
+      auto_vec<ao_ref> accesses;
+
+      if (summary)
+	{
+	  for (auto base_node : summary->loads->bases)
+	    if (unknown_memory_access)
+	      break;
+	    else for (auto ref_node : base_node->refs)
+	      if (unknown_memory_access)
+		break;
+	      else for (auto access_node : ref_node->accesses)
+		{
+		  accesses.quick_grow (accesses.length () + 1);
+		  if (!access_node.get_ao_ref (stmt, &accesses.last ()))
+		    {
+		      /* We could use get_call_arg (...) and initialize
+			 a ref based on the argument and unknown offset in
+			 some cases, but we have to get an ao_ref to
+			 disambiguate against other stmts.  */
+		      unknown_memory_access = true;
+		      break;
+		    }
+		  else
+		    {
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #13 from Jan Hubicka ---
Concerning comment #10, the problem was that the loop walking all accesses
was missing the loads->every_base check.  This is used to represent that we
track no useful info about loads performed at all.

Anyway, if I read the code correctly, it does nothing useful if the access
tree contains any access for which we cannot construct a ref, and thus one
can simply check global_memory_access and not care about
every_base/every_ref/every_access since these must all be false.

I simplified the walk a bit and added code pre-computing the number of
accesses in the tree into the summary.

What we can also do, when hitting an access for which we cannot construct a
ref, or when hitting every_ref/every_access, is to construct a ref with
base_alias_set/ref_alias_set as given by the access tree but with base=NULL,
offset=0 and size=max_size=-1.  This should still let the basic TBAA oracle
disambiguate.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #12 from hubicka at kam dot mff.cuni.cz ---
> unsigned p;
> unsigned __attribute__((noinline)) test (void)
> {
>   return p;
> }
>
> modref analyzing 'test' (ipa=0) (pure)
>  - Analyzing load: p
>   - Recording base_set=0 ref_set=0
>  - modref done with result: tracked.

Modref does not record accesses to global vars as known bases, just as
accesses relative to MODREF_UNKNOWN_PARM (since ipa-reference does this).
So the only useful info it collects from loads/stores is the alias set, and
here it is 0 (why?).  Then it drops the summary as useless, since the
function was detected as pure earlier and that holds the same info.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #11 from Richard Biener ---
And interestingly for

unsigned p;
unsigned __attribute__((noinline)) test (void)
{
  return p;
}

I do not get any modref summary!?

;; Function test (test, funcdef_no=0, decl_uid=1979, cgraph_uid=1,
symbol_order=1)

modref analyzing 'test' (ipa=0) (pure)
 - Analyzing load: p
  - Recording base_set=0 ref_set=0
 - modref done with result: tracked.

__attribute__((noinline))
unsigned int test ()
{
  unsigned int _2;

  <bb 2> [local count: 1073741824]:
  _2 = p;
  return _2;
}
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #10 from Richard Biener ---
Created attachment 51847
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=51847&action=edit
patch

Looks like I have to exclude summary->global_memory_read since otherwise a
trivial testcase like the following is miscompiled (the access tree does not
have the access for 'p').

unsigned p;
unsigned __attribute__((noinline)) test (int)
{
  return p;
}

int main()
{
  p = 1;
  if (test (0) != 1)
    __builtin_abort ();
  p = 2;
  if (test (0) != 2)
    __builtin_abort ();
  return 0;
}
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #9 from hubicka at kam dot mff.cuni.cz ---
> so indeed that's an issue.  So it's a bug fixed, not an optimization
> regression.

I know, but the bug was fixed in an unnecessarily generous way, preventing a
lot of valid transforms (essentially disabling CSE just because DSE is not
safe).
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

Richard Biener changed:

           What    |Removed                        |Added
   -------------------------------------------------------
            Status |NEW                            |ASSIGNED
          Assignee |unassigned at gcc dot gnu.org  |rguenth at gcc dot gnu.org
           Summary |[9/10/11/12 Regression]        |Value numbering for PRE of
                   |Value numbering for PRE of     |pure functions can be
                   |pure functions can be          |improved
                   |improved                       |

--- Comment #8 from Richard Biener ---
(In reply to Jan Hubicka from comment #7)
> The testcase in comment #3 is optimized by GCC 6 but not by GCC 7 up.

That's because GCC 6 detected a::ret () as const while with GCC 7 and up
it's only pure.

;; Function a::ret (_ZN1a3retEi, funcdef_no=0, decl_uid=2371, cgraph_uid=1,
symbol_order=0)

local analysis of static int a::ret(int)/0
  scanning: return v_1(D);
checking previously known:
static int a::ret(int)/0 is not a malloc candidate, reason: Return value is
not SSA_NAME or not a pointer type.
Function is locally const.
Function found to be const: static int a::ret(int)/0
Dropping state to PURE because function does not bind to current def.
Declaration updated to be const: static int a::ret(int)/0

__attribute__((noinline))
int a::ret (int v)
{
  <bb 2> [local count: 1073741824]:
  return v_1(D);
}

The comment in the code says

  /* Consider function:
     bool a(int *p)
     {
       return *p==*p;
     }
     During early optimization we will turn this into:
     bool a(int *p)
     {
       return true;
     }
     Now if this function will be detected as CONST however when interposed
     it may end up being just pure.  We always must assume the worst
     scenario here.  */

so indeed that's an issue.  So it's a bug fixed, not an optimization
regression.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #6 from hubicka at kam dot mff.cuni.cz ---
>       /* Do the walking.  */
>       if (!unknown_memory_access)
>         for (auto base_node : summary->loads->bases)
>           for (auto ref_node : base_node->refs)
>             if (ref_node->all_refs)
>               unknown_memory_access = true;
>             else
>               for (auto access_node : ref_node->accesses)
>                 if (access_node.get_ao_ref (stmt, &ref))
>                   {
>                     ref.base_alias_set = base_node->base;
>                     ref.ref_alias_set = ref_node->base;
                                                    ^^^ ref
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #5 from hubicka at kam dot mff.cuni.cz ---
> --- Comment #4 from Richard Biener ---
> (In reply to Jan Hubicka from comment #3)
> > This is a simple (and fairly common) case we don't optimize:
> >
> > struct a {
> >   int a;
> >   static __attribute__ ((noinline))
> >   int ret (int v) {return v;}
> >
> >   __attribute__ ((noinline))
> >   int inca () {return a++;}
> > };
> > int
> > test()
> > {
> >   struct a av;
> >   av.a=1;
> >   int val = av.ret (0) + av.inca();
> >   av.a=2;
> >   return val + av.ret(0) + av.inca();
> > }
> >
> > ret is const, however because it is in a COMDAT group we only detect it
> > as pure, which makes GVN give up on proving that its value did not change
> > over the av.a=2 store.  We could easily work this out from the modref
> > summary (which says that the function makes no memory load).
>
> This case is a bit different since it just exposes that we do not perform
> any sort of alias walking for calls in VN.  In fact even with modref we'd
> need to perform multiple walks with all stored "pieces" ao_refs.  At least
> that should be doable though.

Yep, it was my original intention to point this out :)

> If you can provide a cut place to walk & create those ao_refs I could
> see to cook up the relevant VN bits.  But for next stage1 of course.

The following should work (pretty much the same loop is in dse_optimize_call,
but for stores instead of loads):

  bool unknown_memory_access = false;
  if ((summary = get_modref_function_summary (stmt, NULL)))
    {
      /* First search if we can do something useful.
	 Like for dse it is easy to precompute in the summary now
	 and will be happy to implement that.  */
      for (auto base_node : summary->loads->bases)
	if (base_node->all_bases || unknown_memory_access)
	  {
	    unknown_memory_access = true;
	    break;
	  }
	else
	  for (auto ref_node : base_node->refs)
	    if (ref_node->all_refs)
	      {
		unknown_memory_access = true;
		break;
	      }

      /* Do the walking.  */
      if (!unknown_memory_access)
	for (auto base_node : summary->loads->bases)
	  for (auto ref_node : base_node->refs)
	    if (ref_node->all_refs)
	      unknown_memory_access = true;
	    else
	      for (auto access_node : ref_node->accesses)
		if (access_node.get_ao_ref (stmt, &ref))
		  {
		    ref.base_alias_set = base_node->base;
		    ref.ref_alias_set = ref_node->base;
		    ... do ref walking
		  }
		else if (get_call_arg (stmt))
		  ... we do not know the offset but can still walk using
		      ptr_deref_may_alias_ref_p_1
		else
		  unknown_memory_access = true;
		  ... unlikely to happen (for example when the number of
		      args in the call stmt mismatches the actual function
		      body)
    }
  if (unknown_memory_access)
    ... even here walking makes sense to skip stores that pass
	!ref_maybe_used_by_call_p
  ... do walk for function arguments passed by value
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #4 from Richard Biener ---
(In reply to Jan Hubicka from comment #3)
> This is a simple (and fairly common) case we don't optimize:
>
> struct a {
>   int a;
>   static __attribute__ ((noinline))
>   int ret (int v) {return v;}
>
>   __attribute__ ((noinline))
>   int inca () {return a++;}
> };
> int
> test()
> {
>   struct a av;
>   av.a=1;
>   int val = av.ret (0) + av.inca();
>   av.a=2;
>   return val + av.ret(0) + av.inca();
> }
>
> ret is const, however because it is in a COMDAT group we only detect it as
> pure, which makes GVN give up on proving that its value did not change over
> the av.a=2 store.  We could easily work this out from the modref summary
> (which says that the function makes no memory load).

This case is a bit different since it just exposes that we do not perform any
sort of alias walking for calls in VN.  In fact even with modref we'd need to
perform multiple walks with all stored "pieces" ao_refs.  At least that
should be doable though.

If you can provide a cut place to walk & create those ao_refs I could see to
cook up the relevant VN bits.  But for next stage1 of course.
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #3 from Jan Hubicka ---
This is a simple (and fairly common) case we don't optimize:

struct a {
  int a;
  static __attribute__ ((noinline))
  int ret (int v) {return v;}

  __attribute__ ((noinline))
  int inca () {return a++;}
};
int
test()
{
  struct a av;
  av.a=1;
  int val = av.ret (0) + av.inca();
  av.a=2;
  return val + av.ret(0) + av.inca();
}

ret is const, however because it is in a COMDAT group we only detect it as
pure, which makes GVN give up on proving that its value did not change over
the av.a=2 store.  We could easily work this out from the modref summary
(which says that the function makes no memory load).
[Bug tree-optimization/103168] Value numbering for PRE of pure functions can be improved
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

Richard Biener changed:

           What    |Removed                    |Added
   ---------------------------------------------------
  Last reconfirmed |                           |2021-11-11
           Summary |Value numbering of pure    |Value numbering for PRE of
                   |functions can be improved  |pure functions can be
                   |                           |improved
    Ever confirmed |0                          |1
            Status |UNCONFIRMED                |NEW

--- Comment #2 from Richard Biener ---
Loop invariant motion can only move const calls right now.  Not sure what
value-numbering has to do with this?  There is probably a duplicate
somewhere.

The odd thing is that PRE doesn't handle it - it fails to discover that on
the backedge the value is the same.  Ah, that's because
ao_ref_init_from_vn_reference does not handle calls - hmm, yeah, we can't
represent calls in an ao_ref, at least not easily.  I suppose for a 'pure'
call an ao_ref could be a dereference of a pointer to global memory, but
we'd have to build a fake SSA name for the pointer and fake points-to info
for this.

So let's have this PR track the lack of PRE of pure calls.