> Mark Mitchell wrote:
> >
> >  struct A {...};
> >  struct B { ...; struct A a; ...; };
> >
> >
> >  void f() {
> >    B b;
> >    g(&b.a);
> >  }
> >
> >
> > >    does the compiler have to assume that "g" may access the parts of
> > >    "b" outside of "a"?  If the compiler can see the body of "g" then
> >    it may be able to figure out that it can't access any other
> >    parts, or figure out which parts it can access, and in that case
> >    it can of course use that information. The interesting case,
> >    therefore, is when the body of "g" is not available, or is
> >    insufficient to make a conclusive determination.
> >
> 
> I attended a UK C++ panel meeting yesterday, and took the opportunity
> to solicit opinions on this.  The question I posed was
>       struct A {
>               ...
>               T1 a;
>               T2 b;
>       };
>       void g(T1 &a);
>       void Foo () {
>          A v;
>          v.b = 2;
>          g (v.a);
>          if (v.b == 2) ...
>         }
> Does the compiler have a right to presume v.b does indeed == 2 in the if
> condition? -- assuming T2 is a suitable type for that :)
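To make that concrete, taking T1 and T2 to be int and assuming g's body
is not visible, a "yes" answer would let the compiler treat Foo as if it
had been written like this (taken() is just a placeholder for the body
of the if):

    struct A { int a; int b; };

    void g(int &a);          /* body not visible                          */
    void taken();            /* placeholder for the body of the "if"      */

    void Foo() {
        A v;
        v.b = 2;
        g(v.a);              /* only v.a is exposed to g                  */
        taken();             /* a "yes" answer means v.b == 2 still holds,
                                so the load and compare of v.b disappear   */
    }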
> 
> 
> After I explained the optimization (and the related structure splitting
> optimization), the general consensus was 'yes that would be a useful
> optimization'.  But no one was sufficiently confident of proving it
> was allowable.  The opinion was expressed that if it was not allowable,
> there was a bug in the std.
> 
> 
> The observation was made that if A is non-POD, one cannot play offsetof
> tricks to get from A::a to A::b, so the optimization is safe on non-PODs.
> (Of course one would have to prove the address of 'v' did not escape,
> so I guess the ctor and dtor would need to be trivial or visible.)
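For reference, the offsetof trick in question looks something like the
following (again taking T1 and T2 to be int); whether this is actually
sanctioned for PODs is exactly what is in dispute:

    #include <cstddef>

    struct A { int a; int b; };  /* a POD */

    void g(int &a)
    {
        /* recover a pointer to the enclosing A from the address of its
           member, then scribble on the neighbouring member */
        A *p = reinterpret_cast<A *>(
                   reinterpret_cast<char *>(&a) - offsetof(A, a));
        p->b = 42;               /* clobbers v.b behind the caller's back */
    }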
> 
> 
> One of the panel members is looking at C++'s memory model WRT
> multithreading.  This question has a direct impact there too, as
> separate threads might be operating on v.a and v.b.  He indicated
> he would consider the issue.
> 
> 
> I also outlined the approach gcc would take with a compile time
> switch and an attribute.  The preference was expressed that
> the attribute should be of the form
>       void badfunc (A & __I_AM_BAD__ m);
> rather than
>       void goodfunc (A & __I_AM_GOOD__ m);
> because (a) badfuncs are more than likely rare and (b) it would be a useful
> aid to the programmer.[1]  Mark outlines an __I_AM_GOOD__ attribute; I think
> it would be better to have both flavours and then the compiler switch can
> specify which way the default goes.
> 

I would like to point out some problems with this approach.  Consider
the case where you have three modules, A, B and C, each with a single
function (a, b and c respectively), where a calls b and b calls c.  Also
assume that c has the __I_AM_BAD__ attribute.

What is known when compiling the A module?  Function a does not know
that b calls c.  Are you going to require that b's prototype also have
the __I_AM_BAD__ attribute because it calls c?  Where does this stop?
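To spell the scenario out (placeholder names throughout, and with
__I_AM_BAD__ standing for whatever spelling the attribute ends up with,
so this is pseudo-code as far as a real compiler is concerned):

    /* module C (c.h / c.c): the function that plays address tricks      */
    void c(int *p) __I_AM_BAD__;

    /* module B (b.c): sees c's declaration, so b knows it calls a bad
       function, but b's own prototype carries no attribute              */
    void b(int *p) { c(p); }

    /* module A (a.c): sees only "void b(int *p);"                       */
    void a() {
        struct S { int x; int y; } s;
        s.y = 2;
        b(&s.x);
        /* nothing visible while compiling A says that b, via c, may
           reach s.y -- so must b's prototype be marked __I_AM_BAD__?    */
    }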

I believe that the only way to make this work is to associate the
attribute with the data type.  The attribute would make the type
distinct from the otherwise-identical type that lacks it, i.e. you
cannot just assign a pointer to a bad int to a pointer to an int.
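A sketch of what I have in mind, with an invented spelling for the
type-level attribute (none of this is existing syntax):

    struct A { int a; int b; };

    typedef int __I_AM_BAD__ bad_int;  /* invented: a distinct "bad" int  */

    void good_g(int *p);               /* may not reach beyond *p         */
    void bad_g(bad_int *p);            /* may compute *p's neighbours     */

    void Foo() {
        A v;
        v.b = 2;
        good_g(&v.a);                  /* compiler may assume v.b == 2    */
        bad_g((bad_int *)&v.a);        /* explicit conversion required:
                                          int* and bad_int* are distinct
                                          types and do not mix silently   */
        /* after the bad_g call the compiler must reload v.b              */
    }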

This will force the programmer to have separate versions of functions
that take the bad pointer and the good pointer, but it lets the
compiler compile the good functions in a manner that rewards the
good.  It is also easy to track through the maze of separate compilation.

Kenny

Disclaimer:  I have never written a single line of C++ in my life nor
have I ever read any C++ books or specifications.   I am firmly rooted
in the ML, Modula-3 and Java school of strongly typed, well defined
languages.

> 
> nathan
> 
> [1] it was of course noted that that looked stunningly like 'restrict', and
> the suggestion that it be spelled 'noalias' was jokingly made :)
> 
> 
> --
> Nathan Sidwell    ::   http://www.codesourcery.com   ::     CodeSourcery LLC
> [EMAIL PROTECTED]    ::
> http://www.planetfall.pwp.blueyonder.co.uk
