Re: Official GCC git repository

2008-04-14 Thread Bernie Innocenti

Daniel Berlin wrote:

I put my version of the gcc conversion (which has all branches but no
tags) at git://gcc.gnu.org/git/gcc.git and set a script up to update
it appropriately.

Note that I will not announce this anywhere until someone steps
forward to actually maintain it, because I do not know git.  Neither of
the people who volunteered has done it, even after repeated prodding
:(
(I don't mean to shame them; I am simply pointing out that it appears
we need new volunteers.)


Yes, unfortunately I don't seem to find much time to dedicate
to this lately.  Sorry, I'm overwhelmed by higher priority
things at this time :-(

--
  \___/
  |___|Bernie Innocenti - http://www.codewiz.org/
   \___\   CTO OLPC Europe  - http://www.laptop.org/


Re: A query regarding the implementation of pragmas

2008-04-14 Thread Mohamed Shafi
On Mon, Apr 14, 2008 at 11:44 PM, Jim Wilson <[EMAIL PROTECTED]> wrote:
>
> Mohamed Shafi wrote:
>
> > For a function call will i be able to implement long call/short call
> > for the same function at different locations?
> > Say fun1 calls bar and fun2 calls bar. I want short-call to be
> > generated for bar in fun1 and long-call to be generated in fun2.
> > > Is it possible to implement this in the back-end using pragmas?
> >
>
>  A simple grep command shows that both arm and rs6000 already support
> long call pragmas.
>

I did see those, but I couldn't determine whether it is possible to
change the attribute of the same function at different points of
compilation.

Regards,
Shafi


Re: Moving statements from one BB to other BB.

2008-04-14 Thread Sandeep Maram
On Tue, Apr 15, 2008 at 10:34 AM, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> To clarify what Richard means, your assertion that "you have updated
>  SSA information" is false.
>  If you had updated the SSA information, the error would not occur :).
>
>  How exactly are you updating the ssa information?

I am calling update_ssa (TODO_update_ssa), after all the statements
are transferred.

>
>  The general way to update SSA for this case would be:
>
>  For each statement you have moved:
>   Call update_stmt (t);
>
>  Then call update_ssa (TODO_update_ssa) (or instead use
>  rewrite_into_loop_closed_ssa if this is a loop pass).
>
>  If you do not call update_stmt in this case, update_ssa won't actually
>  do anything.
>
>  Diego, the bsi iterators do not update the statements for you though
>  it is not clear if this is a bug or not.
>
>  The bsi iterators call update_modified_stmts, which says:
>
>  /* Mark statement T as modified, and update it.  */
>  static inline void
>  update_modified_stmts (tree t)
>
>  However, this only calls update_stmt_if_modified (IE it does not mark
>  the statement as modified and update it, as it claims to).
>
>  Sandeep, it should also suffice to call mark_stmt_modified *before*
>  moving the statements (since the above routine should then update
>  them).
>

Thanks. I will use update_stmt and update_ssa now.

Best Regards,
Sandeep.

>  On Mon, Apr 14, 2008 at 7:10 AM, Richard Guenther
>
> <[EMAIL PROTECTED]> wrote:
>
>
> > On Mon, Apr 14, 2008 at 12:54 PM, Sandeep Maram <[EMAIL PROTECTED]> wrote:
>  >  > Hi,
>  >  >
>  >  >  I have transferred all the statements of one BB (the header of one loop)
>  >  >  to another BB. After that I have updated SSA information too.
>  >  >  But I get this error-
>  >  >
>  >  >   definition in block 6 does not dominate use in block 3
>  >  >  for SSA_NAME: i_25 in statement:
>  >
>  >  This is the problem.
>  >
>  >
>  >
>  >  >  # VUSE 
>  >  >  D.1189_10 = a[i_25];
>  >  >  loop.c:8: internal compiler error: verify_ssa failed
>  >  >
>  >  >  Can any one please tell me what is the problem?
>  >  >
>  >  >  Thanks,
>  >  >  Sandeep.
>  >  >
>  >
>


Re: Official GCC git repository

2008-04-14 Thread Daniel Berlin
I put my version of the gcc conversion (which has all branches but no
tags) at git://gcc.gnu.org/git/gcc.git and set a script up to update
it appropriately.

Note that I will not announce this anywhere until someone steps
forward to actually maintain it, because I do not know git.  Neither of
the people who volunteered has done it, even after repeated prodding
:(
(I don't mean to shame them; I am simply pointing out that it appears
we need new volunteers.)

On Mon, Apr 14, 2008 at 10:49 PM, Kirill A. Shutemov
<[EMAIL PROTECTED]> wrote:
>
> On Wed, Mar 26, 2008 at 02:38:53PM -0400, Daniel Berlin wrote:
>  > On Wed, Mar 26, 2008 at 12:30 PM, Frank Ch. Eigler <[EMAIL PROTECTED]> 
> wrote:
>  > > Hi -
>  > >
>  > >  On Fri, Mar 14, 2008 at 10:41:42AM -0400, Frank Ch. Eigler wrote:
>  > >
>  > >  > [...]
>  > >
>  > > > OK, /git/gcc.git appears ready for you to populate & maintain.  Access
>  > >  > as {http,git,ssh}://gcc.gnu.org/gcc.git should all work.
>  > >
>  > >  Just a reminder - an empty git repository has been ready for you for 
> some time.
>  > >
>  >
>  > Guys, if you want, i can populate it with my git-svn clone, which has
>  > a svn root id of ssh://gcc.gnu.org//svn/gcc (IE the proper root if you
>  > wanted to be able to dcommit), and has all branches (but no tags).
>
>  Any progress?
>
>  --
>  Regards,  Kirill A. Shutemov
>   + Belarus, Minsk
>   + ALT Linux Team, http://www.altlinux.com/
>
>
>


Re: Moving statements from one BB to other BB.

2008-04-14 Thread Daniel Berlin
To clarify what Richard means, your assertion that "you have updated
SSA information" is false.
If you had updated the SSA information, the error would not occur :).

How exactly are you updating the ssa information?

The general way to update SSA for this case would be:

For each statement you have moved:
  Call update_stmt (t);

Then call update_ssa (TODO_update_ssa) (or instead use
rewrite_into_loop_closed_ssa if this is a loop pass).

If you do not call update_stmt in this case, update_ssa won't actually
do anything.

Diego, the bsi iterators do not update the statements for you though
it is not clear if this is a bug or not.

The bsi iterators call update_modified_stmts, which says:

/* Mark statement T as modified, and update it.  */
static inline void
update_modified_stmts (tree t)

However, this only calls update_stmt_if_modified (IE it does not mark
the statement as modified and update it, as it claims to).

Sandeep, it should also suffice to call mark_stmt_modified *before*
moving the statements (since the above routine should then update
them).
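
The update recipe above can be summarized in a pseudocode sketch (the
function names follow the GCC 4.3-era tree-SSA internals discussed in
this thread; treat this as a summary, not a drop-in pass):

```
/* After moving statements from one basic block to another: */
for each moved statement t:
    update_stmt (t);            /* recompute operands, mark t modified */

/* Then rebuild the SSA web in one pass: */
update_ssa (TODO_update_ssa);   /* or rewrite_into_loop_closed_ssa ()
                                   if this is a loop pass */
```

Without the per-statement update_stmt calls, update_ssa sees nothing
marked as modified and does no work, which is exactly the failure mode
Sandeep hit.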

On Mon, Apr 14, 2008 at 7:10 AM, Richard Guenther
<[EMAIL PROTECTED]> wrote:
> On Mon, Apr 14, 2008 at 12:54 PM, Sandeep Maram <[EMAIL PROTECTED]> wrote:
>  > Hi,
>  >
>  >  I have transferred all the statements of one BB (the header of one loop)
>  >  to another BB. After that I have updated SSA information too.
>  >  But I get this error-
>  >
>  >   definition in block 6 does not dominate use in block 3
>  >  for SSA_NAME: i_25 in statement:
>
>  This is the problem.
>
>
>
>  >  # VUSE 
>  >  D.1189_10 = a[i_25];
>  >  loop.c:8: internal compiler error: verify_ssa failed
>  >
>  >  Can any one please tell me what is the problem?
>  >
>  >  Thanks,
>  >  Sandeep.
>  >
>


Re: IA-64 ICE on integer divide due to trap_if and cfgrtl

2008-04-14 Thread Ian Lance Taylor
Jim Wilson <[EMAIL PROTECTED]> writes:

> It seems odd that cfgrtl allows a conditional trap inside a basic block,
> but not an unconditional trap.  The way things are now, it means we need
> to fix up the basic blocks after running combine or any other pass that
> might be able to simplify a conditional trap into an unconditional trap.
>
> I can work around this in the IA64 port.  For instance I could use
> different patterns for conditional and unconditional traps so that one
> can't be converted to the other.  Or I could try to hide the conditional
> trap inside some pattern that doesn't get expanded until after reload.
> None of these solutions seems quite right.
>
> But changing the basic block tree during/after combine doesn't seem
> quite right either.
>
> The other solution would be to fix cfgbuild to treat all trap
> instructions as control flow insns, instead of just the unconditional
> ones.  I'm not sure why it was written this way though, so I don't know
> if this will cause other problems.  I see that sibling and noreturn
> calls are handled the same way as trap instructions, implying that they
> are broken too.

I think the current cfgbuild behaviour is just to avoid putting a
barrier in the middle of a basic block.  A conditional trap
instruction is not necessarily a control flow instruction for the
compiler--it's similar to a function call which may or may not return.
An unconditional trap is similar to a function call which is known to
not return.

I guess the difference is that combine can't turn a function call
which may or may not return into a function call which will not
return.

Since traps are uncommon, and since you can't do a lot of optimization
around them anyhow, it's probably OK to make them always return true
from control_flow_insn_p.  At least it's worth trying to see if
anything breaks.

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Ian Lance Taylor
Robert Dewar <[EMAIL PROTECTED]> writes:

> An optimization is dubious to me if
>
> a) it produces surprising changes in behavior (note the importance of
> the word surprising here)
>
> b) it does not provide significant performance gains (note the
> importance of the word significant here).
>
> I find this optimization qualifies as meeting both criteria a) and b),
> so that's why I consider it dubious.

To me, this particular optimization does not meet criterion a).  I never
write code that constructs a pointer which does not point to any
object, because I know that is invalid.  I think it is rather weird
that anybody would write code like this.  If I want to check whether
an index is out of bounds, I compare the index and the length of the
buffer.  I don't use a pointer comparison to check for an out of
bounds index.

I don't know about criterion b), I haven't measured.  It's easy to come
up with test cases where it gives a performance gain--many loops with
a pointer induction variable will benefit--but I don't know how
significant they are.

Ian


Can't build correctly working crosscompiler with 4.3.0. 4.2.1 worked like charm

2008-04-14 Thread Denys Vlasenko
Ok, I give up.

I killed many hours trying to build a cross-compiling
x86_64-linux-uclibc-gcc, version 4.3.0.

After many "WTF?" moments I decided to go back and try
to build a cross-compiler which I already have,
just older version: I decided to build i486 one,
not x86_64.

Because I already have i486-linux-uclibc-gcc version 4.2.1
and it works.

I unpacked and built i486-linux-uclibc-gcc version 4.3.0
with absolutely the same configure and make command lines,
and it does not work. Specifically, it seems to mess up
paths:

i486-linux-uclibc-gcc: error trying to exec 'cc1': execvp: No such file or directory

stracing gcc invocation reveals typical 
"..bin/../lib/something/../../../../../something-else/.."
stuff, but this time, it definitely fails to pick up cc1.

(See below for strace comparison between 4.2.1 and 4.3.0)

Yeah, yeah, I saw it with x86_64-linux-uclibc-gcc too
and was able to overcome it with even more hacks and
options in configure, but it won't stop there!
It will use wrong as and/or ld; then it will also try
to link wrong crtX.o files which do not exist!

So I won't try to fix that now - instead, let's concentrate
on "how in hell was it working before?"

Well, hrm, one last sanity check:

I remove i486-linux-uclibc-gcc version 4.2.1, unpack
fresh 4.2.1, build it with the very same configure and
make options and IT WORKS!

So, something has definitely changed incompatibly between 4.2.1 and 4.3.0.

Help...


STATIC=/usr/app/gcc-4.3.0-i486-linux-uclibc

"configure" invocation:

../gcc-4.3.0/configure \
--prefix=$STATIC\
--exec-prefix=$STATIC   \
--bindir=/usr/bin   \
--sbindir=/usr/sbin \
--libexecdir=$STATIC/libexec\
--datadir=$STATIC/share \
--sysconfdir=/etc   \
--sharedstatedir=$STATIC/var/com\
--localstatedir=$STATIC/var \
--libdir=/usr/lib   \
--includedir=/usr/include   \
--infodir=/usr/info \
--mandir=/usr/man   \
\
--with-slibdir=$STATIC/lib  \
--disable-shared\
--with-local-prefix=/usr/local  \
--with-gxx-include-dir=$STATIC/include/g++-v3   \
--enable-languages="c,c++"  \
--disable-nls   \
\
--build=i386-pc-linux-gnu   \
--host=i386-pc-linux-gnu\
--target=i486-linux-uclibc  \
\
--disable-__cxa_atexit  \
--enable-target-optspace\
--with-gnu-ld   \
--disable-threads   \
--disable-multilib  \
--without-headers   \

"make" invocation:

make all-gcc

"make install" invocation:

make \
prefix=$STATIC  \
exec-prefix=$STATIC \
bindir=$STATIC/bin  \
sbindir=$STATIC/sbin\
libexecdir=$STATIC/libexec  \
datadir=$STATIC/share   \
sysconfdir=$STATIC/var/etc  \
sharedstatedir=$STATIC/var/com  \
localstatedir=$STATIC/var   \
libdir=$STATIC/lib  \
includedir=$STATIC/include  \
infodir=$STATIC/info\
mandir=$STATIC/man  \
\
install-gcc

gcc 4.3.0 fails to find cc1:

30816 stat64("/.share/usr/app/gcc-4.3.0-i486-linux-uclibc/bin/../app/gcc-4.3.0-i486-linux-uclibc/libexec/gcc/i486-linux-uclibc/4.3.0/cc1", 0xffde613c) = -1 ENOENT (No such file or directory)
30816 stat64("/.share/usr/app/gcc-4.3.0-i486-linux-uclibc/bin/../app/gcc-4.3.0-i486-linux-uclibc/libexec/gcc/cc1", 0xffde613c) = -1 ENOENT (No such file or directory)
30816 stat64("/.share/usr/app/gcc-4.3.0-i486-linux-uclibc/bin/../lib/gcc/i486-linux-uclibc/4.3.0/../../../../../i486-linux-uclibc/bin/i486-linux-uclibc/4.3.0/cc1", 0xffde613c) = -1 ENOENT (No such file or directory)
30816 stat64("/.share/usr/app/gcc-4.3.0-i486-linux-uclibc/bin/../lib/gcc/i486-linux-uclibc/4.3.0/../../../../../i486-linux-uclibc/bin/cc1", 0xffde613c) = -1 ENOENT (No such file or directory)
30816 vfork()   = 30817
30817 execve("/root/bin/cc1", ["cc1", "-quiet", "-Iinclude", "-Ilibbb", 
"-I/.1/usr/srcdevel/bbox/fix/busy"..., "-iprefix", 
"/.share/usr/app/gcc-4.3.0-i486-l"..., "-D_GNU_SOURCE", "-DNDEBUG", 
"-D_LARGEFILE_SOURCE", "-D_LARGEFILE64_SOURCE", "-D_FILE_OFFSET_BITS=64", 
"-DBB_VER=KBUILD_STR(1.11.0.svn)", "-DBB_BT=AUTOCONF_TIMESTAMP", 
"-DKBUILD_STR(s)=#s", "-DKBUILD_BASENAME=KBUILD_STR(app"..., ...], [/* 35 vars 
*/]) = -1 ENOENT (No such file or directory)
30817 execve("/bin/cc1", ["cc1", "-quiet", "-Iinclude", "-Ilibbb", 
"-I/.1/usr/srcdevel/bbox/fix/busy"..., "-iprefix", 
"/.share/usr/app/gcc-4.3.0-i486-l"..., "-D_GNU_SOURCE", "-DNDEBUG", 
"-D_LARGEFILE_SOURCE

Re: [ARM] Lack of __aeabi_fsqrt / __aeabi_dsqrt (sqrtsf2 / sqrtdf2) functions

2008-04-14 Thread Hasjim Williams

On Mon, 14 Apr 2008 23:09:00 -0400, "Daniel Jacobowitz" <[EMAIL PROTECTED]>
said:
> On Tue, Apr 15, 2008 at 12:58:40PM +1000, Hasjim Williams wrote:
> > Both FPA and VFP coprocessors implement sqrt opcodes:
> 
> So?  Glibc does not rely on that.  I've been building soft-float
> versions of glibc for non-Crunch targets for scarily close to a decade
> now, so this is clearly not the problem :-)  Please say what actual
> error you've encountered.

Of course you can build glibc and have it work correctly, as you are
configuring glibc --without-fp.  Try building glibc --with-fp and seeing
whether it works.

Suffice it to say, it will compile, but when you run it and your
program tries to make the libcall to the sqrt function, it will
segfault, because no sqrt libcall is defined.

As far as I can tell, sqrt and div are the only major opcodes that are
needed (for ieee754 glibc --with-fp) that aren't implemented in
MaverickCrunch.



Re: [ARM] Lack of __aeabi_fsqrt / __aeabi_dsqrt (sqrtsf2 / sqrtdf2) functions

2008-04-14 Thread Daniel Jacobowitz
On Tue, Apr 15, 2008 at 12:58:40PM +1000, Hasjim Williams wrote:
> Both FPA and VFP coprocessors implement sqrt opcodes:

So?  Glibc does not rely on that.  I've been building soft-float
versions of glibc for non-Crunch targets for scarily close to a decade
now, so this is clearly not the problem :-)  Please say what actual
error you've encountered.

-- 
Daniel Jacobowitz
CodeSourcery


Re: [ARM] Lack of __aeabi_fsqrt / __aeabi_dsqrt (sqrtsf2 / sqrtdf2) functions

2008-04-14 Thread Hasjim Williams
On Mon, 14 Apr 2008 22:41:36 -0400, "Daniel Jacobowitz" <[EMAIL PROTECTED]>
said:
> On Tue, Apr 15, 2008 at 12:33:38PM +1000, Hasjim Williams wrote:
> > Hello all,
> > 
> > I've been working on MaverickCrunch support in gcc, and could never get
> > a completely working glibc (with-fp), since there is no soft-float sqrt
> > libcall function.  This is a big problem for MaverickCrunch as there are
> > no hard div or sqrt opcodes.
> > 
> Can you be more specific about the actual problem?  I don't see why
> there needs to be an __aeabi_sqrt; sqrt is a function in libm.

Both FPA and VFP coprocessors implement sqrt opcodes:

arm.md:

(define_expand "sqrtsf2"
  [(set (match_operand:SF 0 "s_register_operand" "")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "")))]
  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
  "")

fpa.md:

(define_insn "*sqrtsf2_fpa"
  [(set (match_operand:SF 0 "s_register_operand" "=f")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "f")))]
  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
  "sqt%?s\\t%0, %1"
  [(set_attr "type" "float_em")
   (set_attr "predicable" "yes")]
)

vfp.md:


(define_insn "*sqrtsf2_vfp"
  [(set (match_operand:SF 0 "s_register_operand" "=t")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "t")))]
  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
  "fsqrts%?\\t%0, %1"
  [(set_attr "predicable" "yes")
   (set_attr "type" "fdivs")]
)

Now, when you build glibc configured "--with-fp", you won't use the
generic glibc/soft-fp functions, only those in gcc, i.e. the above. 
Only if you configure glibc "--without-fp" will it not use the above
opcodes, but the soft-fp sqrt etc.


Re: Official GCC git repository

2008-04-14 Thread Kirill A. Shutemov
On Wed, Mar 26, 2008 at 02:38:53PM -0400, Daniel Berlin wrote:
> On Wed, Mar 26, 2008 at 12:30 PM, Frank Ch. Eigler <[EMAIL PROTECTED]> wrote:
> > Hi -
> >
> >  On Fri, Mar 14, 2008 at 10:41:42AM -0400, Frank Ch. Eigler wrote:
> >
> >  > [...]
> >
> > > OK, /git/gcc.git appears ready for you to populate & maintain.  Access
> >  > as {http,git,ssh}://gcc.gnu.org/gcc.git should all work.
> >
> >  Just a reminder - an empty git repository has been ready for you for some 
> > time.
> >
> 
> Guys, if you want, i can populate it with my git-svn clone, which has
> a svn root id of ssh://gcc.gnu.org//svn/gcc (IE the proper root if you
> wanted to be able to dcommit), and has all branches (but no tags).

Any progress?

-- 
Regards,  Kirill A. Shutemov
 + Belarus, Minsk
 + ALT Linux Team, http://www.altlinux.com/




Re: [ARM] Lack of __aeabi_fsqrt / __aeabi_dsqrt (sqrtsf2 / sqrtdf2) functions

2008-04-14 Thread Daniel Jacobowitz
On Tue, Apr 15, 2008 at 12:33:38PM +1000, Hasjim Williams wrote:
> Hello all,
> 
> I've been working on MaverickCrunch support in gcc, and could never get
> a completely working glibc (with-fp), since there is no soft-float sqrt
> libcall function.  This is a big problem for MaverickCrunch as there are
> no hard div or sqrt opcodes.
> 
> It seems that this is the only other thing that is missing to let glibc
> be compiled with-fp for soft float arm, too.  
> 
> Is it possible to add these functions to ieee754-sf.S and ieee754-df.S
> ???  There is of course a C implementation in glibc/soft-fp but I don't
> know of any arm assembly implementation...
> 
> I know that ARM IHI 0043A doesn't specify this as part of the EABI, but
> perhaps it needs to be added?

Can you be more specific about the actual problem?  I don't see why
there needs to be an __aeabi_sqrt; sqrt is a function in libm.

-- 
Daniel Jacobowitz
CodeSourcery


[ARM] Lack of __aeabi_fsqrt / __aeabi_dsqrt (sqrtsf2 / sqrtdf2) functions

2008-04-14 Thread Hasjim Williams
Hello all,

I've been working on MaverickCrunch support in gcc, and could never get
a completely working glibc (with-fp), since there is no soft-float sqrt
libcall function.  This is a big problem for MaverickCrunch as there are
no hard div or sqrt opcodes.

It seems that this is the only other thing missing to let glibc be
compiled --with-fp for soft-float ARM, too.

Is it possible to add these functions to ieee754-sf.S and ieee754-df.S
???  There is of course a C implementation in glibc/soft-fp but I don't
know of any arm assembly implementation...

I know that ARM IHI 0043A doesn't specify this as part of the EABI, but
perhaps it needs to be added?

Thanks


IA-64 ICE on integer divide due to trap_if and cfgrtl

2008-04-14 Thread Jim Wilson
This testcase extracted from libgcc2.c
int
sub (int i)
{
  if (i == 0)
return 1 / i;

  return i + 2;
}
compiled with -minline-int-divide-min-latency for IA-64 generates an
ICE.
tmp2.c:8: error: flow control insn inside a basic block
(insn 18 17 19 3 tmp2.c:5 (trap_if (const_int 1 [0x1])
(const_int 1 [0x1])) 352 {*trap} (nil))
tmp2.c:8: internal compiler error: in rtl_verify_flow_info_1, at
cfgrtl.c:1920


The problem is that IA-64 ABI specifies that integer divides trap, so we
must emit a conditional trap instruction.  cse simplifies the compare.
combine substitutes the compare into the conditional trap changing it to
an unconditional trap.  The next pass then fails a consistency check in
cfgrtl.

It seems odd that cfgrtl allows a conditional trap inside a basic block,
but not an unconditional trap.  The way things are now, it means we need
to fix up the basic blocks after running combine or any other pass that
might be able to simplify a conditional trap into an unconditional trap.

I can work around this in the IA64 port.  For instance I could use
different patterns for conditional and unconditional traps so that one
can't be converted to the other.  Or I could try to hide the conditional
trap inside some pattern that doesn't get expanded until after reload.
None of these solutions seems quite right.

But changing the basic block tree during/after combine doesn't seem
quite right either.

The other solution would be to fix cfgbuild to treat all trap
instructions as control flow insns, instead of just the unconditional
ones.  I'm not sure why it was written this way though, so I don't know
if this will cause other problems.  I see that sibling and noreturn
calls are handled the same way as trap instructions, implying that they
are broken too.

I'm looking for suggestions here as what I should do to fix this.

Jim




Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Joe Buck wrote:

On Mon, Apr 14, 2008 at 06:42:30PM -0400, Robert Dewar wrote:

[In fact,
after GCC does something to warn users about this, it'll be
much "safer" than those other compilers.]

For sure you want a warning; the compiler should never remove
explicit tests in the user's code without generating a warning, I
would think.


I vaguely recall a paper from Dawson Engler's group (the people who
did the Stanford Checker and Coverity) about warnings for dead code
removal.  They are often bugs if seen in straight-line code, but
macros as well as inlining of functions will produce many warnings
of this kind.  They focused their work on addressing what the user
could be expected to know, the idea being to issue warnings if
the code on a single level is redundant, but suppress warnings
if the redundant text came from macros or inlining.


Right, we have heuristics in the Ada front end along these lines.
For instance, you generally want to be warned if a test is
always true or always false, but if the test is

   if XYZ then

where XYZ is a boolean constant, then probably this is
conditional compilation type activity that is legitimate.



Re: Problem with reloading in a new backend...

2008-04-14 Thread Jim Wilson

On Tue, 2008-04-15 at 00:06 +0200, Stelian Pop wrote:
>   - I had to add a PLUS case in PREFERRED_RELOAD_CLASS() or else reload
> kept generating incorrect insn (putting constants into EVEN_REGS for
> example). I'm not sure this is correct or if it hides something else...

It does sound odd, but I can't really say it is wrong, as you have an
odd set of requirements here.  At least it is working, which is good.

Jim



Re: INSTALL/configure.html mentions ${gcc_tooldir} - what's that?

2008-04-14 Thread Jim Wilson

Denys Vlasenko wrote:

Please, can somebody add an explanation to INSTALL/configure.html
what ${gcc_tooldir} is, and how to set it (I guess with configure
option or something?)


gcc_tooldir is a makefile variable.  You can't change it directly.  It 
is effectively $prefix/$target, though if you look at the details you 
will see it is a bit more complicated than that.
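
A hypothetical illustration of Jim's explanation: with gcc_tooldir
effectively equal to $prefix/$target, the default sysroot becomes
${gcc_tooldir}/sys-root.  The paths below are made up.

```shell
# Suppose a cross compiler was configured roughly like this:
prefix=/opt/cross
target=i486-linux-uclibc

# gcc_tooldir is then effectively:
gcc_tooldir="$prefix/$target"

# ...and the default --with-sysroot value would be:
echo "$gcc_tooldir/sys-root"   # /opt/cross/i486-linux-uclibc/sys-root
```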


Yes, the docs should be clarified.  You might want to submit a bug report.

Jim


Re: MAX_CONSTRAINT VALUE

2008-04-14 Thread Ben Elliston
Hi there Balaji,

>Here is the patch for it (if a value is not provided, then the
> default value of 30 is assumed). I tried to build this for x86 and arm
> and they seem to work fine with no problems.

Thanks for the patch.  You should send your patch to gcc-patches,
though, not the main GCC list.

Cheers, Ben




Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Paul Schlie
> Robert Dewar <[EMAIL PROTECTED]>
> 
>> Richard Guenther wrote:
>> 
>> In absence of any declared object (like with this testcase where we just
>> have an incoming pointer to some unknown object) the compiler can
>> still assume that any valid object ends at the end of the address space.
>> Thus, an object either declared or allocated via malloc never "wraps"
>> around to address zero.  Thus, ptr + int never "overflows".
> 
> Indeed,
> 
> An interesting case is the special allowance to point just past the
> end of an array if the pointer is not deferenced, this allows the
> C idiom
> 
> for (x = arr; x < &arr[10]; x++) ...
> 
> where arr has bounds 0..9, the limit pointer is used only for
> testing, and this test must be valid. This means that you can't
> have an array allocated up to the extreme end of the address
> space if this would not work properly. I remember this issue
> arising on the 286, where the maximum size of an array was
> one element less than 64K bytes on one compiler to avoid
> this anomaly.

Further, although admittedly contrived: if a pointer to the second
element of an array is passed, and an (unsigned) -1 is added to it,
the addition wraps and the sum ends up pointing to the first element
of the same array.  Such a sum is both less than the pointer that was
passed and known to reference the first element of the same array.
(So if for some reason someone wants to pass not a pointer to the
first element of an array but rather to the Nth element, and
subsequently access the Nth-X element, it is conceivable that one may
want to verify that the resulting pointer is less than the pointer
passed, which would seem to be legitimate.)




Re: RFC: named address space support

2008-04-14 Thread Ben Elliston
Hi Mark

> I'm not terribly familiar with this proposal.

> Ben, to answer your original question, I don't think that lack of nested 
> address spaces is a fatal flaw, as long as the implementation otherwise 
> meets the spec, and as long as the implementation doesn't somehow make 
> it harder to add that.  However, I'd like to know how final this 
> proposal is, and how likely it is to make the WG14 WP.

According to:
http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=30822

.. the embedded C proposal as of 2008-01-18 is at stage 90.92.  This
suggests that it's very close to being incorporated into the standard.
Have I understood that correctly?

> As always, I'm  concerned about putting things into GCC and then
> finding out that we have to change them in ways that are not backwards
> compatible.  And, I'd like to know what our C maintainers make of the
> proposal overall; if they see language issues, then we might want to
> resolve those with WG14.

Sure.  Any comments from Joseph or Richard?

Cheers, Ben



MAX_CONSTRAINT VALUE

2008-04-14 Thread Balaji V. Iyer
Hello Everyone,
I am currently working on OpenRISC port of GCC and I am trying to
add more constraints to the machine-dependent part and the default
number of constraints seems to be only 30 (and obviously I have more than
30 constraints, and thus it was failing). I tried making this a #define
value and moved this to the machine dependent part. This is advantageous
because now the backend designer has more flexibility.

   Here is the patch for it (if a value is not provided, then the
default value of 30 is assumed). I tried to build this for x86 and arm
and they seem to work fine with no problems.

Here is the patch for it (I am working on GCC 4.0.2).

==
diff -Naur gcc.old/recog.c gcc.new/recog.c
--- gcc.old/recog.c 2008-04-14 19:57:58.5 -0400
+++ gcc.new/recog.c 2008-04-14 20:08:31.34375 -0400
@@ -2039,7 +2039,7 @@
   = (recog_data.constraints[i][0] == '=' ? OP_OUT
 : recog_data.constraints[i][0] == '+' ? OP_INOUT
 : OP_IN);
-
+   
   gcc_assert (recog_data.n_alternatives <= MAX_RECOG_ALTERNATIVES);
 }
 
diff -Naur gcc.old/recog.h gcc.new/recog.h
--- gcc.old/recog.h 2008-04-14 19:57:58.5 -0400
+++ gcc.new/recog.h 2008-04-14 19:54:44.828125000 -0400
@@ -20,7 +20,12 @@
 02111-1307, USA.  */
 
 /* Random number that should be large enough for all purposes.  */
-#define MAX_RECOG_ALTERNATIVES 30
+
+#ifdef TARGET_MAX_RECOG_ALTERNATIVES
+#define MAX_RECOG_ALTERNATIVES TARGET_MAX_RECOG_ALTERNATIVES
+#else
+#define MAX_RECOG_ALTERNATIVES 30
+#endif
 
 /* Types of operands.  */
 enum op_type {
diff -Naur gcc.old/target-def.h gcc.new/target-def.h
--- gcc.old/target-def.h 2008-04-14 19:58:00.46875 -0400
+++ gcc.new/target-def.h 2008-04-14 19:54:45.71875 -0400
@@ -187,6 +187,11 @@
 #define TARGET_ASM_MARK_DECL_PRESERVED hook_void_constcharptr
 #endif
 
+#ifndef TARGET_MAX_RECOG_ALTERNATIVES 
+#define TARGET_MAX_RECOG_ALTERNATIVES 32
+#endif
+
+
 #define TARGET_ASM_ALIGNED_INT_OP  \
   {TARGET_ASM_ALIGNED_HI_OP,   \
TARGET_ASM_ALIGNED_SI_OP,   \


==

Thanks,

Balaji V. Iyer.

 
-- 
 
Balaji V. Iyer
PhD Student, 
Center for Efficient, Scalable and Reliable Computing,
Department of Electrical and Computer Engineering,
North Carolina State University.




INSTALL/configure.html mentions ${gcc_tooldir} - what's that?

2008-04-14 Thread Denys Vlasenko
Hi,

INSTALL/configure.html mentions ${gcc_tooldir}, just once. Here:

Cross-Compiler-Specific Options

The following options only apply to building cross compilers.

--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains
a (subset of) the root filesystem of the target operating system.
Target system headers, libraries and run-time object files
will be searched in there. The specified directory is not copied
into the install tree, unlike the options --with-headers
and --with-libs that this option obsoletes. The default value,
in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root.
^^^
If the specified directory is a subdirectory of ${exec_prefix},
then it will be found relative to the GCC binaries if the installation
tree is moved.


Well, that's not too helpful.

Please, can somebody add an explanation to INSTALL/configure.html
what ${gcc_tooldir} is, and how to set it (I guess with configure
option or something?)

--
vda


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck
On Mon, Apr 14, 2008 at 06:42:30PM -0400, Robert Dewar wrote:
> >[In fact,
> >after GCC does something to warn users about this, it'll be
> >much "safer" than those other compilers.]
> 
> For sure you want a warning; the compiler should never remove
> explicit tests in the user's code without generating a warning, I
> would think.

I vaguely recall a paper from Dawson Engler's group (the people who
did the Stanford Checker and Coverity) about warnings for dead code
removal.  They are often bugs if seen in straight-line code, but
macros as well as inlining of functions will produce many warnings
of this kind.  They focused their work on addressing what the user
could be expected to know, the idea being to issue warnings if
the code on a single level is redundant, but suppress warnings
if the redundant text came from macros or inlining.



gcc-4.1-20080414 is now available

2008-04-14 Thread gccadmin
Snapshot gcc-4.1-20080414 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20080414/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_1-branch 
revision 134294

You'll find:

gcc-4.1-20080414.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.1-20080414.tar.bz2 C front end and core compiler

gcc-ada-4.1-20080414.tar.bz2  Ada front end and runtime

gcc-fortran-4.1-20080414.tar.bz2  Fortran front end and runtime

gcc-g++-4.1-20080414.tar.bz2  C++ front end and runtime

gcc-java-4.1-20080414.tar.bz2 Java front end and runtime

gcc-objc-4.1-20080414.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.1-20080414.tar.bz2The GCC testsuite

Diffs from 4.1-20080407 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

[EMAIL PROTECTED] wrote:


It's already been acknowledged that the source code is wrong
to assume that the compiler knows about wrapping of pointers.
The real issue at this stage is how to warn users who may be
using GCC and implicitly relying on its old behavior, without
unintentionally pushing people in the wrong direction. Since
this optimization is performed by many other compilers, the
ship has already sailed on this one, so to speak.


that's a strong argument I agree


[In fact,
after GCC does something to warn users about this, it'll be
much "safer" than those other compilers.]


For sure you want a warning, the compiler should never be
removing explicit tests in the users code without generating
a warning I would think.


I agree that on the face of it, it seems like you wouldn't
want to optimize away tests like this when you can know that
pointer arithmetic really does look the same as unsigned
arithmetic (for a particular architecture, etc.). However,
sometimes an optimization may enable thirty more, so I for
one am not going to argue against it. Especially not when
many other compilers do it also.


Not sure I agree with that, but I can certainly live with
a warning.


-Jerry

P.S. I'm having some déjà vu, recalling discussions back in
the GCC 2.7 days about whether it was really OK to change
the behavior for signed arithmetic to support devices with
saturation. We've come a long way since then.


:-)


Re: Problem with reloading in a new backend...

2008-04-14 Thread Stelian Pop

Le vendredi 11 avril 2008 à 11:14 -0700, Jim Wilson a écrit :
> Stelian Pop wrote:
> > #define PREFERRED_RELOAD_CLASS(X, CLASS)\
> >   ((CONSTANT_P(X)) ? EIGHT_REGS :   \
> >(MEM_P(X)) ? EVEN_REGS : CLASS)
> > 
> > #define PREFERRED_OUTPUT_RELOAD_CLASS(X, CLASS) \
> >   ((CONSTANT_P(X)) ? EIGHT_REGS :   \
> >(MEM_P(X)) ? EVEN_REGS : CLASS)
> 
> I think most of your trouble is here.  Suppose we are trying to reload a 
> constant into an even-reg.  We call PREFERRED_RELOAD_CLASS, which says 
> to use eight_regs instead, and you get a fatal_insn error because you 
> didn't get the even-reg that the instruction needed.
[...]

I've tried the suggestion above and it did indeed help. However, I had a
few additional issues:
- the stack pointer and the frame pointer MUST be placed into an
even-reg, or else reload will generate (mem (plus (reg) (const))) insn
(when eliminating the pointers).
- I had to add a PLUS case in PREFERRED_RELOAD_CLASS() or else reload
kept generating incorrect insn (putting constants into EVEN_REGS for
example). I'm not sure this is correct or if it hides something else...

#define STACK_POINTER_REGNUM 30

#define FRAME_POINTER_REGNUM 28

#define PREFERRED_RELOAD_CLASS(X, CLASS) ardac_preferred_reload_class(X, CLASS)

#define PREFERRED_OUTPUT_RELOAD_CLASS(X, CLASS) ardac_preferred_reload_class(X, 
CLASS)

enum reg_class
ardac_preferred_reload_class (rtx x, enum reg_class class)
{
  if (CONSTANT_P (x)) {
    switch (class) {
    case NO_REGS:
    case STACK_REGS:
      return NO_REGS;
    case EVEN_REGS:
    case EIGHTEVEN_REGS:
      return EIGHTEVEN_REGS;
    case EIGHT_REGS:
    case GENERAL_REGS:
      return EIGHT_REGS;
    default:
      gcc_unreachable ();
    }
  }
  else if (MEM_P (x)) {
    switch (class) {
    case NO_REGS:
    case STACK_REGS:
      return NO_REGS;
    case EIGHT_REGS:
    case EIGHTEVEN_REGS:
      return EIGHTEVEN_REGS;
    case EVEN_REGS:
    case GENERAL_REGS:
      return EVEN_REGS;
    default:
      gcc_unreachable ();
    }
  }
  else {
    if (GET_CODE (x) == PLUS
        && GET_CODE (XEXP (x, 0)) == REG
        && GET_CODE (XEXP (x, 1)) == CONST_INT)
      return EIGHTEVEN_REGS;
    return class;
  }
}

Now it compiles 100+ files from libgcc without error, so I guess the
register assignment problem is solved. It now fails later:

/home/tiniou/LTD/LTD/aRDAC/wip/src/gcc-4.3.0/libgcc/../gcc/unwind-dw2-fde.c: In 
function ‘frame_heapsort’:
/home/tiniou/LTD/LTD/aRDAC/wip/src/gcc-4.3.0/libgcc/../gcc/unwind-dw2-fde.c:521:
 internal compiler error: in expand_call, at calls.c:3149

I haven't investigated why yet, but this is probably not related to the above.

Thanks,

-- 
Stelian Pop <[EMAIL PROTECTED]>



RE: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Gerald.Williams
Robert Dewar wrote:
> An optimization is dubious to me if
> 
> a) it produces surprising changes in behavior (note the importance of
> the word surprising here)
> 
> b) it does not provide significant performance gains (note the
> importance of the word significant here).
> 
> I find this optimization qualifies as meeting both criteria a) and b),
> so that's why I consider it dubious.

I don't think this is a particularly fruitful argument to be
having at this stage.

It's already been acknowledged that the source code is wrong
to assume that the compiler knows about wrapping of pointers.
The real issue at this stage is how to warn users who may be
using GCC and implicitly relying on its old behavior, without
unintentionally pushing people in the wrong direction. Since
this optimization is performed by many other compilers, the
ship has already sailed on this one, so to speak. [In fact,
after GCC does something to warn users about this, it'll be
much "safer" than those other compilers.]

I agree that on the face of it, it seems like you wouldn't
want to optimize away tests like this when you can know that
pointer arithmetic really does look the same as unsigned
arithmetic (for a particular architecture, etc.). However,
sometimes an optimization may enable thirty more, so I for
one am not going to argue against it. Especially not when
many other compilers do it also.

-Jerry

P.S. I'm having some déjà vu, recalling discussions back in
the GCC 2.7 days about whether it was really OK to change
the behavior for signed arithmetic to support devices with
saturation. We've come a long way since then.
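[Editorial aside for readers of this thread: the check under discussion can be written without relying on pointer wraparound at all. A minimal sketch follows; the function name and signature are invented for illustration, not taken from any mail above. The idea is to bound the length against the space known to remain in the destination object, using well-defined size_t arithmetic.]

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: instead of the undefined wraparound test
   `buf + len < buf`, compare the length against the space left in
   the object.  Assumes the caller tracks the object's size and the
   bytes already used (used <= dst_size).  All size_t arithmetic
   here is well-defined. */
int copy_checked(char *dst, size_t dst_size, size_t used,
                 const char *src, size_t len)
{
    if (len > dst_size - used)   /* would overflow the buffer */
        return -1;
    memcpy(dst + used, src, len);
    return 0;
}
```

The same bound works on any target, with no assumption about how the implementation lays out the address space.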


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Joe Buck wrote:

On Mon, Apr 14, 2008 at 04:27:40PM -0400, Robert Dewar wrote:

Ian Lance Taylor wrote:


A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian

Fair enough question. The other question of course is how much this
optimization saves.


The big savings are in loops, where the compiler can determine that
it doesn't have to consider the possibility of aliasing and can therefore
use values in registers instead of reloading from memory.


Savings? yes
Big? (from this particular optimization) dubious without data.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Florian Weimer wrote:


Existing safe C implementations take a performance hit which is a factor
between 5 and 11 in terms of execution time.  There is some new research
that seems to get away with a factor less than 2, but it's pretty recent
and I'm not sure if it's been reproduced independently.  If GCC users
are actually willing to take that hit for some gain in security
(significant gains for a lot of legacy code, of course), then most of
the recent work on GCC has been wasted.  I don't think this is the case.


This is wholly excessive rhetoric, it's using a common invalid device
in argument, sometimes called extenso ad absurdum. It goes like this

You advocate A

But then to be consistent you should also advocate B,C,D,E

I will now argue against the combination A,B,C,D,E :-) :-)

These implementations that are 5-11 times slower are doing all sorts
of things that

a) I am not advocating in this discussion
b) I would not advocate in any case

Are you really saying that this particular optimization is costly to
eliminate? If so I just don't believe that allegation without data.


Keep in mind it's not the comparison that's the real problem here, it's
the subsequent buffer overflow.  And plugging that hole in full
generality is either difficult to do, or involves a significant run-time
performance overhead (or both).


And there you go! I do NOT advocate "plugging that hole in full
generality", so go try this argument on someone who does (I don't
think there are any such people around here!)



To me, dubious optimizations like this at the very least should
be optional and able to be turned off.


Why is this optimization dubious?  We would need to look at real-world
code to tell, and so far, we haven't heard anything about the context in
which the issue was originally encountered.


An optimization is dubious to me if

a) it produces surprising changes in behavior (note the importance of
the word surprising here)

b) it does not provide significant performance gains (note the
importance of the word significant here).

I find this optimization qualifies as meeting both criteria a) and b),
so that's why I consider it dubious.



Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck
On Mon, Apr 14, 2008 at 04:27:40PM -0400, Robert Dewar wrote:
> Ian Lance Taylor wrote:
> 
> >A theoretical argument for why somebody might write problematic code
> >is http://www.fefe.de/openldap-mail.txt .
> >
> >I don't know where, or even if, such code is actually found in the
> >wild.
> >
> >Ian
> 
> Fair enough question. The other question of course is how much this
> optimization saves.

The big savings are in loops, where the compiler can determine that
it doesn't have to consider the possibility of aliasing and can therefore
use values in registers instead of reloading from memory.
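[A hedged illustration of the kind of loop being described; this exact function is not from the thread. When the compiler may assume `p + i` never wraps around the address space, it can derive the loop trip count up front and keep the working values in registers for the whole loop.]

```c
#include <stddef.h>

/* Illustrative only: because p + i is assumed never to wrap, the
   compiler can prove the loop runs exactly n iterations and keep
   s, p and i in registers throughout, rather than re-deriving
   anything from memory on each iteration. */
long sum_array(const int *p, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];
    return s;
}
```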


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Ian Lance Taylor wrote:


A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian


Fair enough question. The other question of course is how much this
optimization saves.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Ian Lance Taylor
Florian Weimer <[EMAIL PROTECTED]> writes:

>> To me, dubious optimizations like this at the very least should
>> be optional and able to be turned off.
>
> Why is this optimization dubious?  We would need to look at real-world
> code to tell, and so far, we haven't heard anything about the context in
> which the issue was originally encountered.

The basis of the optimization in question is
http://gcc.gnu.org/PR27039 .

A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian


Re: Mapping back to original variables after SSA optimizations

2008-04-14 Thread Diego Novillo

On 4/10/08 8:16 AM, Fran Baena wrote:

Hi all,

i have a doubt about unSSA: is it always possible to map back the
versioned variables to the original variable? If it is possible,
is there an algorithm that describes this translation back?


It is not always possible.  If there are overlapping live ranges for two 
names of the same symbol, two different symbols need to be created. 
That's the reason why we do not allow overlapping live-ranges on memory 
variables.


Memory variables are not put in standard SSA form.  We build FUD 
(factored use-def) chains for those.  See Wolfe's book "High performance 
compilers for parallel computing" for details.




I have read the paper "Efficiently computing static single assignment
form and the control dependence graph (cytron91)" and no way to
translate back from SSA is explained, it only points out that after
SSA optimizations "dead code elimination" and "allocation by
coloring" are recommended to be performed.


The out-of-SSA pass was modeled after the algorithm in Robert Morgan's 
book "Building an Optimizing Compiler".



Diego.
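[To illustrate the point about overlapping live ranges — a made-up example, not from Diego's mail: after optimization, two SSA versions of one source variable can be live at the same time, so out-of-SSA must split them into two variables. Written out by hand in C:]

```c
/* Hand-written "SSA versions" of a single source variable x.
   x_1 is still live at the final use while x_2 is also live, so
   their live ranges overlap: out-of-SSA cannot map both names back
   to the one variable x and must create a second variable. */
int overlapping_versions(int a)
{
    int x_1 = a + 1;       /* x_1 = a + 1           */
    int x_2 = x_1 * 2;     /* x_2 = x_1 * 2         */
    return x_1 + x_2;      /* both x_1 and x_2 live */
}
```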


Re: A query regarding the implementation of pragmas

2008-04-14 Thread Jim Wilson

Mohamed Shafi wrote:

For a function call will i be able to implement long call/short call
for the same function at different locations?
Say fun1 calls bar and fun2 calls bar. I want short-call to be
generated for bar in fun1 and long-call to be generated in fun2.
Is it possible to implement this in the back-end using pragmas?


A simple grep command shows that both the arm and rs6000 ports 
already support long call pragmas.


Jim


Re: Problem compiling apache 2.0.63, libiconv.so: wrong ELF class: ELFCLASS32

2008-04-14 Thread Ian Lance Taylor
Persson Håkan <[EMAIL PROTECTED]> writes:

> I'm having problem when making apache 2.0.63. 

Wrong mailing list.  gcc@gcc.gnu.org is about developing gcc.
[EMAIL PROTECTED] is about using gcc.

I don't know the answer to your question.  It looks specific to your
distribution.

Ian


Re: A Query regarding jump pattern

2008-04-14 Thread Ian Lance Taylor
"Mohamed Shafi" <[EMAIL PROTECTED]> writes:

> I have read in the internals that indirect_jump and jump pattern are
> necessary in any back-end for the compiler to be built and work
> successfully. For any back-end there will be some limitation as to how
> big the offset used in the jump instructions can be. If the offset is
> too big then the indirect_jump pattern has to utilized. Now my
> question is how will i be able to specify the limit of the offset so
> that gcc generates the indirect_jump pattern instead of the jump pattern? I
> hope i am clear.

From the perspective of insn names, it's not quite accurate to say
that jump turns into indirect_jump.  It would be more correct to say
that when there is a limit to the offset for jump, it needs to use a
register.

The way to handle this is to set the "length" attribute correctly for
each insn, and to change code generation based on the length.  See,
e.g., the mips.md "jump" insn for an example.

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Florian Weimer
* Robert Dewar:

> Florian Weimer wrote:
>> * Robert C. Seacord:
>>
>>> i agree that the optimization is allowed by C99.  i think this is a
>>> quality of implementation issue,  and that it would be preferable for
>>> gcc to emphasize security over performance, as might be expected.
>>
>> I don't think this is reasonable.  If you use GCC and its C frontend,
>> you want performance, not security.
>
> I find this a *VERY* dubious claim, in my experience VERY few users
> are at the boundary where small factors in performance are critical,
> but MANY users are definitely concerned with security.

Existing safe C implementations take a performance hit which is a factor
between 5 and 11 in terms of execution time.  There is some new research
that seems to get away with a factor less than 2, but it's pretty recent
and I'm not sure if it's been reproduced independently.  If GCC users
are actually willing to take that hit for some gain in security
(significant gains for a lot of legacy code, of course), then most of
the recent work on GCC has been wasted.  I don't think this is the case.

Keep in mind it's not the comparison that's the real problem here, it's
the subsequent buffer overflow.  And plugging that hole in full
generality is either difficult to do, or involves a significant run-time
performance overhead (or both).

> To me, dubious optimizations like this at the very least should
> be optional and able to be turned off.

Why is this optimization dubious?  We would need to look at real-world
code to tell, and so far, we haven't heard anything about the context in
which the issue was originally encountered.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck

Robert C. Seacord wrote:
> > i agree that the optimization is allowed by C99.  i think this is a
> > quality of implementation issue,  and that it would be preferable for
> > gcc to emphasize security over performance, as might be expected.

On Sun, Apr 13, 2008 at 11:51:00PM +0200, Florian Weimer wrote:
> I don't think this is reasonable.  If you use GCC and its C frontend,
> you want performance, not security.

Furthermore, there are a number of competitors to GCC.  These competitors
do not advertise better security than GCC.  Instead they claim better
performance (though such claims should be taken with a grain of salt).
To achieve high performance, it is necessary to take advantage of all of
the opportunities for optimization that the C language standard permits.

For CERT to simultaneously argue that GCC should be crippled (to
emphasize security over performance) but that nothing negative should
be said about competing compilers is the height of irresponsibility.
Any suggestion that users should avoid new versions of GCC will drive
users to competing compilers that optimize at least as aggressively.



Re: Where is scheduling going wrong? - GCC-4.1.2

2008-04-14 Thread Jim Wilson

On Sun, 2008-04-13 at 17:05 +0530, Mohamed Shafi wrote:
> Well i tracked down the cause to the md file. In the md file i had a
> define_expand for the jump pattern. Inside the pattern i was checking
> whether the value of the offset for the jump is out of range and if
> its out of range then force the offset into a register and emit
> indirect_jump. This didn't work, though: every time an unconditional
> jump was emitted, a barrier was also emitted. It looks like
> in a define_expand for jump we should consider all the cases and emit
> DONE for each of them if we handle any of them; otherwise a
> barrier will be generated for the cases not covered by DONE. Am i right?

Sorry, I don't understand what the problem is.  We always emit a barrier
after an unconditional branch.  Whether or not you call DONE inside the
"jump" pattern is irrelevant.  Also, whether you emit a PC-relative
branch or an indirect branch is irrelevant.

> The following link is the reply from Ian for a query of mine regarding
> scheduling.
> http://gcc.gnu.org/ml/gcc/2008-04/msg00245.html
> After reading this, i feel that gcc should have looked for barrier
> insn while scheduling and should have given an ICE if it found one.
> True, barrier got into the instruction stream because of my mistake,
> but then that's what ICEs are for. Then again i might be wrong about
> this.

We do have consistency checks for many problems with the RTL, but it
isn't possible to catch all of them all of the time.

> P.S. I am still searching for a solution to choose between jump and
> indirect_jump pattern when the offset is out of range.
> http://gcc.gnu.org/ml/gcc/2008-04/msg00290.html
> May be you can help with that

This is what the shorten_branches optimization pass is for.  Define a
length attribute that says how long a branch is for each offset to the
target label.  Then when emitting assembly language code, you can choose
the correct instruction to emit based on the instruction length computed
by the shorten branches pass.  If you need to allocate a register, that
gets a bit tricky, but there are various solutions.  See the sh, mips16
(mips) and thumb (arm) ports for ideas.

Jim



Re: Request copyright assignment form

2008-04-14 Thread Gerald Pfeifer
Hi Bob,

On Sat, 12 Apr 2008, Bob Walters wrote:
> Can you send me any reference to the current copyright assignment
> form, so that I can get this taken care of.  I found something online
> at http://gcc.gnu.org/ml/gcc/2002-09/msg00678.html, but have no idea
> if that is current, so wanted to check with you first.

please find below a form I just obtained from the FSF servers to start
the process of assigning past (= existing) and future changes.

If you fill this out and email it to [EMAIL PROTECTED] (putting your full
name as the subject of that message), this will get the process going.
I recommend to not just specify libstdc++ but GCC as well.

Thanks for joining the team, and happy hacking! :-)

Gerald

 snip 
[What is the name of the program or package you're contributing to?]


[Did you copy any files or text written by someone else in these changes?
Even if that material is free software, we need to know about it.]


[Do you have an employer who might have a basis to claim to own
your changes?  Do you attend a school which might make such a claim?]


[For the copyright registration, what country are you a citizen of?]


[What year were you born?]


[Please write your email address here.]


[Please write your snail address here.]





[Which files have you changed so far, and which new files have you written
so far?]



[PATCH][RFC] middle-end array expressions (II)

2008-04-14 Thread Richard Guenther


This is the final proposal for the introduction of arrays as first
class citizens of the middle-end.  The goal is still to retain
the high-level information that the GFortran frontend has for array
assignments up to the high-level loop optimization passses and to
not lower Fortran array assignments in the frontend.

After several tries I settled on the following scheme (explained in
detail in a paper for the GCC Summit this year).

Whole-array loads and stores to gimple array registers (that will be
put in SSA form) are done via a new variable length tree code that
contains array extents and strides as operands (thus it properly
lowers VLA objects to gimple).  (VLA_VIEW_EXPR)

Expressions operating on gimple array registers are doing so by
means of operating on scalar placeholders, "indexed" array registers,
that are just scalar results of a new indexing operation (VLA_IDX_EXPR).
Results of these expressions are put back into array registers
by doing a reverse transformation (VLA_RIDX_EXPR).

To represent reductions VLA_DELTA_EXPR implements contraction of
dimensions by iterating and summing over all values of certain indices.

Usually examples are more useful than words (and I didn't want to repeat
the whole paper here), so the following GIMPLE would compute the
matrix product of two n x n matrices A and B (pointed to by a and b)
and stores it into C.

  float A[n][n] = VLA(_VIEW_EXPR)  (*a);
  float B[n][n] = VLA  (*b);
  float Aik = VLA_IDX(_EXPR)  (A);
  float Bkj = VLA_IDX  (B);
  float tmp = Aik * Bkj;
  float Cij = VLA_DELTA(_EXPR)  (tmp);
  float C[n][n] = VLA_RIDX(_EXPR)  (Cij);
  VLA  (*c) = C;

More usual Fortran expressions like

  A(2:n-1) = B(1:n-2) + B(3:n)

would look like

  float Btmp[n] = VLA  (B);
  float B1 = VLA_IDX  (Btmp);
  float B2 = VLA_IDX  (Btmp);
  float tmp = B1 + B2;
  float Atmp = VLA_RIDX  (tmp);
  VLA  (A) = Atmp;
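[For orientation, the scalar loop the lowering pass would eventually produce for this second example looks roughly like the following — illustrative C only, with the 1-based Fortran indices shifted to 0-based and the function name invented.]

```c
/* Fortran A(2:n-1) = B(1:n-2) + B(3:n), 1-based, becomes
   A[i] = B[i-1] + B[i+1] for i = 1 .. n-2 with 0-based arrays.
   Purely a sketch of the loop the lowering pass would emit. */
void lowered_example(float *A, const float *B, int n)
{
    for (int i = 1; i <= n - 2; i++)
        A[i] = B[i - 1] + B[i + 1];
}
```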

The patch below doesn't touch the Fortran frontend but implements a
(hacky) interface to C and C++ using builtins with covariant return types
(see the testsuite entries to get an idea how they work).  It also
implements a lowering pass that turns the array expressions back to
loops.  It doesn't work at -O0 (because we're not in SSA form there)
and I expect the lowering to be done by GRAPHITE in the end, not by
a separate lowering pass.

Now, I'm open to questions and I'll send the paper to anyone that
wants it (but I gues I'm not supposed to put it somewhere public
before the summit?).

Please re-direct followups to gcc/gcc-patches according to subject.

Thanks,
Richard.



middle-end-arrays-SSA.gz
Description: GNU Zip compressed data


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Andreas Schwab
Robert Dewar <[EMAIL PROTECTED]> writes:

> Alex Stepanov told me once that he preferred Ada to C, because Ada
> has proper pointer arithmetic (via the type Integer_Address) which
> is defined to work in Ada in the manner that Paul mistakenly expects
> for C. Integer_Address would be a bit of a pain to implement on
> a 286 :-)

In C it is called uintptr_t.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."
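[Concretely — an illustrative sketch, not code from the thread: routing the comparison through uintptr_t turns the wraparound test into well-defined unsigned arithmetic, under the assumption of a flat address space where the pointer/integer round trip is faithful.]

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: does advancing p by len bytes wrap the address space?
   Computed in uintptr_t, where overflow is ordinary modular
   arithmetic, instead of in pointer arithmetic, where it is
   undefined.  Assumes a flat address space (typical on hosted
   targets, but implementation-defined in general). */
int would_wrap(const char *p, size_t len)
{
    uintptr_t a = (uintptr_t)p;
    return len > UINTPTR_MAX - a;
}
```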


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Richard Guenther wrote:


In absence of any declared object (like with this testcase where we just
have an incoming pointer to some unknown object) the compiler can
still assume that any valid object ends at the end of the address space.
Thus, an object either declared or allocated via malloc never "wraps"
around to address zero.  Thus, ptr + int never "overflows".


Indeed,

An interesting case is the special allowance to point just past the
end of an array if the pointer is not deferenced, this allows the
C idiom

   for (x = arr; x < &arr[10]; x++) ...

where arr has bounds 0..9, the limit pointer is used only for
testing, and this test must be valid. This means that you can't
have an array allocated up to the extreme end of the address
space if this would not work properly. I remember this issue
arising on the 286, where the maximum size of an array was
one element less than 64K bytes on one compiler to avoid
this anomaly.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Paul Schlie wrote:

(as an aside, as most target implementations treat pointers as unsigned
values, its not clear that presuming signed integer overflow semantics are
a reasonable choice for pointer comparison optimization)

The point is not of presuming signed integer overflow semantics (I was
corrected on this by Ian Taylor). It is of presuming that pointers never
move before the beginning of their object. If you have an array of 20
elements, pointers &a[0] to &a[20] are valid (accessing &a[20] is not valid),
but the compiler can assume that the program does not refer to &a[-2].

Paolo


Yes (and in which case if the compiler is smart enough to recognize
this it should generate an error, not emit arbitrary [or absents] of
code); but the example in question was:

void f(char *buf) {
  unsigned int len = 0xFF00u; /* or similar */

  if (buf+len < buf) puts("true");
}

In which case buf is merely a pointer which may point to any char, not a
char within a particular array, implying buf+len is also just a pointer,
ultimately being compared against buf;


nope, that may be in your mind what C means, but it's not what the
C language says. A pointer can only point within the allocated
object. Given that constraint, it is obvious that the condition can
never be true. Of course if the compiler elides the test on this
basis, it is nice if it warns you (certainly there is no basis
for an error message in the above code).


If all such pointers are presumed to be restricted to pointing to the
element they were originally assigned, then all composite pointer arithmetic
such as buf+len would be invalid. 


no, that's just wrong; the computation buf+len is valid if the
resulting pointer is within the original object, a condition that
cannot be caught statically, and with typical implementations
cannot be caught dynamically in all cases either.

You might want to think of an implementation of C where pointers are
a pair, a base pointer and an offset, and pointer arithmetic only
works on the offset. This is a valid implementation, and is in some
sense the formal semantic model of C. It is used in some debugging
versions of C that always check this condition at run time.
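[A minimal sketch of the base-plus-offset ("fat pointer") model described above, with invented names, purely for illustration: arithmetic touches only the offset, and validity is a separate, checkable property.]

```c
#include <stddef.h>

/* Fat pointer: base of the object, its size, and an offset.
   Pointer arithmetic only adjusts the offset; a separate check
   validates it.  All names here are invented for illustration. */
struct fatptr {
    char  *base;
    size_t size;   /* size of the underlying object */
    size_t off;    /* current offset within it */
};

/* Advance by n; the caller checks validity before dereferencing. */
static struct fatptr fp_add(struct fatptr p, size_t n)
{
    p.off += n;
    return p;
}

/* Dereferenceable only strictly inside the object; the
   one-past-the-end offset (off == size) may be compared but
   not read, matching the C rule discussed above. */
static int fp_valid(struct fatptr p)
{
    return p.off < p.size;
}
```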

It was also used in effect on the 286, where pointers at the
hardware level are segment+offset, and it is valid in C on the
286 to do arithmetic only on the offset part of the address,
and there were indeed C compilers that worked this way.

Alex Stepanov told me once that he preferred Ada to C, because Ada
has proper pointer arithmetic (via the type Integer_Address) which
is defined to work in Ada in the manner that Paul mistakenly expects
for C. Integer_Address would be a bit of a pain to implement on
a 286 :-)

Of course in C in practice, pointers are just machine addresses,
and more general pointer arithmetic does "work", but any program
taking advantage of this is not written in the C language.



All this being said, I understand that in

general this is an anomalous case, however on small embedded machines with
small memory spaces or when writing drivers or memory allocators, such
pointer arithmetic may be perfectly legitimate it would seem.





Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Richard Guenther
On Mon, Apr 14, 2008 at 1:55 PM, Paul Schlie <[EMAIL PROTECTED]> wrote:
>
>
>  >> (as an aside, as most target implementations treat pointers as unsigned
>  >> values, its not clear that presuming signed integer overflow semantics are
>  >> a reasonable choice for pointer comparison optimization)
>  >
>  > The point is not of presuming signed integer overflow semantics (I was
>  > corrected on this by Ian Taylor). It is of presuming that pointers never
>  > move before the beginning of their object. If you have an array of 20
>  > elements, pointers &a[0] to &a[20] are valid (accessing &a[20] is not 
> valid),
>  > but the compiler can assume that the program does not refer to &a[-2].
>  >
>  > Paolo
>
>  Yes (and in which case if the compiler is smart enough to recognize
>  this it should generate an error, not emit arbitrary [or absents] of
>  code); but the example in question was:
>
>  void f(char *buf)  {
>   unsigned int len = 0xFF00u; /* or similar */
>
>
>  if (buf+len < buf) puts("true");
>
>  }
>
>  In which case buf is merely a pointer which may point to any char, not a
>  char within a particular array, implying buf+len is also just a pointer,
>  ultimately being compared against buf;
>
>  If all such pointers are presumed to be restricted to pointing to the
>  element they were originally assigned, then all composite pointer arithmetic
>  such as buf+len would be invalid. All this being said, I understand that in
>  general this is an anomalous case, however on small embedded machines with
>  small memory spaces or when writing drivers or memory allocators, such
>  pointer arithmetic may be perfectly legitimate it would seem.

In absence of any declared object (like with this testcase where we just
have an incoming pointer to some unknown object) the compiler can
still assume that any valid object ends at the end of the address space.
Thus, an object either declared or allocated via malloc never "wraps"
around to address zero.  Thus, ptr + int never "overflows".

Richard.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Paul Schlie

>> (as an aside, as most target implementations treat pointers as unsigned
>> values, its not clear that presuming signed integer overflow semantics are
>> a reasonable choice for pointer comparison optimization)
>
> The point is not of presuming signed integer overflow semantics (I was
> corrected on this by Ian Taylor). It is of presuming that pointers never
> move before the beginning of their object. If you have an array of 20
> elements, pointers &a[0] to &a[20] are valid (accessing &a[20] is not valid),
> but the compiler can assume that the program does not refer to &a[-2].
>
> Paolo

Yes (and in which case if the compiler is smart enough to recognize
this it should generate an error, not emit arbitrary [or absence] of
code); but the example in question was:

void f(char *buf)  {
 unsigned int len = 0xFF00u; /* or similar */

if (buf+len < buf) puts("true");

}

In which case buf is merely a pointer which may point to any char, not a
char within a particular array, implying buf+len is also just a pointer,
ultimately being compared against buf;

If all such pointers are presumed to be restricted to pointing to the
element they were originally assigned, then all composite pointer arithmetic
such as buf+len would be invalid. All this being said, I understand that in
general this is an anomalous case, however on small embedded machines with
small memory spaces or when writing drivers or memory allocators, such
pointer arithmetic may be perfectly legitimate it would seem.




Re: omp workshare (PR35423) & beginner questions

2008-04-14 Thread Jakub Jelinek
Hi!

On Wed, Apr 09, 2008 at 11:29:24PM -0500, Vasilis Liaskovitis wrote:
> I am a beginner interested in learning gcc internals and contributing
> to the community.

Thanks for showing interest in this area!

> I have started implementing PR35423 - omp workshare in the fortran
> front-end. I have some questions - any guidance and suggestions are
> welcome:
> 
> - For scalar assignments, wrapping them in OMP_SINGLE clause.

Yes, though if there are a couple of adjacent scalar assignments which don't
involve function calls and won't take too long to execute, you want
to put them all into one OMP_SINGLE.  If the assignments may take long
because of function calls and there are several such ones adjacent,
you can use OMP_WORKSHARE.

Furthermore, for all statements, not just the scalar ones, you want to
do dependency analysis between all the statements within !$omp workshare,
and make OMP_SINGLE, OMP_FOR or OMP_SECTIONS and add OMP_CLAUSE_NOWAIT
to them where no barrier is needed.

> - Array/subarray assignments: For assignments handled by the
> scalarizer,  I now create an OMP_FOR loop instead of a LOOP_EXPR for
> the outermost scalarized loop. This achieves worksharing at the
> outermost loop level.

Yes, though on gomp-3_0-branch you actually could use collapsed OMP_FOR
loop too.  Just bear in mind that for best performance at least with
static OMP_FOR scheduling ideally the same memory (part of array in this
case) is accessed by the same thread, as then it is in that CPU's caches.
Of course that's not always possible, but if it can be done, gfortran
should try that.

> Some array assignments are handled by functions (e.g.
> gfc_build_memcpy_call generates calls to memcpy). For these, I believe
> we need to divide the arrays into chunks and have each thread call the
> builtin function on its own chunk. E.g. If we have the following call
> in a parallel workshare construct:
> 
> memcpy(dst, src, len)
> 
> I generate this pseudocode:
> 
> {
>   numthreads = omp_get_numthreads();
>   chunksize = len / numthreads;
>   chunksize = chunksize + ( len != chunksize*numthreads)
> }
> 
> #omp for
>for (i = 0; i < numthreads; i++) {
>   mysrc = src + i*chunksize;
>   mydst = dst + i*chunksize;
>   mylen = min(chunksize, len - (i*chunksize));
>   memcpy(mydst, mysrc, mylen);
>   }
> 
> If you have a suggestion to implement this in a simpler way, let me know.

Yeah, this is possible.  Note though what I said above about cache locality.
And, if the memcpy size is known to be small doing it in OMP_SINGLE might
have advantages too.

> The above code executes parallel in every thread. Alternatively, the
> first block above can be wrapped in omp_single, but the numthreads &
> chunksize variables should then be
> declared shared instead of private. All the variables above
> are private by default, since they are declared in a parallel
> construct.

omp_get_num_threads is very cheap, and even with a division and
multiplication it will most probably still be cheaper than OMP_SINGLE,
especially because OMP_SINGLE could not be NOWAIT.

> How can I set the scoping for a specific variable in a given
> omp for construct? Is the following correct to make a variable shared:
> 
> tmp = build_omp_clause(OMP_CLAUSE_SHARED);
> OMP_CLAUSE_DECL(tmp) = variable;
> omp_clauses = gfc_tran_add_clause(tmp, );

That, or just by letting the gimplifier set that up - if you don't
add OMP_CLAUSE_DEFAULT, by default loop iterators will be private,
the rest shared.

> -  I still need to do worksharing for array reduction operators (e.g.
> SUM,ALL, MAXLOC etc). For these, I think a combination of OMP_FOR/OMP_SINGLE 
> or
> OMP_REDUCTION is needed. I will also try to work on WHERE and
> FORALL statements.

I guess OMP_CLAUSE_REDUCTION for sum, max etc. will be best.  But testing
several variants on a bunch of testcases and benchmarking what is fastest
under what conditions is certainly the way to go in many cases.
Either you code it up in gfortran and try, or transform your original
!$omp workshare benchmarks into !$omp single, !$omp sections, !$omp for etc.
by hand and testing that is certainly possible too.

BTW, whenever you create OMP_FOR to handle part or whole !$omp workshare,
you should also choose the best scheduling kind.  You could just use
schedule(auto) and let the middle-end choose the best scheduling when
that support is actually added, but often the gfortran frontend might
know even better.

> I am also interested in gomp3 implementation and performance issues.
> If there are not-worked-on issues suitable for newbies, please share
> or update http://gcc.gnu.org/wiki/openmp. Can someone elaborate on the
> "Fine tune the auto scheduling feature for parallel loops" issue?

ATM the largest unfinished part of OpenMP 3.0 support is the tasking
support in libgomp using {[sg]et,make,swap}context family of functions,
but it is quite high on my todo list and I'd like to work on it soon.

As OpenMP 3.0 allows unsigned iterators for

Re: Moving statements from one BB to other BB.

2008-04-14 Thread Richard Guenther
On Mon, Apr 14, 2008 at 12:54 PM, Sandeep Maram <[EMAIL PROTECTED]> wrote:
> Hi,
>
>  I have transferred all the statements of one BB( header of one loop)
>  to another BB. After that I have updated SSA information too.
>  But I get this error-
>
>   definition in block 6 does not dominate use in block 3
>  for SSA_NAME: i_25 in statement:

This is the problem.

>  # VUSE 
>  D.1189_10 = a[i_25];
>  loop.c:8: internal compiler error: verify_ssa failed
>
>  Can any one please tell me what is the problem?
>
>  Thanks,
>  Sandeep.
>


Moving statements from one BB to other BB.

2008-04-14 Thread Sandeep Maram
Hi,

I have transferred all the statements of one BB( header of one loop)
to another BB. After that I have updated SSA information too.
But I get this error-

 definition in block 6 does not dominate use in block 3
for SSA_NAME: i_25 in statement:
# VUSE 
D.1189_10 = a[i_25];
loop.c:8: internal compiler error: verify_ssa failed

Can any one please tell me what is the problem?

Thanks,
Sandeep.


Re: A query regarding the implementation of pragmas

2008-04-14 Thread Andrew Haley
Mohamed Shafi wrote:

> For a function call will I be able to implement long call/short call
> for the same function at different locations?
> Say fun1 calls bar and fun2 calls bar. I want short-call to be
> generated for bar in fun1 and long-call to be generated in fun2.
> Is it possible to implement this in the back-end using pragmas?

I'm not at all sure it's appropriate to do this in gcc at all.  Of course
you can do this with a target-specific attribute, but wouldn't this
best be left to the linker?

Andrew.
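For reference, the existing ARM support mentioned earlier in the thread is per *declaration* rather than per call site, which is precisely why the same function is hard to treat differently in fun1 and fun2. A sketch of what ARM GCC already accepts (`bar_far`/`bar_near` are illustrative names; the attributes are ARM-specific and are ignored with a warning on other targets):

```c
/* ARM-specific sketch: the call type is chosen per declaration, not
   per call site.  #pragma long_calls / #pragma long_calls_off can
   likewise toggle the default for a region of declarations. */
void bar_far(void)  __attribute__((long_call));   /* call via loaded address */
void bar_near(void) __attribute__((short_call));  /* direct BL instruction */
```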


A query regarding the implementation of pragmas

2008-04-14 Thread Mohamed Shafi
Hello all,

For a function call will I be able to implement long call/short call
for the same function at different locations?
Say fun1 calls bar and fun2 calls bar. I want short-call to be
generated for bar in fun1 and long-call to be generated in fun2.
Is it possible to implement this in the back-end using pragmas?

Regards,
Shafi.


Problem compiling apache 2.0.63, libiconv.so: wrong ELF class: ELFCLASS32

2008-04-14 Thread Persson Håkan
Hello

I'm having a problem when making apache 2.0.63.

I'm using the configure command like this:

CC="gcc" CFLAGS=" -O2 -mcpu=v9 -m64" CPPFLAGS=" -m64 -I/usr/sfw/include" 
LDFLAGS="-m64" \
./configure --prefix=/usr/apache2 --enable-mods-shared=most --with-mpm=prefork

And the problem I got looks like this:

/usr/tmp/httpd-2.0.63/srclib/apr/libtool --silent --mode=link gcc -O2 
-mcpu=v9 -m64  -m64 -I/usr/sfw/include -DSOLARIS2=10 -D_POSIX_PTHREAD_SEMANTICS 
-D_REENTRANT -DAP_HAVE_DESIGNATED_INITIALIZER -m64 -I/usr/sfw/include  
-I/usr/tmp/httpd-2.0.63/srclib/apr/include 
-I/usr/tmp/httpd-2.0.63/srclib/apr-util/include 
-I/usr/tmp/httpd-2.0.63/srclib/apr-util/xml/expat/lib -I. 
-I/usr/tmp/httpd-2.0.63/os/unix -I/usr/tmp/httpd-2.0.63/server/mpm/prefork 
-I/usr/tmp/httpd-2.0.63/modules/http -I/usr/tmp/httpd-2.0.63/modules/filters 
-I/usr/tmp/httpd-2.0.63/modules/proxy -I/usr/tmp/httpd-2.0.63/include 
-I/usr/tmp/httpd-2.0.63/modules/generators 
-I/usr/tmp/httpd-2.0.63/modules/dav/main -export-dynamic 
-L/usr/tmp/httpd-2.0.63/srclib/apr-util/xml/expat/lib  -m64 -o htpasswd  
htpasswd.lo /usr/tmp/httpd-2.0.63/srclib/pcre/libpcre.la 
/usr/tmp/httpd-2.0.63/srclib/apr-util/libaprutil-0.la 
/usr/tmp/httpd-2.0.63/srclib/apr-util/xml/expat/lib/libexpat.la -liconv 
/usr/tmp/httpd-2.0.63/srclib/apr/libapr-0.la -lsendfile -lrt -lm -lsocket -lnsl 
-lresolv -lpthread
ld: fatal: file /usr/local/lib/libiconv.so: wrong ELF class: ELFCLASS32
ld: fatal: File processing errors. No output written to .libs/htpasswd
collect2: ld returned 1 exit status
make[2]: *** [htpasswd] Error 1
make[2]: Leaving directory `/usr/tmp/httpd-2.0.63/support'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/tmp/httpd-2.0.63/support'
make: *** [all-recursive] Error 1


Any ideas why gcc does not find the 64-bit variant of libiconv.so?

file /usr/local/lib.64/libiconv.so
/usr/local/lib.64/libiconv.so:  ELF 64-bit MSB dynamic lib SPARCV9 Version 1, 
dynamically linked, not stripped
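Assuming the 64-bit copy really is the one shown above in /usr/local/lib.64, one common fix on Solaris (a sketch, with all paths taken from this message) is to tell the link editor to search that directory explicitly:

```sh
# Add the 64-bit directory to the link path (-L) and runpath (-R),
# so ld resolves -liconv to /usr/local/lib.64/libiconv.so instead of
# the 32-bit library in /usr/local/lib.
CC="gcc" CFLAGS=" -O2 -mcpu=v9 -m64" CPPFLAGS=" -m64 -I/usr/sfw/include" \
LDFLAGS="-m64 -L/usr/local/lib.64 -R/usr/local/lib.64" \
./configure --prefix=/usr/apache2 --enable-mods-shared=most --with-mpm=prefork
```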





Best regards
Håkan Persson