Re: Signed int overflow behaviour in the security context

2007-01-23 Thread James Dennett
Richard Kenner wrote:
>> Oh, and teaching all of the programmers out there all the subtle nuances
>> of C and trying to get them to write proper code: good luck.  That
>> simply won't happen.
> 
> If people who write security-critical code in a programming language
> can't take time to learn the details of that language relevant to
> security issues (such as overflow handling),

Many of them can't, or don't...

> I think our society is in
> a great deal of trouble.

Your conclusion may well be correct.  The question for this group is:
what's the best that GCC can do to serve the community/society?

-- James



Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


Message written on 2007-01-24 at 04:32 by Andrew Pinski:



It's "too good" to be usable. The time required for a full test suite
run can be measured by days not hours.


Days, only for slow machines.  For our PS3 toolchain (which is really
two sperate compilers), it takes 6 hours to run the testsuite, this
is doing one target with -fPIC.  So I don't see how you can say it
takes days.


Quantitatively:

gcc/testsuite dalecki$ find ./ -name "*.[ch]" | wc
    6644    6644  213514
ejh216:~/gcc-osx/gcc/testsuite dalecki$ find ./ -name "*.[ch]" -exec cat {} \; | wc
  254741 1072431 6891636

That's just about a quarter million lines of code to process, and you
think the infrastructure around it isn't crap on the order of 100?
Well... since one "can drive a horse dead only once", the whole argument
could actually stop here.



No, not really, it took me a day max to get a spu-elf cross compiler
building and running with newlib and all.


Building and running fine, but testing??? And I guess of course that
it wasn't a true cross, since the SPUs are actually integrated into the
same OS image as the main CPU for that particular target.


My favorite tactic to decrease the number of
bugs is to set up a unit test framework for your code base (so you can
test changes to individual functions without having to run the whole
compiler), and to strongly encourage patches to be accompanied by unit
tests.


That's basically a pipe dream with the auto*-based build system.


Actually the issues here are entirely unrelated to auto* and unit
test frameworks.


So what do the words "full bootstrap/testing" mean, which you hear when
providing any kind of tiny fix? What about the involvement of those
utilities through zillions of command line defines and embedded shell
scripting for code generation on the ACTUAL code which makes up the gcc
executables? Coverage? Unit testing?  How?!
Heck, even just a full reliable symbol index for an editor isn't easy
to come by...

Or are you just going to permute all possible configure options?


The real reason why toplevel libgcc took years to come
by is because nobody cared enough about libgcc to do any kind of  
clean up.


Because there are actually not that many people who love to delve
inside the whole .m4, .guess and so on... Actually it's not that seldom
that people are incapable of reproducing the currently present build
setup.


The attitude has
changed recently (when I say recent I mean the last 3-4 years) to
all of these problems, and in fact all major issues with GCC's build
and internals are changing for the better.


And now please compare this with the triviality of relocating source  
files in:


1. The FreeBSD bsdmake structure. (Which is pretty portable BTW.)
2. The Solaris source tree.
3. A Visual Studio project.
4. An Xcode project.


PS auto* is not to blame for GCC's problems, GCC is older than auto*.


It sure isn't the biggest problem by far. However it's the upfront one,
if you start to seriously look into GCC. Remember - I'm the guy who
compiles the whole of GCC with C++, so it should be clear where I think
the real issues are.


Re: Level to do such a modification...

2007-01-23 Thread Ben Elliston
> I am working on gcc 4.0.0. I want to use gcc to intercept each call to
> read, and taint the data read in. For example:
> transform
>   read(fd, buf, size)
> to
>   read(fd, buf, size)
>   if(is_socket(fd))
>   taint(buf, size)

> So, what is the best suitable level to do this modification in gcc? My
> own thought is in finish_function, before calling c_genericize,as I
> discovered that in c front-end, there's no GENERIC tree... In
> c_genericize, it directly calls gimplify_function_tree.

You don't need to modify the compiler.  Just write your own read
function that taints the data and wrap it around calls to read using
ld's --wrap option.  See the linker documentation for more details.
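
For instance, a minimal sketch of such a wrapper (assuming the
is_socket() and taint() routines from your example; with GNU ld,
linking with -Wl,--wrap=read routes every call to read() through
__wrap_read(), and __real_read() names the real function):

#include <unistd.h>

extern ssize_t __real_read (int fd, void *buf, size_t count);
extern int is_socket (int fd);               /* your routine */
extern void taint (void *buf, size_t size);  /* your routine */

ssize_t
__wrap_read (int fd, void *buf, size_t count)
{
  /* Call the real read, then taint whatever actually arrived.  */
  ssize_t n = __real_read (fd, buf, count);
  if (n > 0 && is_socket (fd))
    taint (buf, (size_t) n);
  return n;
}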

Cheers, Ben




Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Brooks Moses

Marcin Dalecki wrote:

A change trivial by nature, like the
top-level build of libgcc, actually took years to come by.


I'm not sure how much that's inherently evidence that it was 
inappropriately difficult to do, though.


For example, the quite trivial change of having "make pdf" support for 
creating decently usable PDF documentation also took quite a number of 
years to come by, counting from the first messages I found in the list 
archives that suggested that it would be a good idea.  However, I think 
I spent less than eight hours total on implementing it, testing it, 
getting it through the necessary red tape for approval, and committing 
the final patch.  There weren't any technical difficulties in the way at 
all.


- Brooks



Re: Level to do such a modification...

2007-01-23 Thread 吴曦

Besides that, as far as I know, Valgrind cannot run on Itanium... but
I am now working on it :-(

2007/1/24, Nicholas Nethercote <[EMAIL PROTECTED]>:

On Wed, 24 Jan 2007, 吴曦 wrote:

> I know Valgrind; it is an emulator, but we are restricted not to use
> an emulator. :-(

Well, for some definition of "emulator".

Nick



Re: Level to do such a modification...

2007-01-23 Thread 吴曦

Anyway, the program is supervised... would you mind giving some advice
on the compiler-based approach? After recompilation, I could finish
this modification.

2007/1/24, Nicholas Nethercote <[EMAIL PROTECTED]>:

On Wed, 24 Jan 2007, 吴曦 wrote:

> I know Valgrind; it is an emulator, but we are restricted not to use
> an emulator. :-(

Well, for some definition of "emulator".

Nick



Re: char should be signed by default

2007-01-23 Thread Andrew Pinski
On Tue, 2007-01-23 at 23:19 -0600, [EMAIL PROTECTED] wrote:
> GCC should treat plain char in the same fashion on all types of machines
> (by default).

No, no, no.  It is up to the ABI what char is.

> The ISO C standard leaves it up to the implementation whether a char
> declared plain char is signed or not. This in effect creates two
> alternative dialects of C.

So ... char is a separate type from unsigned char and signed char in C and
C++ anyway.


> The preferred dialect makes plain char signed, because this is simplest.
> Since int is the same as signed int, short is the same as signed short,
> etc., it is cleanest for char to be the same.

No, no.

> Some computer manufacturers have published Application Binary Interface
> standards which specify that plain char should be unsigned. It is a
> mistake, however, to say anything about this issue in an ABI. This is
> because the handling of plain char distinguishes two dialects of C. Both
> dialects are meaningful on every type of machine. Whether a particular
> object file was compiled using signed char or unsigned is of no concern
> to other object files, even if they access the same chars in the same
> data structures.

No it is not.  The reason is that, for example on PPC, there are no
sign-extended byte loads, only unsigned ones.  There is a reason why
char is unsigned on those targets.

> Many users appreciate the GNU C compiler because it provides an
> environment that is uniform across machines. These users would be
> inconvenienced if the compiler treated plain char differently on certain
> machines.

So don't depend on this behavior.  There are plenty of implementation
details.

> There are some arguments for making char unsigned by default on all
> machines. If, for example, this becomes a universal de facto standard,
> it would make sense for GCC to go along with it. This is something to be
> considered in the future.

No, no, no.  GCC cannot change right now, as it would change the ABI.
If you don't believe me, try this program with/without -fsigned-char
and -funsigned-char:

static inline int f(int a)
{
  return a==0xFF;
}

int g(char *b)
{
  return f(*b);
}

---
You will see that with -fsigned-char, f is folded to 0 (a signed char
can never equal 0xFF); with -funsigned-char, we compare
*(unsigned char*)b against 0xFF.

-- Pinski



char should be signed by default

2007-01-23 Thread devils_advocate
GCC should treat plain char in the same fashion on all types of machines
(by default).

The ISO C standard leaves it up to the implementation whether a char
declared plain char is signed or not. This in effect creates two
alternative dialects of C.

The GNU C compiler supports both dialects; you can specify the signed
dialect with -fsigned-char and the unsigned dialect with
-funsigned-char. However, this leaves open the question of which dialect
to use by default.

The preferred dialect makes plain char signed, because this is simplest.
Since int is the same as signed int, short is the same as signed short,
etc., it is cleanest for char to be the same.

Some computer manufacturers have published Application Binary Interface
standards which specify that plain char should be unsigned. It is a
mistake, however, to say anything about this issue in an ABI. This is
because the handling of plain char distinguishes two dialects of C. Both
dialects are meaningful on every type of machine. Whether a particular
object file was compiled using signed char or unsigned is of no concern
to other object files, even if they access the same chars in the same
data structures.

A given program is written in one or the other of these two dialects.
The program stands a chance to work on most any machine if it is
compiled with the proper dialect. It is unlikely to work at all if
compiled with the wrong dialect.

Many users appreciate the GNU C compiler because it provides an
environment that is uniform across machines. These users would be
inconvenienced if the compiler treated plain char differently on certain
machines.

Occasionally users write programs intended only for a particular machine
type. On these occasions, the users would benefit if the GNU C compiler
were to support by default the same dialect as the other compilers on
that machine. But such applications are rare. And users writing a
program to run on more than one type of machine cannot possibly benefit
from this kind of compatibility.

There are some arguments for making char unsigned by default on all
machines. If, for example, this becomes a universal de facto standard,
it would make sense for GCC to go along with it. This is something to be
considered in the future.

(Of course, users strongly concerned about portability should indicate
explicitly whether each char is signed or not. In this way, they write
programs which have the same meaning in both C dialects.)



Re: Level to do such a modification...

2007-01-23 Thread Nicholas Nethercote

On Wed, 24 Jan 2007, 吴曦 wrote:


I know Valgrind; it is an emulator, but we are restricted not to use
an emulator. :-(


Well, for some definition of "emulator".

Nick

Re: Level to do such a modification...

2007-01-23 Thread 吴曦

I know Valgrind; it is an emulator, but we are restricted not to use
an emulator. :-(

2007/1/24, Nicholas Nethercote <[EMAIL PROTECTED]>:

On Wed, 24 Jan 2007, 吴曦 wrote:

> I am working on gcc 4.0.0. I want to use gcc to intercept each call to
> read, and taint the data read in. For example:
> transform
>   read(fd, buf, size)
> to
>   read(fd, buf, size)
>   if(is_socket(fd))
>   taint(buf, size)
> So, what is the best suitable level to do this modification in gcc? My
> own thought is in finish_function, before calling c_genericize,as I
> discovered that in c front-end, there's no GENERIC tree... In
> c_genericize, it directly calls gimplify_function_tree.

Are you sure you want to do this in GCC?  You might find it easier to use a
dynamic binary instrumentation framework such as Valgrind or Pin to do this
kind of thing.

Nick



Re: Level to do such a modification...

2007-01-23 Thread Nicholas Nethercote

On Wed, 24 Jan 2007, 吴曦 wrote:


I am working on gcc 4.0.0. I want to use gcc to intercept each call to
read, and taint the data read in. For example:
transform
  read(fd, buf, size)
to
  read(fd, buf, size)
  if(is_socket(fd))
    taint(buf, size)
So, what is the best suitable level to do this modification in gcc? My
own thought is in finish_function, before calling c_genericize,as I
discovered that in c front-end, there's no GENERIC tree... In
c_genericize, it directly calls gimplify_function_tree.


Are you sure you want to do this in GCC?  You might find it easier to use a 
dynamic binary instrumentation framework such as Valgrind or Pin to do this 
kind of thing.


Nick

Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Andrew Pinski
> 
> On Tue, 23 Jan 2007 17:54:10 -0500, Diego Novillo <[EMAIL PROTECTED]> said:
> 
> > So, I was doing some archeology on past releases and we seem to be
> > getting into longer release cycles.
> 
> Interesting.
> 
> I'm a GCC observer, not a participant, but here are some thoughts:
> 
> As far as I can tell, it looks to me like there's a vicious cycle
> going on.  Picking an arbitrary starting point:
> 

Let me bring up another point:

0) bugs go unnoticed for a couple of releases and then become part of
the release criteria.

This, 0), is what really causes stage 3 to become long, rather than what
you mention in 1).  Most of the bugs introduced during stage 1 that
are caught during stage 3 or earlier are fixed more quickly than the ones
which were caught after the .2 release was done.

> Now, the good news is that this cycle can be a virtuous cycle rather
> than a vicious cycle: if you can lower one of these measurements
> (length of stage 3, size of branches, size of patches, number of
> bugs), then the other measurements will start going down.  "All" you
> have to do is find a way to mute one of the links somehow, focus on
> the measurement at the end of that link, and then things will start
> getting better.

I don't think this will work, because more bugs will just be found the
longer stage 3 is.


> It's not obvious what the best way is to do that, but here are some
> ideas.  Taking the links one by one:
> 
> 1: Either fix bugs faster, or release with more bugs.
> 
> 2: Artificially shorten the lifespan of development branches somehow,
> so that big branches don't appear during stage 3.

We already did this for 4.2 which is one reason why 4.2 has almost no
new features and why a lot of patches were left out.

> 3: Throttle the size of patches: don't let people do gigantic
> merges, no matter the size of the branch.

This is wrong, as the gigantic merges are needed in some cases to
be able to change the infrastructure of GCC.  Good recent (and upcoming)
examples are mem-ssa, GIMPLE_MODIFY_STMT, and dataflow.  None of these
really could be done by simple little patches.  Tree-ssa was another example.

> 4: Don't have buggy code in your branches: improve code quality of
> development branches somehow.

Some of the bigger branches actually have requirements for merging.
Both the tree-ssa and dataflow branches have a couple of requirements,
but this will not find all bugs/issues.  Remember GCC runs on around 30
processors (not counting variants of each one, or even OSs).  Testing
on all targets would take a month, plus fixes, and then making sure you
don't break any of them.



> For link 3, you'd change the rules to alternate between stage 1 and
> stage 3 on a fast basis (no stage 2 would be necessary): do a small
> merge (of a portion of a branch, if necessary), shake out bugs, and
> repeat.  Concretely, you could have two rules in GCC's development
> process:

Even though I proposed this before, I don't think it will help unless
people are testing GCC with lots of code daily or right after a big
merge.  

> * Patches more than a certain size aren't allowed.

This won't work, see above.

> * No patches are allowed if there are more than X release-blocking
>   bugs outstanding.  (For some small value of X; 0 is one
>   possibility.)

I don't think this will work out because you are punishing all developers
while one developer gets his/her act together.  In some cases, they
could have just had some bad news about their mother.


> With this, the trunk is almost always in a releasable state; you can
> release almost whenever you want to, since you'd basically be at the
> end of stage 3 every week, or every day, or every hour.  Moving to
> these rules would be painful, but once you start making progress, I
> bet you'd find that, for example, the pressures leading to long-lived
> branches will diminish.  (Not go away, but diminish.)

I don't think so, because most of the regressions which are being reported
are actually reported after even the .1 release.


> For 4, you should probably spend some time figuring out why bugs are
> being introduced into the code in the first place.  Is test coverage
> not good enough?  If so, why - do people not write enough tests, is it
> hard to write good enough tests, something else?  Is the review
> process inadequate?  If so, why: are rules insufficiently stringent,
> are reviewers sloppy, are there not enough reviewers, are patches too
> hard to review?

In some cases it is because test coverage is not good enough in general
(C++, for instance).  In other cases, you just did not think about a
corner case.  In still other cases, you exposed a latent bug in another
part of the code which did not account for a corner case.

> My guess is that most or all of those are factors, but some are more
> important than others.  My favorite tactic to decrease the number of
> bugs is to set up a unit test framework for your code base (so you can
> test changes to individual functions without having to run the whole
> compiler), and to strongly encourage patches to be accompanied by unit
> tests.

Level to do such a modification...

2007-01-23 Thread 吴曦

Hi,
I am working on gcc 4.0.0. I want to use gcc to intercept each call to
read, and taint the data read in. For example:
transform
  read(fd, buf, size)
to
  read(fd, buf, size)
  if(is_socket(fd))
    taint(buf, size)
So, what is the best suitable level to do this modification in gcc? My
own thought is in finish_function, before calling c_genericize,as I
discovered that in c front-end, there's no GENERIC tree... In
c_genericize, it directly calls gimplify_function_tree.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Andrew Pinski
> 
> 
> Message written on 2007-01-24 at 02:30 by David Carlton:
> 
> > For 4, you should probably spend some time figuring out why bugs are
> > being introduced into the code in the first place.  Is test coverage
> > not good enough?

The test coverage is not good for C++, while it is great for C and most
middle-end issues.  Most of the regressions being reported are not
really in any real code, but in made-up examples by a select few GCC
developers.

> It's "too good" to be usable. The time required for a full test suite
> run can be measured by days not hours.

Days, only for slow machines.  For our PS3 toolchain (which is really
two sperate compilers), it takes 6 hours to run the testsuite, this
is doing one target with -fPIC.  So I don't see how you can say it
takes days.

>  The main reason is plain and
> simple the use of an inadequate build infrastructure, and not the pure
> size of code compiled for coverage. Those things get completely
> ridiculous for cross build targets.

No, not really, it took me a day max to get a spu-elf cross compiler
building and running with newlib and all.


> No. The problems are entirely technical in nature. It's not a pure human
> resources management issue.

Actually they are political reasons rather than technical ones.  Human
resource management is actually the biggest issue, because most
developers have a day job working on GCC, not supporting the FSF
mainline.

> > My favorite tactic to decrease the number of
> > bugs is to set up a unit test framework for your code base (so you can
> > test changes to individual functions without having to run the whole
> > compiler), and to strongly encourage patches to be accompanied by unit
> > tests.
> 
> That's basically a pipe dream with the auto*-based build system.

Actually the issues here are entirely unrelated to auto* and unit test frameworks.

> It's even
> not trivial to identify dead code...

It is hard to identify dead code in any program with as much history as
GCC, which is on its 20th birthday this year.  What code out there has
lasted that long and does not have places where dead code is hard to
identify?

> A change trivial by nature, like the
> top-level build of libgcc, actually took years to come by.

Unrelated to any of the above issues.  Once the patch was written, there
were only small changes to the code to have toplevel libgcc work on
weirder targets like Darwin or Netware.  Nothing special was needed
really.  The real reason why toplevel libgcc took years to come by is
because nobody cared enough about libgcc to do any kind of clean up.  The
attitude has changed recently (when I say recent I mean the last 3-4
years) to all of these problems, and in fact all major issues with GCC's
build and internals are changing for the better.

-- Pinski

PS auto* is not to blame for GCC's problems, GCC is older than auto*.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


Message written on 2007-01-24 at 02:30 by David Carlton:


For 4, you should probably spend some time figuring out why bugs are
being introduced into the code in the first place.  Is test coverage
not good enough?


It's "too good" to be usable. The time required for a full test suite
run can be measured by days not hours. The main reason is plain and
simple the use of an inadequate build infrastructure and not the pure
size of code compiled for coverage. Those things get completely  
ridiculous

for cross build targets.


If so, why - do people not write enough tests, is it
hard to write good enough tests, something else?  Is the review
process inadequate?  If so, why: are rules insufficiently stringent,
are reviewers sloppy, are there not enough reviewers, are patches too
hard to review?

My guess is that most or all of those are factors, but some are more
important than others.


No. The problems are entirely technical in nature. It's not a pure human
resources management issue.


My favorite tactic to decrease the number of
bugs is to set up a unit test framework for your code base (so you can
test changes to individual functions without having to run the whole
compiler), and to strongly encourage patches to be accompanied by unit
tests.


That's basically a pipe dream with the auto*-based build system.
It's even not trivial to identify dead code... A change trivial by
nature, like the top-level build of libgcc, actually took years to
come by.


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Richard Kenner
> Oh, and teaching all of the programmers out there all the subtle nuances
> of C and trying to get them to write proper code: good luck.  That
> simply won't happen.

If people who write security-critical code in a programming language
can't take time to learn the details of that language relevant to
security issues (such as overflow handling), I think our society is in
a great deal of trouble.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread David Carlton
On Tue, 23 Jan 2007 17:54:10 -0500, Diego Novillo <[EMAIL PROTECTED]> said:

> So, I was doing some archeology on past releases and we seem to be
> getting into longer release cycles.

Interesting.

I'm a GCC observer, not a participant, but here are some thoughts:

As far as I can tell, it looks to me like there's a vicious cycle
going on.  Picking an arbitrary starting point:

1) Because lots of bugs are introduced during stage 1 (and stage 2),
   stage 3 takes a long time.

2) Because stage 3 takes a long time, development branches are
   long-lived.  (After all, development branches are the only way to
   do work during stage 3.)

3) Because development branches are long-lived, the stage 1 merges
   involve a lot of code.

4) Because the stage 1 merges involve a lot of code, lots of bugs are
   introduced during stage 1.  (After all, code changes come with
   bugs, and large code changes come with lots of bugs.)

1) Because lots of bugs are introduced during stage 1, stage 3 takes a
   long time.


Now, the good news is that this cycle can be a virtuous cycle rather
than a vicious cycle: if you can lower one of these measurements
(length of stage 3, size of branches, size of patches, number of
bugs), then the other measurements will start going down.  "All" you
have to do is find a way to mute one of the links somehow, focus on
the measurement at the end of that link, and then things will start
getting better.

It's not obvious what the best way is to do that, but here are some
ideas.  Taking the links one by one:

1: Either fix bugs faster, or release with more bugs.

2: Artificially shorten the lifespan of development branches somehow,
so that big branches don't appear during stage 3.

3: Throttle the size of patches: don't let people do gigantic
merges, no matter the size of the branch.

4: Don't have buggy code in your branches: improve code quality of
development branches somehow.


I'm not optimistic about breaking either the link 1 or link 2.  The
first alternative in link 1 is hard (especially without a strong
social contract), and the second alternative in link 1 is, to say the
least, distasteful.  Link 2 is similarly hard to fix without a strong
social contract.  So I would focus on either link 3 or link 4.


For link 3, you'd change the rules to alternate between stage 1 and
stage 3 on a fast basis (no stage 2 would be necessary): do a small
merge (of a portion of a branch, if necessary), shake out bugs, and
repeat.  Concretely, you could have two rules in GCC's development
process:

* Patches more than a certain size aren't allowed.

* No patches are allowed if there are more than X release-blocking
  bugs outstanding.  (For some small value of X; 0 is one
  possibility.)

With this, the trunk is almost always in a releasable state; you can
release almost whenever you want to, since you'd basically be at the
end of stage 3 every week, or every day, or every hour.  Moving to
these rules would be painful, but once you start making progress, I
bet you'd find that, for example, the pressures leading to long-lived
branches will diminish.  (Not go away, but diminish.)


For 4, you should probably spend some time figuring out why bugs are
being introduced into the code in the first place.  Is test coverage
not good enough?  If so, why - do people not write enough tests, is it
hard to write good enough tests, something else?  Is the review
process inadequate?  If so, why: are rules insufficiently stringent,
are reviewers sloppy, are there not enough reviewers, are patches too
hard to review?

My guess is that most or all of those are factors, but some are more
important than others.  My favorite tactic to decrease the number of
bugs is to set up a unit test framework for your code base (so you can
test changes to individual functions without having to run the whole
compiler), and to strongly encourage patches to be accompanied by unit
tests.


And, of course, you could attack both links 3 and 4 at once.


David Carlton
[EMAIL PROTECTED]


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


Message written on 2007-01-24 at 01:48 by David Daney:

I missed the discussion on IRC, but neither of those front-ends are  
release blockers.


I cannot speak for ADA, but I am not aware that the Java front-end  
has caused any release delays recently.  I am sure you will correct  
me if I have missed something.


What's blocking is not the formal process per se but instead the
technical side of things. And from a technical point of view both
seriously add impedance to the overall package.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread David Daney

Marcin Dalecki wrote:


Message written on 2007-01-23 at 23:54 by Diego Novillo:



So, I was doing some archeology on past releases and we seem to be 
getting into longer release cycles.  With 4.2 we have already crossed 
the 1 year barrier.


For 4.3 we have already added quite a bit of infrastructure that is 
all good on paper but still needs some amount of TLC.


There was some discussion on IRC that I would like to move to the 
mailing list so that we get a wider discussion.  There have been thoughts 
about skipping 4.2 completely, or going to an extended Stage 3, etc.


Thoughts?


Just forget ADA and Java in mainstream. Both of them are seriously 
impeding casual contributions.


I missed the discussion on IRC, but neither of those front-ends are 
release blockers.


I cannot speak for ADA, but I am not aware that the Java front-end has 
caused any release delays recently.  I am sure you will correct me if I 
have missed something.


David Daney


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Richard Kenner
> Yes, absolutely.  There is a difference between well-defined and
> understood semantics on one hand, and undefined and probably dangerous
> behaviour on the other hand.  It's the difference between security
> audits of C software being hard and completely hopeless.

I disagree.  Code written with security in mind should not cause overflows.
The audit should check for absence of overflows.  What would happen if
the overflow were to occur seems irrelevant to me from an audit perspective.

> To be more precise, the LIA-1 definition is the one people have burned
> deeply into their neurons.  It's the one that should be used by default.

Perhaps.  Perhaps not.  But when one is writing security- or safety-critical
software, one usually uses a subset of the language, and it would seem to
me that the subset used should certainly forbid overflows.  In that
case, this doesn't matter.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Joe Buck
On Wed, Jan 24, 2007 at 12:55:29AM +0100, Steven Bosscher wrote:
> On 1/23/07, Diego Novillo <[EMAIL PROTECTED]> wrote:
> >
> >So, I was doing some archeology on past releases and we seem to be
> >getting into longer release cycles.  With 4.2 we have already crossed
> >the 1 year barrier.
> 
> Heh.
> 
> Maybe part of the problem here is that the release manager isn't very
> actively pursuing a release. The latest GCC 4.2 status report is from
> October 17, 2006, according to the web site.  That is already more
> than 100 days ago.

Mark's focusing on 4.1.2 at the moment; I believe he plans to shift focus
to 4.2 once that's out.  I think that this is appropriate.



Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Marcin Dalecki


Message written on 2007-01-23 at 23:54 by Diego Novillo:



So, I was doing some archeology on past releases and we seem to be  
getting into longer release cycles.  With 4.2 we have already  
crossed the 1 year barrier.


For 4.3 we have already added quite a bit of infrastructure that is  
all good on paper but still needs some amount of TLC.


There was some discussion on IRC that I would like to move to the  
mailing list so that we get a wider discussion.  There have been
thoughts about skipping 4.2 completely, or going to an extended  
Stage 3, etc.


Thoughts?


Just forget ADA and Java in mainstream. Both of them are seriously  
impeding casual contributions.
The build setup through autoconf/automake/autogen/m4/. has  
problems in this area as well.


Re: [RFC] Our release cycles are getting longer

2007-01-23 Thread Steven Bosscher

On 1/23/07, Diego Novillo <[EMAIL PROTECTED]> wrote:


So, I was doing some archeology on past releases and we seem to be
getting into longer release cycles.  With 4.2 we have already crossed
the 1 year barrier.


Heh.

Maybe part of the problem here is that the release manager isn't very
actively pursuing a release. The latest GCC 4.2 status report is from
October 17, 2006, according to the web site.  That is already more
than 100 days ago.



For 4.3 we have already added quite a bit of infrastructure that is all
good on paper but still needs some amount of TLC.


And the entire backend dataflow engine is about to be replaced, too.
GCC 4.3 is probably going to be the most experimental release since
GCC 4.0...



There was some discussion on IRC that I would like to move to the
mailing list so that we get a wider discussion.  There have been thoughts
about skipping 4.2 completely, or going to an extended Stage 3, etc.


Has there ever been a discussion about releasing "on demand"? Almost
all recent Linux and BSD distributions appear to converge on GCC 4.1
as the system compiler, so maybe there just isn't a "market" for GCC
4.2.

I don't see any point in an extended Stage 3.  People work on what
they care about, and we see time and again that developers just work
on branches instead of on bug fixes for the trunk when it is in Stage
3.

IMHO the real issue with the GCC release plan, is that there is no way
for the RM to make people fix bugs. I know the volunteer blah-blah,
but at the end of the day many bugs are caused by the people who work
on new projects on a branch when the trunk is in Stage 3.

Maybe there should just be some rules about accepting projects for the
next release cycle. Like, folks with many bugs assigned to them, or in
their area of expertise, are not allowed to merge a branch or big
patches into the trunk during Stage 1.

Not that I *really* believe that would work...  But skipping releases
is IMHO not really a better idea.

Gr.
Steven


[RFC] Our release cycles are getting longer

2007-01-23 Thread Diego Novillo


So, I was doing some archeology on past releases and we seem to be 
getting into longer release cycles.  With 4.2 we have already crossed 
the 1 year barrier.


For 4.3 we have already added quite a bit of infrastructure that is all 
good on paper but still needs some amount of TLC.


There was some discussion on IRC that I would like to move to the 
mailing list so that we get a wider discussion.  There have been thoughts 
about skipping 4.2 completely, or going to an extended Stage 3, etc.


Thoughts?


release-cycle.pdf
Description: Adobe PDF document


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Andreas Bogk
Ian Lance Taylor wrote:
>> You have just seen somebody who can be considered an expert in 
>> matters of writing C software come up with a check that looks 
>> correct, but is broken under current gcc semantics.  That should 
>> make you think.
> I'm not entirely unsympathetic to your arguments, but, assuming you 
> are referring to the code I wrote three messages up, this comment is
>  unfair.  The code I wrote was correct and unbroken.

It might not have been broken in a world where you would have written a
requirements specification to exclude INT_MAX as a legal value for
vp->len, plus tests in the upper layer to enforce that, plus an audit
team to verify that anybody who creates those structs takes care not to
send you an INT_MAX.

In the absence of that, I would believe that reading code like:

struct s { int len; char* p; };

inline char
bar (struct s *sp, int n)
{
  if (n < 0)
    abort ();
  if (n > sp->len)
    abort ();
  return sp->p[n];
}

would make you think that calling bar is safe with any n, as long as
sp->p points to some allocated memory of sp->len bytes.  In fact, if
anybody ever did security audits of code, and wouldn't let the above
code pass as "prevents bad things from happening for invalid n, as long
as vp is sound", please raise your hand now.

> You suggest that it is broken because an attacker could take control 
> of vp->len and set it to INT_MAX.  But that just means that code in 
> some other part of the program must check user input and prevent that
>  from occurring.
> In fact, given the proposed attack of extracting data from memory,
> INT_MAX is a red herring; any value larger than the actual memory
> buffer would suffice to read memory which should not be accessible.

Oh, I might not have sufficiently gone into detail about what I am
meaning by "controlling the value."  I am of course assuming that the
input layer does proper input validation.  Controlling vp->len means
nothing more than sending enough input to the system that I know will be
managed by a struct v.  Take for instance some protocol that expects me
to send the length of some datum, and then the announced number of bytes
after that.  ASN.1 BER encoding works like this, for instance.

I'll just send INT_MAX data to the application, and vp->len will be INT_MAX.

> I think a better way to describe your argument is that the compiler 
> can remove a redundant test which would otherwise be part of a 
> defense in depth.  That is true.  The thing is, most people want the 
> compiler to remove redundant comparisons; most people don't want 
> their code to have defense in depth, they want it to have just one 
> layer of defense, because that is what will run fastest.  We gcc 
> developers get many more bug reports about missed optimizations than 
> we do about confusing language conformant code generation.

No. My argument is that the test that is being removed is not redundant:
it simply *is* your single layer of defense.  Imagine the network
application sketched around your code snippet: somebody might have
carefully verified that any write access to a struct v maintains the
invariance of that structure, that any read access goes through bar(),
and on top of that that bar() properly checks the bounds of the array.
He's very proud of his security architecture (for a reason, this is way
more than your average C programmer will do), but of course he knows
that his users expect performance, so he'll make the accessor function
inline, and adds no further checks, as he has convinced himself that
they are not needed.

And then he has a bad day, and needs to add some unloved reporting
module, and writes code like:

int
foo (struct s *sp, int n)
{
  int len = sp->len;
  int i;
  int tot = 0;
  for (i = 0; i <= len; ++i)
    tot += bar (sp, i);
  return tot;
}

which happens to trigger an endless loop on len==INT_MAX.  And let's
face it, everybody has bugs in his code.  So he gives the code a second
glance, to see whether there are any security risks lurking there.  He
still doesn't see his bug, but figures, alright, this calls the safe
accessor function, nothing bad will happen.  And starts working on the
next function.

But since he wrote code that's undefined under the C standard, gcc
figures it is ok to break the single line of defense that was in his
code.  I think this is not acceptable.  I need to be able to rely on the
fact that an expression like

if (n < 0) abort();

will actually abort if n is negative to stand any chance of ever writing
reasonably secure code.  Instead of having to find the bugs in a few
critical places, I have to make sure that my whole program doesn't
trigger some unknown behaviour somewhere.

Regarding the number of bug reports you get: everybody understands why
performance is important, so everybody complains about it.  The number
of people who sufficiently understand what the compiler is doing is much
smaller, so fewer people complain about it. That doesn't mean their
arguments are less valid, or

Re: raising minimum version of Flex

2007-01-23 Thread Mark Kettenis
Vaclav Haisman wrote:
> Gerald Pfeifer wrote:
> [...]
> > openSUSE 10.2 now comes with flex 2.5.33, but FreeBSD, for example, still 
> > is at flex 2.5.4.  Just some additional data points...
> FreeBSD has version 2.5.33 as textproc/flex port.

But that will not replace the system flex, so it will require tweaking
environment variables or passing configure options.

OpenBSD also still ships with flex 2.5.4.  That version has been the
de facto standard for years and is by far the most widespread version.
In my experience newer versions of flex are much less stable, and I
think requiring a newer version should not be done lightly.

Mark


Re: RFC: Wextra digest (fixing PR7651)

2007-01-23 Thread Manuel López-Ibáñez

On 23/01/07, Joe Buck <[EMAIL PROTECTED]> wrote:

On Tue, Jan 23, 2007 at 07:52:30PM +, Manuel López-Ibáñez wrote:
> * A base class is not initialized in a derived class' copy constructor.
>
> Proposed: move this warning to -Wuninitialized seems the appropriate
> solution. However, I am afraid that this warning will turn out to be
> too noisy and hard to avoid to be in Wuninitialized (see PR 11159).
> Perhaps a new option -Wuninitialized-base-class enabled by -Wextra
> would be better if that PR cannot be easily fixed.

Yuck.  Until PR 11159 is fixed, we can't move that warning into anything
that is enabled by -Wall.


Agreed. And what about the name, -Wuninitialized-base-class? Is it fitting?


Re: About building conditional expressions

2007-01-23 Thread Ferad Zyulkyarov

Hi,


I've noticed that you've asked a few questions about trees on the
list.  You might want to read a tutorial on trees in GCC; there are a
few kicking around out there.


Sure, I would like to look at any tutorial. I found some, but most of
them were not complete :( I would appreciate it if you could recommend
a tutorial about gcc. For the next few months I will have to do some
modifications for my project in the front-end and back-end.

Thanks a lot

--
Ferad Zyulkyarov


Re: About building conditional expressions

2007-01-23 Thread Tom Tromey
> "Ferad" == Ferad Zyulkyarov <[EMAIL PROTECTED]> writes:

Ferad> build(EQ_EXPR, integer_type_node, left, right);
Ferad> which is left == right

Ferad> But, as I noticed this function "build" is not maintained (used) by
Ferad> gcc any more. Instead build, what else may I use to create a
Ferad> conditional expression node?

I've noticed that you've asked a few questions about trees on the
list.  You might want to read a tutorial on trees in GCC; there are a
few kicking around out there.

Tom



Re: RFC: Wextra digest (fixing PR7651)

2007-01-23 Thread Joe Buck
On Tue, Jan 23, 2007 at 07:52:30PM +, Manuel López-Ibáñez wrote:
> * A base class is not initialized in a derived class' copy constructor.
> 
> Proposed: move this warning to -Wuninitialized seems the appropriate
> solution. However, I am afraid that this warning will turn out to be
> too noisy and hard to avoid to be in Wuninitialized (see PR 11159).
> Perhaps a new option -Wuninitialized-base-class enabled by -Wextra
> would be better if that PR cannot be easily fixed.

Yuck.  Until PR 11159 is fixed, we can't move that warning into anything
that is enabled by -Wall.  



Re: [c++] switch ( enum ) vs. default statment.

2007-01-23 Thread David Nicol

On 1/23/07, Paweł Sikora <[EMAIL PROTECTED]> wrote:

typedef enum { X, Y } E;
int f( E e )
{
switch ( e )
{
case X: return -1;
case Y: return +1;
}


+ throw runtime_error("invalid value got shoehorned into E enum")


}

In this example g++ produces a warning:

e.cpp: In function 'int f(E)':
e.cpp:9: warning: control reaches end of non-void function

Adding a `default' statement to `switch' removes the warning but
in C++ out-of-range values in enums are undefined.


nevertheless, that integer type might get its bits twiddled somehow.


Re: RFC: Wextra digest (fixing PR7651)

2007-01-23 Thread Manuel López-Ibáñez

A summary of what has been proposed so far to clean up Wextra follows.
Please, your feedback is appreciated. And reviewing patches even more
;-)


* Subscripting an array which has been declared register.
* Taking the address of a variable which has been declared register.

Proposed: new option -Waddress-of-register that is enabled by Wextra.
A patch is available here:
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01676.html


* A base class is not initialized in a derived class' copy constructor.

Proposed: move this warning to -Wuninitialized seems the appropriate
solution. However, I am afraid that this warning will turn out to be
too noisy and hard to avoid to be in Wuninitialized (see PR 11159).
Perhaps a new option -Wuninitialized-base-class enabled by -Wextra
would be better if that PR cannot be easily fixed.


* A non-static reference or non-static const member appears in a class
without constructors.

Proposed: move this warning to -Wuninitialized


* Ambiguous virtual bases (virtual base inaccessible due to
ambiguity).

Proposed: move this warning to -Woverloaded-virtual


* An enumerator and a non-enumerator both appear in a conditional
expression.

Proposed: move this warning to (the new) -Wconversion


* A function can return either with or without a value.

This is warned already by Wreturn-type: "'return' with no value, in
function returning non-void" and I wasn't able to come up with a
testcase that is warned by Wextra but not by Wreturn-type.

Proposed: move to Wreturn-type whatever is not there yet.


* An expression-statement or the left-hand side of a comma expression
contains no side effects. For example, an expression such as x[i,j].

This is also warned by Wunused-value. In addition, Wextra enables
Wunused-value but this is not documented (and -Wunused-value is
already enabled by -Wall).

Proposed: Wextra should not enable Wunused-value. Patch:
http://gcc.gnu.org/ml/gcc-patches/2007-01/msg00440.html


* A pointer is compared against integer zero with <, <=, >, or >=.
This is a pedwarn and it can also be enabled by using -pedantic. If
the pointer is the rightmost operand, there is no warning for Wextra
(surely a bug).

Proposed: Fix the bug. a) Enable the warning with -pedantic or
-Wpointer-arith; or b) Enable the warning with -pedantic or its own
option -Wordered-pointer-comparison (which would be enabled by
Wextra).
There is a patch for option (b):
http://gcc.gnu.org/ml/gcc-patches/2007-01/msg00608.html


* In ./gcc/config/sh/symbian.c:158 there is a warning enabled by Wextra
but conditional on Wattributes.

Proposed: drop the test for Wextra.


* The manual page claims that Wextra warns for any of several
floating-point events that often indicate errors, such as overflow,
underflow, loss of precision, etc. I wasn't able to find any instance
of this. I am fairly sure that Wextra doesn't do such thing.

Proposed: remove text from doc/invoke.texi


* In Java, Wextra warns for unreachable bytecode.

Proposed: a) This should be warned by -Wunreachable-code or b) a new
option -Wunreachable-bytecode that is enabled by Wextra.


* An unsigned value is compared against zero with < or >=.

There is also an unconditional warning for expressions that are always
true or false due to the range of types.

Proposal: my proposal is a new option that takes over both warnings
and is enabled by Wextra. A patch is available at
http://gcc.gnu.org/ml/gcc-patches/2007-01/msg01933.html
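
To illustrate, minimal snippets (my own, made up) of the
pointer-comparison item and the unsigned-comparison item above:

int f (unsigned u, char *p)
{
  if (u < 0)    /* unsigned compared against zero with <: always false */
    return 1;
  if (p >= 0)   /* pointer ordered-compared against integer zero */
    return 2;
  return 0;
}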


That is a lot to do! Well, I hope you find some time to make some suggestions.

Cheers,

Manuel.


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Andreas Bogk <[EMAIL PROTECTED]> writes:

> I think a better way to describe your argument is that the compiler
> can remove a redundant test which would otherwise be part of a defense
> in depth.  That is true.  The thing is, most people want the compiler
> to remove redundant comparisons; most people don't want their code to
> have defense in depth, they want it to have just one layer of defense,
> because that is what will run fastest.

Exactly.  I think that Ian's approach (giving us a warning to help track
down problems in real-world code, together with an option to disable the
optimizations) is correct.  Even if the LIA-1 behavior would make GCC
magically better as a compiler for applications that have
not-quite-right security checks, it wouldn't make it better as a
compiler for lots of other applications.

I would rather hope that secure applications would define a set of
library calls for some of these frequently-occurring checks (whether, in
GLIBC, or libiberty, or some new library) so that application
programmers can use them.
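
Such a call might look something like this (a hypothetical sketch, not
an existing GLIBC or libiberty function):

#include <limits.h>
#include <stdlib.h>

/* Add two ints, aborting rather than overflowing.  The test runs
   before the addition, so no undefined signed overflow occurs.  */
static inline int
checked_add (int a, int b)
{
  if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
    abort ();
  return a + b;
}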

(I've also been known to claim that writing secure applications in C may
provide performance advantages, but makes the security part harder.  If
someone handed me a contract to write a secure application, with a
penalty clause for security bugs, I'd sure be looking for a language
that raised exceptions on overflow, bounds-checking failures, etc.)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


[c++] switch ( enum ) vs. default statment.

2007-01-23 Thread Paweł Sikora
Hi,

Please consider following testcase which is a core of PR c++/28236.

typedef enum { X, Y } E;
int f( E e )
{
switch ( e )
{
case X: return -1;
case Y: return +1;
}
}

In this example g++ produces a warning:

e.cpp: In function ‘int f(E)’:
e.cpp:9: warning: control reaches end of non-void function

Adding a `default' statement to `switch' removes the warning, but
in C++ out-of-range values in enums are undefined.
I see no reason to handle any kind of UB (especially this).
IMHO this warning is a bug in the C++ frontend.

Comments are appreciated.

BR,
Paweł.


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Florian Weimer
* Joe Buck:

> You appear to mistakenly believe that wrapping around on overflow is
> a more secure option.  It might be, but it usually is not.  There
> are many CERT security flaws involving integer overflow; the fact
> that they are security bugs has nothing to do with the way gcc
> generates code, as the "wrapv" output is insecure.

These flaws are typically fixed by post-overflow checking.  A more
recent example from PCRE:

| /* Read the minimum value and do a paranoid check: a negative value indicates
| an integer overflow. */
| 
| while ((digitab[*p] & ctype_digit) != 0) min = min * 10 + *p++ - '0';
| if (min < 0 || min > 65535)
|   {
|   *errorcodeptr = ERR5;
|   return p;
|   }

Philip Hazel is quite a diligent programmer, and if he gets it wrong
(and the OpenSSL and Apache developers, who are supposed to do code
review on their own, not relying on random external input), maybe this
should tell us something.
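
(For comparison, a pre-overflow version of the same check; a sketch
assuming min starts out non-negative, which does not rely on -fwrapv
semantics because the test runs before the arithmetic:

while ((digitab[*p] & ctype_digit) != 0)
  {
  int d = *p++ - '0';
  if (min > (65535 - d) / 10)   /* min * 10 + d would exceed 65535 */
    {
    *errorcodeptr = ERR5;
    return p;
    }
  min = min * 10 + d;
  }

The price is a divide and a branch inside the loop, which is exactly
the kind of cost the post-overflow idiom tries to avoid.)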

Of course, it might be possible that the performance gains are worth
reintroducing security bugs into object code (where previously,
testing and perhaps even manual code inspection has shown they have
been fixed).  It's not true that -fwrapv magically makes security
defects involving integer overflow disappear (which is quite unlikely,
as you point out).  It's the fixes which require -fwrapv semantics
that concern me.


Re: bug management: WAITING bugs that have timed out

2007-01-23 Thread Mark Mitchell
Mike Stump wrote:
> On Jan 11, 2007, at 10:47 PM, Joe Buck wrote:
>> The description of WORKSFORME sounds closest: we don't know how to
>> reproduce the bug.  Should that be used?
> 
> No, not generally. 

Of the states we have, WORKSFORME seems best to me, and I agree with Joe
that there's benefit in getting these closed out.  On the other hand, if
someone wants to create an UNREPRODUCIBLE state (which is a "terminal"
state, like WONTFIX), that's OK with me too.  But, let's not dither too
much over what state to use.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Ian Lance Taylor
Andreas Bogk <[EMAIL PROTECTED]> writes:

> > Making it defined and wrapping doesn't help at all. It just means you
> > write different checks, not less of them.
> 
> You have just seen somebody who can be considered an expert in matters
> of writing C software come up with a check that looks correct, but is
> broken under current gcc semantics.  That should make you think.

I'm not entirely unsympathetic to your arguments, but, assuming you
are referring to the code I wrote three messages up, this comment is
unfair.  The code I wrote was correct and unbroken.  You suggest that
it is broken because an attacker could take control of vp->len and set
it to INT_MAX.  But that just means that code in some other part of
the program must check user input and prevent that from occurring.  In
fact, given the proposed attack of extracting data from memory, INT_MAX
is a red herring; any value larger than the actual memory buffer would
suffice to read memory which should not be accessible.

I think a better way to describe your argument is that the compiler
can remove a redundant test which would otherwise be part of a defense
in depth.  That is true.  The thing is, most people want the compiler
to remove redundant comparisons; most people don't want their code to
have defense in depth, they want it to have just one layer of defense,
because that is what will run fastest.  We gcc developers get many
more bug reports about missed optimizations than we do about confusing
language conformant code generation.

One simple way to avoid problems in which the compiler removes
redundant tests: compile without optimization.  Another simple way:
learn the language semantics and think about them.

In any case, later today I hope to send out a patch for the
-fstrict-overflow option.

Ian


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Andreas Bogk
Daniel Berlin wrote:
> And you think that somehow defining it (which the definition people
> seem to favor would be to make it wrapping) ameliorates any of these
> concerns?

Yes, absolutely.  There is a difference between well-defined and
understood semantics on one hand, and undefined and probably dangerous
behaviour on the other hand.  It's the difference between security
audits of C software being hard and completely hopeless.

To be more precise, the LIA-1 definition is the one people have burned
deeply into their neurons.  It's the one that should be used by default.
Sun cc does that, by the way.

> User parameters can't be trusted no matter whether signed overflow is
> defined  or not.

But what if the compiler subtly breaks your tests in ways you wouldn't
expect?

> Making it defined and wrapping doesn't help at all. It just means you
> write different checks, not less of them.

You have just seen somebody who can be considered an expert in matters
of writing C software come up with a check that looks correct, but is
broken under current gcc semantics.  That should make you think.

Andreas


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Daniel Berlin


> This is a typical example of removing an if branch because signed
> overflow is undefined.  This kind of code is common enough.

I could not have made my point any better myself.


And you think that somehow defining it (which the definition people
seem to favor would be to make it wrapping) ameliorates any of these
concerns?

User parameters can't be trusted no matter whether signed overflow is
defined  or not.
Making it defined and wrapping doesn't help at all. It just means you
write different checks, not less of them.


Re: order of local variables in stack frame

2007-01-23 Thread Markus Franke
Well, you are right. The code looks good and works, too. But I have some
kind of a reference implementation which is based on GCC 2.7.2.3. In
that version the local variables are allocated the other way around, the
way I expected. Obviously, the order of allocation has changed
since then (4.1.1). I just wanted to know whether I can correct this, but
if not it's also OK.

Thanks,
Markus

Robert Dewar wrote:
> Markus Franke wrote:
> 
>> Please let me know whether I misunderstood something completely. If
>> this behaviour is correct, what can I do to change it to the other way
>> around? Which macro variable do I have to change?
> 
> 
> There is no legitimate reason to care about the order of variables
> in the local stack frame! Or at least I don't see one, why do *you*
> care? Generally one may want to reorder the variables for alignment
> purposes anyway.
> 

-- 
Nothing is as practical as a good theory!


Re: About building conditional expressions

2007-01-23 Thread Ferad Zyulkyarov

Thanks a lot, that's it

On 1/23/07, Steven Bosscher <[EMAIL PROTECTED]> wrote:

On 1/23/07, Ferad Zyulkyarov <[EMAIL PROTECTED]> wrote:
> But, as I noticed this function "build" is not maintained (used) by
> gcc any more. Instead build, what else may I use to create a
> conditional expression node?

Look for buildN where N is a small integer ;-)

I think you want build2 for EQ_EXPR.

Gr.
Steven




--
Ferad Zyulkyarov


Re: About building conditional expressions

2007-01-23 Thread Steven Bosscher

On 1/23/07, Ferad Zyulkyarov <[EMAIL PROTECTED]> wrote:

But, as I noticed this function "build" is not maintained (used) by
gcc any more. Instead build, what else may I use to create a
conditional expression node?


Look for buildN where N is a small integer ;-)

I think you want build2 for EQ_EXPR.
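
Something like this (a sketch, using the names from your example):

  /* left == right */
  tree cond = build2 (EQ_EXPR, integer_type_node, left, right);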

Gr.
Steven


Re: order of local variables in stack frame

2007-01-23 Thread Andrew Haley
Robert Dewar writes:
 > Markus Franke wrote:
 > 
 > > Please let me know whether I misunderstood something completely. If
 > > this behaviour is correct, what can I do to change it to the other way
 > > around? Which macro variable do I have to change?
 > 
 > There is no legitimate reason to care about the order of variables
 > in the local stack frame! Or at least I don't see one, why do *you*
 > care? Generally one may want to reorder the variables for alignment
 > purposes anyway.

And also, the optimizers rewrite your code to such an extent that
there isn't any simple correspondence between the stack slots used and
any variables declared by the programmer.

Andrew.


About building conditional expressions

2007-01-23 Thread Ferad Zyulkyarov

Hi,

In the old references there is a function "build" that is used for
building tree nodes. Using this function one can build a conditional
expression as follows:

build(EQ_EXPR, integer_type_node, left, right);
which is left == right

But, as I noticed this function "build" is not maintained (used) by
gcc any more. Instead build, what else may I use to create a
conditional expression node?

Thanks for your advice.

--
Ferad Zyulkyarov


Re: order of local variables in stack frame

2007-01-23 Thread Robert Dewar

Markus Franke wrote:


Please let me know whether I misunderstood something completely. If
this behaviour is correct, what can I do to change it to the other way
around? Which macro variable do I have to change?


There is no legitimate reason to care about the order of variables
in the local stack frame! Or at least I don't see one, why do *you*
care? Generally one may want to reorder the variables for alignment
purposes anyway.


order of local variables in stack frame

2007-01-23 Thread Markus Franke
Dear GCC Developers,

I am working on a target backend for the DLX architecture and I have a
question concerning the layout of the stack frame.
Here is a simple test C-program:

---snip---
int main(void)
{
int a = 1;
int b = 2;
int c = a + b;
return c;
}
---snap---

The initialisation of the variables a and b produces the following output:

---snip---
movl    $1, -24(%ebp)
movl    $2, -20(%ebp)
---snap---

Although I have declared "STACK_GROWS_DOWNWARD", the variables a and b
are lying upwards in memory (-24 < -20). Shouldn't it be the other way
around, because the stack should grow downwards towards smaller
addresses? I think it should be like this:

---snip---
movl    $1, -20(%ebp)
movl    $2, -24(%ebp)
---snap---

Please let me know whether I misunderstood something completely. If
this behaviour is correct, what can I do to change it to the other way
around? Which macro variable do I have to change?


Thanks in advance,
Markus Franke



Re: raising minimum version of Flex

2007-01-23 Thread Paolo Bonzini


I'm not at all impressed with the recent series of flex releases, since it
started using m4 internally and passing user code through m4.
(cf. bison, which unlike flex pays proper attention to assuring that
arbitrary valid parsers are not mangled by m4).


Fully agreed.  The recent releases of flex are a mess (a pity, because 
they also have interesting features such as yylineno support without 
performance hits).


Paolo


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Andreas Bogk
Ian Lance Taylor wrote:
> Consider code along these lines:
> 
> struct s { int len; char* p; };
> 
> inline char
> bar (struct s *sp, int n)
> {
>   if (n < 0)
>     abort ();
>   if (n > sp->len)
>     abort ();
>   return sp->p[n];
> }
> 
> int
> foo (struct s *sp, int n)
> {
>   int len = sp->len;
>   int i;
>   int tot = 0;
>   for (i = 0; i <= len; ++i)
>     tot += bar (sp, i);
>   return tot;
> }
> 
> Let's assume that bar() is inlined into foo().  Now consider the
> assert.  If signed overflow is undefined, then we can optimize away
> the "n < 0" test; it will never be true.  If signed overflow is
> defined, then we can not optimize that away.  That is because as far
> as the compiler knows, sp->len might be INT_MAX.  In that case, the
> loop will never terminate, and i will wrap and become negative.  (The
> compiler may also eliminate the "n > sp->len" test, but that does not
> rely on undefined signed overflow.)

This is an excellent example of the kind of subtle vulnerabilities
undefined overflow behaviour causes.  Consider the case where sp->len is
under the control of an attacker.  Let's assume further that sp->p is
dynamically allocated, and we're running in an OS configuration where
malloc(INT_MAX) actually works.

Now an attacker could provoke a situation where sp->len is INT_MAX, and
i becomes negative.  All of a sudden, if the "n < 0" test is folded
away, he's left in a situation where memory is accessed that he's not
supposed to access.  If instead of summing up the array elements the code
would write them to a network socket, we'd be getting a free dump of all
the heap objects in memory after sp.  That might be your private key or
your password, if you're unlucky.

Even worse would be the case where bar would write to sp->p.  Attackers
writing to memory are always bad news, and if the circumstances are
right, this situation could be exploitable.  This is even applicable for
infinite loops: changing the SEH on Windows to point to exploit code
before the infinite loop finally triggers an exception is a popular
exploitation technique on that platform.

See how easy it is to make this kind of security mistake, even for
people who are aware of the undefinedness of signed overflow?  Did you
notice how innocently "nothing can happen here" function bar looks?

> This is a typical example of removing an if branch because signed
> overflow is undefined.  This kind of code is common enough.

I could not have made my point any better myself.

Andreas