Re: VLAs and security

2018-09-04 Thread Uecker, Martin
On Tuesday, 2018-09-04 at 10:00 +0200, Dmitry Vyukov wrote:
> On Tue, Sep 4, 2018 at 8:27 AM, Uecker, Martin
>  wrote:
> > On Monday, 2018-09-03 at 14:28 -0700, Linus Torvalds wrote:


Hi Dmitry,

> Compiler and KASAN should still be able to do checking against the
> static array size.

...and it is probably true that this is currently more useful
than the limited amount of checking compilers can do for VLAs.

> If you mean that there is some smaller dynamic logical bound n (n <= N) and we are not supposed to use memory beyond that, 

Yes, this is what I mean. 

My concern is that this dynamic bound is valuable information
which programmers put there by hand, and I believe that
this information cannot always be recovered automatically
by static analysis. So by removing VLAs from the source tree,
this information is lost.

> then KMSAN [1] can
> detect uses of the uninitialized part of the array. So we have some
> coverage on the checking side too.
> 
> [1] https://github.com/google/kmsan#kmsan-kernelmemorysanitizer

But detecting reads of uninitialized parts catches only some
of the errors that could be detected with precise bounds.
It cannot detect out-of-bounds writes (which still fall into
the larger fixed-size array), and it does not detect out-of-bounds
reads (which still fall into the larger fixed-size array) if
the larger fixed-size array happened to be completely
initialized beforehand.
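To make the asymmetry concrete, here is a small sketch (my own illustration, not code from the kernel; MAX_SIZE and the function names are invented):

```c
#include <string.h>

#define MAX_SIZE 64

/* Fixed-size variant: a write at index idx with n_valid <= idx < MAX_SIZE
 * stays inside the allocation, so neither a sanitizer nor a compiler
 * bounds check can flag it, even though it violates the logical bound. */
int fill_fixed(char *out, int n_valid, int idx)
{
	char buf[MAX_SIZE];

	memset(buf, 0, sizeof(buf));
	if (idx >= MAX_SIZE)		/* only the loose bound is checkable */
		return -1;
	buf[idx] = 1;			/* idx in [n_valid, MAX_SIZE) goes undetected */
	memcpy(out, buf, n_valid);
	return 0;
}

/* VLA variant: the same bad index now lies outside the object, so
 * run-time bounds instrumentation (or static analysis) can catch the
 * write at the point where it happens. */
int fill_vla(char *out, int n_valid, int idx)
{
	if (n_valid > MAX_SIZE)		/* keep stack usage bounded */
		return -1;
	char buf[n_valid];

	memset(buf, 0, sizeof(buf));
	if (idx >= n_valid)		/* the precise bound is checkable */
		return -1;
	buf[idx] = 1;
	memcpy(out, buf, n_valid);
	return 0;
}
```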

Martin




Re: VLAs and security

2018-09-04 Thread Uecker, Martin
On Monday, 2018-09-03 at 14:28 -0700, Linus Torvalds wrote:
> On Mon, Sep 3, 2018 at 12:40 AM Uecker, Martin
>  wrote:
> > 
> > But if the true bound is smaller, then IMHO it is really bad advice
> > to tell programmers to use
> > 
> > char buf[MAX_SIZE]
> > 
> > instead of something like
> > 
> > assert(N <= MAX_SIZE);
> > char buf[N]
> 
> No.
> 
> First off, we don't use asserts in the kernel. Not acceptable. You
> handle errors, you don't crash.

Of course. But this is unrelated to my point.

> Secondly, the compiler is usually very stupid, and will generate
> horrible code for VLA's.
> 
> Third, there's no guarantee that the compiler will actually even
> realize that the size is limited, and guarantee that it won't screw up
> the stack.

If this is about the quality of the generated code, ok. 

I just don't buy the idea that removing precise type-based
information about the size of objects from the source code
is a good long-term strategy for improving security.

> So no. VLA's are not acceptable in the kernel. Don't do them. We're
> getting rid of them.

All right then.

Martin



Re: VLAs and security

2018-09-03 Thread Uecker, Martin
On Sunday, 2018-09-02 at 10:40 -0700, Kees Cook wrote:
> On Sun, Sep 2, 2018 at 1:08 AM, Uecker, Martin
>  wrote:
> > I do not agree that VLAs are generally bad for security.
> > I think the opposite is true. A VLA with the right size
> > allows the compiler to automatically perform or insert
> > meaningful bounds checks, while a fixed upper bound does not.
> 
> While I see what you mean, the trouble is that the compiler has no
> idea what the upper bound of the _available_ stack is. This means
> that a large VLA might allow code to read/write beyond the stack
> allocation, which also bypasses the "standard" stack buffer overflow
> checks. Additionally, VLAs bypass the existing stack-size checks we've
> added to the kernel.

Limiting the size of the VLA should be sufficient to avoid this.

I don't know about your specific stack-size checks
in the kernel, but for general programming, the full solution
is for the compiler to probe the stack when growing.

But I was not talking about the bounds of the stack, but of the
array itself.

> > For example:
> > 
> > char buf[N];
> > buf[n] = 1;
> > 
> > Here, a compiler / analysis tool can check  n < N  using
> > static analysis or insert a run-time check.
> > 
> > Replacing this with
> > 
> > char buf[MAX_SIZE]
> > 
> > hides the information about the true upper bound
> > from automatic tools.
> 
> While this may be true for some tools, I don't agree VLAs are better
> in general. For example, the compiler actually knows the upper bound
> at build time now, and things like the printf format size checks and
> CONFIG_FORTIFY_SOURCE are now able to produce compile-time warnings
> (since "sizeof(buf)" isn't a runtime value). With a VLA, this is
> hidden from those tools, and detection depends on runtime analysis.

If the correct bound is actually a constant and the array
only ends up being a VLA for some random reason, I fully agree.

But if the true bound is smaller, then IMHO it is really bad advice
to tell programmers to use

char buf[MAX_SIZE]

instead of something like

assert(N <= MAX_SIZE); 
char buf[N]

because then errors of the form 

buf[n] = 1

with N < n < MAX_SIZE cannot be detected anymore. Also, the
code usually ends up being less readable, which is another clear
disadvantage in my opinion.


> It should be noted that VLAs are also slow[1], so removing them not
> only improves robustness but also improves performance.

I have to admit that I am always a bit skeptical when somebody
makes a generic claim such as "VLAs are slow" and then cites only a
single example. But I am not too surprised if compilers produce
poor code for VLAs and this hurts performance in some cases.
Compared to dynamic allocation, however, VLAs should be much
faster. They also reduce stack usage compared to always allocating
an array with the fixed maximum size on the stack.

> > Of course, having predictable stack usage might be more
> > important in the kernel and might be a good argument
> > to still prefer the constant bound.
> 
> Between improved compile-time checking, faster runtime performance,
> and improved robustness against stack exhaustion, I strongly believe
> the kernel to be better off with VLAs entirely removed. And we are
> close: only 6 remain (out of the 115 I counted in v4.15).

Looking at some of the patches, I would say it is not
clear to me that this is always an improvement.

> > But losing the tighter bounds is clearly a disadvantage
> > with respect to security that one should keep in mind.
> 
> Yes: without VLAs, stack array usage is reduced to "standard" stack
> buffer overflow concerns. Removing the VLA doesn't introduce a new
> risk: we already had to worry about fixed-size arrays. Removing VLAs
> always means we don't have to worry about the VLA-specific risks anymore.

It introduces the new risk that certain logic errors can
no longer be detected by static analysis or run-time bounds
checking.

Best,
Martin



VLAs and security

2018-09-02 Thread Uecker, Martin

I do not agree that VLAs are generally bad for security.
I think the opposite is true. A VLA with the right size
allows the compiler to automatically perform or insert
meaningful bounds checks, while a fixed upper bound does not.


For example:

char buf[N];
buf[n] = 1;

Here, a compiler / analysis tool can check  n < N  using
static analysis or insert a run-time check.

Replacing this with

char buf[MAX_SIZE]

hides the information about the true upper bound
from automatic tools.

Limiting the stack usage can also be achieved in
the following way:

assert(N <= MAX_SIZE);
char buf[N];


Of course, having predictable stack usage might be more 
important in the kernel and might be a good argument
to still prefer the constant bound.

But losing the tighter bounds is clearly a disadvantage
with respect to security that one should keep in mind.
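The bounded-VLA pattern above, written with an error return instead of assert() (a sketch; process() and MAX_SIZE are invented names):

```c
#include <stddef.h>
#include <string.h>

#define MAX_SIZE 64

/* Check the dynamic bound first, then declare the array with its
 * precise size: stack usage stays bounded by MAX_SIZE, while tools
 * still see the tight bound n on the object itself. */
int process(const char *src, size_t n)
{
	if (n > MAX_SIZE)	/* handle the error, don't crash */
		return -1;

	char buf[n];

	memcpy(buf, src, n);
	return (int)sizeof(buf);	/* sizeof reflects the tight bound */
}
```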


Best,
Martin





Re: [PATCH v6] kernel.h: Retain constant expression output for max()/min()

2018-03-27 Thread Uecker, Martin

To give credit where credit is due, this hack was inspired by 
an equally insane (but different) use of the ?: operator to choose 
the right return type for type-generic macros in tgmath.h.

https://sourceware.org/git/?p=glibc.git;a=blob;f=math/tgmath.h;h=a709a59d0fa1168ef03349561169fc5bd27d65aa;hb=d8742dd82f6a00601155c69bad3012e905591e1f

(recommendation: don't look)

Martin


On Monday, 2018-03-26 at 14:52 -1000, Linus Torvalds wrote:
> On Mon, Mar 26, 2018 at 12:15 PM, Kees Cook 
> wrote:
> > 
> > This patch updates the min()/max() macros to evaluate to a constant
> > expression when called on constant expression arguments.
> 
> Ack.
> 
> I'm of two minds whether that "__is_constant()" macro should be
> explained or not.
> 
> A small voice in my head says "that wants a comment".
> 
> But a bigger voice disagrees.
> 
> It is a work of art, and maybe the best documentation is just the
> name. It does what it says it does.
> 
> Art shouldn't be explained. It should be appreciated.
> 
> Nobody sane really should care about how it works, and if somebody
> cares it is "left as an exercise to the reader".
> 
>   Linus


Re: detecting integer constant expressions in macros

2018-03-21 Thread Uecker, Martin


On Wednesday, 2018-03-21 at 10:51 +0100, Martin Uecker wrote:
> 
> On Tuesday, 2018-03-20 at 17:30 -0700, Linus Torvalds wrote:
> > On Tue, Mar 20, 2018 at 5:10 PM, Uecker, Martin
> > <martin.uec...@med.uni-goettingen.de> wrote:
> > 
> > > But one could also use __builtin_types_compatible_p instead.
> > 
> > That might be the right approach, even if I like how it only used
> > standard C (although _disgusting_ standard C) without it apart from
> > the small issue of sizeof(void)
> > 
> > So something like
> > 
> >   #define __is_constant(a) \
> > __builtin_types_compatible_p(int *, typeof(1 ? ((void*)((a) * 0l)) : (int*)1 ) )
> > 
> > if I counted the parentheses right..
> 
> This seems to work fine on all recent compilers. Sadly, it
> produces false positives on gcc 4.4.7 and earlier when
> tested on godbolt.org.
> 
> Surprisingly, the MAX macro as defined below still seems
> to do the right thing with respect to avoiding the VLA
> even on the old compilers.
> 
> I am probably missing something... or there are two
> compiler bugs cancelling out, or the __builtin_choose_expr
> changes things.

Nevermind, of course it avoids the VLA if it produces a false
positive and uses the simple version. So it is unsafe to use
on very old compilers.

Martin


> Martin
> 
> My test code:
> 
> #define ICE_P(x) (__builtin_types_compatible_p(int*, __typeof__(1 ? ((void*)((x) * 0l)) : (int*)1)))
> 
> #define SIMPLE_MAX(a, b) ((a) > (b) ? (a) : (b))
> #define SAFE_MAX(a, b) ({ __typeof(a) _a = (a); __typeof(b) _b = (b); SIMPLE_MAX(_a, _b); })
> #define MAX(a, b) (__builtin_choose_expr(ICE_P(a) && ICE_P(b), SIMPLE_MAX(a, b), SAFE_MAX(a, b)))
> 
> 
> 
> int foo(int x)
> {
> int a[MAX(3, 4)];
> //int a[MAX(3, x)];
> //int a[SAFE_MAX(3, 4)];
> //return ICE_P(MAX(3, 4));
> return ICE_P(MAX(3, x));
> }

Re: detecting integer constant expressions in macros

2018-03-21 Thread Uecker, Martin


On Tuesday, 2018-03-20 at 17:30 -0700, Linus Torvalds wrote:
> On Tue, Mar 20, 2018 at 5:10 PM, Uecker, Martin
> <martin.uec...@med.uni-goettingen.de> wrote:

> 
> > But one could also use __builtin_types_compatible_p instead.
> 
> That might be the right approach, even if I like how it only used
> standard C (although _disgusting_ standard C) without it apart from
> the small issue of sizeof(void)
> 
> So something like
> 
>   #define __is_constant(a) \
> __builtin_types_compatible_p(int *, typeof(1 ? ((void*)((a) * 0l)) : (int*)1 ) )
> 
> if I counted the parentheses right..

This seems to work fine on all recent compilers. Sadly, it
produces false positives on gcc 4.4.7 and earlier when
tested on godbolt.org.

Surprisingly, the MAX macro as defined below still seems
to do the right thing with respect to avoiding the VLA
even on the old compilers.

I am probably missing something... or there are two
compiler bugs cancelling out, or the __builtin_choose_expr
changes things.

Martin

My test code:

#define ICE_P(x) (__builtin_types_compatible_p(int*, __typeof__(1 ? ((void*)((x) * 0l)) : (int*)1)))

#define SIMPLE_MAX(a, b) ((a) > (b) ? (a) : (b))
#define SAFE_MAX(a, b) ({ __typeof(a) _a = (a); __typeof(b) _b = (b); SIMPLE_MAX(_a, _b); })
#define MAX(a, b) (__builtin_choose_expr(ICE_P(a) && ICE_P(b), SIMPLE_MAX(a, b), SAFE_MAX(a, b)))



int foo(int x)
{
int a[MAX(3, 4)];
//int a[MAX(3, x)];
//int a[SAFE_MAX(3, 4)];
//return ICE_P(MAX(3, 4));
return ICE_P(MAX(3, x));
}
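The behavior can also be checked with assertions (GCC/Clang only; a sketch using the same macro definitions as the test code, with runtime_max() as an invented wrapper):

```c
/* Same definitions as in the test code above. */
#define ICE_P(x) (__builtin_types_compatible_p(int*, __typeof__(1 ? ((void*)((x) * 0l)) : (int*)1)))

#define SIMPLE_MAX(a, b) ((a) > (b) ? (a) : (b))
#define SAFE_MAX(a, b) ({ __typeof(a) _a = (a); __typeof(b) _b = (b); SIMPLE_MAX(_a, _b); })
#define MAX(a, b) (__builtin_choose_expr(ICE_P(a) && ICE_P(b), SIMPLE_MAX(a, b), SAFE_MAX(a, b)))

int runtime_max(int x)
{
	/* With constant arguments, MAX() is itself an integer constant
	 * expression, so it can size a non-VLA array. */
	_Static_assert(ICE_P(MAX(3, 4)), "constant args stay constant");
	int a[MAX(3, 4)];	/* int a[4], not a VLA */
	(void)a;

	return MAX(3, x);	/* non-constant arg: falls back to SAFE_MAX */
}
```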



Re: detecting integer constant expressions in macros

2018-03-20 Thread Uecker, Martin


On Tuesday, 2018-03-20 at 16:08 -0700, Linus Torvalds wrote:
> On Tue, Mar 20, 2018 at 3:13 PM, Uecker, Martin
> <martin.uec...@med.uni-goettingen.de> wrote:
> > 
> > here is an idea:
> 
> That's not "an idea".
> 
> That is either genius, or a seriously diseased mind.
> 
> I can't quite tell which.
> 
> > a test for integer constant expressions which returns an
> > integer constant expression itself which should be suitable
> > for passing to __builtin_choose_expr might be:
> > 
> > #define ICE_P(x) (sizeof(int) == sizeof(*(1 ? ((void*)((x) * 0l)) : (int*)1)))

...
> So now the end result is (sizeof(*(void *)(x)), which on gcc is
> generally *different* from 'int'.
> 
> So I see two issues:
> 
>  - "sizeof(*(void *)1)" is not necessarily well-defined. For gcc it
> is 1. But it could cause warnings.

It is a documented extension which enables pointer arithmetic
on void pointers, so I am sure neither gcc nor
clang has any problem with it. But one could also use
__builtin_types_compatible_p instead.

>  - this will break the minds of everybody who ever sees that
> expression.
>
> Those two issues might be fine, though.
> 
> > This also does not evaluate x itself on gcc although this is
> > not guaranteed by the standard. (And I haven't tried any older
> > gcc.)
> 
> Oh, I think it's guaranteed by the standard that 'sizeof()' doesn't
> evaluate the argument value, only the type.

It has to evaluate the expression for the length of an array,
but it is not specified whether this is done if it does not have
any effect on the result. I would assume that any sane compiler
does not.
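This can be seen directly: for a VLA, sizeof is computed at run time from the stored bound (a quick sketch of my own, assuming C99 VLA support):

```c
#include <stddef.h>

/* sizeof on a VLA is evaluated at run time: the result depends on n,
 * unlike sizeof on a fixed-size array, which is a compile-time constant. */
size_t vla_size(int n)
{
	char buf[n];

	return sizeof(buf);	/* equals n */
}
```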

> I'm in awe of your truly marvelously disgusting hack. That is truly a
> work of art.
> 
> I'm sure it doesn't work or causes warnings for various reasons, but
> it's still a thing of beauty.

I thought you might like it ;-)

Martin
 


Re: detecting integer constant expressions in macros

2018-03-20 Thread Uecker, Martin


talking of crazy ideas, here is another way to preserve
integer constant expressions in macros, by storing them in a
VLA type (only for positive integers, I guess):


#define MAX(a, b) sizeof(*({ \
	typedef char _Ta[a]; \
	typedef char _Tb[b]; \
	(char(*)[sizeof(_Ta) > sizeof(_Tb) ? sizeof(_Ta) : sizeof(_Tb)])0; }))

On Tuesday, 2018-03-20 at 23:13 +0100, Martin Uecker wrote:
> Hi Linus,
> 
> here is an idea:
> 
> a test for integer constant expressions which returns an
> integer constant expression itself which should be suitable
> for passing to __builtin_choose_expr might be:
> 
> #define ICE_P(x) (sizeof(int) == sizeof(*(1 ? ((void*)((x) * 0l)) :
> (int*)1)))
> 
> This also does not evaluate x itself on gcc although this is
> not guaranteed by the standard. (And I haven't tried any older
> gcc.)
> 
> Best,
> Martin


detecting integer constant expressions in macros

2018-03-20 Thread Uecker, Martin

Hi Linus,

here is an idea:

a test for integer constant expressions which returns an
integer constant expression itself which should be suitable
for passing to __builtin_choose_expr might be:

#define ICE_P(x) (sizeof(int) == sizeof(*(1 ? ((void*)((x) * 0l)) : (int*)1)))

This also does not evaluate x itself on gcc although this is
not guaranteed by the standard. (And I haven't tried any older
gcc.)

Best,
Martin
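A sketch of why the trick works under GCC/Clang (probe_constant() and probe_runtime() are invented helper names; the second case relies on the GNU sizeof(void) == 1 extension):

```c
/* If x is an integer constant expression, (void*)((x) * 0l) is a null
 * pointer constant, so the conditional has type int* and
 * sizeof(*...) == sizeof(int). If x is a run-time value, the first
 * operand is a plain void*, the conditional has type void*, and
 * sizeof(void) is 1 under the GNU extension, so the comparison fails. */
#define ICE_P(x) (sizeof(int) == sizeof(*(1 ? ((void*)((x) * 0l)) : (int*)1)))

enum { K = 7 };

int probe_constant(void) { return ICE_P(K); }	/* enum constant: an ICE */
int probe_runtime(int x) { return ICE_P(x); }	/* run-time value: not an ICE */
```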
