On 2019-10-12 19:01, Emmanuel Dreyfus wrote:
Mouse <mo...@rodents-montreal.org> wrote:

I'm presumably missing something here, but what?

I suspect Maxime's concern is about uncontrolled stack-based variable
buffer, which could be used to crash the kernel.

But in my case, the data is coming from the bootloader. I cannot think
about a scenario where it makes sense to defend against an attack from
the bootloader. The kernel already has absolute trust in the bootloader.

On this one, I agree with Maxime.

Even if it comes from the bootloader, why would you want to use alloca()?

It's not that it uses the stack, per se. That is obviously not the problem. The problem is that it can allocate arbitrarily large amounts of data. And it is not, as Mouse suggested, a case of optimizing memory use.

Any memory allocated by alloca() only has the lifetime of the executing function, and allowing arbitrarily large chunks of data to be allocated there is bad, since it can cause crashes in many silly ways. But any local variable in the function has the same lifetime property, and the same memory-use property, since when that function is not active, no memory is used.

So if we know an upper limit on what we would allow to be allocated through alloca(), why not just have a function-local variable of that size from the start? You are not going to use more memory than the acceptable worst case anyhow, and any called function still has to be able to run with that worst case (meaning how much stack is left).
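To make that concrete, here is a small sketch (ARG_MAX_LEN and the handler names are just made up for the illustration): both variants give up their memory when the function returns, and the stack has to be able to absorb the worst case either way, so the fixed buffer gives the same behaviour without the variable-size allocation.

#include <stdlib.h>     /* declares alloca() on NetBSD; <alloca.h> on some other systems */
#include <string.h>

#define ARG_MAX_LEN     256     /* assumed upper bound, for the illustration */

/* Variant using alloca(): stack usage depends on len at run time. */
static void
handle_with_alloca(const char *src, size_t len)
{
        char *buf = alloca(len);        /* gone when the function returns */

        memcpy(buf, src, len);
        /* ... use buf ... */
}

/* Variant using a fixed local buffer: pay the worst case up front. */
static void
handle_with_fixed(const char *src, size_t len)
{
        char buf[ARG_MAX_LEN];          /* same lifetime, known size */

        if (len > sizeof(buf))
                return;                 /* reject anything over the limit */
        memcpy(buf, src, len);
        /* ... use buf ... */
}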

So what is the gain of using alloca()? You might be using less stack on a good day, but we already know the stack can handle the worst case anyhow, and this is temporary stuff, so there is honestly close to no gain from alloca(). (You could possibly argue locality benefits, such as hitting the same cache lines if the alloca() grabs very little memory, but if you are trying to optimize for that detail, then you're pretty lost, since this varies so much per CPU model and variant that you cannot do it in a sane way unless you are building very CPU-specific code with a ton of constraints.)

What do we lose by using alloca()? In the best case, nothing. In the worst case, you have created new vulnerabilities.
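To illustrate the kind of vulnerability I mean (purely a sketch, with a made-up record format): if the size handed to alloca() comes from data nobody bounds, the stack pointer can be moved arbitrarily far, past the guard page and into unrelated memory, and the following writes land wherever it ended up.

#include <stdlib.h>     /* alloca(); lives in <alloca.h> on some systems */
#include <string.h>

/* Hypothetical record: 4-byte big-endian length followed by payload. */
void
parse_record(const unsigned char *rec)
{
        size_t len = (size_t)rec[0] << 24 | (size_t)rec[1] << 16 |
            (size_t)rec[2] << 8 | rec[3];

        char *buf = alloca(len);        /* no failure reported, however big len is */

        memcpy(buf, rec + 4, len);      /* writes wherever the stack pointer ended up */
}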

So, again, why would you ever use alloca()? It is only a potential source of problems, with no real gains here. We cannot allow arbitrary-size allocations anyhow, and for the limited-size allocations, we must be sure we always work in the worst case, so why not do that from the start and have a (more) sane situation?

alloca() is just bad here. (And potentially bad in general.)
And, sadly, C99 also added a hidden alloca() into the base language itself, in the form of variable-length arrays, making life more miserable in general.
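A trivial sketch of how that variable-length array (added in C99, made optional in C11) smuggles the same problem into plain-looking code:

#include <string.h>

/* A VLA is effectively an implicit alloca(): len bytes of stack, unchecked. */
void
copy_record(const char *src, size_t len)
{
        char buf[len];          /* variable-length array */

        memcpy(buf, src, len);
        /* ... use buf ... */
}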

  Johnny

--
Johnny Billquist                  || "I'm on a bus
                                  ||  on a psychedelic trip
email: b...@softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol
