https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95285

--- Comment #4 from Wilco <wilco at gcc dot gnu.org> ---
(In reply to Bu Le from comment #3)
> (In reply to Wilco from comment #2)
> 
> > Is the main usage scenario huge arrays? If so, these could easily be
> > allocated via malloc at startup rather than using bss. It means an extra
> > indirection in some cases (to load the pointer), but it should be much more
> > efficient than using a large code model with all the overheads.
> 
> Thanks for the reply. 
> 
> The large array is just used to construct the test case. It is not a
> necessary condition for this scenario. The common scenario is that the
> symbol is too far away for the small code model to reach it, which could
> also result from a large number of small arrays, structures, etc.
> Meanwhile, the large code model can reach the symbol but cannot be
> position independent, which causes the problem.
> 
> Besides, the code in CESM is quite complicated to restructure to use
> malloc, which is also not an acceptable option for my customer.
> 
> Does that address your concern?

Well, the question is whether we're talking about more than 4GB of code or more
than 4GB of data. With >4GB code you're indeed stuck with the large model. With
data it is feasible to automatically use malloc for arrays when larger than a
certain size, so there is no need to change the application at all. Something
like that could be the default in the small model so that you don't have any
extra overhead unless you have huge arrays. Making the threshold configurable
means you can tune it for a specific application.
