On October 20, 2019 2:54:48 PM GMT+02:00, Richard Sandiford 
<richard.sandif...@arm.com> wrote:
>Richard Biener <richard.guent...@gmail.com> writes:
>> On October 19, 2019 5:06:40 PM GMT+02:00, Richard Sandiford
>> <richard.sandif...@arm.com> wrote:
>>>After the previous patch, it seems more natural to apply the
>>>PARAM_SLP_MAX_INSNS_IN_BB threshold as soon as we know what
>>>the region is, rather than delaying it to vect_slp_analyze_bb_1.
>>>(But rather than carve out the biggest region possible and then
>>>reject it, wouldn't it be better to stop when the region gets
>>>too big, so we at least have a chance of vectorising something?)
>>>
>>>It also seems more natural for vect_slp_bb_region to create the
>>>bb_vec_info itself rather than (a) having to pass bits of data down
>>>for the initialisation and (b) forcing vect_slp_analyze_bb_1 to free
>>>on every failure return.
>>>
>>>Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?
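
[As a hedged aside on the ownership point above: the idea is that the
region driver allocates and frees the per-region info on all paths, so
the analysis routine can fail with a bare "return false". A minimal
sketch with made-up simplified names; the real code uses _bb_vec_info,
vect_slp_bb_region and vect_slp_analyze_bb_1:

  struct region_vinfo { /* per-region vectorization state */ };

  /* Hypothetical stand-in for vect_slp_analyze_bb_1: failure paths
     just return false, with nothing to free.  */
  static bool
  analyze_region_1 (region_vinfo *vinfo)
  {
    /* ... analysis; any failure is simply "return false".  */
    return vinfo != nullptr;
  }

  /* Hypothetical stand-in for vect_slp_bb_region: single owner,
     single free, on success and failure alike.  */
  static bool
  region_driver ()
  {
    region_vinfo *vinfo = new region_vinfo ();
    bool ok = analyze_region_1 (vinfo);
    delete vinfo;
    return ok;
  }
]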
>>
>> Ok. But I wonder what the reason was for this limit? Dependence
>> analysis was greatly simplified and is no longer quadratic here. Can
>> you check where the limit originally came from? But indeed splitting
>> the region makes more sense then, at dataref group boundaries I'd
>> say.
>
>Yeah, looks it was the complexity of dependence analysis:
>
>  https://gcc.gnu.org/ml/gcc-patches/2009-05/msg01303.html

OK. We no longer compute dependences between all memory refs but only
verify that we can do the code motions we need to do. That's of course
much harder to bound upfront (it's still quadratic in the worst case).
I'm also not sure this is ever a problem, but we might instead count
the number of stmts involving memory?
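
[A hedged sketch of what that counting could look like, in the style
of GCC's GIMPLE internals; count_mem_stmts is a hypothetical helper,
not part of the patch:

  /* Hypothetical helper: count the statements in BB that read or
     write memory, i.e. those carrying a virtual use or a virtual
     definition.  Assumes the usual internal headers (gimple.h,
     gimple-iterator.h).  */
  static unsigned
  count_mem_stmts (basic_block bb)
  {
    unsigned count = 0;
    for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
         !gsi_end_p (gsi); gsi_next (&gsi))
      {
        gimple *stmt = gsi_stmt (gsi);
        if (gimple_vuse (stmt) || gimple_vdef (stmt))
          count++;
      }
    return count;
  }

Applying PARAM_SLP_MAX_INSNS_IN_BB to such a count rather than to the
raw statement count would tie the limit to the quantity that actually
drives the worst-case quadratic behaviour.]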

>  > Is there any limit on the size of the BB you consider for
>  > vectorization?  I see we do compute_all_dependences on it - that
>  > might be quite costly.
>
>  I added slp-max-insns-in-bb parameter with initial value 1000.
>
>Thanks,
>Richard
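
[As a closing illustration of the splitting idea raised above (stop
growing the region once the budget is exhausted, cutting at a dataref
group boundary so the remainder still has a chance of being
vectorised), here is a hedged sketch; find_region_cut and
stmt_is_group_boundary_p are made-up names, and the boundary predicate
is only a placeholder for a real check against the dataref group
information:

  /* Hypothetical placeholder: real code would consult the dataref
     groups, e.g. whether STMT starts a new interleaving group.  */
  static bool
  stmt_is_group_boundary_p (gimple *stmt)
  {
    return gimple_vdef (stmt) != NULL_TREE;
  }

  /* Hypothetical sketch, not vectoriser code: return the point at
     which to end the SLP region so that at most MAX_INSNS statements
     are included and the cut falls on a group boundary.  */
  static gimple_stmt_iterator
  find_region_cut (basic_block bb, unsigned max_insns)
  {
    gimple_stmt_iterator cut = gsi_start_bb (bb);
    unsigned n = 0;
    for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
         !gsi_end_p (gsi); gsi_next (&gsi))
      {
        if (++n > max_insns)
          break;
        if (stmt_is_group_boundary_p (gsi_stmt (gsi)))
          /* Remember the last point at which the region may end.  */
          cut = gsi;
      }
    return cut;
  }
]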
