On Thursday, October 6, 2016 at 1:39:05 PM UTC+2, Jonathan Bober wrote:
>
> I understand the reasons why OpenBLAS shouldn't be multithreading 
> everything, and why it shouldn't necessarily use all available CPU cores 
> when it does multithread, but the point is: it currently uses all or 
> one, and sometimes it decides to use multithreading even when using 2 
> threads doesn't really seem to give me a benefit. So I guess there are two 
> points to consider. One is a "public service announcement" that if things 
> don't change, then in Sage 7.4 users might want to strongly consider 
> setting OpenBLAS to be single threaded. The other is that we may want to 
> reconsider the OpenBLAS defaults in Sage.
>
That seems to be the better option at the moment.
And there is still time to open a ticket and get it reviewed in time for 7.4. 
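For anyone affected in the meantime, the usual workaround (not Sage-specific) is to cap OpenBLAS at a single thread through the environment before starting Sage, e.g. starting it as `OPENBLAS_NUM_THREADS=1 sage`, mirroring the `OMP_NUM_THREADS=1 sage` invocation in Jonathan's timings quoted below; as noted further down, which variable actually takes effect can depend on how OpenBLAS was built.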

>
> One possibility might be to expose the openblas_set_num_threads function 
> at the top level, and keep the default at 1. Another possibility is to 
> build OpenBLAS single threaded by default and force someone compiling Sage 
> to pass some option for multithreaded OpenBLAS; that way, at least, only 
> "advanced" users will run into and have to deal with the sub-par 
> multithreading behavior.
>
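As a rough illustration of the first suggestion, here is a minimal sketch of what "exposing openblas_set_num_threads" could look like from Python via ctypes. This is not an existing Sage API; the library lookup below is an assumption about how OpenBLAS is installed.

    import ctypes
    import ctypes.util

    # Locate the OpenBLAS shared library.  The name "openblas" and the
    # fallback filename are assumptions; a Sage install may use a
    # different name or path.
    libname = ctypes.util.find_library("openblas") or "libopenblas.so"
    openblas = ctypes.CDLL(libname)

    # openblas_set_num_threads(int) is OpenBLAS's documented runtime
    # control for the number of threads it will use.
    openblas.openblas_set_num_threads(1)
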
> On Wed, Oct 5, 2016 at 10:34 AM, Clement Pernet <clement...@gmail.com> wrote:
>
>> To follow up on Jean-Pierre summary of the situation:
>>
>> The current version of fflas-ffpack in Sage (v2.2.2) uses the BLAS 
>> provided as is. Running it with a multithreaded BLAS may result in slower 
>> code than with a single-threaded BLAS. This is very likely due to memory 
>> transfer and cache coherence problems.
>>
>> More generally, we strongly suggest using a single-threaded BLAS and 
>> letting fflas-ffpack handle the parallelization. This is common practice, 
>> for example with parallel versions of LAPACK.
>>
>> Therefore, following the discussion at https://trac.sagemath.org/ticket/21323, 
>> we have decided to give fflas-ffpack the ability to force the number of 
>> threads that OpenBLAS can use at runtime. In this context we will force it 
>> to 1.
>> This is available upstream, and I plan to update Sage's fflas-ffpack 
>> when we release v2.3.0.
>>
>> Clément
>>
>>
>> On 05/10/2016 at 11:24, Jean-Pierre Flori wrote:
>>
>>> Currently OpenBLAS does what it wants for multithreading.
>>> We hesitated to disable it but preferred to wait and think about it:
>>> see https://trac.sagemath.org/ticket/21323.
>>>
>>> You can still influence its use of threads by setting OPENBLAS_NUM_THREADS.
>>> See the trac ticket; just note that this is not Sage-specific.
>>> And as you discovered, it seems it is also influenced by 
>>> OMP_NUM_THREADS...
>>>
>>> On Wednesday, October 5, 2016 at 9:28:23 AM UTC+2, tdumont wrote:
>>>
>>>     What is the size of the matrices you use?
>>>     Whatever you do, OpenMP in BLAS is interesting only if you compute 
>>> with
>>>     large matrices.
>>>     If your computations are embedded in an @parallel that launches n
>>>     processes, be careful that your OMP_NUM_THREADS is less than or equal to
>>>     ncores/n.
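As a sketch of that rule of thumb in Python (the worker count n is made up for illustration; see the caveat in the comments about when the variables are read):

    import os
    from multiprocessing import cpu_count

    # Hypothetical setup: n worker processes launched via @parallel.
    n = 4
    ncores = cpu_count()

    # Rule of thumb from above: threads per worker * workers <= total cores.
    threads_per_worker = max(1, ncores // n)

    # Note: OpenBLAS/OpenMP typically read these variables when the library
    # is first loaded, so in practice they are best set in the shell before
    # starting Sage rather than from inside a running session.
    os.environ["OMP_NUM_THREADS"] = str(threads_per_worker)
    os.environ["OPENBLAS_NUM_THREADS"] = str(threads_per_worker)
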
>>>
>>>     My experience (I do numerical computations) is that there are
>>>     very few cases where using OpenMP in BLAS libraries is interesting.
>>>     Parallelism should generally be sought at a higher level.
>>>
>>>     One of the interests of multithreaded BLAS is for hardware vendors: with
>>>     Intel's MKL BLAS, you can obtain the maximum possible performance of
>>>     the machine when you use DGEMM (i.e. matrix-matrix products), due to the
>>>     high arithmetic intensity of matrix-matrix products. On my 2x8-core
>>>     Sandy Bridge at 2.7 GHz, I have obtained more than 300 gigaflops, but
>>>     only with matrices of size > 1000! And this is only true for DGEMM....
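(For context, and assuming 8 double-precision flops per core per cycle with AVX: a 2x8-core Sandy Bridge at 2.7 GHz has a theoretical peak of roughly 16 × 2.7 × 8 ≈ 345 GFLOPS, so 300+ GFLOPS is close to the machine's peak, which only a kernel with DGEMM's arithmetic intensity can approach.)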
>>>
>>>     t.d.
>>>
>>>     On 04/10/2016 at 20:26, Jonathan Bober wrote:
>>>     > See the following timings: If I start Sage with OMP_NUM_THREADS=1,
>>>     > a particular computation takes 1.52 CPU seconds and 1.56 wall
>>>     > seconds.
>>>     >
>>>     > The same computation without OMP_NUM_THREADS set takes 12.8 CPU
>>>     > seconds and 1.69 wall seconds. This is particularly devastating
>>>     > when I'm running with @parallel to use all of my CPU cores.
>>>     >
>>>     > My guess is that this is Linbox related, since these computations
>>>     > do some exact linear algebra, and Linbox can do some
>>>     > multithreading, which perhaps uses OpenMP.
>>>     >
>>>     > jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
>>>     > [...]
>>>     > SageMath version 7.4.beta6, Release Date: 2016-09-24
>>>     > [...]
>>>     > Warning: this is a prerelease version, and it may be unstable.
>>>     > [...]
>>>     > sage: %time M = ModularSymbols(5113, 2, -1)
>>>     > CPU times: user 509 ms, sys: 21 ms, total: 530 ms
>>>     > Wall time: 530 ms
>>>     > sage: %time S = M.cuspidal_subspace().new_subspace()
>>>     > CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s
>>>     > Wall time: 1.56 s
>>>     >
>>>     >
>>>     > jb12407@lmfdb1:~$ sage
>>>     > [...]
>>>     > SageMath version 7.4.beta6, Release Date: 2016-09-24
>>>     > [...]
>>>     > sage: %time M = ModularSymbols(5113, 2, -1)
>>>     > CPU times: user 570 ms, sys: 18 ms, total: 588 ms
>>>     > Wall time: 591 ms
>>>     > sage: %time S = M.cuspidal_subspace().new_subspace()
>>>     > CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s
>>>     > Wall time: 1.69 s
>>>     >
>>>
>
>
