Re: [sage-devel] VOTE: move Sage development to Github

2022-09-22 Thread Clement Pernet

+1 for Github.

On 21/09/2022 at 19:23, David Roe wrote:

Dear Sage developers,
Following extensive discussion, both recently (prompted by issues upgrading the trac server) and 
over the last decade, we are calling a vote on 
switching Sage development from Trac to Github.  We've created a summary of the pros and cons of each system, 
a description of the development 
model to be used on github, 
and a trac ticket for coordinating work on the transition.  
More work will need to be done to carry out the actual transition once voting is complete.


The voting will last until noon Eastern time (16:00 UTC) on Wednesday, October 5.  Please use this 
thread only for sending votes, to make it easier to count them afterward; there is a parallel thread 
where you can make arguments in favor of either system.


Finally, I will close with a plea to be involved in this vote and discussion even if you are not a 
core Sage developer.  By definition, core Sage developers have become comfortable with trac, and I 
think that one of the major arguments in favor of github is that it will help bring in new 
contributors who are not familiar with Sage's development workflow.  
Anyone who has ever contributed to the 
Sage code base or who maintains a Sage user package is welcome to vote.

David



Re: [sage-devel] Re: Naming conflict between Givaro and Factory

2021-12-06 Thread Clement Pernet
For the record, I reported the problem

https://www.singular.uni-kl.de/forum/viewtopic.php?f=10=2965=0

and a fix was quickly committed.
I will include this fix as a patch to singular in the branch of

https://trac.sagemath.org/ticket/32959

where the conflict occurred.

Clément

On 06/12/2021 at 17:21, Dima Pasechnik wrote:
> 
> 
> On Mon, 6 Dec 2021, 14:42 Clement Pernet wrote:
> 
> Thanks, I also recently realized it was coming from Singular.
> 
> 
> On 03/12/2021 at 16:10, Maarten Derickx wrote:
> 
> > Not really sure why they #define IntegerDomain 1 on line 25 there. But 
> > I guess that doesn't matter. It is just a case of two different libraries 
> > accidentally using the same name for
> > different things.
> Sure, that's why one should use namespaces as much as possible. Macros 
> defined like this one are
> very invasive, as they even conflict with names protected in a namespace, 
> like Givaro's. It could be
> fixed by
> - either prefixing the macro with something like __SINGULAR in order to 
> emulate a namespace,
> - or #undef'ing it at the end of Singular's code.
> 
> >
> > So this just means we should be careful with includes and other things 
> so that these things
> don't clash.
> 
> Not sure that it can be solved only by re-ordering or carefully picking 
> the includes.
> 
> Is anyone from singular around here who sees an alternative way around it?
> I'll report the problem upstream.
> 
> 
> it seems they had that macro for 26 years :-)
> 
> Surely they ought to replace all that with enums...
> 
> 
> Clément
> 
> > On Friday, 3 December 2021 at 11:54:08 UTC+1, Clement Pernet wrote:
> >
> >     Hi,
> >
> >     Working on
> >
> >     https://trac.sagemath.org/ticket/32959
> >
> >     I hit a compilation error due to
> >
> >     sage/local/include/factory/factory.h: #define IntegerDomain 1
> >
> >     which conflicts with
> >
> >     sage/local/include/givaro/givinteger.h: using IntegerDomain = 
> ZRing
> >
> >     See the compilation log:
> >
> >     [sagelib-9.5.beta7] In file included from
> >     /home/soft/sage/local/include/singular/coeffs/coeffs.h:19,
> >     [sagelib-9.5.beta7]  from
> >     /home/soft/sage/local/include/singular/polys/monomials/ring.h:12,
> >     [sagelib-9.5.beta7]  from
> /home/soft/sage/local/include/singular/kernel/polys.h:15,
> >     [sagelib-9.5.beta7]  from
> >     /home/soft/sage/local/include/singular/kernel/structs.h:25,
> >     [sagelib-9.5.beta7]  from
> >     /home/soft/sage/local/include/singular/Singular/libsingular.h:7,
> >     [sagelib-9.5.beta7]  from
> >     
> build/cythonized/sage/rings/polynomial/multi_polynomial_libsingular.cpp:724:
> >     [sagelib-9.5.beta7] 
> /home/soft/sage/local/include/givaro/givinteger.h: At global scope:
> >     [sagelib-9.5.beta7] 
> /home/soft/sage/local/include/factory/factory.h:92:23: error: expected
> >     nested-name-specifier before numeric constant
> >     [sagelib-9.5.beta7]    92 | #define IntegerDomain 1
> >     [sagelib-9.5.beta7]   |   ^
> >     [sagelib-9.5.beta7] 
> /home/soft/sage/local/include/givaro/givinteger.h:412:11: note: in
> expansion of
> >     macro ‘IntegerDomain’
> >     [sagelib-9.5.beta7]   412 | using IntegerDomain = 
> ZRing;
> >     [sagelib-9.5.beta7]   |   ^
> >
> >     I have no clue what this Factory is, nor why it defines 
> IntegerDomain to 1.
> >
> >     Any insight would be most welcome.
> >
> >     Cheers.
> >
> >     Clément
> >
> >

Re: [sage-devel] Re: Naming conflict between Givaro and Factory

2021-12-06 Thread Clement Pernet
Thanks, I also recently realized it was coming from Singular.


On 03/12/2021 at 16:10, Maarten Derickx wrote:

> Not really sure why they #define IntegerDomain 1 on line 25 there. But I 
> guess that doesn't matter. It is just a case of two different 
> libraries accidentally using the same name for
> different things.
Sure, that's why one should use namespaces as much as possible. Macros defined 
like this one are
very invasive, as they even conflict with names protected in a namespace, like 
Givaro's. It could be
fixed by
- either prefixing the macro with something like __SINGULAR in order to emulate 
a namespace,
- or #undef'ing it at the end of Singular's code (see the sketch below).
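To make the clash concrete, here is a minimal, self-contained C++ sketch (this is not the actual
Factory or Givaro code, just an illustration of the mechanism and of the #undef workaround):

    // A macro, as in factory.h, is pure text substitution and ignores namespaces.
    #define IntegerDomain 1

    namespace Givaro {
        struct Integer {};
        template <class T> struct ZRing {};
        // If the next line were uncommented while the macro is live, the preprocessor
        // would turn it into "using 1 = ZRing<Integer>;", which is exactly the
        // "expected nested-name-specifier before numeric constant" error above.
        // using IntegerDomain = ZRing<Integer>;
    }

    // Workaround: undefine the macro once the code that needs it has been included.
    #undef IntegerDomain

    namespace Givaro {
        using IntegerDomain = ZRing<Integer>;   // now compiles fine
    }

    int main() { Givaro::IntegerDomain D; (void)D; return 0; }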

> 
> So this just means we should be careful with includes and other things so 
> that these things don't clash.

Not sure that it can be solved only by re-ordering or carefully picking the 
includes.

Is anyone from singular around here who sees an alternative way around it?
I'll report the problem upstream.

Clément

> On Friday, 3 December 2021 at 11:54:08 UTC+1, Clement Pernet wrote:
> 
> Hi,
> 
> Working on
> 
>     https://trac.sagemath.org/ticket/32959
> 
> I hit a compilation error due to
> 
> sage/local/include/factory/factory.h: #define IntegerDomain 1
> 
> which conflicts with
> 
> sage/local/include/givaro/givinteger.h: using IntegerDomain = 
> ZRing
> 
> See the compilation log:
> 
> [sagelib-9.5.beta7] In file included from
> /home/soft/sage/local/include/singular/coeffs/coeffs.h:19,
> [sagelib-9.5.beta7]  from
> /home/soft/sage/local/include/singular/polys/monomials/ring.h:12,
> [sagelib-9.5.beta7]  from 
> /home/soft/sage/local/include/singular/kernel/polys.h:15,
> [sagelib-9.5.beta7]  from
> /home/soft/sage/local/include/singular/kernel/structs.h:25,
> [sagelib-9.5.beta7]  from
> /home/soft/sage/local/include/singular/Singular/libsingular.h:7,
> [sagelib-9.5.beta7]  from
> 
> build/cythonized/sage/rings/polynomial/multi_polynomial_libsingular.cpp:724:
> [sagelib-9.5.beta7] /home/soft/sage/local/include/givaro/givinteger.h: At 
> global scope:
> [sagelib-9.5.beta7] 
> /home/soft/sage/local/include/factory/factory.h:92:23: error: expected
> nested-name-specifier before numeric constant
> [sagelib-9.5.beta7]    92 | #define IntegerDomain 1
> [sagelib-9.5.beta7]   |   ^
> [sagelib-9.5.beta7] 
> /home/soft/sage/local/include/givaro/givinteger.h:412:11: note: in expansion 
> of
> macro ‘IntegerDomain’
> [sagelib-9.5.beta7]   412 | using IntegerDomain = ZRing;
> [sagelib-9.5.beta7]   |   ^
> 
> I have no clue what this Factory is, nor why it defines IntegerDomain to 
> 1.
> 
> Any insight would be most welcome.
> 
> Cheers.
> 
> Clément
> 
> 


[sage-devel] Naming conflict between Givaro and Factory

2021-12-03 Thread Clement Pernet
Hi,

Working on

    https://trac.sagemath.org/ticket/32959

I hit a compilation error due to

sage/local/include/factory/factory.h: #define IntegerDomain 1

which conflicts with

sage/local/include/givaro/givinteger.h: using IntegerDomain = ZRing

See the compilation log:

[sagelib-9.5.beta7] In file included from 
/home/soft/sage/local/include/singular/coeffs/coeffs.h:19,
[sagelib-9.5.beta7]  from
/home/soft/sage/local/include/singular/polys/monomials/ring.h:12,
[sagelib-9.5.beta7]  from 
/home/soft/sage/local/include/singular/kernel/polys.h:15,
[sagelib-9.5.beta7]  from 
/home/soft/sage/local/include/singular/kernel/structs.h:25,
[sagelib-9.5.beta7]  from
/home/soft/sage/local/include/singular/Singular/libsingular.h:7,
[sagelib-9.5.beta7]  from
build/cythonized/sage/rings/polynomial/multi_polynomial_libsingular.cpp:724:
[sagelib-9.5.beta7] /home/soft/sage/local/include/givaro/givinteger.h: At 
global scope:
[sagelib-9.5.beta7] /home/soft/sage/local/include/factory/factory.h:92:23: 
error: expected
nested-name-specifier before numeric constant
[sagelib-9.5.beta7]    92 | #define IntegerDomain 1
[sagelib-9.5.beta7]   |   ^
[sagelib-9.5.beta7] /home/soft/sage/local/include/givaro/givinteger.h:412:11: 
note: in expansion of
macro ‘IntegerDomain’
[sagelib-9.5.beta7]   412 | using IntegerDomain = ZRing;
[sagelib-9.5.beta7]   |   ^

I have no clue what this Factory is, nor why it defines IntegerDomain to 1.

Any insight would be most welcome.

Cheers.

Clément




[sage-devel] Bug(s) in scalar product over GF(p)

2019-06-21 Thread Clement Pernet
Hi,

The following simple session triggers what I think is at least one bug, if not 
two:

sage: p=193379
sage: K=GF(p)
sage: a=K(1)
sage: b=K(191495)
sage: c=K(109320)
sage: d=K(167667)
sage: e=103937
sage: a*c+b*d-e
102041
sage: vector([a,b])*vector([c,d])-e
-91339
sage: -91339+e
12598
sage: vector([a,b])*vector([c,d])
12599


Namely, the scalar product of GF(p) elements followed by a subtraction is 
negative (first bug).

Then the negative value is off by one (second bug).
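For reference, a quick by-hand check of the values above (a sanity check added here, not part of the
original report):

    \[
      a c + b d = 1\cdot 109320 + 191495\cdot 167667 = 32\,107\,501\,485 \equiv 12599 \pmod{193379},
    \]
    \[
      12599 - e = 12599 - 103937 = -91338 \equiv 102041 \pmod{193379}.
    \]

So the dot product value 12599 agrees with a*c + b*d, and it is the mixed subtraction that produces
the out-of-range -91339, which is moreover off by one from 12599 - e = -91338.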

Some weird coercion seems to happen between IntegerMod_int64, IntegerMod_int and 
ZZ:

sage: type(vector([a,b])[0])

sage: type(vector([a,b])*vector([c,d]))


Are these known bugs?

Best.

Clément



Re: [sage-devel] avx512 anyone? (Skylake-X or later)

2019-06-15 Thread Clement Pernet
Hi,
I just successfully compiled and tested upstream develop (at
8df690fd2a4e7a0c36e7dbc05c22d4d977217065) on an Intel(R) Xeon(R) Gold 6126
CPU @ 2.60GHz server with AVX512.
All tests passed.
Is there anything specific to compile or test for #27961?
I'll take a look at the ticket later on.
As a side note, OpenBLAS dgemm is still disappointing in terms of
performance on this machine, which makes me think that they disabled some
AVX512-specific kernels.

Clément

On Sat, 15 Jun 2019 at 10:50, Volker Braun wrote:

> Has anybody except for John Palmieri tested the latest Sage beta releases
> on a machine with avx512 support? I.e. at least Core i7 7820X (the X being
> important):
>
> https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512
>
> Trying to get some data for https://trac.sagemath.org/ticket/27961
>


[sage-devel] How would you like your parallel linear algebra ?

2019-03-22 Thread Clement Pernet

Hi,

In https://trac.sagemath.org/ticket/27444 we are exposing the parallel versions (using OpenMP) of 
fflas-ffpack routines used in SageMath. This is only about multicore parallelism, based on OpenMP.


We would like to discuss the best design for exposing them to the end user:

1. Should the default behaviour be sequential or parallel ?

2. If the default is parallel, how many cores should be used by default? I guess the default should 
be whatever value omp_get_max_threads() returns, which is usually the number of logical cores; however, on 
most systems where hyperthreading is enabled, this means that we will use twice as many threads as the 
number of physical cores, which will slow down the computation.


3. What interface do we want for the sage user who wants to

  3a switch between sequential and parallel,

  3b specify the number of cores to be used ?

Should there be some kind of environment variable containing such information?

Note that passing an extra argument is not (always) an option, as we would like the * operator 
for matrices to benefit from this parallelization.


In its current state, the branch on #27444 calls the parallel fgemm when multiplying two modn_dense 
matrices, using the maximal number of available OMP threads. It is currently possible to use a specified 
number of threads by exporting OMP_NUM_THREADS=XXX before launching sage (see the sketch below).
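For illustration, here is a minimal standalone OpenMP program (independent of Sage and
fflas-ffpack) showing the two knobs involved: the OMP_NUM_THREADS environment variable, read once at
program start, and the runtime API that can override it:

    #include <omp.h>
    #include <cstdio>

    int main() {
        // Default budget: typically one thread per logical core, i.e. twice the
        // number of physical cores when hyperthreading is enabled.  It honors
        // OMP_NUM_THREADS if that variable was exported before launching.
        printf("default thread budget: %d\n", omp_get_max_threads());

        omp_set_num_threads(4);   // explicit cap, e.g. the number of physical cores
        #pragma omp parallel
        {
            #pragma omp single
            printf("now running with %d threads\n", omp_get_num_threads());
        }
        return 0;
    }

Compiled with e.g. g++ -fopenmp, running it as OMP_NUM_THREADS=1 ./a.out shows the environment
variable taking effect, which is essentially what the current branch relies on.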


Thanks for your feedback.

Clément



Re: [sage-devel] Tools to compute Hilbert Poincaré series

2018-09-04 Thread Clement Pernet




On 02/09/2018 at 14:13, Simon King wrote:

5. Use Macaulay2
Bad: (a) The experimental spkg is broken, it won't install.
 (b) The installation instructions for Ubuntu on the project
pages are rather verbose and require adding obscure
repositories.
(c) Installation from sources (release-1.12 from github):
i. Although I installed all dependencies listed in the
   INSTALL file, "make" fails because additionally givaro
   is required. And 'export MAKE="make -j3"' makes Macaulay2
   believe that gnu make is not installed.
ii. I tried installation in a Sage shell, as givaro is
   in Sage. However, "make" fails as follows:
/home/king/Projekte/Macaulay2/M2/M2/usr-host/include/fflas-ffpack/fflas/fflas_ftrsm_src.inl:279:27:
 error: ‘openblas_set_num_threads’ was not declared in this scope
openblas_set_num_threads(__FFLASFFPACK_OPENBLAS_NUM_THREADS);
^
cc1plus: warning: unrecognized command line option ‘-Wno-mismatched-tags’
../../include/config.Makefile:226: recipe for target 'interface.o' failed
make[2]: *** [interface.o] Error 1
make[2]: Leaving directory '/home/king/Projekte/Macaulay2/M2/M2/Macaulay2/d'
Makefile:20: recipe for target 'all-in-d' failed
make[1]: *** [all-in-d] Error 2
==> Currently Macaulay2 is no option.


Hi,

Actually, not only Givaro but also fflas-ffpack (another dependency of 
Macaulay2) is in SageMath.
Your compilation error seems to indicate a mess in the configuration: you probably have 2 installs of 
fflas-ffpack of different versions, one in SageMath and the other one in Macaulay2, and this likely 
causes the compilation failure.
Maybe using the Givaro provided by Sage is a bad idea, as it forces you to also know about Sage's 
fflas-ffpack, and I don't know whether it is possible to disable Macaulay2's fflas-ffpack when configuring it.


Clément



Re: [sage-devel] Error installing package linbox-1.4.2

2017-07-21 Thread Clement Pernet

On 19/07/2017 at 22:32, François Bissey wrote:

One thing that I don’t remember seeing
anywhere before and that I find suspicious “-fabi-version=6” in CFLAGS.


For the record, this -fabi-version=6 option is forced by givaro (a dependency of LinBox) to avoid a 
demangling bug with the standard ABI when using the SIMD vector types __m128 and __m256.


As for the bug, I confirm that 768 MB of RAM is likely the reason for the 
compilation failure.

Clément


Is this something you somehow added, or do you have no idea what I am talking
about?

François


On 20/07/2017, at 06:22, Jose Luis Bracamonte Amavizca  
wrote:

I was trying to install SageMathCell on a VPS with Ubuntu 16.04.2 LTS (GNU/Linux 
2.6.32-042stab120.20 x86_64) following the instructions from the official GitHub repository page at 
https://github.com/sagemath/sagecell . In the section "Simple Installation", at step 3 
after the "make" command, and after waiting a long time for the build, an error occurs, 
displaying the following on the terminal.

[linbox-1.4.2] 


[linbox-1.4.2] Error installing package linbox-1.4.2

[linbox-1.4.2] 


[linbox-1.4.2] Please email sage-devel 
(http://groups.google.com/group/sage-devel)

[linbox-1.4.2] explaining the problem and including the relevant part of the 
log file

[linbox-1.4.2]   /home/luisjba/sc_build/sage/logs/pkgs/linbox-1.4.2.log

[linbox-1.4.2] Describe your computer, operating system, etc.

[linbox-1.4.2] If you want to try to fix the problem yourself, *don't* just cd 
to

[linbox-1.4.2] 
/home/luisjba/sc_build/sage/local/var/tmp/sage/build/linbox-1.4.2 and type 
'make' or whatever is appropriate.

[linbox-1.4.2] Instead, the following commands setup all environment variables

[linbox-1.4.2] correctly and load a subshell for you to debug the error:

[linbox-1.4.2]   (cd 
'/home/luisjba/sc_build/sage/local/var/tmp/sage/build/linbox-1.4.2' && 
'/home/luisjba/sc_build/sage/sage' --sh)

[linbox-1.4.2] When you are done debugging, you can type "exit" to leave the 
subshell.

[linbox-1.4.2] 


Makefile:2103: recipe for target 
'/home/luisjba/sc_build/sage/local/var/lib/sage/installed/linbox-1.4.2' failed

make[2]: *** 
[/home/luisjba/sc_build/sage/local/var/lib/sage/installed/linbox-1.4.2] Error 1

make[2]: Leaving directory '/home/luisjba/sc_build/sage/build/make'

Makefile:912: recipe for target 'all' failed

make[1]: *** [all] Error 2

make[1]: Leaving directory '/home/luisjba/sc_build/sage/build/make'



real2m43.118s

user1m54.334s

sys 0m13.253s

***

Error building Sage.



The following package(s) may have failed to build (not necessarily

during this run of 'make all'):



* package: linbox-1.4.2

   log file: /home/luisjba/sc_build/sage/logs/pkgs/linbox-1.4.2.log

   build directory: 
/home/luisjba/sc_build/sage/local/var/tmp/sage/build/linbox-1.4.2



The build directory may contain configuration files and other potentially

helpful information. WARNING: if you now run 'make' again, the build

directory will, by default, be deleted. Set the environment variable

SAGE_KEEP_BUILT_SPKGS to 'yes' to prevent this.



Makefile:16: recipe for target 'all' failed

make: *** [all] Error 1


  The log file linbox-1.4.2.log is attached here.



Re: [sage-devel] Experience from Sage Review Day 3: An online hacking event

2017-02-08 Thread Clement Pernet

Hi,

Thanks Johan for this summary.
I definitely second your statement: it was a successful meeting with a very high output / 
organization overhead ratio.


For those interested in the details of this output:
- the framapad: https://bimestriel.framapad.org/p/SageReviewDay3
- trac tickets with keywords rd3: https://trac.sagemath.org/query?keywords=~rd3

Just a few more remarks/suggestions for future editions:

- numerous tickets of various difficulty should be opened (i.e. problems identified) before the 
beginning of the meeting.
- I found Slack quite user-unfriendly, due to popups (e.g. when writing #...) and weird 
scrolling behaviour. We might want to use gitter next time.
- This format of meeting is very likely less convenient for newcomers to join the project: the 
short time frame and the distance make communication work best with experienced developers. IIRC, 
dev 1 (2008!) was already in this spirit: mostly directed at developers with some experience already.


I'm planning to organize another such online dev meeting in the near future for linear algebra and 
linbox interface related topics.


Best
Clément

On 08/02/2017 at 09:21, Johan S. H. Rosenkilde wrote:

Hi sage-devel

Yesterday we held Sage Review Day 3, and it was a big success. I just
wanted to briefly share my experience with this.

Overall, 8 developers participated, most of them all day. We
communicated using Slack, Framapad and Trac. We got 14 tickets
positively reviewed, and had good progress on 3 more tickets.

Planning:
Minimal. We started as 5 developers who wished to give an extra push on
coding theory. We set a date, created a Sage wiki page, and announced on
sage-devel, as well as sending emails to participants of SD75. Just
before the event, we made sure the coding theory dev page on Trac,
https://trac.sagemath.org/wiki/SageCodingRoadMap, was updated; we
created a Slack team page; and a framapad with skeleton information.


During the day:
Communication was four-fold:

1) The coding theory dev page on trac served as a static list of tickets
needing work - everyone looked at that list for tasks.

2) Slack served as live chat and gave a nice atmosphere of working
together. Devs announced work, needs reviews, positive reviews, asked
for assistance, and made jokes. It was a huge motivator.

3) Framapad was a live reference list of what people were working on and
what had already been done.

4) Trac tickets were as always where "official" discussions were taking
place (posts/push followed by a poke on Slack usually).


Aftermath:
For me, this was a great way to set aside a well-defined amount of time
for Sage development. It was also much more efficient than usual
scattered efforts, since context switching takes a lot of time.
Interaction btw. developer and reviewer was immediate, and getting
response on a ticket 5 mins after posting the code is immensely
satisfying.

I feel that this is a fruitful, informal way to coordinate work on Sage.
There really was a mini-Sage Days feeling over it! One reason for the
big feeling of success was that coding theory had so many outstanding
tickets waiting review. Actual designing and serious implementation
would lead to much fewer tickets being finished, and perhaps less
dynamic interaction on the live chat. Perhaps this type of event is
therefore best for a "Review/Bug Day", or at least very focused type of
development.

I'm hoping that the other participants will chime in, or anyone else
with experience of similar events to share.

Best,
Johan





Re: [sage-devel] multithreading performance issues

2016-10-05 Thread Clement Pernet

To follow up on Jean-Pierre's summary of the situation:

The current version of fflas-ffpack in sage (v2.2.2) uses the BLAS provided as is. Running it with a 
multithreaded BLAS may result in slower code than with a single-threaded BLAS. This is very likely 
due to memory transfer and coherence problems.


More generally, we strongly suggest to use a single threaded BLAS and let fflas-ffpack deal with the 
parallelization. This is common practice for example with parallel versions of LAPACK.


Therefore, after the discussion in https://trac.sagemath.org/ticket/21323 we have decided to give 
fflas-ffpack the possibility to force the number of threads that OpenBLAS can use at runtime. In 
this context we will force it to 1.

This is available upstream, and I plan to update sage's fflas-ffpack when we 
release v2.3.0.
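For the record, a sketch of the idea (this is not fflas-ffpack's actual code, and HAVE_OPENBLAS is a
hypothetical configure-time flag used here only for illustration): pin the BLAS layer to a single
thread so that all the parallelism is managed above it.

    // openblas_set_num_threads() is an OpenBLAS-specific extension, so guard it
    // in case another BLAS implementation is linked in.
    #ifdef HAVE_OPENBLAS
    extern "C" void openblas_set_num_threads(int num_threads);
    #endif

    static void pin_blas_to_one_thread() {
    #ifdef HAVE_OPENBLAS
        openblas_set_num_threads(1);   // the library above the BLAS owns the parallelism
    #endif
    }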

Clément

On 05/10/2016 at 11:24, Jean-Pierre Flori wrote:

Currently OpenBlas does what it wants for multithreading.
We hesitated to disable it but prefered to wait and think about it:
see https://trac.sagemath.org/ticket/21323.

You can still influence its use of threads setting OPENBLAS_NUM_THREADS.
See the trac ticket, just note that this is not Sage specific.
And as you discovered, it seems it is also influenced by OMP_NUM_THREADS...

On Wednesday, October 5, 2016 at 9:28:23 AM UTC+2, tdumont wrote:

What is the size of the matrix you use ?
Whatever you do, openmp in blas is interesting only if you compute with
large matrices.
If your computations are embedded  in an @parallel and launch n
processes, be careful  that your  OMP_NUM_THREADS be less or equal to
ncores/n.

My experience is (I am doing numerical computations)  that there are
very few cases where using openmp in blas libraries is interesting.
Parallelism should generally be searched at a higher level.

One of the interests of multithreaded BLAS is for hardware vendors: with
Intel's MKL BLAS, you can obtain the maximum possible performance of
the machine when you use DGEMM (i.e. product of matrices), due to the
high arithmetic intensity of matrix-matrix products. On my 2x8-core
Sandy Bridge at 2.7 GHz, I have obtained more than 300 gigaflops, but
with matrices of size > 1000! And this is only true for DGEMM

t.d.

On 04/10/2016 at 20:26, Jonathan Bober wrote:
> See the following timings: If I start Sage with OMP_NUM_THREADS=1, a
> particular computation takes 1.52 cpu seconds and 1.56 wall seconds.
>
> The same computation without OMP_NUM_THREADS set takes 12.8 cpu seconds
> and 1.69 wall seconds. This is particularly devastating when I'm running
> with @parallel to use all of my cpu cores.
>
> My guess is that this is Linbox related, since these computations do
> some exact linear algebra, and Linbox can do some multithreading, which
> perhaps uses OpenMP.
>
> jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
> [...]
> SageMath version 7.4.beta6, Release Date: 2016-09-24
> [...]
> Warning: this is a prerelease version, and it may be unstable.
> [...]
> sage: %time M = ModularSymbols(5113, 2, -1)
> CPU times: user 509 ms, sys: 21 ms, total: 530 ms
> Wall time: 530 ms
> sage: %time S = M.cuspidal_subspace().new_subspace()
> CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s
> Wall time: 1.56 s
>
>
> jb12407@lmfdb1:~$ sage
> [...]
> SageMath version 7.4.beta6, Release Date: 2016-09-24
> [...]
> sage: %time M = ModularSymbols(5113, 2, -1)
> CPU times: user 570 ms, sys: 18 ms, total: 588 ms
> Wall time: 591 ms
> sage: %time S = M.cuspidal_subspace().new_subspace()
> CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s
> Wall time: 1.69 s
>

Re: [sage-devel] linbox 64-bit charpoly

2016-09-28 Thread Clement Pernet

Hi,

Update: I think I finally found the bug that led some rare computations to hang forever: givaro's 
random iterator was seeded from the 6 digits of the current time's microseconds, and could, with probability 
10^-6, be seeded with 0; the congruential generator would then always output 0, causing the 
search for a non-zero Krylov vector to hang forever!
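A toy illustration of the failure mode (this is not Givaro's actual generator): a purely
multiplicative congruential generator has 0 as a fixed point, so a zero seed produces zeros forever.

    #include <cstdint>
    #include <cstdio>

    // x_{n+1} = a * x_n mod m, with no additive term: once the state is 0, it stays 0.
    static uint64_t lcg_next(uint64_t &state) {
        const uint64_t a = 48271, m = 2147483647ULL;   // Park-Miller parameters
        state = (a * state) % m;
        return state;
    }

    int main() {
        uint64_t good = 123456;   // a typical microsecond-based seed
        uint64_t bad  = 0;        // the probability-10^-6 unlucky case
        for (int i = 0; i < 3; ++i)
            printf("good: %10llu   bad: %llu\n",
                   (unsigned long long)lcg_next(good),
                   (unsigned long long)lcg_next(bad));
        return 0;
    }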


This might be also a fix to https://trac.sagemath.org/ticket/15535

Meanwhile I also found a corner case with a bug in LUKrylov charpoly. I posted a patch to both 
givaro and fflas-ffpack in https://trac.sagemath.org/ticket/21579


As for the choice of running an early termination for charpoly over ZZ, we already had this 
discussion a very long time ago, and it used to be a deterministic call with Hadamard's bound.

It has unfortunately been turned back to the early-termination version since 
then, which I agree is bad.

I'm cleaning up this code, and will provide a boolean argument proof to enable/disable the early 
termination on request.


Clément

On 28/09/2016 at 08:22, parisse wrote:



On Wednesday, 28 September 2016 at 03:13:11 UTC+2, Jonathan Bober wrote:


Ah, yes, I'm wrong again, as the multimodular in Flint is pretty new. I 
didn't look at what Sage
has until now (flint 2.5.2, which looks like it uses a fairly simple 
O(n^4) algorithm). I had
previously looked at the source code of the version of flint that I've 
actually been using
myself, which is from June. As I now recall (after reading an email I sent 
in June) I'm using a
"non-released" version precisely for the nmod_mat_charpoly() function, 
which doesn't exist in
the most recent release (which I guess might be 2.5.2, but flintlib.org 

seems to be having problems at the moment).

I've actually done some fairly extensive real world semi-testing of 
nmod_mat_charpoly() in the
last few months (for almost the same reasons that have lead me to 
investigate Sage/Linbox) but
not fmpz_mat_charpoly(). "semi" means that I haven't actually checked that 
the answers are
correct. I'm actually computing characteristic polynomials of integer 
matrices, but writing down
the integer matrices is too expensive, so I'm computing the polynomials 
more efficiently mod p
and then CRTing. Also, I'm doing exactly what I think Linbox does, in that 
I am just waiting for
the polynomials to stabilize. Step 2, when it eventually happens, will 
separately compute the
roots of these polynomials numerically, which will (heuristically) verify 
that they are correct.
(Step 3 might involve actually proving somehow that everything is correct, 
but I strongly fear
that it might involve confessing that everything is actually only 
"obviously" correct.) Once
step 2 happens, I'll either report some problems or let you know that 
everything went well.


I don't think computing roots numerically is a good idea, because the charpoly 
is ill-conditioned.
It would be interesting to compare outputs and timings with other packages, for 
example giac has its
own multi-modular charpoly implementation (multi-modular, with probabilistic 
answer if
proba_epsilon>0 or certified answer if proba_epsilon=0).




Re: [sage-devel] sage infrastructure at UW

2016-06-06 Thread Clement Pernet
On 06/06/2016 at 17:17, Ralf Stephan wrote:
> On Monday, June 6, 2016 at 4:22:02 PM UTC+2, Dima Pasechnik wrote:
> 
> Or perhaps someone pulled a full private copy recently?
> 
> 
> Around end of April, I think, someone better?
> 
> ralf@ark:~/sage/.git> find logs/refs/remotes/trac/| wc
>   72317231  368968


[pernet@nisqually] :~/Logiciels/sage/.git > find logs/refs/remotes/trac/|wc
   73827382  376919

This was updated on May 30.

-- 
   Clément



Re: [sage-devel] What can we assume about our C compiler

2015-09-25 Thread Clement Pernet
The Givaro-LinBox-fflas-ffpack ecosystem now requires C++11 support.
I'm hitting this problem, while working on #17635, upgrading the spkgs of these 
3 libraries.

Therefore:

> 1) Do we want to mandate c++11 support
Y[X]
N[ ]
> 
> 2) if yes what c++11 features do we want?
Feature complete [X]

as many features are used.

> List of features [ ] (list needed features)
Tentative list of what's currently being used (I may have missed some); a short demo follows below:
 * using
 * enable_if
 * auto
 * fixed width types and related constant suffixes (e.g. uint64_t x = 2_ui64;)
 * ...
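A tiny self-contained demo of the features listed above (the _ui64 literal suffix is defined here
just for the example; in the libraries it comes from their own headers):

    #include <cstdint>
    #include <type_traits>
    #include <utility>

    // user-defined literal suffix for fixed-width constants
    constexpr uint64_t operator"" _ui64(unsigned long long v) { return v; }

    // 'using' alias templates
    template <class T> using Pair = std::pair<T, T>;

    // SFINAE with enable_if: only defined for integral types
    template <class T>
    typename std::enable_if<std::is_integral<T>::value, T>::type
    twice(T x) { return 2 * x; }

    int main() {
        auto n = 2_ui64;                          // auto + fixed-width type + suffix
        Pair<uint64_t> p{n, twice(n)};
        return static_cast<int>(p.second - 4);    // exits with 0
    }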

Clément
>  * override
> 
> François
>  
>> On 22/09/2015, at 15:27, Ralf Stephan  wrote:
>>
>> On Monday, September 21, 2015 at 6:38:50 PM UTC+2, Volker Braun wrote:
>>> Afaik we already require C++11 support to compile Pynac 
>>
>> Yes, Pynac git master requires it but we're still installing backported 
>> versions (0.3.9.x vs 0.4.x).
>>
>> The ticket that never got finished was
>> http://trac.sagemath.org/ticket/18323
>>
>> Regards,
>>


Re: [sage-devel] Re: speed regression testing

2014-09-05 Thread Clement Pernet
Hi Jean-Pierre and all,

 Great!
 At some point, I pushed patches on linbox-devel (or something like that), but 
 never got any reply.
 IIRC they were just tiny patches to ease compilation on unusual archs.
 I did not check recently, but one year ago or so they did not make it 
 upstream.
 It would be great to merge them (if you find them sound of course).

All apologies for having left these reports unanswered and unprocessed. I 
applied your linbox-sage
interface patch upstream (and reported it on sage's trac #).
Regarding the 2 other reports (-no-undefined for the Windows port, and non-x86_64 
int type bugs),
tickets have been created and we'll make sure to address them in the next 
release.

 
 I also would like to mention that sage relies on ATLAS BLAS, which is no 
 longer state of the art as
 far as computation speed is concerned.
 
 Good to know, and another reason to make Sage BLAS-implementation agnostic.

Agreed.

Best
Clément

 
 We are considering switching to OpenBLAS (formerly GotoBLAS), which is now 
 released under BSD.
 
 http://www.openblas.net/
 
 The compile & configuration times are much shorter, which would impact 
 sage compile time.
 
 Best
 Clément
 
 Best,
 JP
 


Re: [sage-devel] Re: speed regression testing

2014-08-29 Thread Clement Pernet
Hi,

Let me clarify a few things:
- LinBox's approach to link against numerical BLAS has never changed, and 
probably will not in the
near future.
- LAPACK is not BLAS: BLAS provides optimized numerical matrix multiplication 
kernels (that LinBox
uses) and other basic routines, whereas LAPACK provides more advanced routines 
such as gaussian
elimination, QR factorization, etc, designed to work on top of a BLAS.
- LinBox links against LAPACK too (in the past and still now), but only for 
some variants of
computations that are AFAIK, not hooked in the sage interface: e.g. computing 
first invariant of the
Smith form.

I also recently remarked that many linalg computations over finite fields and 
ZZ were shamefully
slow. There definitely is a misconfiguration somewhere, as we do not observe 
any such regression in
LinBox. Instead, the library has gotten actually significantly faster.

So in the short term: I'm definitely willing to look into improving the sage 
interface, and fix this
misconfiguration. I'll have very little time before the end of september, 
though, but can start
working on it in october.

We're also planning on cutting a new release of LinBox this fall (October), and 
this could be a good
time to update the linbox spkg.

I also would like to mention that sage relies on ATLAS BLAS, which is no longer 
state of the art as far
as computation speed is concerned.
We are considering switching to OpenBLAS (formerly GotoBLAS), which is now released 
under BSD.

http://www.openblas.net/

The compile & configuration times are much shorter, which would impact sage 
compile time.

Best
Clément


Le 29/08/2014 10:26, William A Stein a écrit :
 On Fri, Aug 29, 2014 at 10:24 AM, Jean-Pierre Flori jpfl...@gmail.com wrote:
 I don't know for sure, but I think linbox does not link to LAPACK
 (anymore?).
 Maybe that make a small difference already.
 That's precisely what I'm worried about -- not linking lapack would
 perhaps  change linbox from being world class to absolutely terrible.

 I opened a ticket about that some monthes ago.
 Thanks!

 --
 You received this message because you are subscribed to the Google Groups
 sage-devel group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sage-devel+unsubscr...@googlegroups.com.
 To post to this group, send email to sage-devel@googlegroups.com.
 Visit this group at http://groups.google.com/group/sage-devel.
 For more options, visit https://groups.google.com/d/optout.




-- 
   Clément Pernet


Lab. de l'Informatique Parallélisme, AriC office LUG 277
ENS de Lyon, CNRS  tel: +33 (0)4 37 28 74 75
46 Allée d'Italie  fax: +33 (0)4 72 72 80 80
69364 LYON Cedex 07- France http://membres-liglab.imag.fr/pernet
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xAF8B61347AE8C23D





Re: [sage-devel] Integer matrices using FLINT

2014-08-13 Thread Clement Pernet
Hi,

Just to mention that fflas-ffpack and LinBox have been improving recently, wrt 
computation speed of
most basic routines, and in particular with addition and improvement of kernels 
for efficient matmul
over Z and large Z/pZ.

Benchmarketing is in progress, but matmul seems to compare favorably wrt FLINT 
especially with large
bitsizes and dimensions.

The instance given in ticket #16803 (n=1000 range(10^6)) runs in 469ms on one 
i5-3320M core of my
laptop.

While there's still more work in progress, the code has been released in 
fflas-ffpack-2.0 10 days
ago and will soon be exposed in LinBox and our next step is of course to update 
the sage spkg's,
which will definitely happen this coming year.

Well all this rant and marketing is just to advocate that sage keeps its 
ability to link against
various libraries and be able to select the best code available at the moment.

Best
Clément

On 13/08/2014 at 15:32, Martin Albrecht wrote:
 On Wednesday 13 Aug 2014 04:27:54 Marc Masdeu wrote:
 Alright, so the data available with the new implementation is different,
 since everything is encapsulated into flint types. What would be the right
 approach? Should I keep the old types and provide conversion functions, or
 should the different functions decide on other algorithms depending on some
 heuristics (which should be re-done)?
 
 I would suggest to write conversion functions from fmpz_mat_t to pari, 
 linbox, 
 iml, NTL and back. Then use those to to convert to, run the computation, and 
 convert back.
 
 As for the heuristics, I would start with FLINT being the default but would 
 check if alternative implementations beat it for the values suggested by the 
 old heuristics.
 
 Also, do you think it is worth keeping the native code (the one written by
 William) -- especially for the basic functionality --, or this should be
 considered superseded by the FLINT functionality?
 
 I would consider it superseded once all missing features are ported over. I 
 would not keep it around, that's a maintenance nightmare.
  
 For now, and as an intermediate step, I will reincorporate the calls to
 other packages but set the defaults to using flint whenever possible.

 Thanks for the feedback!

 Marc.

 On Tuesday, August 12, 2014 7:00:38 PM UTC+1, wstein wrote:
 On Tue, Aug 12, 2014 at 10:52 AM, Martin Albrecht

 martinr...@googlemail.com javascript: wrote:
 Hi, I like the proposal to move some types over to FLINT. However, you

 removed

 some options, e.g. calling Pari, LinBox or IML for solving certain

 problems

 (charpoly, kernel, …). I'd prefer these options to be preserved as it is

 not

 clear to me a priori that FLINT will in all cases be fastest. Also,

 having

 choices allows to compare results.

 +1.   In my experience, having implemented Matrix_integer_dense in the
 first place, most systems that we call are full of bugs.   It's almost
 never the case that any of the claimed functions, e.g., charpoly,
 kernel, etc. aren't buggy.  It's critical (and disturbing) to run test
 code comparing the various systems with various random (and not)
 inputs.
 Also, there are some systems like linbox that have proof=False
 options, which can be faster, but will in fact be very wrong,
 especially in corner cases.

 I also noticed your patch removes a bunch of verbose output.  Why?
 Having the potential for logging when running code is very useful:

 - t = verbose('hermite mod %s'%D, caller_name='matrix_integer_dense')
 cdef Matrix_integer_dense res =
 self._new_uninitialized_matrix(self._nrows, self._ncols)
 self._hnf_modn(res, D)
 - verbose('finished hnf mod', t, caller_name='matrix_integer_dense')

 william

 Cheers,
 Martin

 On Tuesday 12 Aug 2014 10:12:04 Marc Masdeu wrote:
 Hi,

 Recently I noticed that Sage was not using fmpz_mat_t for matrices
 (probably when FLINT was incorporated in Sage it didn't yet have this).

 I

 have opened a ticket (http://trac.sagemath.org/ticket/16803 --thanks
 pbruin!--) with a patch that reimplements matrix_integer_dense with

 FLINT,

 and it would probably be a good idea to do a similar thing for

 fmpq_mat_t.

 In any case, I am new to FLINT so I might not be doing the right

 things, if

 any expert is willing to review the ticket it would be great!

 Best,

 Marc.
 



[sage-devel] 3 postdoc or research engineer positions open in Grenoble, Lyon and Paris, France

2014-02-06 Thread Clement Pernet
Hi,

I'd like to advertise these 3 job offers that should interest the sage 
developer community.
In particular, some of the code development in these jobs concerns 
parallelization of libraries
either used in sage (LinBox, fpLLL) or of potential interest for sage (FGb). 
Part of the job includes
improving the integration in Sage of LinBox (which has been frozen for too 
long).
More generally it's about algorithms and code development in computer algebra.

Clément

==

Three research positions (postdoc or research engineer), offered by the French 
ANR project HPAC
(High Performance Algebraic Computation), are open.

Title: High Performance Algebraic Computing

Keywords: parallel computing, computer algebra, linear algebra, C/C++ 
programming

Locations:
- Grenoble, France (LIG-MOAIS, LJK-CASYS),
- Lyon, France (LIP-AriC),
- Paris, France (LIP6-PolSys),

Starting date: between June 2014 and January 2015

Type of position: 3 postdoc or research engineer positions of 1 year each

Detailed descriptions:
  - in english: 
http://hpac.gforge.inria.fr/Offres/PostdocEngineer-GrenobleLyonParis-HPAC-en.pdf
  - in french: 
http://hpac.gforge.inria.fr/Offres/PostdocEngineer-GrenobleLyonParis-HPAC-fr.pdf
  - HPAC project main web page: http://hpac.gforge.inria.fr/

General Context:

The ambition of the project HPAC is to provide international reference 
high-performance libraries
for exact linear algebra and algebraic systems on multi-processor architectures 
and to influence
parallel programming approaches for algebraic computing. It focuses on the 
design of new parallel
algorithms and building blocks dedicated to exact linear algebra routines. 
These blocks will then be
used for the parallelization of the sequential code of the LinBox and FGb 
libraries, state of the
art for exact linear algebra and polynomial systems solving, and used in many 
computer algebra
systems. The project combines several areas of expertise: parallel runtime and 
language, exact,
symbolic and symbolic/numeric algorithmic, and software engineering.

Profile of the positions:
=
We are seeking candidates with solid expertise in software library design 
and developments (e.g.
C, C++, OpenMP, Autotools, versioning,...) with preferably good background on 
mathematical software
and computer algebra algorithmic. The main outcome of the work will depend on 
the type
of the position (postdoc or engineer) and include code development in 
open-source C/C++ libraries
such as LinBox, FGb, Kaapi and research publications in international journals 
or conferences.

Each location is seeking candidates matching the following keywords:

- Lyon: (contact: gilles.vill...@ens-lyon.fr)
High performance/parallel computer algebra, symbolic and mixed symbolic-numeric 
linear algebra,
validated computation, high performance Euclidean lattice computation, lattice 
basis reduction.

- Grenoble: (contact: jean-guillaume.du...@imag.fr)
Library design and development, LinBox, Sage, XKaapi, parallel exact linear 
algebra, work-stealing
and data-flow tasks.

- Paris: (contact: jean-charles.faug...@groebner.org)
Polynomial system solving, Grobner basis computations, parallel exact linear 
algebra, algebraic
cryptanalysis, distributed computing.

Feel free to exchange with the contact person of each site for further 
information.



[sage-devel] Re: Matrix multiplication

2011-07-18 Thread Clement Pernet
Hi Ivo,

On Jul 18, 1:32 pm, hedtke hed...@me.com wrote:
 With the in-memory variant I mean: Boyer, Dumas, Pernet and Zhou: Memory 
 efficient scheduling of Strassen-Winograd's matrix multiplication algorithm. 
 International Symposium on Symbolic and Algebraic Computation 2009.

 This needs only a constant number of auxiliary variables, i think.

Well, we wrote this paper from our experience implementing Strassen-
Winograd's algorithm in LinBox. The version in LinBox is actually not
the fully in-memory version, for it has a slightly larger constant in
the time complexity.
We use the classic 2/3 n^2 extra memory version (for C <- AxB) and the
1 n^2 version (for C <- AxB +- C), as the computation time is usually a
bigger concern than memory.

 Loop Unrolling or Tiling doesn't change the asymptotic complexity. But we 
 can use knowledge about the computer architecture (L1 cache, etc.) to speed 
 up the computations by a factor of 2.

As Martin already wrote, this is already taken into consideration for
the base case: using the numerical BLAS routines for LinBox, or table
precomputations + fine tuning + ... for M4RI over GF(2)

 Please remember that the parameters for Strassen's algorithm from his 
 original paper are far from optimal. I mean that the original method 
 was only built for matrices of size m*2^k. Strassen said something like: 
 If you want to calculate the product of two n x n matrices for an arbitrary n, 
 choose m=f(n) and k=g(n) for some f and g (see his paper from 1969). These f 
 and g are NOT a good choice. There are better ones from R. Probert and others.
Of course we do not use these parameters. In LinBox, they are
determined at install time, by benchmarks.

Clément Pernet



[sage-devel] Re: Matrix multiplication

2011-07-18 Thread Clement Pernet
Great!!
I have to be in Grenoble during that period for teaching but I'll try
to arrange some free time to hang around on IRC and work remotely with
you guys during that period.
Clément

On Jul 18, 12:05 pm, Martin Albrecht martinralbre...@googlemail.com
wrote:
  When I get a chance I will take a look at redoing the template
  structure in the dense_ctypes.patch. I am back to normal schedule
  only today, so this will probably happen sometime tomorrow.

  Perhaps you can help with the pickling, coercion, etc. problems once I
  get the linbox wrapper working. ;)

 Cool! My plan was to work on this during the upcoming Sage Days in Seattle at
 the end of August. If you have a working prototype by then I'll try to finish
 it off :)

 Cheers,
 Martin

 --
 name: Martin Albrecht
 _pgp:http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x8EF0DC99
 _otr: 47F43D1A 5D68C36F 468BAEBA 640E8856 D7951CCF
 _www:http://martinralbrecht.wordpress.com/
 _jab: martinralbre...@jabber.ccc.de



[sage-devel] Re: Matrix multiplication

2011-07-13 Thread Clement Pernet
Hi,

Most questions about finite fields matmul have been answered, in
short:
- one could quickly fix the problem that sage does not use linbox by
default (I'm surprised by this fact)
- in the current state, operands are converted from ints to double,
hence an overhead in both time
and memory (see the sketch after this list).
- Burcin & I started to address this by rewriting the matrix-modn-
dense class using a floating point
representation: a prototype code was working but we went into
trouble when polishing it and making
it work with the coercion system. Unfortunately I never found time to
get back to it since then.
- Further timing improvements are expected by using floats instead of
doubles for tiny fields (e.g.
roughly GF(p100)) with no pain (already available in LinBox)
- Non-prime fields are supported in LinBox, as long as their
cardinality remains small (LinBox
implementation has to be improved following http://hal.archives-ouvertes.fr/hal-00315772
Theorem 2)
  but the wrapping in sage is missing. So I think it would probably
be simpler to wrap it rather
than creating a new spkg with MeatAxe.
than creating a new spkg with MeatAxe.
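
A minimal sketch of the int-to-double conversion mentioned in the second bullet (hypothetical NumPy code, not the actual Sage/LinBox wrapper); the float64 product is exact as long as the inner products stay below 2^53, i.e. roughly n*(p-1)^2 < 2^53:

import numpy as np

def matmul_mod_p(A, B, p):
    # multiply mod p by converting to double and calling BLAS (np.dot),
    # then reducing; exact as long as n*(p-1)^2 < 2^53
    n = A.shape[1]
    assert n * (p - 1) ** 2 < 2 ** 53, "inner products would not fit exactly in a double"
    C = np.dot(A.astype(np.float64), B.astype(np.float64))
    return np.mod(C, p).astype(np.int64)

p = 97
A = np.random.randint(0, p, (200, 300))
B = np.random.randint(0, p, (300, 150))
assert np.array_equal(matmul_mod_p(A, B, p), (A @ B) % p)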

Clément

PS: weird, my reply by email has been rejected and I'm now posting via
the web interface.

On Jul 13, 3:11 pm, Simon King simon.k...@uni-jena.de wrote:
 Hi Martin,

 On 13 Jul., 15:03, Martin Albrecht martinralbre...@googlemail.com
 wrote:

   BTW, Sage's current implementation needs 906.69 s for a
   single multiplication.

  We suck at this it seems :)

 Totally.

 According to the Boothby-Bradshaw article you were referring to, their
 ideas were supposed to be implemented in Sage. Is that done, yet? I
 wouldn't like to double the work...

 Best regards,
 Simon



Re: [sage-devel] Re: Adjoint of a matrix

2010-12-08 Thread Clement Pernet
Interesting discussion, I never realized that we are using two interpretations for this same word 
depending on the context!


My 2 cents:
In my favorite linear algebra book:
F. R. Gantmacher, The theory of matrices. (1959)

The adjoint of a matrix is defined twice, with the two different meanings! (At least in the French 
translation of the book, which was originally in Russian.)
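
To make the two meanings concrete, here is a small illustration (hypothetical NumPy code, not from the original thread): the classical adjoint (adjugate, the transposed cofactor matrix) versus the Hermitian adjoint (conjugate transpose).

import numpy as np

A = np.array([[1 + 2j, 3],
              [4,      5 - 1j]])
n = A.shape[0]

conj_transpose = A.conj().T            # "adjoint" as conjugate transpose, A^*

adjugate = np.empty_like(A)            # "adjoint" as transposed cofactor matrix
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        adjugate[j, i] = (-1) ** (i + j) * np.linalg.det(minor)

# defining property of the classical adjoint: A * adj(A) = det(A) * I
assert np.allclose(A @ adjugate, np.linalg.det(A) * np.eye(n))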


Clément


Jason Grout wrote:

On 12/3/10 1:05 AM, Rob Beezer wrote:

On Dec 2, 10:55 pm, Dima Pasechnik <dimp...@gmail.com> wrote:

But for conjugate transpose one can just introduce operator ^*, as
usually
the conjugate transpose of $A$ is denoted by $A^*$.


Accepted notation is another can of worms.  Conjugate-transpose can be
an exponent that is a star, dagger or the letter H.  And sometimes a *
just means complex conjugation.



In numpy, the conjugate transpose is A.H, the transpose is A.T, and the 
inverse is A.I.  I'd love if we adopted those shortcuts (as properties 
that return new matrices, not functions).


Jason





Re: [sage-devel] Re: inconsistent results for characteristic polynomial?

2010-06-15 Thread Clement Pernet
As an undergrad in France, I learned the definition of the charpoly as 
det(M - xI), and I remember that our professor mentioned the other convention 
as exotic.
Since then, I've worked on computing the charpoly during my PhD thesis 
and always chose to use the definition det(xI - M), which I found much 
more convenient in almost every respect.
It seems to me that this painful convention is only used in France, but 
I may be wrong. For example, the reference Russian book Theory of 
Matrices by Gantmacher also uses det(xI - M).


I don't know the reason for this convention, except maybe that with 
det(M - xI) the constant coefficient is exactly det(M).


So I see no inconsistency there.

Btw: in a chapter about linear algebra using Sage, aimed at French 
undergraduates, that I'm currently writing, I also chose the det(xI - M) 
convention.
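
A small sanity check of the relation between the two conventions (hypothetical SymPy code, not part of that chapter): det(xI - M) = (-1)^n det(M - xI), so the two charpolys differ by a sign exactly when n is odd.

from sympy import Matrix, eye, symbols, expand, simplify

x = symbols('x')
M = Matrix([[2, 1, 0], [0, 3, 4], [5, 0, 6]])   # n = 3, odd
n = M.shape[0]

p_monic = (x * eye(n) - M).det()   # det(xI - M), monic
p_other = (M - x * eye(n)).det()   # det(M - xI)

assert simplify(p_monic - (-1) ** n * p_other) == 0
print(expand(p_monic), expand(p_other))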


Clément

John Cremona wrote:

On 15 June 2010 13:25, Jason Grout jason-s...@creativetrax.com wrote:
  

On 6/15/10 6:21 AM, Minh Nguyen wrote:



As you can see, these two characteristic polynomials differ in only
their signs. One can be obtained from the other by multiplying through
by -1. What I would like to know is: Is there some reason for this
inconsistency? Or are the two characteristic polynomials above
essentially the same?

  

One is computed using x*Id-M, the other by M-x*Id.  This will lead to a sign
difference for odd-sized matrices.  In my graduate abstract algebra course,
we defined the characteristic polynomial using x*Id-M specifically so that
we'd always have a monic polynomial as the output.




Sure, though in undergraduate teaching one advantage of using M-x*Id
is that students are less likely to make sign errors if they subtract x
from the diagonal entries than if they have to negate all entries and
add x to the diagonal... Having said that, I would definitely agree
that the *definition* of the char poly should be something monic, even
if in practice it may be convenient to work with its negative.

John

  

Jason



[sage-devel] Bug in PARI's frobenius normal form

2010-05-18 Thread Clement Pernet

Hi there,

Sage is producing a wrong answer when asked to compute a Frobenius 
normal form of a matrix, with the transformation matrix:


sage: sage: A=matrix(ZZ,8,[[6,0,-2,4,0,0,0,-2],[14,-1,0,6,0,-1,-1,1],\
[2,2,0,1,0,0,1,0],[-12,0,5,-8,0,0,0,4],[0,4,0,0,0,0,4,0],\
[0,0,0,0,1,0,0,0],[-14,2,0,-6,0,2,2,-1],[-4,0,2,-4,0,0,0,4]])
sage: F,K=A.frobenius(2)
sage: ~K*F*K==a
False

And the answer should be True.
As Sage directly wraps PARI's frobenius form routine, I guess the bug is 
there.
Before I post a ticket, has anyone heard about/met this bug before? Is 
someone already working on it?


Cheers,

Clément



Re: [sage-devel] Bug in PARI's frobenius normal form

2010-05-18 Thread Clement Pernet
Hmmm, sorry for the typo, but I still get the bug after replacing a with A 
(I had a copy of A in a).


Now I realize that I'm using version 4.3.3, and it actually works 
in the latest version (on sagenb.org).

So I assume the bug has been fixed in gp recently, or something.

Problem fixed!
Sorry for the noise.

Clément

John Cremona wrote:

It works OK in gp:

gp: A = [6, 0, -2, 4, 0, 0, 0, -2; 14, -1, 0, 6, 0, -1, -1, 1; 2, 2,
0, 1, 0, 0, 1, 0; -12, 0, 5, -8, 0, 0, 0, 4; 0, 4, 0, 0, 0, 0, 4, 0;
0, 0, 0, 0, 1, 0, 0, 0; -14, 2, 0, -6, 0, 2, 2, -1; -4, 0, 2, -4, 0,
0, 0, 4]

[6 0 -2 4 0 0 0 -2]
[14 -1 0 6 0 -1 -1 1]
[2 2 0 1 0 0 1 0]
[-12 0 5 -8 0 0 0 4]
[0 4 0 0 0 0 4 0]
[0 0 0 0 1 0 0 0]
[-14 2 0 -6 0 2 2 -1]
[-4 0 2 -4 0 0 0 4]

gp: FB = matfrobenius(A,2)
[[0, 0, 0, 4, 0, 0, 0, 0; 1, 0, 0, 4, 0, 0, 0, 0; 0, 1, 0, 1, 0, 0, 0,
0; 0, 0, 1, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 4, 0; 0, 0, 0, 0, 1, 0,
0, 0; 0, 0, 0, 0, 0, 1, 1, 0; 0, 0, 0, 0, 0, 0, 0, 2], [1, -1/2, 1/16,
15/64, 3/128, 7/64, -23/64, 43/128; 0, 0, -5/64, -13/128, -15/256,
17/128, -7/128, 53/256; 0, 0, 9/128, -11/128, -7/128, -1/32, 5/128,
5/32; 0, 0, -5/128, 0, 7/256, -7/128, -1/64, 9/256; 0, 1, 1/16, 5/32,
-17/64, -1/32, 31/32, -21/64; 0, 0, 1/32, 5/64, 31/128, -17/64, -1/64,
-21/128; 0, 0, 1/32, 5/64, -1/128, 15/64, -1/64, -21/128; 0, 0, 1,
5/2, -1/4, -1/2, -1/2, -21/4]]
gp: F=FB[1]; B=FB[2];

gp: B^-1 * F*B == A
1

In your

On 18 May 2010 17:07, Clement Pernet clement.per...@gmail.com wrote:
  

Hi there,

Sage is producing a wrong answer when asked to compute a Frobenius normal
form of a matrix, with the transformation matrix:

sage: sage: A=matrix(ZZ,8,[[6,0,-2,4,0,0,0,-2],[14,-1,0,6,0,-1,-1,1],\
[2,2,0,1,0,0,1,0],[-12,0,5,-8,0,0,0,4],[0,4,0,0,0,0,4,0],\
[0,0,0,0,1,0,0,0],[-14,2,0,-6,0,2,2,-1],[-4,0,2,-4,0,0,0,4]])
sage: F,K=A.frobenius(2)
sage: ~K*F*K==a
False



!!!

sage: ~K*F*K==A
True

so it was just a typo!

John

  

And the answer should be True.
As Sage directly wraps PARI's frobenius form routine, I guess the bug is
there.
Before I post a ticket, has anyone heard about/met this bug before? Is
someone already working on it?

Cheers,

Clément



[sage-devel] Re: GCC-4.5.0

2010-04-27 Thread Clement Pernet

Hi,

I could not find a gcc-4.5 install on eno to replicate the bug.
On which machine did you run it? (Before I start compiling it!)
Could you also attach the LinBox config.log to ticket #8769?

Thanks.

Clément

William Stein wrote:

Hi,

(Main point of this email: if anybody else is trying to port Sage to work
with GCC-4.5.0, we won't duplicate effort.)

I'm working on trying to port Sage-4.4 to GCC-4.5.0.  There are many
issues on various OS's.

  * pynac (solved) -- http://trac.sagemath.org/sage_trac/ticket/8753
  * libpng -- http://trac.sagemath.org/sage_trac/ticket/8767
  * linbox -- http://trac.sagemath.org/sage_trac/ticket/8769
  * gfan -- http://trac.sagemath.org/sage_trac/ticket/8770

I don't know how many other issues will come up.  I've made all the
above blockers for sage-4.4.1, since they are visible on skynet, and
we *get* skynet in exchange for ensuring that Sage builds on skynet
with the latest released GCC.

I'll be working on trying to fix the above today, and will be logged
into irc in case anybody has any ideas.

 -- William

  




Re: [sage-devel] Error in building/installing linbox on Open Solaris 64 bit

2010-01-15 Thread Clement Pernet

Hi,

This part of the interface is actually terrible. In particular, I don't 
see the rationale for the use of statics here.
Hopefully this will all be gone when we switch to having LinBox and 
matrix_modn_dense natively use the same matrix type (#4258).


But given your compilation error trace, I don't see the error: the 
static thing is only a warning. Isn't the error somewhere earlier in the 
trace, in the part that you cut off?


Clément

Jaap Spies wrote:

There seems to be a problem with linbox-sage.C


linbox-sage.C:438:   instantiated from here
linbox-sage.C:312: warning: unused variable ‘k’
linbox-sage.C: At global scope:
linbox-sage.C:72: warning: ‘void linbox_set_modn_matrix(mod_int**, 
LinBox::DenseMatrix<LinBox::Modular<double> >, size_t, size_t)’ 
declared ‘static’ but never defined

make[5]: *** [linbox-sage.lo] Error 1
make[5]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src/interfaces/sage' 


make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src/interfaces' 


make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src' 


make[2]: *** [all] Error 2
make[2]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src' 


Error building linbox

[...]

--
Libraries have been installed in:
   /export/home/jaap/Downloads/sage-4.3.1.alpha2/local/lib

[...]

linbox-sage.C:438:   instantiated from here
linbox-sage.C:312: warning: unused variable ‘k’
linbox-sage.C: At global scope:
linbox-sage.C:72: warning: ‘void linbox_set_modn_matrix(mod_int**, 
LinBox::DenseMatrix<LinBox::Modular<double> >, size_t, size_t)’ 
declared ‘static’ but never defined

make[4]: *** [linbox-sage.lo] Error 1
make[4]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src/interfaces/sage' 


make[3]: *** [install-recursive] Error 1
make[3]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src/interfaces' 


make[2]: *** [install-recursive] Error 1
make[2]: Leaving directory 
`/export/home/jaap/Downloads/sage-4.3.1.alpha2/spkg/build/linbox-1.1.6.p2/src' 


Error installing linbox

real0m51.795s
user0m34.213s
sys 0m15.195s
sage: An error occurred while installing linbox-1.1.6.p2


Thought?

Jaap







Re: [sage-devel] Linbox complains GMP is not installed on Solaris 10 (64-bit mode)

2010-01-14 Thread Clement Pernet
The spkg-install passes the argument --with-gmp=$SAGE_LOCAL to 
configure, so there's no problem with the library location.
It has to do with something deeper. I'd also like to read the section of 
config.log showing the failure.


Clément

François Bissey wrote:

On Thu, 14 Jan 2010 22:27:53 Dr. David Kirkby wrote:
  

I tried a 64-bit build of Sage on a Sun Blade 2000 SPARC workstation
 running Solaris 10. I don't know if linbox is only looking for GMP in /usr
 and /usr/local, which is semi-implied below. But for whatever reason, it
 decides it can't find a suitable GMP.



Indeed the configure script only looks there by default. However you can
provide a path to the library with the --with-gmp option, i.e.
configure --with-gmp=$SAGE_LOCAL
I would have thought spkg-install was doing that for linbox - if it doesn't
it probably should. If it does there is a deeper problem there.

Francois
  




[sage-devel] Re: Categories for the working programmer

2008-11-11 Thread Clement Pernet


  - Did anyone keep a list or graph of the desired categories from  
 sage days 7?
For reference, the hierarchy for MuPAD-Combinat is available there:
http://mupad-combinat.sourceforge.net/Papers/Categories.pdf
 
 I thought someone took a photo of it, but I don't know who.
 
I think I have it:
http://picasaweb.google.com/lh/photo/Ol00Dod4hLflsFYJvATGDg
http://picasaweb.google.com/lh/photo/cs8RBuly--2sczGILu-I5w

Using the full resolution (zoom lens button) one can barely read every
node of the tree!

Clément




[sage-devel] Re: Advanced topics in linear algebra algorithmic class

2008-09-26 Thread Clement Pernet

Hi,

Thanks for your suggestion: unfortunately, Dailymotion limits both the
file size (to 150 MB) and the length (to 15 minutes or so), so it will
not work for the class.

I'm uploading today's second lecture. A link will be available on the
course page when ready:
http://www.math.washington.edu/~pernet/Math581.html

I will keep updating the syllabus with a short list of the topics that
have been covered, together with bibliographic references.

Clément

Timothy Clemans wrote:
 Hi Clément,
 
 The free web site http://www.dailymotion.com/us offers much better
 video quality than Google Video.
 
 On Thu, Sep 25, 2008 at 8:12 PM, Clement Pernet
 [EMAIL PROTECTED] wrote:
 Hi,

 I made a first attempt of video-ing my class of linear algebra 581 with
 the first lecture I gave yesterday.

 http://video.google.fr/videoplay?docid=-3011598682395680999

 The overall quality is not great, so I am not sure it will be of any
 help. I'll still try to continue doing it for the next lectures and see
 if I can improve it.

 Clément

 Martin Albrecht a écrit :
 Will you provide videos of your lectures online? The outline looks very
 interesting.

 Cheers,
 Martin




 
  
 





[sage-devel] Re: Advanced topics in linear algebra algorithmic class

2008-09-25 Thread Clement Pernet

Hi,

I made a first attempt at videoing my linear algebra 581 class, with
the first lecture I gave yesterday.

http://video.google.fr/videoplay?docid=-3011598682395680999

The overall quality is not great, so I am not sure it will be of any
help. I'll still try to continue doing it for the next lectures and see
if I can improve it.

Clément

Martin Albrecht wrote:
 Will you provide videos of your lectures online? The outline looks very 
 interesting.
 
 Cheers,
 Martin
 
 
 





[sage-devel] Advanced topics in linear algebra algorithmic class

2008-09-23 Thread Clement Pernet

Hi,

I wanted to advertise a class that I am going to teach this fall
semester, which may be of interest to Sage developers and users.

As its title suggests, it will cover theoretical and practical topics
in computational exact linear algebra. You can check out the web page
with the syllabus and outline here:
http://www.math.washington.edu/~pernet/Math581.html

The first lecture is happening tomorrow at 12:30 in Smith 309 on the UW campus.

I hope to see you there!

Cheers,

Clément Pernet




[sage-devel] Re: SD10 Accomodation Again

2008-09-17 Thread Clement Pernet

Hi there,

Bad news for the accommodation in Nancy:

I called the Youth Hostel, and they do not have enough room for the
period of SD10: a jazz festival and several other events are happening
at the same time, and the Hostel already has several groups registered
and confirmed.

There are about 6 beds available on the 9th and 10th, but nothing over
the weekend or at the beginning of the following week.

This is very unfortunate, since this place looked good, close to campus
and cheap!

I will look for other places, ask about availability and let you know
about the situation. Don't worry, I will not book anything before we all
agree on the place (especially since it might be more expensive).

Cheers,

Clément

Martin Albrecht wrote:
 Hi there,
 
 (if you're not going to Sage Days 10, ignore this e-mail)
 
 this is a quick friendly reminder that the 'sign-up' for the Youth Hostel 
 bulk 
 booking closes tomorrow 11am PST.
 
 That is, tomorrow 11am PST Clément will do the booking so if you want to 
 add/remove yourself now is the time :-)
 
 http://wiki.sagemath.org/Days_10_Youth_Hostel_Page
 
 Cheers,
 Martin





[sage-devel] Re: SD10 Accomodation

2008-09-09 Thread Clement Pernet

The wiki page for the Youth Hostel accommodation in Nancy is online:

http://wiki.sagemath.org/Days_10_Youth_Hostel_Page

Please sign up before September 16 by adding your name and dates to it,
so we can make the booking soon enough.

Cheers,

Clément

Robert Bradshaw wrote:
 On Tue, 9 Sep 2008, Martin Albrecht wrote:
 
 I'll update the wiki to broadcast this proposition to people outside of
 sage-devel.
 Thanks! Shall we say: we give everyone until the September 16 to 'sign up' on
 the Wiki (on a to-be-created page). If we feel that we achieved a critical
 mass by then we go ahead and book it.
 
 That sounds like a good idea.
 
 - Robert
 
 
  
 





[sage-devel] Re: Sage 3.0.5/3.0.6.alpha0: doctest failure in ssmod.py

2008-07-18 Thread Clement Pernet

I am looking into it.

Applying the patch at
http://sage.math.washington.edu/home/pernet/Patches/charpoly_LUK.patch

will disable the current probabilistic charpoly algorithm.
This could help diagnose the origin of the bug.

Cheers

Clement

mabshoff wrote:
 
 
 On Jul 17, 10:34 pm, mabshoff [EMAIL PROTECTED] wrote:
 Ok, here is what I found out last night:

  * 3.0.3 runs the test 200 times without failing it once
  * 3.0.4 with the new FLINT 1.0.13 fails 8 out of 500 tests.

 So we are given a couple possibilities:

  * There is an algorithmic issue in ssmod somewhere or some
 algorithmic issue got exposed somehow in 3.0.4+
  * There is an undiscovered bug in LinBox
  * There is an undiscovered bug in FLINT
  * none of the above
  * all of the above
 
 After looking at the code William has conjectured that it is very
 likely charpoly mod p that fails here. We updated LinBox in 3.0.3 ->
 3.0.4, so that fits the bill. This issue is now #3671. To debug this
 we can compute the charpoly with LinBox and the generic code for a
 large number of random inputs and compare. According to William the
 speed difference between generic code and LinBox for the example in
 ssmod (32 by 32 matrices) won't be too large.
 
 If you have any (alternate) theories what goes wrong here please let
 us know.
 
 Cheers,

 Michael
 
 Cheers,
 
 Michael
  
 





[sage-devel] Re: Sage 3.0.5/3.0.6.alpha0: doctest failure in ssmod.py

2008-07-18 Thread Clement Pernet

snip
 I have applied the patch and rebuild LinBox and started running the
 test 500 times to see. Can you guess if/how much this patch does
 affect performance for charpoly mod p?
 
For the dimensions you are considering (and up to a thousand) I don't
expect any performance loss.
But the probabilistic algorithm wins on larger matrices and is
asymptotically better (indeed the best algorithm!).

I'll let you know when I've made progress on this one.

Clement

 mabshoff wrote:



 On Jul 17, 10:34 pm, mabshoff [EMAIL PROTECTED] wrote:
 Ok, here is what I found out last night:
  * 3.0.3 runs the test 200 times without failing it once
  * 3.0.4 with the new FLINT 1.0.13 fails 8 ought of 500 tests.
 So we are given a couple possibilities:
  * There is an algorithmic issue in ssmod somewhere or some
 algorithmic issue got exposed somehow in 3.0.4+
  * There is an undiscovered bug in LinBox
  * There is an undiscovered bug in FLINT
  * none of the above
  * all of the above
 After looking at the code William has conjectured that it is very
 likely charpoly mod p that fails here. We update Linbox in 3.0.3-
 3.0.4, so that fits the bill. This issue is now #3671. To debug this
 we can compute the charpoly with LinBox and the generic code for a
 large number of random inputs and compare. According to William the
 speed difference between generic code and LinBox for the example in
 ssmod (32 by 32 matrices) won't be too large.
 If you have any (alternate) theories what goes wrong here please let
 us know.
 Cheers,
 Michael
 Cheers,
 Michae
  
 





[sage-devel] Re: Sage 3.0.4.alpha1 released

2008-06-27 Thread Clement Pernet

Hi,

Sorry about this.

This can be fixed by a trivial patch that I forgot to add to ticket
#3429. I am currently testing it before attaching it to the ticket.

If you want to try it:

http://sage.math.washington.edu/home/pernet/Patches/trac-3429-fix.patch


Cheers,

Clément

John Cremona wrote:
 Failed for me too.
 
 John
 
 2008/6/27 William Stein [EMAIL PROTECTED]:
 Hi,

 Sage-3.0.4.alpha1 doesn't build on any of the 13 platforms where
 I tested the build.  They fail with problems probably due to refactoring
 some code in linbox.

 [when building the sage library]

 sage/libs/linbox/linbox.cpp:105:25: error: linbox_wrap.h: No such
 file or directory

 so linbox_wrap.h didn't get installed or got moved.

 So nobody else should bother building sage-3.0.4.alpha1 unless
 they want to fix these linbox issues (hint, hint: clement..)

 William

 
  
 





[sage-devel] Re: Sage 3.0.4.alpha1 released

2008-06-27 Thread Clement Pernet

Sorry about this; I could not test it, since my install of 3.0.4 is still
not finished.

However, I fixed it and tested it on a 3.0.3 install. So I reopened #3429
and proposed a patch there that should fix it.

http://trac.sagemath.org/sage_trac/attachment/ticket/3429/update_new_linbox_interface.patch

Let me know if it does not.

Clément

Glenn H Tarbox, PhD wrote:
 On Fri, 2008-06-27 at 12:32 -0700, Clement Pernet wrote:
 Hi,

 Sorry about this.

 This can be fixed by a trivial patch, that I forgot to add to the ticket
 #3429. I am currently testing it before attaching it to the ticket.

 If you want to try it:

 http://sage.math.washington.edu/home/pernet/Patches/trac-3429-fix.patch
 
 nope... log failure at:
 
 http://tarbox.org/sage/hardy_64_pbuild.log.bz2
 
 -glenn
 

 Cheers,

 Clément

 John Cremona wrote:
 Failed for me too.

 John

 2008/6/27 William Stein [EMAIL PROTECTED]:
 Hi,

 Sage-3.0.4.alpha1 doesn't build on any of the 13 platforms where
 I tested the build.  They fail with problems probably due to refactoring
 some code in linbox.

 [when building the sage library]

 sage/libs/linbox/linbox.cpp:105:25: error: linbox_wrap.h: No such
 file or directory

 so linbox_wrap.h didn't get installed or got moved.

 So nobody else should bother building sage-3.0.4.alpha1 unless
 they want to fix these linbox issues (hint, hint: clement..)

 William







[sage-devel] Re: New Sage website

2008-06-04 Thread Clement Pernet

First, I really think this web site looks much better and more mature. Great
job!
I asked my roommate, Alan, to review it, since he is quite a bit into web
app development. Here are his comments:


* In general: fewer pages; don't hide things 3 pages deep. Everything on
the site could be edited down to 5 or 6 pages.

* "You can try Sage online here" should be either the first or second
thing on the page. This is a major hook.

* The blurb and links on the feature tour page should be on the first
page, above the icons.

* The search box should be on the first page. Get rid of separate pages.

* Therefore, the number of buttons reduces to 4.

* The RSS feed should have the _orange_ RSS icon (people looking for an
RSS link are looking for something orange).

* Put the "try Sage online" link in the download section too.

* Remove the second level of navigation tabs. No one expects these to be
there. Instead, put all the content of the separate sub-pages on one
page under their own headings.

* The links pages have no context. Put links where they belong. If it's to
packages you use, put them under a "packages we use" heading in the
development section or somewhere similar.


My 2 cents on the discussion:
* the blue is too blue (already said);
* in the download section, the left column is way too large; visually I
would prefer a 1/3 - 2/3 proportion.

Maybe some of these comments would require changes too deep to be taken
into account, but I hope they are still of some help.

Cheers,

Clément




[sage-devel] Re: SSE2 not so useless after all

2008-05-22 Thread Clement Pernet

Hi,

 
 Bill, I suppose that also means that now we actually beat (or are close to 
 beating) Magma on the C2D for real. My M4RI times are quite similar on the 
 C2D as your times on your Opteron. But my version of Magma (on the C2D) is 
 much worse than your version of Magma (on the Opteron). So it is probably 
 best to assume at least your times for Magma on my machine too.
 
That's awesome news!

 PS: I wonder if this argument makes sense:
 
 We have a complexity of n^2.807 and a CPU of say 2.333 Ghz which can operate 
 on 64 bits per clock (128 if we use SSE2). So if we had optimal code (no 
 missed branch predictions, no caching issues, everything optimal) we would 
 expect a running time of n^2.807 / 64 / 2.333 / 10^9

Don't forget the constants!
Strassen-Winograd is 6 n^2.807.
Now, this constant accounts for both muls and adds, and I guess that your
 boolean word operation ^= computes a + and a * in 1 clock cycle, so I
don't really know the constant in this case (6/2 = 3 seems dubious to me).

Furthermore, as Bill pointed out, one really has to count the actual
number of ops, since only a few recursive calls are made.
Either way, this would mean that the expected optimal time would be
larger, and consequently that your code is closer to optimal!

 
 If we plug in 20,000 for n we'd get 7.923 seconds w.o. SSE2 and 3.961 with 
 SSE2. So our implementation (12.2 s) is a factor of ~1.5 or ~3 away from 
 being optimal? Does that sound correct or complete bollocks?
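
As a quick sanity check on the arithmetic in the quoted estimate, a hypothetical back-of-the-envelope script (not from the original thread):

n = 20_000
ops = n ** 2.807                      # bit operations, ignoring the constant
for width, label in [(64, "without SSE2"), (128, "with SSE2")]:
    print(label, ops / width / 2.333e9, "seconds")
# prints roughly 7.92 s and 3.96 s, matching the 7.923/3.961 figures quoted;
# Clément's point above is that the hidden constant (about 6 for
# Strassen-Winograd, counting both adds and muls) would scale these up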
 
 






[sage-devel] Re: SSE2 not so useless after all

2008-05-19 Thread Clement Pernet

hi guys,

I am finally up to date with this discussion (I was being interviewed, 
and then flying when it started).
First, congrats on the great job you have achieved. I have started to 
dive into M4RI, and I really like the quality of the code.

I have a few remarks

* The loop unrolling technique used for the creation of the table could 
maybe be used in the computation of the product as well.
Is 8 optimal? I remember seeing 32 in ATLAS, but I don't know of any 
justification. Since some pipelines are longer than 8, it might be 
better to have a longer unrolled loop.

* I am not sure about this, but wouldn't it be better to have a block 
decomposition that matches the babystep-giantstep structure?
This could happen at the Strassen threshold: instead of simply copying 
the matrix (which already improves the data locality), copy it into a 
bunch of blocks of size blocksize and call m4rm on that structure. ATLAS 
does this kind of copy for dimensions not larger than 200, if I 
recall correctly.
Maybe I am just missing something about your babystep/giantstep algorithm.

Anyway, as you pointed out, the battle is now on the asymptotic 
comparison with Magma, and I still have no idea how to improve your 
Strassen implementation. Still thinking about it...

Cheers,

Clément

Bill Hart wrote:
 Yep that's exactly the same thing as what M4RM does. Thanks for the
 explanation.

 Bill.

 On 20 May, 00:22, Robert Miller [EMAIL PROTECTED] wrote:
   
 I can't tell exactly what GAP does. It is beautifully documented, but
 it talks about grease units, which is terminology I don't
 understand. It does look like M4RM though.
   
 Grease is a concept for speeding up certain things using caching. For
 example, suppose I have the permutation group S_{32} acting on ints. I
 can represent a particular permutation as a permutation matrix, which
 means that applying that permutation is just vector-matrix
 multiplication. However, instead of computing the full matrix
 multiplication, we can cut the matrix into pieces (probably of length
 grease units or something). Essentially, we compute every possible
 sum of the first four rows, then the next four rows, etc. Then, to see
 how to multiply the vector by the matrix, we cut the vector into
 chunks of four, and simply look up the corresponding entry in the
 grease table, finally adding them together in the end.

 -- RLM
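
A toy illustration of the greasing idea just described (hypothetical Python, not the actual GAP or M4RI code), for a vector-matrix product over GF(2); rows and vectors are ints used as bit vectors, and XOR is addition over GF(2):

def grease_tables(B_rows, k=4):
    # for each group of k rows of B, table[s] = XOR of the rows selected
    # by the bits of s (all 2^k subset sums, built incrementally)
    tables = []
    for g in range(0, len(B_rows), k):
        group = B_rows[g:g + k]
        table = [0] * (1 << len(group))
        for s in range(1, len(table)):
            low = s & -s                                   # lowest set bit of s
            table[s] = table[s ^ low] ^ group[low.bit_length() - 1]
        tables.append(table)
    return tables

def vec_times_mat(v, tables, k=4):
    # v * B over GF(2): cut v into k-bit chunks and look each chunk up
    # (v is assumed to have at most as many bits as B has rows)
    acc = 0
    for i, table in enumerate(tables):
        acc ^= table[(v >> (i * k)) & ((1 << k) - 1)]
    return acc

# check against the naive row-by-row XOR
B_rows = [0b1011, 0b0110, 0b1100, 0b0011, 0b1000]
v = 0b10110
naive = 0
for i, row in enumerate(B_rows):
    if (v >> i) & 1:
        naive ^= row
assert vec_times_mat(v, grease_tables(B_rows)) == naive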
 
 

   





[sage-devel] Re: Questions about various spkgs

2008-05-08 Thread Clement Pernet

Francois wrote:
 On May 8, 12:25 pm, Timothy G Abbott [EMAIL PROTECTED] wrote:
   
 I'm working on getting several of the SAGE dependencies not already in
 Debian maintained in the main Debian archive.  I had a few questions about
 the future of some spkgs:

 I've heard rumor that linbox_wrap might be being merged into mainline
 linbox at some time in the near future.  If this is the casee, I'll not
 bother trying to get a separate linbox_wrap package into Debian.

 I've also heard rumor that flintqs may be subsumed by Flint in the future.
 Is this the case?  If so, I should not bother trying to get the flintqs
 spkg into Debian.

 
 I know from making an ebuild for Gentoo from upstream that the latest
 flint includes flintqs but you have to explicitly build it as it is a
 separate
 target. Inclusion of linbox_wrap would be nice as I wouldn't have to
 get sage's linbox spkg just for linbox_wrap.

 Francois
   
Rumors are always true!
I meant to merge linbox_wrap into linbox/interfaces for the LinBox 2.0 
release. Depending on how urgent your need is, I may consider doing it 
in a 1.1.6 release instead.

Francois, I did not understand what your need is. Do you mean that you 
need Sage's wrapper linbox_wrap in order to use LinBox in another context?

Cheers,
Clément





[sage-devel] Re: Fast matrices over GF(p)

2008-03-24 Thread Clement Pernet

Hi,

I still have not looked at the code of MeatAxe, but I remember being 
really impressed by a presentation about MeatAxe at MSRI last year.
The timings were really impressive, especially for matmul.
So I am really surprised by your experience with slow matmul.

  The strength of MTX in arithmetics is IMO:
  - difference of two matrices (300 times faster than Sage);
  - multiplication of a matrix with a scalar (actually this is a very weak
   point of Sage matrices, namely it is slower than the multiplication of
   two matrices).
   
 I just want to point out that things like this are slow in Sage only
 because nobody has yet bothered to implement code to do it,
 so the operation just falls back to some very slow generic method.
 Any of the above could likely be made as fast or faster than MTX
 in Sage.  I would much prefer speeding up Sage matrices rather
 than incorporating MTX into Sage, since the result will in the former
 case will be much easier for users to understand and lots
 of other code benefits.  If we go the latter route (use MTX), we get
 a much more complicated situation -- and just put off the inevitable
 optimization of Sage's matrices.
 

   
+1 on that point: either Sage's linear algebra or LinBox can be improved 
for these computations, since they are definitely not optimized for them. 
And multiplying a matrix by a scalar is not a big enough application to 
justify switching to new software (matmul definitely could be!)


 I tend to agree with all of the above - LinBox properly hooked into
 Sage should beat anything out there. If it doesn't we know where we
 need to improve LinBox :)

   
+1.
I really want to implement the small-field matmul using the ideas of
http://hal.archives-ouvertes.fr/hal-00259950/fr/
The idea is to take advantage of both worlds:
* compressed storage
* double floating-point arithmetic (= BLAS, SSE, etc.)

The trick is to store k elements in a double as a polynomial evaluated 
at a power of 2.
Then multiplying two such elements together (one floating-point mul) is a 
convolution of the polynomials. So if you store one of the two inputs in 
reverse order, you get a dot product of k elements for the price of one 
floating-point mul.
Experiments by Dumas showed computation speeds around 20 Gflops on a 
machine where ATLAS reached only about 6 Gflops.
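
A toy illustration of this compression trick (hypothetical Python using exact integers instead of the doubles of the paper): pack k residues as a polynomial evaluated at a power of 2, multiply once, and read the dot product off the middle coefficient.

p, k = 7, 4
e = (k * (p - 1) ** 2).bit_length()   # every coefficient of the product stays < 2^e
base = 1 << e                         # the evaluation point

def pack(vec, reverse=False):
    # encode vec as sum(vec[i] * base**i), optionally in reverse order
    if reverse:
        vec = vec[::-1]
    return sum(c * base ** i for i, c in enumerate(vec))

a = [3, 5, 1, 6]
b = [2, 4, 6, 3]

prod = pack(a) * pack(b, reverse=True)           # one big multiplication
dot = (prod >> ((k - 1) * e)) & (base - 1)       # coefficient of base^(k-1)
assert dot == sum(x * y for x, y in zip(a, b))
print(dot % p)                                   # the dot product mod p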

I plan to update fflas-ffpack, linbox and consequently sage with this 
trick soon.
I think that in a first attempt only matmul should be addressed:
* it would be such a pain to deal with more elaborate algorithms, which 
perform permutations and block decompositions via views on the matrix;
* matmul is really the root of all efficiency, so it alone should already 
provide a good speed-up to many computations.
  - Computation of nullspace is good (for a dense 1000x500 matrix over
   GF(7), MTX was 6 times faster than Sage).
   
 Why?  It's worth seeing if Linbox properly used can beat MTX -- I mean
 dense nullspace of matrices of that sort of size is I think exactly where
 linbox should be beating everything else, at least if one has a properly
 optimized BLAS.

 
I am surprised that nullspace could be so efficient without a good matmul.

   multiplication tables that are stored in a file in the current
   directory, and whose creation relies on an executable maketab. This, i
   think, is nasty. I don't know if the more recent versions have the
   multiplication tables in memory. Also i don't know if my wrapper would
   still work if i'd change to the new MeatAxe.
   
I don't understand this focus on files with MeatAxe.
 By the way, we don't even have a matrix over Givaro finite fields type
 in Sage at present, which would likely be very fast for arithmetic over GF(q)
 for q < 65,000.  Using Linbox it would likely be pretty easy to add such
 a type to Sage.  Comments Clement?
 
Do you mean extension fields using Givaro? I agree this has to be done.

For prime fields, the best way to use LinBox is the Modular<double> 
finite field, in a clean, transparent way.
I am currently designing the LinBox 2.0 matrix interface to help make 
these interfaces as clean and copy-free as possible.

Clément




[sage-devel] Re: sparse square matrices that are permutations of block diagonal matrices.

2008-01-23 Thread Clement Pernet

Hi Gregory,

I agree with you, this would be great to have in SAGE, and I was also 
thinking of writing something like that in LinBox sometime for the 
sparse charpoly.

Mike Monagan et al. already wrote something on the subject:
http://www.cecm.sfu.ca/~monaganm/papers/CP8.pdf

I am just getting settled at UW but could help work on that pretty soon 
I guess.
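
A rough sketch of the approach described in the quoted message below (hypothetical plain Python, not the Sage implementation): treat the nonzero pattern of A as an adjacency structure, find the connected components with a depth-first search, and read off the permutation that groups the blocks.

def block_structure(A):
    n = len(A)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if A[i][j] != 0 or A[j][i] != 0:
                adj[i].add(j)
                adj[j].add(i)
    seen, blocks = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:                      # iterative depth-first search
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        blocks.append(sorted(comp))
    perm = [v for comp in blocks for v in comp]   # rows/columns grouped block by block
    return blocks, perm

A = [[1, 2, 0, 0], [3, 4, 0, 0], [0, 0, 5, 6], [0, 0, 7, 8]]
print(block_structure(A))   # ([[0, 1], [2, 3]], [0, 1, 2, 3])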

Clément

Jason Grout wrote:
 Gregory Bard wrote:
   
 Once in a while one has a sparse square matrix, and one might wonder
 if it is
 a matrix in block form, but permuted. This can be hard to recognize by
 eye
 if the matrix is big. It turns out one can determine this VERY
 quickly. The
 graph which has an adjacency matrix equal to the matrix under
 discussion
 will have a connected component for each block. In fact, a Depth First
 Search
 will not only identify the number of connected-components but also
 their memberships,
 essentially producing the permutation matrix that we want.

 So one could imagine a SAGE command, given a matrix A, that would
 output
 the number of blocks, and a permutation matrix P such that PAP^-1 is
 in
 block form.

 Perhaps this is already in SAGE. If not, perhaps I should write it?
 

 That would be great!  Here is some functionality that is in sage to help 
 you get started.  Please let me know if there is something you don't 
 understand in the sage session below.


 sage: a=matrix([[1,2,0,0],[3,4,0,0],[0,0,5,6],[0,0,7,8]]); a 


 [1 2 0 0]
 [3 4 0 0]
 [0 0 5 6]
 [0 0 7 8]
 sage: p=Permutation([1,3,2,4])
 sage: p.to_matrix()

 [1 0 0 0]
 [0 0 1 0]
 [0 1 0 0]
 [0 0 0 1]
 sage: perm_matrix=p.to_matrix()
 sage: (perm_matrix^-1) # inverse

 [1 0 0 0]
 [0 0 1 0]
 [0 1 0 0]
 [0 0 0 1]
 sage: permuted_a=(perm_matrix^-1)*a*perm_matrix; permuted_a

 [1 0 2 0]
 [0 5 0 6]
 [3 0 4 0]
 [0 7 0 8]
 sage: g=Graph(permuted_a); g
 Graph on 4 vertices
 sage: blocks=g.connected_components(); blocks
 [[0, 2], [1, 3]]
 sage: g.connected_components_number()
 2
 sage: g.connected_component_containing_vertex(0)
 [0, 2]
 sage: # Graphs are 0-based indexing, while permutations are 1-based.
 sage: graph_p=Permutation([i+1 for i in flatten(blocks)]); graph_p 

 [1, 3, 2, 4]
 sage: graph_p.to_matrix()

 [1 0 0 0]
 [0 0 1 0]
 [0 1 0 0]
 [0 0 0 1]
 sage: graph_p==p
 True




   
 This came up
 in a real research problem that a colleague of mine, Prof Robert
 Lewis, is
 working on. Surely this small function doesn't really need a package
 of its own...
 like the Method of Four Russians got... so how do you handle that?
 Does it get
 coded into an existing package? Shall I just write up the pseudocode
 for one
 of your more junior project members to code up? Or is this already
 built-in?
 

 I don't think the function is already there.  I think the best way to do 
 things is to write it up as a function and post it back here on the 
 mailing list and work from there.

 Thanks for helping!

 Jason


 

   





[sage-devel] Re: [PATCH] Compile Linbox with gcc 4.3 trunk

2007-12-04 Thread Clement Pernet

Thanks Ismail for the report.
The latest svn version of linbox is now fixed.

Clement

ismail dönmez wrote:

Hi all,

I applied the attached patch to the Linbox 1.1.4 tarball and I was able to
build and pass regression tests with gcc 4.3 trunk.

Regards,
ismail


  



--- linbox-1.1.4/linbox/ffpack/ffpack.h2007-11-12 00:57:40.0 
+0200
+++ linbox-fixed/linbox/ffpack/ffpack.h2007-12-04 17:51:04.0 
+0200
@@ -1341,7 +1341,7 @@
   static int
   KGFast ( const Field F, std::listPolynomial charp, const size_t N,
typename Field::Element * A, const size_t lda, 
-   size_t * kg_mc, size_t* kg_mc, size_t* kg_j );
+   size_t * kg_mc, size_t* kg_mc2, size_t* kg_j );
 
   template class Field, class Polynomial
   static std::listPolynomial
  







[sage-devel] Re: Raising matrices to a power

2007-12-03 Thread Clement Pernet

Hi there,

The method using x^k mod the charpoly (or minpoly) is clearly the only 
method I know for that problem.
If n is smallish, this is the right way to do it.

For larger n (say n = O(k)), computing the powers of A in step 3 
is the bottleneck (n^4 or n^(w+1) ops in Q, so roughly O(n^5) bit ops).
In this case, a good approach is to replace A in step 3 by its Frobenius 
form F = U^-1 A U and apply the similarity transformation at the end.
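
For concreteness, a hypothetical SymPy sketch of the three-step method (charpoly, x^k mod p(x), then evaluate at the matrix), without the Frobenius-form refinement mentioned above:

from sympy import Matrix, Poly, eye, symbols

x = symbols('x')

def matrix_power_via_charpoly(M, k):
    n = M.shape[0]
    p = Poly(M.charpoly(x).as_expr(), x)        # step 1: characteristic polynomial
    result, base = Poly(1, x), Poly(x, x)       # step 2: x^k mod p by square-and-multiply
    e = k
    while e:
        if e & 1:
            result = (result * base).rem(p)
        base = (base * base).rem(p)
        e >>= 1
    acc = Matrix.zeros(n, n)                    # step 3: evaluate the residue at M (Horner)
    for c in result.all_coeffs():               # coefficients, highest degree first
        acc = acc * M + c * eye(n)
    return acc

M = Matrix([[1, 2], [3, 4]])
assert matrix_power_via_charpoly(M, 10) == M ** 10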

Clément

David Harvey wrote:

On Dec 3, 2007, at 8:40 AM, Bill Hart wrote:

  

I've just been looking at SAGE ticket number 173:

http://www.sagemath.org:9002/sage_trac/ticket/173

The idea is that Mathematica raises a 3 dimensional matrix M over QQ
to the power 20,000 much faster than either SAGE or Magma.

I don't know any algorithm for doing this efficiently. I only know one
algorithm:

1) Compute the  characteristic polynomial p(x) of M (time 0.00s)
2) Compute x^20000 mod p (time 0.22s)
3) Substitute M into the result (time 0.00s)

It's pretty obvious where the time is going here - polynomial
arithmetic. I guess this is the algorithm being used. Is Pari or NTL
being used for the polynomial expmod?

I reckon we can speed this up. What do people think?



I would have guessed the algorithm was just compute M^20000 using  
repeated squaring. But your suggestion is quite interesting.

Maybe first check that mathematica is getting the right answer, and  
not some stupid floating point approximation or something.

david




  







[sage-devel] Re: matrix multiply with huge entries

2007-10-23 Thread Clement Pernet

Hi everyone,

 We do have A._multiply_linbox(B), but we never
 use it by default, since when we first wrapped it sadly turned out
 to be slower than using our own multi-modular implementation.
 This is the sort of thing that may change someday, I hope...
Yep, that should change pretty soon!

I was actually planning to spend some time looking at this problem of 
small matrices with huge ints.
Strassen-Winograd is fine for dimensions 2^k, but with odd dimensions it 
can be better to use other algorithms (3x3 splitting for example, Pan's 
algorithms, the Winograd pre-Strassen algorithm, and some personal ideas too).
Therefore it would be nice to have a kind of database of the best algorithm 
(in terms of number of multiplications) for each small dimension 
(2, 3, 4, 5, 6, 7, ...) and a preprocessing phase that would choose a good 
combination of these algorithms for any given n.

Clement

 ---
 
 It could make a lot of sense to make _matrix_times_matrix
 use strassen when the heights of the input matrices are
 large, instead of using classical O(n^3) multiplication.
 
 Willliam
 
  
 





[sage-devel] LinBox Givaro and gcc-4.2

2007-08-03 Thread Clement Pernet

As announced on linbox-devel, the gcc-4.2 problem with LinBox has been
fixed, and the up-to-date code can be retrieved from the LinBox svn
server. We are soon going to release version 1.1.4.

Since LinBox and SAGE use Givaro intensively, we also fixed the Givaro
code, which had the same problem. We released it as version 3.2.7.

Since the Givaro webmaster is currently on vacation, I have temporarily
put the tar.gz on my web page:

http://ljk.imag.fr/membres/Clement.Pernet/givaro-3.2.7.tar.gz

Cheers
Clement





[sage-devel] Re: MSI packages for Windows and atlas

2007-05-09 Thread Clement Pernet

Hi All,

I am currently in Europe for interviews, and not able to connect my
laptop to the net and access the LinBox svn server.
I plan to apply Michael's patches to the svn, and to create the new
linbox spkg ASAP right after that, say next Monday or so.

As for ATLAS, a PIIISSE2 build is fine for most Intel32 machines, but
athlon64 users will not be happy. I don't see any easy answer for this
problem. Maybe just options 3) 4-ATLAS-INTEL32) 4-ATLAS-ATHLON64) ?

Sorry for the delay.

Clement

On May 9, 7:24 am, William Stein [EMAIL PROTECTED] wrote:
 On Tuesday 08 May 2007 10:19 pm, mabshoff wrote:

   Yep, I got that.  But SAGE's linbox is still months behind the svn
   version of linbox right now, and that needs to be addressed first.
   Volunteers? E.g., SAGE uses Integers_GMP, but Linbox deprecated that
   type...

  Not sure about the Integers_GMP, but I could ping the Linbox people
  via linbox-use and ask if there is a canonical work-around/
  migration path.

 Basically somebody needs to just get linbox_wrap.cpp to compile
 with the new version of linbox.  I put an updated linbox package
 here:  http://www.sagemath.org/packages/experimental/
 It's probably just a matter of changing GMP_Integers to PID_Integer or
 PID_integer or something like that (I asked Clement Pernet recently).
 Then build everything, discover bugs, fix, write to linbox mailing
 lists, etc... :-)  Be really happy with how fast the result is.

  The patch I send you should apply to the version of Linbox in SAGE
  because it is only a two line patch to blas.m4. The patch isn't even
  in Linbox svn yet (as of 1.1.3-r2701). From a quick look at numpy and
  gsl it should be just as easy to insert a couple lines in the right
  place.

 Oh, OK, that should be easy.



