On Mon, Jan 18, 2010 at 09:35, Benoit Jacob jacob.benoi...@gmail.com wrote:
Sorry for continuing the licensing noise on your list --- I thought
that now that I've started, I should let you know that I think I
understand things more clearly now ;)
No worries.
First, Section 5 of the LGPL is
On 01/18/2010 12:47 PM, Vicente Sole wrote:
Quoting Bruce Southey bsout...@gmail.com:
If you obtain the code from any package then you are bound by the terms
of that code. So while a user might not be 'inconvenienced' by the LGPL,
they are still required to meet its terms. For some
On Mon, Jan 18, 2010 at 13:34, Vicente Sole s...@esrf.fr wrote:
You are taking point 4.d)0 while I am taking 4.d)1:
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time a copy
of the Library already present on the user's computer system, and (b)
will operate properly with a modified version of the Library that is
interface-compatible with the Linked Version.
On Sun, Jan 17, 2010 at 08:52, Benoit Jacob jacob.benoi...@gmail.com wrote:
2010/1/17 David Cournapeau courn...@gmail.com:
There are several issues with eigen2 for NumPy usage:
- using it as a default implementation does not make much sense IMHO,
as it would make distributed binaries non 100
On Sun, Jan 17, 2010 at 2:20 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:
Couldn't you simply:
- either add LGPL-licensed code to a third_party subdirectory not
subject to the NumPy license, and just use it? This is common
practice, see e.g. how Qt puts a copy of WebKit in a third_party
Hi,
A while back, someone talked about eigen2 (http://eigen.tuxfamily.org/). In
their benchmark they give info that they are competitive against MKL and
GotoBLAS on matrix-matrix products. They are not better, but that could make
a good default implementation for numpy when there is no BLAS installed. I
I would suggest using GotoBLAS instead of ATLAS. It is easier to build
than ATLAS (basically no configuration), and has even better performance
than MKL.
Sturla Molden wrote:
I would suggest using GotoBLAS instead of ATLAS.
http://www.tacc.utexas.edu/tacc-projects/
That does look promising -- any idea what the license is? They don't
make it clear on the site (maybe it is once you set up a user account and
download, but I'd rather know up
Sturla Molden wrote:
I would suggest using GotoBLAS instead of ATLAS.
http://www.tacc.utexas.edu/tacc-projects/
That does look promising -- any idea what the license is?
UT TACC Research License (Source Code)
The Texas Advanced Computing Center of
Hi David,
Thank you for the reply, which was useful.
I also tried to install numpy with Intel MKL 9.1.
I still used gfortran for the numpy installation, as Intel MKL 9.1 supports
the GNU compilers.
I only uncommented these lines from site.cfg.example in my site.cfg:
[mkl]
library_dirs =
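For reference, a minimal sketch of what that [mkl] section looked like in the site.cfg.example of that era; the paths below are hypothetical and depend on where MKL 9.1 is installed locally (the actual values from this email are truncated above):

[mkl]
library_dirs = /opt/intel/mkl/9.1/lib/32
include_dirs = /opt/intel/mkl/9.1/include
mkl_libs = mkl, guide
lapack_libs = mkl_lapack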
On Thu, Jan 7, 2010 at 11:20 AM, Xue (Sue) Yang
x.y...@physics.usyd.edu.au wrote:
This time, only one CPU was used. Does it mean that our installed Intel MKL
9.1 is not threaded?
You would have to consult the MKL documentation - I believe you can
control how many threads are used from an
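If it is the threading that's in question, MKL of that era honored the usual OpenMP environment variable, so a quick experiment (a sketch, not from the thread) is to set it before re-running the benchmark:

export OMP_NUM_THREADS=4    # then re-run the matrix benchmark and watch the CPUs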
Hi,
I followed the instructions I had collected for installing numpy with LAPACK
and ATLAS, and installed numpy on our desktop with RHEL4 and 4 cores.
uname -a
Linux curie.physics.usyd.edu.au 2.6.9-89.0.15.ELsmp #1 SMP Sat Oct 10
05:59:16 EDT 2009 i686 i686 i386 GNU/Linux
I successfully installed
Sorry. I meant to update this thread after I had resolved my issue.
This was indeed one problem. I had to set LD_LIBRARY_PATH.
I also had another odd problem that I will spell out here in hopes
that I save someone some trouble. Specifically, one should be very
sure that the path to the blas
Jonathan,
What does ldd
/home/jtaylor/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
say?
You need to make sure that it's using the libraries in /usr/local/lib.
You can remove the ones in /usr/lib or export
LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH.
Hope it helps.
Best,
N
Following these instructions I have the following problem when I
import numpy. Does anyone know why this might be?
Thanks,
Jonathan.
import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jtaylor/lib/python2.5/site-packages/numpy/__init__.py", line 130, in
On 17-Jul-09, at 3:57 PM, Jonathan Taylor wrote:
  File "/home/jtaylor/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 47, in <module>
    from linalg import *
  File "/home/jtaylor/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 22, in <module>
    from numpy.linalg import
On 17-Jul-09, at 4:20 PM, David Warde-Farley wrote:
It doesn't look like your ATLAS is linked together properly,
specifically fblas. What Fortran compiler are you using?
ImportError: /usr/local/lib/libptcblas.so: undefined symbol:
ATL_cpttrsm
Errr, nevermind. I seem to have very
On Mon, Jun 8, 2009 at 11:02 AM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Isn't it true for any general framework that enjoys some popularity :)
Yup :)
I think there are cases where gradient methods are not applicable
(latent models where the complete data Y cannot be split into
2009/6/9 Charles R Harris charlesr.har...@gmail.com:
- heavily expression-template-based C++, meaning compilation takes
ages
No, because _we_ are serious about compilation times, unlike other c++
template libraries. But granted, compilation times are not as short as
a plain C library
Hi David,
2009/6/9 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
Hi Benoit,
Benoit Jacob wrote:
No, because _we_ are serious about compilation times, unlike other c++
template libraries. But granted, compilation times are not as short as
a plain C library either.
I concede it is not as
On Mon, Jun 8, 2009 at 7:14 PM, David Warde-Farley d...@cs.toronto.edu wrote:
On 8-Jun-09, at 8:33 AM, Jason Rennie wrote:
Note that EM can be very slow to converge:
That's absolutely true, but EM for PCA can be a life saver in cases where
diagonalizing (or even computing) the full
David Cournapeau wrote:
I think the biggest problem is the 'babel tower' aspect of machine
learning (the expression is from David H. Wolpert I believe), and
practitioners in different subfields often use totally different words
for more or less the same concepts (and many keep being
2009/6/9 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
Anyway, the book by Bishop is a pretty good reference from one of the
leading researchers:
http://research.microsoft.com/en-us/um/people/cmbishop/prml/
It can be read without much background besides basic 1st year
calculus/linear
On 9-Jun-09, at 3:54 AM, David Cournapeau wrote:
For example, what ML people call PCA is called Karhunen-Loève in
signal processing, and the concepts are quite similar.
Yup. This seems to be a nice set of review notes:
http://www.ece.rutgers.edu/~orfanidi/ece525/svd.pdf
And going
Hi,
I'm one of the Eigen developers and was pointed to your discussion. I
just want to clarify a few things for future reference (not trying to
get you to use Eigen):
No, eigen does not provide a (complete) BLAS/LAPACK interface.
True,
I don't know if that's even a goal of eigen
Not a goal
Hi Benoit,
Benoit Jacob wrote:
No, because _we_ are serious about compilation times, unlike other c++
template libraries. But granted, compilation times are not as short as
a plain C library either.
I concede it is not as bad as the heavily templated libraries in boost.
But C++ is just
On 8-Jun-09, at 1:17 AM, David Cournapeau wrote:
I would not be surprised if David had this paper in mind :)
http://www.cs.toronto.edu/~roweis/papers/empca.pdf
Right you are :)
There is a slight trick to it, though, in that it won't produce an
orthogonal basis on its own, just something
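The paper's iteration is easy to sketch in numpy (my own illustration, not code from the thread; the names and sizes are made up). It finds a k-dimensional principal subspace without ever forming the full covariance matrix, and needs the final orthogonalization step because of the "slight trick" mentioned above:

import numpy as np

def em_pca(Y, k, n_iter=50, seed=0):
    # Y is a d x n data matrix (d dimensions, n samples); k components.
    d, n = Y.shape
    Y = Y - Y.mean(axis=1).reshape(d, 1)            # center the data
    C = np.random.RandomState(seed).randn(d, k)     # random initial subspace
    for _ in range(n_iter):
        # E-step: latent coordinates of the data in the current subspace
        X = np.linalg.solve(np.dot(C.T, C), np.dot(C.T, Y))
        # M-step: refit the subspace to the data
        C = np.dot(np.dot(Y, X.T), np.linalg.inv(np.dot(X, X.T)))
    # C spans the principal subspace but is not orthogonal on its own;
    # orthogonalize to get a usable basis.
    Q, _ = np.linalg.qr(C)
    return Q

Each iteration costs O(dnk), so for 50 000 dimensions and 820 samples it never touches a 50 000 x 50 000 covariance matrix.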
On Mon, Jun 08, 2009 at 08:58:29AM +0200, Matthieu Brucher wrote:
Given the number of PCs, I think you may just be measuring noise.
As said in several manifold reduction publications (such as the ones by
Torbjorn Vik who published on robust PCA for medical imaging), you
cannot expect to have more
Note that EM can be very slow to converge:
http://www.cs.toronto.edu/~roweis/papers/emecgicml03.pdf
EM is great for churning out papers, not so great for getting real work
done. Conjugate gradient is a much better tool, at least in my (and
Salakhutdinov's) experience. Btw, have you considered
On Mon, Jun 08, 2009 at 08:33:11AM -0400, Jason Rennie wrote:
EM is great for churning out papers, not so great for getting real work
done.
That's just what I thought.
Btw, have you considered how much the Gaussianity assumption is
hurting you?
I have. And the answer is: not
Jason Rennie wrote:
Note that EM can be very slow to converge:
http://www.cs.toronto.edu/~roweis/papers/emecgicml03.pdf
EM is great for churning out papers, not so great for getting real
work done.
I think it depends on what you
On Mon, Jun 08, 2009 at 09:02:12AM -0400, josef.p...@gmail.com wrote:
what's the actual shape of the array/data you run your PCA on?
50 000 dimensions, 820 datapoints.
Number of time periods, size of cross section at point in time?
I am not sure what the question means. The data is sampled at
2009/6/8 Gael Varoquaux gael.varoqu...@normalesup.org:
On Mon, Jun 08, 2009 at 09:02:12AM -0400, josef.p...@gmail.com wrote:
what's the actual shape of the array/data you run your PCA on?
50 000 dimensions, 820 datapoints.
You definitely can't expect to find 50 meaningful PCs. It's
impossible
On Mon, Jun 8, 2009 at 6:17 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Mon, Jun 08, 2009 at 09:02:12AM -0400, josef.p...@gmail.com wrote:
what's the actual shape of the array/data you run your PCA on?
50 000 dimensions, 820 datapoints.
Have you tried shuffling each time series,
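That shuffle test is quick to sketch (my own illustration, not code from the thread): permute each series independently to destroy the structure, redo the PCA, and treat the shuffled eigenvalue spectrum as a noise floor.

import numpy as np

def pca_eigenvalues(Y):
    # Eigenvalue spectrum of the sample covariance, via SVD of the
    # centered data matrix (avoids forming the covariance itself).
    Yc = Y - Y.mean(axis=1).reshape(-1, 1)
    s = np.linalg.svd(Yc, compute_uv=False)
    return s ** 2 / (Y.shape[1] - 1)

def shuffled_eigenvalues(Y, seed=0):
    # Permute each row (time series) independently, then recompute.
    rng = np.random.RandomState(seed)
    Ysh = Y.copy()
    for row in Ysh:
        rng.shuffle(row)
    return pca_eigenvalues(Ysh)

# Components whose eigenvalue does not clearly beat the shuffled
# spectrum are probably noise.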
On Mon, Jun 8, 2009 at 8:55 AM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
I think it depends on what you are doing - EM is used for 'real' work
too, after all :)
Certainly, but EM is really just a mediocre gradient descent/hill climbing
algorithm that is relatively easy to
Jason Rennie wrote:
I hung out in the machine learning community approx. 1999-2007 and
thought the Salakhutdinov work was extremely refreshing to see after
listening to no end of papers applying EM to whatever was the hot
topic at the time. :)
Isn't it true for any general framework that enjoys
On 8-Jun-09, at 8:33 AM, Jason Rennie wrote:
Note that EM can be very slow to converge:
That's absolutely true, but EM for PCA can be a life saver in cases where
diagonalizing (or even computing) the full covariance matrix is not a
realistic option. Diagonalization can be a lot of wasted effort if
On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
--with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1/Lapack_LINUX.a
Many thanks Chris, I succeeded in building it.
The configure command above contained two problems
OK, perhaps I drank that beer too soon...
Now, numpy.test() hangs at:
test_pinv (test_defmatrix.TestProperties) ...
So perhaps something is wrong with ATLAS, even though the building went
fine, and make check and make ptcheck reported no errors.
Gabriel
On Sun, 2009-06-07 at 10:20 +0200, Gabriel Beckers wrote:
OK, perhaps I drank that beer too soon...
Maybe you did
On Sun, Jun 07, 2009 at 06:37:21PM +0900, David Cournapeau wrote:
That's why compiling atlas by yourself is hard, and I generally advise
against it: there is nothing intrinsically hard about it, but you need
to know a lot of small details and platform oddities to get it right
every time.
On Sun, 2009-06-07 at 18:37 +0900, David Cournapeau wrote:
Maybe you did not use the same fortran compiler with atlas and numpy, or
maybe something else. make check/make ptcheck do not test anything useful
to avoid problems with numpy, in my experience.
That's why compiling atlas by
On Sun, 2009-06-07 at 19:00 +0900, David Cournapeau wrote:
hence *most* :) I doubt most numpy users need to do PCA on
high-dimensional data.
OK, a quick look at the MDP website tells me that I am one of the
exceptions (as Gaël's email already suggested).
Gabriel
Thanks for catching the typos!
Chris
On Sun, Jun 7, 2009 at 4:20 AM, Gabriel Beckers beck...@orn.mpg.de wrote:
On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
When I had problems building atlas in the past (i.e. numpy.test()
failed) it was a problem with my lapack build, not atlas. The netlib
website gives instructions for building the lapack test suite. I
suggest you do that and run the tests on lapack to make sure
everything is kosher.
Chris
On 7-Jun-09, at 6:12 AM, Gael Varoquaux wrote:
Well, I do bootstrapping of PCAs, that is SVDs. I can tell you, it
makes
a big difference, especially since I have 8 cores.
Just curious Gael: how many PCs are you retaining? Have you tried
iterative methods (i.e. the EM algorithm for PCA)?
Gael Varoquaux wrote:
I am using the heuristic exposed in
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4562996
We have very noisy and long time series. My experience is that most
model-based heuristics for choosing the number of PCs retained give us
far too many on this problem
On Mon, Jun 08, 2009 at 02:17:45PM +0900, David Cournapeau wrote:
However, being fairly new to statistics, I am not aware of the EM
algorithm that you mention. I'd be interested in a reference, to see
if I can use that algorithm.
I would not be surprised if David had this paper in mind :)
On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com wrote:
I'd caution anyone against using Atlas from the repos in Ubuntu 9.04, as the
package is broken:
https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
just build Atlas yourself, you get better performance AND
since there is demand, and someone already emailed me, I'll put what I
did in this post. It pretty much follows what's on the scipy website,
with a couple of other things I gleaned from reading the ATLAS install
guide:
and here it goes; this is valid for Ubuntu 9.04 64-bit (# starts a
comment when
Thanks for this excellent recipe.
I have not tried it out myself yet, but I will follow the instructions on a
clean Ubuntu 9.04 64-bit.
Best,
Minjae
On Sat, Jun 6, 2009 at 11:59 AM, Chris Colbert sccolb...@gmail.com wrote:
since there is demand, and someone already emailed me, I'll put what I
Thanks for the replies so far.
I had already tested using an already transposed matrix in the loop,
it didn't make any difference. Oh and btw, I'm on (Scientific) Linux.
I used the Enthought distribution, but I guess I'll have to get
my hands dirty and try to get that Atlas thing working (I'm
Hi David,
Let me suggest that you try the latest version of Ubuntu (9.04/Jaunty),
which was released two months ago. It sounds like you are effectively using
release 5 of RedHat Linux which was originally released May 2007. There
have been updates (5.1, 5.2, 5.3), but, if my memory serves me
Hi,
Thanks for the suggestion.
Unfortunately I'm using university-managed machines here, so
I have no control over the distribution, not even root access.
However, I just downloaded the latest Enthought distribution,
which uses numpy 1.3, and now numpy is only 30% to 60% slower
than matlab,
David Cournapeau wrote:
It really depends on the CPU, compiler, how atlas was compiled, etc...
it can range from slightly faster to 10 times faster (if you use a very
poorly optimized ATLAS).
For some recent benchmarks:
http://eigen.tuxfamily.org/index.php?title=Benchmark
Eric Firing wrote:
David,
The eigen web site indicates that eigen achieves high performance
without all the compilation difficulty of atlas. Does eigen have enough
functionality to replace atlas in numpy?
No, eigen does not provide a (complete) BLAS/LAPACK interface. I don't
know if
Hi all,
I would be glad if someone could help me with
the following issue:
From what I've read on the web it appears to me
that numpy should be about as fast as matlab. However,
when I do simple matrix multiplication, it consistently
appears to be about 5 times slower. I tested this using
A =
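For concreteness, the numpy side of such a comparison can look like this (a sketch; the 1000 x 1000 size and iteration count are assumptions, since the original test code is truncated above):

import time
import numpy as np

A = np.random.rand(1000, 1000)
B = np.random.rand(1000, 1000)

t0 = time.time()
for _ in range(10):
    np.dot(A, B)    # hits an optimized BLAS only if numpy was built against one
print("10 matrix products: %.2f s" % (time.time() - t0))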
Have a look at this thread:
http://www.mail-archive.com/numpy-discussion@scipy.org/msg13085.html
The speed difference is probably due to the fact that the matrix
multiplication does not call an optimized BLAS routine, e.g.
the ATLAS BLAS.
Sebastian
On Thu, Jun 4, 2009 at 3:36 PM,
Sebastian is right.
Since Matlab r2007 (I think that's the version) it has included support for
multi-core architectures. On my Core 2 Quad here at the office, r2008b has no
problem utilizing 100% CPU for large matrix multiplications.
If you download and build atlas and lapack from source and
I should update after reading the thread Sebastian linked:
The current 1.3 version of numpy (don't know about previous versions) uses
the optimized Atlas BLAS routines for numpy.dot() if numpy was compiled with
these libraries. I've verified this on linux only, though it shouldn't be
any
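One quick way to check a given install (a sketch for numpy of that era, where the accelerated dot lived in an optional _dotblas extension that was only built when a CBLAS was found):

import numpy
numpy.show_config()    # lists the BLAS/LAPACK libraries numpy was built against

# If this import fails, numpy.dot() is falling back to numpy's own
# unoptimized loops instead of ATLAS:
import numpy.core._dotblas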
On 4-Jun-09, at 5:03 PM, Anne Archibald wrote:
Apart from the implementation issues people have chimed in about
already, it's worth noting that the speed of matrix multiplication
depends on the memory layout of the matrices. So generating B directly
as a 100 by 500 matrix instead might affect
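A small sketch of that layout effect (my own example with arbitrary sizes, not code from the thread):

import time
import numpy as np

A = np.random.rand(1000, 1000)
B_c = np.random.rand(1000, 1000)    # C-contiguous (row-major)
B_f = np.asfortranarray(B_c)        # same values, column-major layout

for name, B in [("C-order", B_c), ("F-order", B_f)]:
    t0 = time.time()
    np.dot(A, B)
    print("%s: %.3f s" % (name, time.time() - t0))

# How much this matters depends on the BLAS: a good one mostly hides
# the layout, the unoptimized fallback does not.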