Stefan Otte wrote:
> stack([[a, b], [c, d]])
>
> In my case `stack` replaced `hstack` and `vstack` almost completely.
>
> If you're interested in including it in numpy I created a pull request
> [1]. I'm looking forward to getting some feedback!
As far as I can see, it uses hstack and vstack.
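For illustration, a minimal sketch of what the proposed call amounts to
(`stack` here is the proposal under discussion, not an existing numpy
function at the time):

import numpy as np

a, b, c, d = (np.full((2, 2), k) for k in range(4))

# The proposed stack([[a, b], [c, d]]) builds a block matrix:
# hstack each row of blocks, then vstack the rows.
blocks = np.vstack([np.hstack([a, b]),
                    np.hstack([c, d])])
print(blocks.shape)   # (4, 4)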
Benjamin Root wrote:
> In addition to issues with reproducibility, think of all of the unit tests
> that would break!
That is a reproducibility problem :)
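A minimal sketch of the contract those tests (and reproducible papers)
rely on: the same seed must keep producing the same stream.

import numpy as np

np.random.seed(12345)
first = np.random.random(3)

np.random.seed(12345)
second = np.random.random(3)

assert np.array_equal(first, second)   # same seed, same stream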
"James A. Bednar" wrote:
> Please don't ever, ever break the sequence of numpy's random numbers!
> Please! We have put a lot of effort into being able to reproduce our
> published work exactly,
Yup, it cannot be overstated how important this is for the reproducibility of
published research. Thus
Robert Kern wrote:
> No, that is not what I meant. If the SFMT can be made to output the
> same bitstream for the same seed, we can use it (modifying it if
> necessary to avoid global state if necessary), but it does not look so
> to me. I welcome corrections on that count (in PR form, preferably
On 05/09/14 19:19, Neal Becker wrote:
>> It's a variant of the standard MT rather than just an implementation
>> of it, so we can't just drop it in. You will need to build the
>> infrastructure to support multiple PRNGs first (or rather, build the
>> infrastructure to reuse the non-uniform dist
Robert Kern wrote:
>>> BLAS/LAPACK are heavy dependencies that often give problems, which is why
>>> you don't want to require them for the casual user that only needs numpy
>>> arrays to make some plots for examples.
>>
>> Maybe we are not talking about the same thing, but isn't blas_lite.c and
Ralf Gommers wrote:
> No. Numpy uses those libs when they're detected, but it falls back on its
> own dot implementation if they're not found. From first bullet under
> href="http://scipy.org/scipylib/building/linux.html#generic-instructions:";>http://scipy.org/scipylib/building/linux.html#gener
Ralf Gommers wrote:
> That's not possible. The only way you can do that is move the hard
> dependency on BLAS & LAPACK to numpy, which we don't want to do.
But NumPy already depends on BLAS and LAPACK, right?
Charles R Harris wrote:
> - Move _dotblas down into multiarray
>   1. When there is cblas, add cblas implementations of descr->f->dot.
>   2. Reimplement API matrixproduct2
>   3. Make ndarray.dot a first class method and use it for numpy.dot.
> - Implement matmul
>   1. Add matrix
Charles R Harris wrote:
> - Consider using blas_lite instead of cblas, but that is now independent
>   of the previous steps.
It should also be possible to build reference cblas on top of blas_lite.
(Or just create a wrapper for the parts of cblas we need.)
Sturla
Matti Picus wrote:
> Thanks for your prompt reply. I think numpy is a wonderful project, and
> you all do a great job moving it forward.
> If you ask what would my vision for maturing numpy, I would like to see
> a grouping of linalg matrix-operation functionality into a python level
> package
On 09/08/14 16:28, Charles R Harris wrote:
> Yeah, I figured that out, there is a comment in dotblas that says not,
> but checking how things are configured, it turns out they should be
> good. The original problem seems to have been that dotblas requires
> cblas and can't work with fortran blas.
Charles R Harris wrote:
> It looks like numpy dot only uses BLAS if ATLAS is present, see
> numpy/core/setup.py. Has anyone done the mods needed to use OpenBLAS? What
> is the current status of using OpenBLAS with numpy?
I thought it also uses BLAS if MKL or Accelerate Framework is present, but I
On 28/07/14 15:21, alex wrote:
> Are you sure they always give different results? Notice that
> np.ones((N,2)).mean(0)
> np.ones((2,N)).mean(1)
> compute means of different axes on transposed arrays so these
> differences 'cancel out'.
They will be if different algorithms are used. np.ones((N,2)
Nathaniel Smith wrote:
> The problem here is that when summing up the values, the sum gets
> large enough that after rounding, x + 1 = x and the sum stops
> increasing.
Interesting. That explains why the divide-and-conquer reduction is much
more robust.
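A minimal illustration, using float16 so the effect shows up at small
sizes (the leaf size of 8 below is arbitrary):

import numpy as np

a = np.ones(4096, dtype=np.float16)

s = np.float16(0)
for v in a:                  # naive left-to-right accumulation
    s = np.float16(s + v)
print(s)                     # 2048.0 -- stalls once s + 1 rounds back to s

def pairwise_sum(x):
    # divide and conquer keeps the partial sums small
    if len(x) <= 8:
        return np.float16(sum(x, np.float16(0)))
    mid = len(x) // 2
    return np.float16(pairwise_sum(x[:mid]) + pairwise_sum(x[mid:]))

print(pairwise_sum(a))       # 4096.0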
Thanks :)
Sturla
Robert Kern wrote:
>> It would presumably require a global threading.RLock for protecting the
>> global state.
>
> We would use thread-local storage like we currently do with the
> np.errstate() context manager. Each thread will have its own "global"
> state.
That sounds like a better plan, yes
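For reference, this is the thread-local pattern np.errstate already uses:

import numpy as np

with np.errstate(divide='ignore', invalid='ignore'):
    r = np.array([1.0, 0.0]) / 0.0   # warnings silenced in this thread only
print(r)                             # [ inf  nan]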
wrote:
> statsmodels still has avoided anything that smells like a global state that
> changes calculation.
If global states are stored in a stack, as in OpenGL, it is not so bad. A
context manager could push a state in __enter__ and pop the state in
__exit__. This is actually how I write OpenGL code.
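A minimal sketch of that push/pop pattern combined with thread-local
storage (hypothetical names, not a NumPy API):

import threading
from contextlib import contextmanager

_tls = threading.local()

def _stack():
    if not hasattr(_tls, "stack"):
        _tls.stack = [{"rng": "MT19937"}]   # illustrative default state
    return _tls.stack

@contextmanager
def pushed(**overrides):
    # __enter__ pushes a modified copy; __exit__ pops it, OpenGL-style.
    _stack().append({**_stack()[-1], **overrides})
    try:
        yield _stack()[-1]
    finally:
        _stack().pop()

with pushed(rng="SFMT"):
    assert _stack()[-1]["rng"] == "SFMT"   # scoped to block and thread
assert _stack()[-1]["rng"] == "MT19937"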
Benjamin Root wrote:
> My other concern would be with multi-threaded code (which is where a global
> state would be bad).
It would presumably require a global threading.RLock for protecting the
global state.
Sturla
Sturla Molden wrote:
> Sebastian Berg wrote:
>
>> Yes, it is much more complicated and incompatible with naive ufuncs if
>> you want your memory access to be optimized. And optimizing that is very
>> much worth it speed wise...
>
> Why? Couldn't we just copy
Sebastian Berg wrote:
> Yes, it is much more complicated and incompatible with naive ufuncs if
> you want your memory access to be optimized. And optimizing that is very
> much worth it speed wise...
Why? Couldn't we just copy the data chunk-wise to a temporary buffer of say
2**13 numbers and then apply the ufunc to the buffer?
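Something like this minimal sketch (the buffer size and the unary-ufunc
restriction are illustrative assumptions):

import numpy as np

def chunked_apply(op, a, chunk=2**13):
    flat = np.ascontiguousarray(a).reshape(-1)
    out = np.empty_like(flat)
    buf = np.empty(chunk, dtype=flat.dtype)
    for i in range(0, flat.size, chunk):
        n = min(chunk, flat.size - i)
        buf[:n] = flat[i:i + n]        # copy the chunk into a small buffer
        out[i:i + n] = op(buf[:n])     # run the ufunc on the buffer
    return out.reshape(a.shape)

r = chunked_apply(np.sqrt, np.arange(10**6, dtype=np.float64))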
Sebastian Berg wrote:
> chose more stable algorithms for such statistical functions. The
> pairwise summation that is in master now is very awesome, but it is not
> secure enough in the sense that a new user will have difficulty
> understanding when he can be sure it is used.
Why is it not always used?
Julian Taylor wrote:
> The default integer dtype should be sufficiently large to index into any
> numpy array, thats what I call an API here. win64 behaves different, you
> have to explicitly upcast your index to be able to index all memory.
No, you don't have to manually upcast Python int to Py
Julian Taylor wrote:
> git rebase --onto $(git merge-base master maintenance/1.9.x) HEAD^
That's the problem with Git, it solves one problem and creates another.
Personally I have no idea what that command might do.
Sturla
Sebastian Berg wrote:
>> Could it be useful for structured arrays?
>
> Not sure how. The named columns seem like a decent point to me.
NumPy is naming the fields, not the axes, so it might be more useful for
Pandas than NumPy. For example if we have an image with r,g,b data, NumPy
would not n
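A small example of the field-vs-axis distinction drawn here:

import numpy as np

# A structured dtype names the fields of each element; the two image
# axes themselves stay anonymous.
img = np.zeros((4, 4), dtype=[('r', 'u1'), ('g', 'u1'), ('b', 'u1')])
print(img['r'].shape)   # (4, 4) -- 'r' selects a field, not an axis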
There is no os.mkfifo on Windows.
Sturla
Valentin Haenel wrote:
> sorry, for the top-post, but should we add this as an issue on the
> github tracker? I'd like to revisit it this summer.
>
> V-
>
> * Julian Taylor [2014-04-18]:
>> On 18.04.2014 18:29, Valentin Haenel wrote:
>>> Hi,
>>>
>>> *
Pandas might have more use for this than NumPy. Database interfaces might
also have use for this.
Sturla
Nathaniel Smith wrote:
> There's some discussion on python-ideas about making it possible for python
> indexing to accept kwargs, eg
>
>arr[1:2, foo=bar]
>
> Since numpy is a very heavy
On 02/07/14 19:55, Chris Barker wrote:
>
> Indeed -- the default (i.e what you get with pip install numpy) should
> be SSE2 -- I'd much rather have a few folks with old hardware have to
> go through some hoops than have most people get something that is
> "much slower than MATLAB".
I think we
Nathaniel Smith wrote:
> Numpy internally does all index/stride calculations in units of bytes,
> though, so if accessing the data array directly and using strides, the only
> reliable approach is to use intp or equivalent.
If we use PyArray_STRIDES we should use npy_intp, yes, because we are
co
Chris Barker wrote:
> 2) a numpy=based ragged array implementation might make sense as well. You
> essentially store the data in a rank-1 shaped numpy array, and provide
> custom indexing to get the "rows" out. This would allow you to have all the
> data in a single memory block available to C (o
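A minimal sketch of that idea (hypothetical class, not an existing API):

import numpy as np

class Ragged:
    def __init__(self, rows):
        # all data in one flat block, rows delimited by offsets
        self.data = np.concatenate([np.asarray(r) for r in rows])
        self.offsets = np.concatenate(([0], np.cumsum([len(r) for r in rows])))
    def __getitem__(self, i):
        return self.data[self.offsets[i]:self.offsets[i + 1]]
    def __len__(self):
        return len(self.offsets) - 1

ra = Ragged([[1, 2, 3], [4], [5, 6]])
print(ra[0], ra[2])   # [1 2 3] [5 6]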
Julian Taylor wrote:
> another thing, don't use int as the index to the array, use npy_intp
> which is large enough to also index arrays > 4GB if the platform
> supports it.
With double* a 32-bit int can index 16 GB, a 32-bit unsigned int can index
32 GB.
With char* a 32-bit int can only index 2 GB.
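The arithmetic behind those limits (index range times element size):

print(2**31 * 8 // 2**30)   # signed int32 over double*: 16 GB
print(2**32 * 8 // 2**30)   # unsigned int32 over double*: 32 GB
print(2**31 * 1 // 2**30)   # signed int32 over char*: 2 GB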
Chris Barker wrote:
> I'm curious, why not?
Because an MKL license is required to redistribute MKL. If someone wants to
include the binaries in their product they must acquire a license. An
MKL-based binary wheel would be for end-users that want to install and use
NumPy. It would not be for tho
Ralf Gommers wrote:
> We have already asked and obtained that permission, under the condition
> that we put some attribution to Intel MKL on our website (which we already
> have at http://scipy.org/scipylib/donations.html).
> I would not be in fav
Matthew Brett wrote:
> Meanwhile Sturla kindly worked up a patch to numpy to work round the
> Accelerate segfault [1]. I haven't tested that, but given I'd already
> built the wheels, I prefer the ATLAS builds because they work with
> multiprocessing.
It is an ugly hack. If it is used it would
Matthew Brett wrote:
> Would you consider doing a PR for that?
Yes, I started on it yesterday. ;-)
Sturla
Frédéric Bastien wrote:
> Good luck, and if they act on this, I would be happy to know how you did.
>
If we report a segfault Apple will probably think we are to blame. 99.9 %
of bug reports come from idiots, and those who screen bug reports are
basically paid to hit the ignore button. Ind
Charles R Harris wrote:
> I confess I find the construction odd, but the error probably results from
> stricter indexing rules. Indeed, (13824,) does not broadcast to (4608,3).
> Apart from considerations of backward compatibility, should it?
Probably not. (But breaking matplotlib is bad.)
On 09/06/14 14:37, Sturla Molden wrote:
> Hm, no, the declarations of cblas_sdot seems to be the same.
Oh, stupid me, matrix * vector ... Then it must be cblas_sgemv. But
those are the same too.
Anyway, Enthought Canopy (linked with MKL) is *NOT* affected
>>> m = np.ran
On 09/06/14 13:40, Sturla Molden wrote:
>> https://github.com/numpy/numpy/issues/4007
>
> Possibly, but it seems to be in _dotblas which uses cblas. NumPy has its
> own cblas.h. Perhaps it conflicts with Apple's?
>
> Apple's documentation says that cblas_sdot r
Matthew Brett wrote:
> Is it possible this is related to:
>
> https://github.com/numpy/numpy/issues/4007
Possibly, but it seems to be in _dotblas which uses cblas. NumPy has its
own cblas.h. Perhaps it conflicts with Apple's?
Apple's documentation says that cblas_sdot returns float, but knowi
Julian Taylor wrote:
> probably a temporary commit that was never removed, it originates from 2008:
Should we get rid of it for NumPy 1.9?
Sturla
NumPy seems to define BLAS and LAPACK functions with gfortran ABI:
https://github.com/numpy/numpy/blob/master/numpy/linalg/umath_linalg.c.src#L289
extern float
FNAME(sdot)(int *n,
            float *sx, int *incx,
            float *sy, int *incy);
What happens on OS X where Accelerate Framework
Why does 64-bit MinGW require -O0 and -DDEBUG?
https://github.com/numpy/numpy/blob/master/numpy/distutils/mingw32ccompiler.py#L120
Is this a bug or intentional?
Sturla
On 09/05/14 02:51, Matthew Brett wrote:
> https://github.com/numpy/numpy/wiki/Window-versions
>
> Firefox crash reports now have about 1 percent of machines without
> SSE2. I suspect that people running new installs of numpy will have
> slightly better machines on average than Firefox users, but
On 03/05/14 23:56, Siegfried Gonzi wrote:
> I noticed IDL uses at least 400% (4 processors or cores) out of the box
> for simple things like reading and processing files, calculating the
> mean etc.
The DMA controller is working at its own pace, regardless of what the
CPU is doing. You cannot
On 05/05/14 17:02, Francesc Alted wrote:
> Well, this might be because it is the place where using several
> processes makes more sense. Normally, when you are reading files, the
> bottleneck is the I/O subsystem (at least if you don't have to convert
> from text to numbers), and for calculating
Matthew Brett wrote:
> 2) Static linking - Carl's toolchain does full static linking
> including C runtimes
The C runtime cannot be statically linked. It would mean that we get
multiple copies of errno and multiple malloc heaps in the process – one for
each static CRT. We must use the same C runt
On 29/04/14 01:30, Nathaniel Smith wrote:
> I finally read this paper:
>
> http://www.cs.utexas.edu/users/flame/pubs/blis2_toms_rev2.pdf
>
> and I have to say that I'm no longer so convinced that OpenBLAS is the
> right starting point.
I think OpenBLAS in the long run is doomed as an OSS proj
On 28/04/14 18:21, Ralf Gommers wrote:
> No problems thus far, but I only installed it yesterday. :-)
>
>
> Sounds good. Let's give it a bit more time, once you've given it a good
> workout we can add that those gfortran 4.8.x compilers seem to work fine
> to the scipy build instructions.
I
Ralf Gommers wrote:
> Sounds good. Let's give it a bit more time, once you've given it a good
> workout we can add that those gfortran 4.8.x compilers seem to work fine to
> the scipy build instructions.
Yes, it needs to be tested properly.
The build instructions for OS X Mavericks should also
Ralf Gommers wrote:
> I'd be interested to hear if those work well for you. For people that just
> want to get things working, I would recommend to use the gfortran
> installers recommended at
> href="http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython.";>http://scipy.org
Pauli Virtanen wrote:
> Yes, Windows is the only platform on which Fortran was problematic. OSX
> is somewhat saner in this respect.
Oh yes, it seems there are official "unofficial gfortran binaries"
available for OSX:
http://gcc.gnu.org/wiki/GFortranBinaries#MacOS
Cool :)
Sturla
Matthew Brett wrote:
> Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
> (main author of ATLAS), I've built 64-bit windows numpy and scipy
> wheels for testing.
Thanks for your great effort to solve this mess.
By Murphy's law, I do not have access to a Windows computer on wh
Eelco Hoogendoorn wrote:
> I wonder: how hard would it be to create a more 21st-century oriented BLAS,
> relying more on code generation tools, and perhaps LLVM/JITting?
>
> Wouldn't we get ten times the portability with one-tenth the lines of code?
> Or is there too much dark magic going on in
On 12/04/14 01:07, Sturla Molden wrote:
>> ATM the only other way to work with
>> a data set that's larger than memory-divided-by-numcpus is to
>> explicitly set up shared memory, and this is *really* hard for
>> anything more complicated than a single flat array.
>
On 12/04/14 00:39, Nathaniel Smith wrote:
> The spawn mode is fine and all, but (a) the presence of something in
> 3.4 helps only a minority of users, (b) "spawn" is not a full
> replacement for fork;
It basically does the same as on Windows. If you want portability to
Windows, you must abide by
On 11/04/14 20:47, Nathaniel Smith wrote:
> Also, while Windows is maybe in the worst shape, all platforms would
> seriously benefit from the existence of a reliable speed-competitive
> binary-distribution-compatible BLAS that doesn't break fork().
Windows is worst off, yes.
I don't think fork b
On 11/04/14 04:44, Matthew Brett wrote:
> I've been working on a general wiki page on building numerical stuff on
> Windows:
>
> https://github.com/numpy/numpy/wiki/Numerical-software-on-Windows
I am worried that the conclusion will be that there is no viable BLAS
alternative on Windows...
Sturla
On 12/04/14 00:01, Matthew Brett wrote:
> No - sure - but it would be frustrating if you found yourself
> optimizing with a compiler that is useless for subsequent open-source
> builds.
No, I think MSVC or gcc 4.8/4.9 will work too. It's just that I happen
to have icc and clang on this computer
On 11/04/14 23:11, Matthew Brett wrote:
> Are you sure that you can redistribute object code statically linked
> against icc runtimes?
I am not a lawyer...
Matthew Brett wrote:
> Man, they have an awful license, making it quite useless for
> open-source: http://www.pgroup.com/doc/LICENSE.txt
Awful, and insanely expensive. :-(
And if you look at ACML, you will find that the MSVC compatible version is
built with the PG compiler. (There is an Intel i
Sturla Molden wrote:
> Making a totally new BLAS might seem like a crazy idea, but it might be the
> best solution in the long run.
To see if this can be done, I'll try to re-implement cblas_dgemm and then
benchmark against MKL, Accelerate and OpenBLAS. If I can get the
performance
Nathaniel Smith wrote:
> I unfortunately don't have the skills to actually lead such an effort
> (I've never written a line of asm in my life...), but surely our
> collective communities have people who do?
The assembly part in OpenBLAS/GotoBLAS is the major problem. Not just that
it's AT&T synt
Matthew Brett wrote:
> """
> This library contains an adaptation of the legacy cblas interface to BLAS for
> C++ AMP. At this point almost all interfaces are not implemented. One
> exception is the ampblas_saxpy and ampblas_daxpy which serve as a
> template for the
> implementation of other routi
Matthew Brett wrote:
> Hi,
>
> I've been working on a general wiki page on building numerical stuff on
> Windows:
>
> https://github.com/numpy/numpy/wiki/Numerical-software-on-Windows
>
> I'm hoping to let Microsoft know what problems we're having, and
> seeing whether we numericists can share
Yaroslav Halchenko wrote:
> it is in src/main/RNG.c (ack is nice ;) )... from visual inspection looks
> "matching"
I see... It's a rather vanilla Mersenne Twister, and it just uses 32 bits of
randomness. A signed int32 is multiplied by 2.3283064365386963e-10 to
scale it to [0,1). Then they also
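For comparison, a sketch of that scaling (2.3283064365386963e-10 is
2**-32; reconstructed from the description above, not from R's source):

import numpy as np

u = np.frombuffer(np.random.bytes(4 * 8), dtype='<u4')
r_style = u * 2.0 ** -32    # one 32-bit draw scaled to [0, 1)
print(r_style)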
Yaroslav Halchenko wrote:
> so I would assume that the devil is indeed in R post-processing and would look
> into it (if/when get a chance).
I tried to look into the R source code. It's the worst mess I have ever
seen. I couldn't even find their Mersenne twister.
Sturla
Carl Kleffner wrote:
> MKL BLAS LAPACK has issues as well:
> href="http://software.intel.com/en-us/articles/intel-mkl-110-bug-fixes";>http://software.intel.com/en-us/articles/intel-mkl-110-bug-fixes
> .
> In case of OpenBLAS or GOTOBLAS, what precisely is the problem you identify
> as a showstopper?
Matthew Brett wrote:
> Put another way - does anyone know what bugs in gotoBLAS2 do arise for
> Windows / Intel builds?
http://www.openblas.net/Changelog.txt
There are some bug fixes for x86_64 here.
GotoBLAS (and GotoBLAS2) were the de facto BLAS on many HPC systems, and
are well proven. But
import numpy as np
n = 10**6  # assumption: n was defined upstream in the thread
# genrand_res53: 27 + 26 high bits from two 32-bit draws -> [0, 1)
a = (np.frombuffer(np.random.bytes(4*n), dtype='<u4') >> 5).astype(np.int32)
b = (np.frombuffer(np.random.bytes(4*n), dtype='<u4') >> 6).astype(np.int32)
r = (a * 67108864.0 + b) / 9007199254740992.0  # (a*2**26 + b) / 2**53
Sturla
Yaroslav Halchenko wrote:
> R, Python std library, numpy all have Mersenne Twister RNG implementation.
> But
> all of them generate different numbers. This issue was previously discussed
> in
> https://github.com/numpy/numpy/issues/4530 : In Python, and numpy generated
> numbers are based on
Matthew Brett wrote:
> Julian - do you have any opinion on using gotoBLAS instead of OpenBLAS
> for the Windows binaries?
That is basically OpenBLAS too, except with more bugs and no AVX support.
Sturla
wrote:
> pandas came later and thought ddof=1 is worth more than consistency.
Pandas is a data analysis package. NumPy is a numerical array package.
I think ddof=1 is justified for Pandas, for consistency with statistical
software (SPSS et al.)
For NumPy, there are many computational tasks whe
alex wrote:
> I don't have any opinion about this debate, but I love the
> justification in that thread "Any surprise that is created by the
> different default should be mitigated by the fact that it's an
> opportunity to learn something about what you are doing."
That is so true.
Sturla
Haslwanter Thomas wrote:
> Personally I cannot think of many applications where it would be desired
> to calculate the standard deviation with ddof=0. In addition, I feel that
> there should be consistency between standard modules such as numpy, scipy,
> and pandas.
ddof=0 is the maximum likelihood estimate.
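The difference in one line each:

import numpy as np

x = np.random.randn(1000)
print(x.std(ddof=0))   # divides by n: ML estimate, NumPy's default
print(x.std(ddof=1))   # divides by n-1 (Bessel), the statistics-package default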
On 29/03/14 00:05, Julian Taylor wrote:
> But the library is still installed in the system (at least on the 10.9
> macs I saw)
>
I only find it in the gfortran 4.8 I installed separately. Nowhere else.
Sturla
Olivier Grisel wrote:
> Would it be possible to use a static
> gcc toolchain as Carl Kleffner is using for his experimental windows
> whl packages?
I think we should consider devising a Fortran to C99 translator.
Sturla
Julian Taylor wrote:
> On 28.03.2014 23:09, Olivier Grisel wrote:
> you can get rid of libgfortran and quadmath with the -static-libgfortran
> flag
> libgcc_s is probably more tricky as scipy uses c++ so -static-libgcc may
> need checking before using it
> doesn't mac provide libgcc_s anyway? Eve
Nathaniel Smith wrote:
> I thought OpenBLAS is usually used with reference lapack?
It is.
Nathaniel Smith wrote:
> If the only problem with eigen turns out to be that we have to add a line
> of text to a file then I think we can probably manage this somehow.
We would also have to compile Eigen-BLAS for various architectures and CPU
counts. It is not "adaptive" like MKL or OpenBLAS.
Matthew Brett wrote:
> So - is Eigen our best option for optimized blas / lapack binaries on
> 64 bit Windows?
Maybe not:
http://gcdart.blogspot.de/2013/06/fast-matrix-multiply-and-ml.html
With AVX the difference is possibly even larger.
Sturla
Matthew Brett wrote:
> Does anyone know how their performance compares to MKL or the
> reference implementations?
http://eigen.tuxfamily.org/index.php?title=Benchmark
http://gcdart.blogspot.de/2013/06/fast-matrix-multiply-and-ml.html
Sturla
Matthew Brett wrote:
> I see it should be possible to build a full blas and partial lapack
> library with eigen [1] [2].
Eigen has a licensing issue as well, unfortunately, MPL2.
E.g. it requires recipients to be informed of the MPL requirements
(practically impossible with pip install numpy).
Sturla
Nathaniel Smith wrote:
> - There might be some speed argument, if people often write things
> like "Mat @ Mat @ vec"? But no-one has found any evidence that people
> actually do write such things often.
With left associativity, this would be an algorithmic optimization:
Mat @ (Mat @ vec)
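Concretely (using @ for illustration), the two groupings agree
numerically but differ hugely in cost:

import numpy as np

n = 500
A, B = np.random.rand(n, n), np.random.rand(n, n)
v = np.random.rand(n)

left = (A @ B) @ v    # one O(n**3) matrix-matrix product
right = A @ (B @ v)   # two O(n**2) matrix-vector products
print(np.allclose(left, right))   # True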
Charles R Harris wrote:
> Well, I this point I think we might as well go with left associativity.
> Most of the operator uses looked to involve a single `@`, where it doesn't
> matter, and the others were short where adding a couple of parenthesis
> wouldn't mess things up too much.
That is wha
Personally I did not like @@ in the first place.
Sturla
Nathaniel Smith wrote:
> Hi all,
>
> Here's the second thread for discussion about Guido's concerns about
> PEP 465. The issue here is that PEP 465 as currently written proposes
> two new operators, @ for matrix multiplication and @@ for
Sturla Molden wrote:
> wrote:
>
>> The only question IMO is which ddof for weighted std, ...
>
> Something like this?
>
> sum_weights - (ddof/float(n))*sum_weights
Please ignore.
Sebastian Berg wrote:
> I am right now a bit unsure about whether or not the "weights" would be
> "aweights" or different... R seems to not care about the scale of the
> weights which seems a bit odd to me for an unbiased estimator? I always
> assumed that we can do the statistics behind using th
wrote:
> The only question IMO is which ddof for weighted std, ...
Something like this?
sum_weights - (ddof/float(n))*sum_weights
Sturla
Nathaniel Smith wrote:
> 3. Using Cython in the numpy core
>
> The numpy core contains tons of complicated C code implementing
> elaborate operations like indexing, casting, ufunc dispatch, etc. It
> would be really nice if we could use Cython to write some of these
> things.
So the idea of hav
Sudheer Singh wrote:
> Hello everyone! I am Sudheer Singh, an information technology student
> at IIIT - Allahabad. I'm interested in contributing to NumPy. I was going
> through the Ideas page and I found implementing "Levenberg-Marquardt" with
> additional features like inequality constraints and
Chris Barker wrote:
> Sure -- but I'm afraid that there will be a lot of code that does an
> isinstance() check where it is absolutely unnecessary. If you really need
> to know if something is a sequence or a mapping, I suppose it's required,
> but how often is that?
I must say I don't understan
Anthony Scopatz wrote:
> Hello All,
>
> The semantics of this seem quite insane to me:
>
> In [1]: import numpy as np
>
> In [2]: import collections
>
> In [4]: isinstance(np.arange(5), collections.Sequence)
> Out[4]: False
>
> In [6]: np.version.full_version
> Out[6]: '1.9.0.dev-eb40f65'
>
>
JB wrote:
>
> x = np.array([1,2,3,4,5,6,7,8,9,10])
>
> If I want the first 5 elements, what do I do?
x[:5]
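Or, as a complete runnable snippet:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(x[:5])    # [1 2 3 4 5] -- indices 0 through 4
print(x[5:])    # [ 6  7  8  9 10] -- the rest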
On 22/02/14 23:39, Sturla Molden wrote:
Ok, next runner up is Accelerate. Let's see how it compares to OpenBLAS
and MKL on Mavericks.
It seems Accelerate has roughly the same performance as MKL now.
Did the upgrade to Mavericks do this?
These are the compile lines, in case you w
On 22/02/14 22:15, Nathaniel Smith wrote:
$ make TARGET=SANDYBRIDGE USE_OPENMP=0 BINARY=64 NOFORTRAN=1
You'll definitely want to disable the affinity support too, and
probably memory warmup. And possibly increase the maximum thread
count, unless you'll only use the library on the computer it was built on.
On 22/02/14 22:00, Robert Kern wrote:
> If you actually want some help, you will have to provide a *little* more
> detail.
$ git clone https://github.com/xianyi/OpenBLAS
Oops...
$ cd OpenBLAS
did the trick. I need some coffee :)
Sturla
On 20/02/14 17:57, Jurgen Van Gael wrote:
> Hi All,
>
> I run Mac OS X 10.9.1 and was trying to get OpenBLAS working for numpy.
> I've downloaded the OpenBLAS source and compiled it (thanks to Olivier
> Grisel).
How?
$ make TARGET=SANDYBRIDGE USE_OPENMP=0 BINARY=64 NOFORTRAN=1
make: *** No target
Will this mean NumPy, SciPy et al. can start using OpenBLAS in the
"official" binary packages, e.g. on Windows and Mac OS X? ATLAS is slow and
Accelerate conflicts with fork as well.
Will dotblas be built against OpenBLAS? AFAIK, it is only built against
ATLAS or MKL, not any other BLAS, but it sh
Andreas Hilboll wrote:
> On 18.02.2014 17:47, Sturla Molden wrote:
>> Unfortunately the DCMT code was LGPL, not BSD, I don't know if this has
>> changed.
>
> I just checked. The file dcmt0.6.1b.tgz, available from
> http://www.math.sci.hiroshima-u.ac.jp/~m-mat/M
Matthieu Brucher wrote:
> Hi,
>
> The main issue with PRNG and MT is that you don't know how to
> initialize all MT generators properly. A hash-based PRNG is much more
> efficient in that regard (see Random123 for a more detailed
> explanation).
>> From what I heard, if MT is indeed chosen for RN