On 12/15/2009 1:39 PM, Bruce Southey wrote:
> +1 for the function but we can not shorten the name because of existing
> numpy.rank() function.
1. Is it a rule that there cannot be a name duplication
in this different namespace?
2. Is there a commitment to keeping both np.rank and np.ndim?
(I.e., c
On 12/3/2009 12:40 AM, Peter Cai wrote:
> If I have homogeneous linear equations like this
>
> array([[-0.75, 0.25, 0.25, 0.25],
> [ 1. , -1. , 0. , 0. ],
> [ 1. , 0. , -1. , 0. ],
> [ 1. , 0. , 0. , -1. ]])
>
> And I want to get a non-zero solution for
On 11/26/2009 8:20 AM, Nils Wagner wrote:
> a = array(([True,True],[True,True]))
> b = array(([False,False],[False,False]))
> a+b
NumPy's boolean operations are very well behaved.
>>> a = np.array(([True,True],[True,True]))
>>> a+a
array([[ True, True],
[ True, True]], dtype=bool)
Comp
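To make the point concrete, here is a short sketch (nothing beyond standard NumPy boolean semantics and the arrays quoted above):

```python
import numpy as np

a = np.array([[True, True], [True, True]])
b = np.array([[False, False], [False, False]])

# For boolean arrays, + is elementwise logical or and * is logical and.
or_ab = a + b   # elementwise or: all True
and_ab = a * b  # elementwise and: all False
```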
On 11/7/2009 10:56 PM, a...@ajackson.org wrote:
> I want to build a 2D array of lists, and so I need to initialize the
> array with empty lists :
>
> myarray = array([[[],[],[]] ,[[],[],[]]])
[[[] for i in range(3)] for j in range(2) ]
fwiw,
Alan Isaac
On 11/7/2009 1:51 PM, Stas K wrote:
> Can I get rid of the loop in this example? And what is the fastest way
> to get v in the example?
>
> ar = array([1,2,3])
> for a in ar:
> for b in ar:
> v = a**2+b**2
>>> a2 = ar*ar
>>> np.add.outer(a2,a2)
array([[ 2,  5, 10],
       [ 5,  8, 13],
       [10, 13, 18]])
On 11/4/2009 3:09 PM, David Warde-Farley wrote:
> I'd like to map every unique element (these could be strings, objects,
> or already ints) to a unique integer between 0 and len(unique(d)) - 1.
mymap = dict((k,v) for v,k in enumerate(set(a)))
fwiw,
Alan Isaac
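For completeness, a sketch of the dict approach next to np.unique with return_inverse, which produces sorted integer codes directly (the sample data here are made up for illustration):

```python
import numpy as np

a = ['b', 'a', 'b', 'c']

# Dict-based mapping, as above (codes in arbitrary order):
mymap = dict((k, v) for v, k in enumerate(set(a)))
coded = [mymap[x] for x in a]

# Alternatively, np.unique can return the codes (sorted key order):
keys, inverse = np.unique(a, return_inverse=True)
# keys[inverse[i]] == a[i] for every i
```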
On 10/26/2009 4:04 AM, Nils Wagner wrote:
> how can I obtain the multiplicity of an entry in a list
> a = ['abc','def','abc','ghij']
That's a Python question, not a NumPy question.
So comp.lang.python would be a better forum.
But here's a simple solution::
a = ['abc','def','abc','ghij']
for it
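To finish the thought, a plain-Python sketch using list.count (no NumPy needed):

```python
a = ['abc', 'def', 'abc', 'ghij']

# Multiplicity of each distinct entry:
counts = dict((item, a.count(item)) for item in set(a))
# counts maps 'abc' -> 2, 'def' -> 1, 'ghij' -> 1
```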
On 10/21/2009 3:23 PM, Charles R Harris wrote:
> What exactly *was* the history of that project and what can we learn
> from it?
Imo, what really drove this project forward is that Skipper
was able to interact regularly with someone else who was actively
using and developing on the code base (i.e
On 10/7/2009 10:57 PM, Robert Kern wrote:
> it's "pimpl"
OK: http://en.wikipedia.org/wiki/Opaque_pointer
Thanks,
Alan Isaac
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On 10/7/2009 10:51 PM, David Cournapeau wrote:
> pimp-like strategies
Which means ... ?
Alan
> Alan G Isaac wrote:
>> That's just a slow implementation of meshgrid:
>> np.meshgrid(a,b).transpose().tolist()
>> Gives you the same thing.
On 9/25/2009 6:38 PM, Mads Ipsen wrote:
> Yes, but it should also work for [2.1,3.2,4.5] combined with
> [4.6,-2.3,5.6] -
OK, sure, for large arrays meshgrid will look bad.
(It creates a large array twice.) If the example
really involves sequential integers, then mgrid
could be used instead to save on this.
Even so, it is just implausible that duplicating
meshgrid functionality will be faster than using meshgrid.
So
On 9/25/2009 4:01 PM, Mads Ipsen wrote:
> a = numpy.array([1,2,3])
> b = numpy.array([4,5,6])
>
> (n,m) = (a.shape[0],b.shape[0])
> a = numpy.repeat(a,m).reshape(n,m)
> b = numpy.repeat(b,n).reshape(m,n).transpose()
> ab = numpy.dstack((a,b))
>
> print ab.tolist()
That's just a slow implementatio
I do not see what is wrong with itertools.product,
but if you hate it, you can use numpy.meshgrid:
>>> np.array(np.meshgrid([1,2,3],[4,5,6])).transpose()
array([[[1, 4],
[1, 5],
[1, 6]],
[[2, 4],
[2, 5],
[2, 6]],
[[3, 4],
[3, 5],
[3, 6]]])
On 9/25/2009 1:45 PM, Mads Ipsen wrote:
> Is there a numpy operation on two arrays, say [1,2,3] and [4,5,6], that
> will yield:
>
> [[(1,4),(1,5),(1,6)],[(2,4),(2,5),(2,6)],[(3,4),(3,5),(3,6)]]
>>> from itertools import product
>>> list(product([1,2,3],[4,5,6]))
[(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)]
On 9/16/2009 8:22 PM, Gökhan Sever wrote:
> I want to be able to count predefined simple rectangle shapes on an
> image as shown like in this one:
> http://img7.imageshack.us/img7/2327/particles.png
ch.9 of
http://www.amazon.com/Beginning-Python-Visualization-Transformation-Professionals/dp/14302
On 9/15/2009 7:07 AM, Sebastien Binet wrote:
> usage of the exec statement is usually frown upon and can be side stepped.
> e.g:
>
> for m in meat:
> for c in cut:
>locals()['consumed_%s_%s' % (m,c)] = some_array
Additionally, name construction can be pointless.
Maybe::
info = dict(
On 9/13/2009 7:46 AM, Robert wrote:
> 2 ways seem to be consistently Pythonic and logical: "size>
> 0"; or "any(a)" (*); and the later option may be more 'numerical'.
Well, *there's* the problem.
As a user I have felt more than once that a
length based test, like other containers, would
be natura
On 9/6/2009 8:33 AM, Sturla Molden wrote:
> map( cls.myMethod, a )
>
> is similar to:
>
> [aa.myMethod() for aa in a]
http://article.gmane.org/gmane.comp.python.general/630847
fwiw,
Alan Isaac
On 8/28/2009 10:46 AM Neal Becker apparently wrote:
> explicit is better than implicit. IMO, if I want int/int-> float, I should
> ask for it explicitly, by casting the ints to float first (in numpy, that
> would be using astype).
Aren't you begging the question?
Nobody is suggesting int//int
> Neil Martinsen-Burrell skrev:
>> The persistence of the idea that removing Numpy's legacy features will
>> only be annoyance is inimical to the popularity of the whole Numpy
>> project. [...] Once scientists have working codes it is more than an
>> annoyance to have to change those codes. In
Charles R Harris wrote:
> The real problem is deciding what to do with integer precisions that fit
> in float32. At present we have
>
> In [2]: x = ones(1, dtype=int16)
>
> In [3]: true_divide(x,x)
> Out[3]: array([ 1.], dtype=float32)
A user perspective:
ambiguous cases should always be
reso
Fancy indexing is discussed in detail in
the Guide to NumPy.
http://www.tramy.us/guidetoscipy.html
Alan Isaac
On 6/9/2009 1:14 AM josef.p...@gmail.com quoted:
> Note: Gnumeric gives an error for negative rates. Excel and OOo2 do
> not. For NPER(-1%;-100;1000), OOo2 gives 9.48, Excel produces
> 9.483283066, Gnumeric gives a #DIV/0 error. This appears to be a bug
> in Gnumeric.
Recently fixed (in Gnumeric)
On 6/8/2009 11:18 PM David Goldsmith apparently wrote:
> I formally "move" that numpy.financial (or at least that
> subset of it consisting of functions which are commonly
> subject to multiple definitions) be moved out of numpy.
My recollection is that Travis O. added this with the
explicit
>> Going back to Alan Isaac's example:
>> 1) beta = (X.T*X).I * X.T * Y
>> 2) beta = np.dot(np.dot(la.inv(np.dot(X.T,X)),X.T),Y)
Robert Kern wrote:
> 4) beta = la.lstsq(X, Y)[0]
>
> I really hate that example.
Remember, the example is a **teaching** example.
I actually use NumPy in a Master'
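For what it's worth, the normal-equations formula and lstsq agree on any well-conditioned example; a sketch (the data here are made up for illustration):

```python
import numpy as np
import numpy.linalg as la

np.random.seed(0)
X = np.random.rand(10, 3)
Y = np.random.rand(10)

# 2) explicit normal equations with dot
beta_dot = np.dot(np.dot(la.inv(np.dot(X.T, X)), X.T), Y)
# 4) least squares, numerically preferable
beta_lstsq = la.lstsq(X, Y)[0]
```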
Olivier Verdier wrote:
> Well, allowing dot(A,B,C) does not remove any other possibility does it?
> I won't fight for this though. I personally don't care but I think that
> it would remove the last argument for matrices against arrays, namely
> the fact that A*B*C is easier to write than dot(do
>> On Sun, Jun 7, 2009 at 04:44, Olivier Verdier wrote:
>>> Yes, I found the thread you are referring
>>> to: http://mail.python.org/pipermail/python-dev/2008-July/081554.html
>>> However, since A*B*C exists for matrices and actually computes (A*B)*C, why
>>> not do the same with dot? I.e. why
On 6/6/2009 6:39 PM Robert Kern apparently wrote:
> Ah, that's the beauty of .flat; it takes care of that for you. .flat
> is not a view onto the memory directly. It is a not-quite-a-view onto
> what the memory *would* be if the array were contiguous and the memory
> directly reflected the layout a
On 6/6/2009 6:02 PM Keith Goodman apparently wrote:
>> def fill_diag(arr, value):
> if arr.ndim != 2:
> raise ValueError, "Input must be 2-d."
> if arr.shape[0] != arr.shape[1]:
> raise ValueError, 'Input must be square.'
> arr.flat[::arr.shape[1]+1] = value
You might
> On Sat, Jun 6, 2009 at 1:59 PM, Alan G Isaac wrote:
> For sure GAUSS does. The result of x' * A * x
> is a "matrix" (it has one row and one column) but
> it functions like a scalar (and even more,
> since right multiplication by it is also allowed).
On
On 6/6/2009 4:30 PM Robert Kern apparently wrote:
> The old idea of introducing RowVector and ColumnVector would help
> here. If x were a ColumnVector and A a Matrix, then you can introduce
> the following rules:
>
> x.T is a RowVector
> RowVector * ColumnVector is a scalar
> RowVector * Matrix is
On 6/6/2009 2:58 PM Charles R Harris apparently wrote:
> How about the common expression
> exp((v.t*A*v)/2)
> do you expect a matrix exponential here?
I take your point that there are conveniences
to treating a 1 by 1 matrix as a scalar.
Most matrix programming languages do this, I think.
For sur
On 6/6/2009 2:03 PM Charles R Harris apparently wrote:
> So is eye(3)*(v.T*v) valid? If (v.T*v) is 1x1 you have incompatible
> dimensions for the multiplication
Exactly. So it is not valid. As you point out, to make it valid
implies a loss of the associativity of matrix multiplication.
Not a goo
On 6/6/2009 12:41 AM Charles R Harris apparently wrote:
> Well, one could argue that. The x.T is a member of the dual, hence maps
> vectors to the reals. Usually the reals aren't represented by 1x1
> matrices. Just my [.02] cents.
Of course that same perspective could
lead you to argue that a M
On 6/5/2009 5:41 PM Chris Colbert apparently wrote:
> well, it sounded like a good idea.
I think something close to this would be possible:
add dot as an array method.
A .dot(B) .dot(C)
is not as pretty as
A * B * C
but it is much better than
np.dot(np.dot(A,B),C)
In fact
On 6/5/2009 3:49 PM Stéfan van der Walt apparently wrote:
> If the Matrix class is to remain, we need to take the steps
> necessary to integrate it into NumPy properly.
I think this requires a list of current problems.
Many of the problems for NumPy have been addressed over time.
I believe the rem
On 6/5/2009 11:38 AM Olivier Verdier apparently wrote:
> I think matrices can be pretty tricky when used for
> teaching. For instance, you have to explain that all the
> operators work component-wise, except the multiplication!
> Another caveat is that since matrices are always 2x2, the
> "sca
On 6/4/2009 5:27 PM Tommy Grav apparently wrote:
> Or the core development team split the matrices out of numpy and make it
> as separate package that the people that use them could pick up and
> run with.
This too would be a mistake, I believe.
But it depends on whether a goal is to
have more
On 6/4/2009 1:27 PM josef.p...@gmail.com apparently wrote:
> Note: there are two versions of the docs for np.intersect1d, the
> currently published docs which describe the actual behavior (for the
> non-unique case), and the new docs on the doc editor
> http://docs.scipy.org/numpy/docs/numpy.lib.ar
On 6/4/2009 12:08 PM Olivier Verdier apparently wrote:
> I really don't see any advantage of matrices over arrays for teaching. I
> prefer to teach linear algebra with arrays.
beta = (X.T*X).I * X.T * Y
beta = np.dot(np.dot(la.inv(np.dot(X.T,X)),X.T),Y)
I rest my case.
I would have to switch
On 6/4/2009 11:29 AM josef.p...@gmail.com apparently wrote:
> intersect1d is the intersection between sets (which are stored as
> arrays), just like in the mathematical definition the two sets only
> have unique elements
Hmmm. OK, I see you and Robert believe this.
But it does not match the do
> On Sun, May 24, 2009 at 3:45 PM, David Warde-Farley
> wrote:
>> Anecdotally, it seems to me that lots of people (myself included) seem
>> to go through a phase early in their use of NumPy where they try to
>> use matrix(), but most seem to end up switching to using 2D arrays for
>> all the afor
On 6/4/2009 10:50 AM josef.p...@gmail.com apparently wrote:
> intersect1d gives set intersection if both arrays have
> only unique elements (i.e. are sets). I thought the
> naming is pretty clear:
> intersect1d(a,b) set intersection if a and b with unique elements
> intersect1d_nu(a,b) set
> On Thu, Jun 4, 2009 at 10:13 AM, Alan G Isaac wrote:
>> Or if a stable order is not important (I don't
>> recall if the OP specified), one could just
>> np.intersect1d(a, np.unique(b))
On 6/4/2009 10:50 AM josef.p...@gmail.com apparently wrote:
> This requires
> On Thu, Jun 4, 2009 at 8:23 AM, Alan G Isaac wrote:
>> a[(a==b[:,None]).sum(axis=0,dtype=bool)]
On 6/4/2009 8:35 AM josef.p...@gmail.com apparently wrote:
> If b is large this creates a huge intermediate array
True enough, but one could then use fromiter:
setb = set(b)
itr = (ai
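A runnable version of that sketch might look like this (the generator expression is my completion of the truncated line, a guess at what was intended):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([2, 4, 4, 6])

setb = set(b)
itr = (ai for ai in a if ai in setb)
# fromiter builds the result directly, avoiding the large
# boolean intermediate array from broadcasting a against b.
result = np.fromiter(itr, dtype=a.dtype)
```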
a[(a==b[:,None]).sum(axis=0,dtype=bool)]
hth,
Alan Isaac
On 6/1/2009 3:38 PM josef.p...@gmail.com apparently wrote:
> Here's a good one:
>
> >>> np.isnan([]).all()
> True
> >>> np.isnan([]).any()
> False
>>> all([])
True
>>> any([])
False
Cheers,
Alan Isaac
Would you like to put xirr in econpy until
it finds a home in SciPy? (Might as well
make it available.)
Cheers,
Alan Isaac
On 5/16/2009 9:01 AM Quilby apparently wrote:
> Ax = y
> Where A is a rational m*n matrix (m<=n), and x and y are vectors of
> the right size. I know A and y, I don't know what x is equal to. I
> also know that there is no x where Ax equals exactly y.
If m<=n, that can only be true if there are no
On 5/14/2009 2:52 PM David J Strozzi apparently wrote:
> At the risk of being glib, I find the current science tools in python
> (numpy, scipy, matplotlib) to be a good beta version of yorick :)
I suspect that is too glib for quite a number of reasons,
but just to mention one aside from the very
On 5/14/2009 6:26 AM Emerald Jasper apparently wrote:
> Please, instruct me how to make non-linear optimization using
> numpy/simpy in python?
http://www.scipy.org/SciPyPackages/Optimize
http://www.scipy.org/Cookbook/OptimizationDemo1
hth,
Alan Isaac
On 5/11/2009 8:36 AM Nils Wagner apparently wrote:
> I would like to split strings made of digits after eight
> characters each.
[l[i*8:(i+1)*8] for i in range(len(l)//8)]
Alan Isaac
On 5/11/2009 8:03 AM Nils Wagner apparently wrote:
> >>> line_a
> '12345678abcdefgh12345678'
> Is it possible to split line_a such that the output
> is
>
> ['12345678', 'abcdefgh', '12345678']
More of a comp.lang.python question, I think:
out = list()
for k, g in groupby('123abc456',lambda x: x.
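Completing the groupby idea (grouping on str.isdigit is my guess at the truncated key function):

```python
from itertools import groupby

line_a = '12345678abcdefgh12345678'
# Group consecutive characters by whether they are digits,
# then join each group back into a string.
out = [''.join(g) for k, g in groupby(line_a, lambda x: x.isdigit())]
# out is ['12345678', 'abcdefgh', '12345678']
```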
On 5/11/2009 6:28 AM Nils Wagner apparently wrote:
> How can I convert a list of arrays into one array ?
Do you mean one long array, so that ``concatenate``
is appropriate, or a 2d array, in which case you
can just use ``array``.
But your example looks like you should preallocate the
larger array
On 5/6/2009 10:00 AM Talbot, Gerry apparently wrote:
> for n in xrange(1,N):
> y[n] = A*x[n] + B*y[n-1]
So, x is known before you start?
How big is N? Also, is y.shape (N,)?
Do you need all of y or only y[N]?
Alan Isaac
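If x really is known in advance, that loop is a first-order linear filter, and scipy.signal.lfilter computes it without a Python loop; a hedged sketch (the values of A and B are made up):

```python
import numpy as np
from scipy.signal import lfilter

A, B = 0.5, 0.9
N = 1000
np.random.seed(0)
x = np.random.rand(N)

# y[n] = A*x[n] + B*y[n-1]  corresponds to  b=[A], a=[1, -B]
y = lfilter([A], [1.0, -B], x)

# Check against the explicit loop (zero initial state):
y_loop = np.empty(N)
y_loop[0] = A * x[0]
for n in range(1, N):
    y_loop[n] = A * x[n] + B * y_loop[n - 1]
```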
On 4/16/2009 5:06 AM dmitrey apparently wrote:
> I have orthonormal set of vectors B = [b_0, b_1,..., b_k-1],
> b_i from R^n (k may be less than n), and vector a from R^n
>
> What is most efficient way in numpy to get r from R^n and c_0, ...,
> c_k-1 from R:
> a = c_0*b_0+...+c_k-1*b_k-1 + r
> (r
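With the b_i stacked as the columns of a matrix B, orthonormality gives c = B^T a and r = a - B c; a sketch (QR on random data is used here only to manufacture an orthonormal set):

```python
import numpy as np

np.random.seed(0)
n, k = 5, 3
# Orthonormal set of k vectors in R^n, columns of B:
B, _ = np.linalg.qr(np.random.rand(n, k))

a = np.random.rand(n)
c = np.dot(B.T, a)    # coefficients c_0, ..., c_{k-1}
r = a - np.dot(B, c)  # residual, orthogonal to every b_i
```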
Looks good.
Cheers,
Alan Isaac
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.test()
Running unit tests for numpy
NumPy version 1.3.0rc2
NumPy is installed
On 3/30/2009 5:16 PM Bruce Southey apparently wrote:
> It is now official that Python will switch to Mercurial (Hg):
> http://thread.gmane.org/gmane.comp.python.devel/102706
>
> Not that it directly concerns me, but this is rather surprising given:
> http://www.python.org/dev/peps/pep-0374/
http
On 3/28/2009 9:26 AM David Cournapeau apparently wrote:
> I am pleased to announce the release of the rc1 for numpy
> 1.3.0. You can find source tarballs and installers for both Mac OS X
> and Windows on the sourceforge page:
> https://sourceforge.net/projects/numpy/
Was the Python 2.6 Superpack i
On 3/27/2009 6:48 AM David Cournapeau apparently wrote:
> To build the numpy .dmg mac os x installer, I use a script from the
> adium project, which uses applescript and some mac os x black magic. The
> script seems to be GPL, as adium itself:
It might be worth a query to see if the
author wo
I am really grateful to have NumPy on Python 2.6!
Alan Isaac
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Running unit tests for numpy
NumPy version
On 1/26/2009 2:37 PM Pauli Virtanen apparently wrote:
> Fixed, no need to be dissappointed any more.
Thanks!
Alan
PS I hope "disappointed" did not sound so strong
as to be discourteous. Email is tricky, and I
tend to write mine with dangerous speed.
On 1/26/2009 12:48 PM Pierre GM apparently wrote:
> Shouldn't we refer to the new doc.scipy.org instead of Travis' site ?
Not in my opinion: Travis wrote a book,
which is what is being cited.
The docs link is in fact to the same tramy site.
I must add that IMO, it would be a courtesy
for the do
@MANUAL{ascher.dubois.hinsen.hugunin.oliphant-1999-np,
author = {Ascher, David and Paul F. Dubois and Konrad Hinsen and James
Hugunin and Travis Oliphant},
year = 1999,
title= {Numerical Python},
edition = {UCRL-MA-128569},
address = {Livermore, CA},
Tim Michelsen wrote:
> did you already come to a conclusion regarding this cite topic?
> Did you try to run the bibtext extension for Sphinx?
> If so, please update the documentation guidelines.
I hope we reached agreement that the documentation
should use reST citations and not reST footnotes.
Y
On 1/13/2009 1:55 PM Tim Michelsen apparently wrote:
> Please have a look at:
> http://bitbucket.org/birkenfeld/sphinx/issue/63/make-sphinx-read-bibtex-files-for
>
> And the example Sphinx project at:
> #3 -
> http://bitbucket.org/birkenfeld/sphinx/issue/63/make-sphinx-read-bibtex-files-for#comme
> 2009/1/13 Alan G Isaac :
>> There really is no substitute for using real cite keys
>> and a central database of citations.
On 1/13/2009 9:44 AM Stéfan van der Walt apparently wrote:
> How do you propose getting the citations into the docstrings? Each
> docstring needs a c
> 2009/1/12 Alan G Isaac :
>> Numerical keys will clearly *not* be consistent.
>> The same key will refer to different citations
>> on different pages, and key width will not be
>> uniform.
On 1/12/2009 2:35 AM Stéfan van der Walt apparently wrote:
> We automatic
On 1/12/2009 9:08 AM j...@physics.ucf.edu apparently wrote:
> For citation keys, what's wrong with good old author-year format?
> Most scientific journals use it (Abt 1985).
> Abt, H. 1985. Harold Abt used to publish surveys of things like
> citations when he was ApJ editor in the 1980s but I
On 1/11/2009 4:13 PM Stéfan van der Walt apparently wrote:
> Thank you for your feedback. Yes, this is a problem. In a way,
> RestructuredText is partially to blame for not providing numerical
> citation markup.
I do not agree.
I cannot think of any bibliography tool that uses
numerical citati
The docstring standard at
http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard
suggests a citation reference format that is not compatible
with reStructuredText. Quoting from
http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#citations:
Citat
A Tuesday 06 January 2009, Franck Pommereau escrigué:
> s = {} # sum of y values for each distinct x (as keys)
> n = {} # number of summed values (same keys)
> for x, y in zip(X, Y) :
> s[x] = s.get(x, 0.0) + y
> n[x] = n.get(x, 0) + 1
Maybe this is not so bad with a couple changes?
from
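A NumPy version of the same accumulation, assuming the goal is the mean of y for each distinct x (np.unique with return_inverse plus np.bincount; the data are made up):

```python
import numpy as np

X = np.array([1, 2, 1, 3, 2, 1])
Y = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])

keys, inverse = np.unique(X, return_inverse=True)
sums = np.bincount(inverse, weights=Y)  # s: sum of y per distinct x
counts = np.bincount(inverse)           # n: number of summed values
means = sums / counts
```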
On 12/18/2008 5:56 AM Prashant Saxena apparently wrote:
> ST = np.empty((), dtype=np.float32)
> ST = np.append(ST, 10.0)
If you really need to append elements,
you probably want to use a list and
then convert to an array afterwards.
But if you know your array size,
you can preallocate memory and
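Both alternatives in a sketch (sizes and values are made up):

```python
import numpy as np

# If the final size is unknown: collect in a list, convert once.
vals = []
for i in range(5):
    vals.append(10.0 * i)
ST = np.array(vals, dtype=np.float32)

# If the size is known: preallocate and fill in place.
ST2 = np.empty(5, dtype=np.float32)
for i in range(5):
    ST2[i] = 10.0 * i
```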
On 12/16/2008 1:29 AM Jarrod Millman apparently wrote:
> Yes. Please don't start new moin wiki documentation. We have a good
> solution for documentation that didn't exist when the moin
> documentation was started. Either put new docs in the docstrings or
> in the scipy tutorial.
OK, in this c
> On Mon, Dec 15, 2008 at 9:21 PM, Alan G Isaac wrote:
>> I noticed that unique1d is not documented on the
>> Numpy Example List http://www.scipy.org/Numpy_Example_List
>> but is documented on the Numpy Example List with Doc
>> http://www.scipy.org/Numpy_Example_List
On 12/15/2008 7:53 PM Robert Kern apparently wrote:
> That basic idea is what unique1d() does; however, it uses numpy
> primitives to keep the heavy lifting in C instead of Python.
I noticed that unique1d is not documented on the
Numpy Example List http://www.scipy.org/Numpy_Example_List
but is
On 12/15/2008 6:01 PM Michael Gilbert apparently wrote:
> According to wikipedia [1], some common Mersenne twister algorithms
> use a linear congruential gradient (LCG) to generate seeds. LCGs have
> been known to produce poor random numbers. Does numpy's Mersenne
> twister do this? And if so, i
Hanno Klemm wrote:
> I the following problem: I have a relatively long array of points
> [(x0,y0), (x1,y1), ...]. Apparently, I have some duplicate entries, which
> prevents the Delaunay triangulation algorithm from completing its task.
>
> Question, is there an efficent way, of getting rid of the
On 12/8/2008 3:32 PM James apparently wrote:
> I have a very simple plot, and the lines join point to point, however i
> would like to add a line of best fit now onto the chart, i am really new
> to python etc, and didnt really understand those links!
See the `slope_intercept` method of the OLS
If I know my data is already clean
and is handled nicely by the
old loadtxt, will I be able to turn
off the special handling in
order to retain the old load speed?
Alan Isaac
On 12/2/2008 8:12 AM Alan G Isaac apparently wrote:
> I hope this consideration remains prominent
> in this thread. Is the disappearance or
> read_array the reason for this change?
> What happened to it?
Apologies; it is only deprecated, not gone
On 12/2/2008 7:21 AM Joris De Ridder apparently wrote:
> As a historical note, we used to have scipy.io.read_array which at the
> time was considered by Travis too slow and too "grandiose" to be put
> in Numpy. As a consequence, numpy.loadtxt() was created which was
> simple and fast. Now it
On 11/20/2008 5:11 AM Hans Meine apparently wrote:
> I have a 2D matrix comprising a sequence of vectors, and I want to compute
> the
> norm of each vector. np.linalg.norm seems to be the best bet, but it does
> not
> support axis. Wouldn't this be a nice feature?
Of possible use until then:
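The truncated suggestion was presumably along these lines (my reconstruction: the norm of each row via an axis-wise sum of squares):

```python
import numpy as np

v = np.array([[3.0, 4.0], [5.0, 12.0]])
# Norm of each row vector, without np.linalg.norm:
norms = np.sqrt((v * v).sum(axis=1))
```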
On 11/20/2008 12:58 AM Scott Sinclair apparently wrote:
> A Notes section giving an overview of the algorithm has been added to
> the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/.
You beat me to it.
(I was awaiting editing privileges,
which I just received.)
Thanks!
Alan
Thanks Charles and Josh, but my question
about the documentation goal remains.
Here is how this came up. I mentioned
to a class I teach using NumPy that
solving Ax=b with an inverse is
computationally wasteful and also has
accuracy problems, and I recommend
using `solve` instead. So the ques
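The recommendation in a sketch (factor and solve with la.solve rather than forming the inverse; the system here is made up):

```python
import numpy as np
import numpy.linalg as la

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_solve = la.solve(A, b)      # recommended: factorize and solve
x_inv = np.dot(la.inv(A), b)  # wasteful and less accurate in general
```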
If I look at help(np.linalg.solve) I am
told what it does but not how it does it.
If I look at
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#solve
there is even less info. I'll guess the
algorithm is Gaussian elimination, but how
would I use the documentation to confirm this?
(
> Charles R Harris wrote:
>> Hmm... but I'm thinking one has to be clever here because the main
>> reason I heard for using logs was that normal floating point numbers
>> had insufficient range. So maybe something like
>>
>> logadd(a,b) = a + log(1 + exp(b - a))
>>
>> where a > b ?
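That identity in code, with the larger argument pulled out for stability (NumPy also ships np.logaddexp, which does the same thing):

```python
import numpy as np

def logadd(a, b):
    """log(exp(a) + exp(b)) without overflow: factor out the larger term."""
    hi = np.maximum(a, b)
    lo = np.minimum(a, b)
    return hi + np.log1p(np.exp(lo - hi))
```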
On 11/5/2008 1
On 10/29/2008 3:43 PM Robert Kern wrote:
> The defining characteristic is
> that "x = y" should be equivalent to "x = x y" except
> possibly for *optional* in-place semantics.
This gets at a bit of the Language Reference that I've
never understood.
when possible, the actual operation
On 10/25/2008 6:07 PM I. Soumpasis wrote:
> The programs are GPL licensed. More info on the section of copyrights
> http://wiki.deductivethinking.com/wiki/Deductive_Thinking:Copyrights.
> I hope it is ok,
Well, that depends what you mean by "ok".
Obviously, the author picks the license s/he pref
On 10/25/2008 4:14 PM I. Soumpasis apparently wrote:
> http://blog.deductivethinking.com/?p=29
This is cool.
But I do not see a license.
May I hope this is released under the new BSD license,
like the packages it depends on?
Thanks,
Alan Isaac
On 10/20/2008 5:20 AM Andrea Gavana apparently wrote:
> this is probably a very silly question, but combinatorial math is
> not exactly my strength and I am not even sure on how to formulate the
> question. I apologize if it is a very elementary problem.
> Let's suppose that I have 60 oil wells
On 10/15/2008 4:26 PM Robert Kern apparently wrote:
> Which bits?
Those in lapack_lite?
Alan Isaac
On 10/14/2008 9:23 PM frank wang apparently wrote:
> I have a large ndarray that I want to dump to a file. I know that I can
> use a for loop to write one data at a time. Since Python is a very
> powerfully language, I want to find a way that will dump the data fast
> and clean. The data can be
Linda Seltzer wrote:
> Where is the moderator? Please get these condescending, demeaning personal
> comments off of this list. I asked technical question. Now please send
> technical information only.
The problem is, you did not just ask
for technical information. You also
accused people of bei
On 10/12/2008 2:39 AM Linda Seltzer apparently wrote:
> Please, no demeaning statements like "you forgot
> a parenthesis" or "you were using someone else's code"
> - just the lines of code for a file that actually *works.*
Those statements are not demeaning; lighten up.
And the answer was corr
On 10/11/2008 10:19 AM Linda Seltzer apparently wrote:
> Please do not forawrd anyone's e-mail to a list without
> permission.
I'm afraid that is a reversal of discussion list practice.
People on this list are not offering to participate in
private conversations. They are offering to participa
>>> http://mentat.za.net/numpy/numpy_advanced_slides/
Alan G Isaac wrote:
> Do you know why the display gets muddled if
> you switch to full screen on FireFox?
I received this reply:
Whenever you resize an S5 display (switch to
fullscreen or just resize the wi
>> http://mentat.za.net/numpy/numpy_advanced_slides/
Zachary Pincus wrote:
> Those slides are really useful! Thanks a ton.
Nice content!
And I have to add,
S5 produces a beautiful show.
Alan Isaac
PS What did you use to produce the 3d figures?
PPS Do you know why the display gets muddled if
yo
On 10/8/2008 10:29 AM Ravi apparently wrote:
> I sometimes wonder about the motivation for an
> unpaid volunteer to take on an utterly thankless job in which help is never
> forthcoming from users ...
> Thank for taking on this arduous task.
See, it is not entirely thankless. ;-)
I would like