Re: [Numpy-discussion] Strange PyArray_FromObject() behavior

2012-02-16 Thread Val Kalatsky
Hi Bill,

Looks like you are running a very fresh version of numpy.
Without knowing the build version and what's going on in the extension
module I can't tell you much.
The usual suspects would be:
1) Numpy bug, not too likely.
2) Incorrect use of PyArray_FromObject, you'll need to send more info.
3) Something is seriously corrupted, probably not the case, because
segfault would follow quickly.

Please provide more info.
Val

PS Is it something related to what we'll be working on (Trilinos)?
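
For a quick sanity check from the Python side (np.asarray goes through essentially the same conversion machinery that PyArray_FromObject wraps), a plain Python int should come back as a 0-d integer array, not a timedelta64:

```python
import numpy as np

# Expected conversion of a plain Python int: a 0-d array of integer kind.
x = np.asarray(5)
assert x.shape == ()        # scalar (0-d) array
assert x.dtype.kind == 'i'  # signed integer, typically C long
```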


On Thu, Feb 16, 2012 at 11:09 AM, Spotz, William F wrote:

>  I have a user who is reporting tests that are failing on his platform.
>  I have not been able to reproduce the error on my system, but working with
> him, we have isolated the problem to unexpected results when
> PyArray_FromObject() is called.  Here is the chain of events:
>
>  In python, an integer is calculated.  Specifically, it is
>
>  len(result.errors) + len(result.failures)
>
>  where result is a unit test result object from the unittest module.  I
> had him verify that this value was in fact a python integer.  In my
> extension module, this PyObject gets passed to the PyArray_FromObject()
> function in a routine that comes from numpy.i.  What I expect, and what I
> typically get, is a numpy scalar array of type C long.  I had my user print
> the result using PyObject_Print() and what he got was
>
>  array([0:00:00], dtype=timedelta64[us])
>
>  I am stuck as to why this might be happening.  Any ideas?
>
>  Thanks
>
>  ** Bill Spotz  **
> ** Sandia National Laboratories  Voice: (505)845-0170  **
> ** P.O. Box 5800 Fax:   (505)284-0154  **
> ** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] is there an efficient way to get a random set of subsets/combinations?

2012-02-20 Thread Val Kalatsky
Hi Slava,

Since your k is only 10, here is a quickie:

import numpy as np
arr = np.arange(n)
for i in range(k):
    np.random.shuffle(arr)
    print np.sort(arr[:p])

If you ever get non-unique entries in a set of k=10 for your n and p,
consider yourself lucky:)
Val
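
For completeness, a sketch that enforces uniqueness explicitly (helper name hypothetical, same shuffle idea as above):

```python
import numpy as np

def random_subsets(n, p, k, seed=None):
    # Draw k distinct p-subsets of range(n) without enumerating all C(n, p)
    # combinations; duplicates are astronomically unlikely for large n, p.
    rng = np.random.RandomState(seed)
    seen = set()
    while len(seen) < k:
        seen.add(tuple(np.sort(rng.permutation(n)[:p])))
    return [np.array(s) for s in seen]

subsets = random_subsets(100, 50, 10, seed=0)
```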

On Mon, Feb 20, 2012 at 10:35 PM, Yaroslav Halchenko
wrote:

> Hi to all Numeric  Python experts,
>
> could not think of a mailing list with better fit to my question which
> might
> have an obvious answer:
>
> straightforward (naive) Python code to answer my question would be
> something like
>
> import random, itertools
> n,p,k=100,50,10  # don't try to run with these numbers! ;)
> print random.sample(list(itertools.combinations(range(n), p)), k)
>
> so the goal is to get k (non-repeating) p-subsets of n, where n and p
> prohibitively large to first populate the full set of combinations.
>
> Thank you in advance ;-)
> --
> =--=
> Keep in touch www.onerussian.com
> Yaroslav Halchenko www.ohloh.net/accounts/yarikoptic


Re: [Numpy-discussion] Determining the 'viewness' of an array, and flags.owndata confusion

2012-02-28 Thread Val Kalatsky
Viewness is in the eyes of the beholder.
You have to use indirect methods to figure it out.
Probably the most robust approach is to go up the base chain until you get
None.

In [71]: c1=np.arange(16)
In [72]: c2=c1[::2]
In [73]: c4=c2[::2]
In [74]: c8=c4[::2]
In [75]: id(c8.base)==id(c4)
Out[75]: True
In [76]: id(c8.base.base.base)==id(c1)
Out[76]: True

The graph of dependencies is only a tree, so sooner or later you'll get to
the root.

Val
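
A minimal sketch of that base-chain walk (helper name hypothetical):

```python
import numpy as np

def ultimate_base(a):
    # Walk the .base chain to the array that actually owns the memory.
    while a.base is not None:
        a = a.base
    return a

c1 = np.arange(16)
c8 = c1[::2][::2][::2]
assert ultimate_base(c8) is c1          # a view, however deeply nested

a = np.zeros((3, 3))
a_fancy = a[:, [0, 1]]                  # fancy indexing copies the data,
assert ultimate_base(a_fancy) is not a  # so the chain never leads back to a
```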

On Tue, Feb 28, 2012 at 8:06 PM, Jonathan Rocher wrote:

> Thank you all for your answers. Kurt and I were training new developers on
> numpy and telling them that
> - fancy indexing creates a copy
> - owndata was a good way to know if an array is a view or a copy.
> It turns out that these were both correct statements but we didn't think
> that the fancy indexing would sometimes return a view after making a copy
> (BTW is that necessary? It sounds to me like it is complicating things for
> no real benefit.)
>
> So a more precise question then is "is a_fancy a view on a's data or does
> it copy the data and its data is independent?" and the answer is probably:
> since "a_fancy1.base is a" is false then the 2 array's data are
> independent from each other.
>
> Do I have it right now?
>
> Jonathan
>
>
> On Tue, Feb 28, 2012 at 5:31 PM, Nathaniel Smith  wrote:
>
>> On Tue, Feb 28, 2012 at 11:01 PM, Kurt Smith  wrote:
>> > For an arbitrary numpy array 'a', what does 'a.flags.owndata' indicate?
>>
>> I think what it really indicates is whether a's destructor should call
>> free() on a's data pointer.
>>
>> > I originally thought that owndata is False iff 'a' is a view.  But
>> > that is incorrect.
>> >
>> > Consider the following:
>> >
>> > In [119]: a = np.zeros((3,3))
>> >
>> > In [120]: a.flags.owndata  # should be True; zeros() creates and
>> > returns a non-view array.
>> > Out[120]: True
>> >
>> > In [121]: a_view1 = a[2:, :]
>> >
>> > In [122]: a_view1.flags.owndata   # expected to be False
>> > Out[122]: False
>> >
>> > In [123]: a_fancy1 = a[[0,1], :]
>> >
>> > In [124]: a_fancy1.flags.owndata  # expected to be True, a_fancy1 is a
>> > fancy-indexed array.
>> > Out[124]: True
>> >
>> > In [125]: a_fancy2 = a[:, [0,1]]
>> >
>> > In [126]: a_fancy2.flags.owndata # expected to be True, a_fancy2 is a
>> > fancy-indexed array.
>> > Out[126]: False
>> >
>> > So when I query an array's flags.owndata, what is it telling me?  What
>> > I want to know is whether an array is a view or not.  If flags.owndata
>> > has nothing to do with the 'viewness' of an array, how would I
>> > determine if an array is a view?
>> >
>> > In the previous example, a_fancy2 does not own its data, as indicated
>> > by 'owndata' being False.  But when I modify a_fancy2, 'a' is not
>> > modified, as expected, but contrary to what 'owndata' would seem to
>> > indicate.
>>
>> It looks like the fancy indexing code in this case actually creates a
>> new array, and then instead of returning it directly, returns a
>> (reshaped) view on this new array. If you look at a_fancy2.base,
>> you'll see this new array.
>>
>> So: a_fancy2 *is* a view... it's just not a view of 'a'. It's a view
>> of this other array.
>>
>> > If I cannot use flags.owndata, what is a reliable way to determine
>> > whether or not an array is a view?
>>
>> owndata *is* a reliable way to determine whether or not an array is a
>> view; it just turns out that this is not a very useful question to
>> ask.
>>
>> What are you actually trying to do? There's probably another way to
>> accomplish it.
>>
>> -- Nathaniel
>
>
>
> --
> Jonathan Rocher, PhD
> Scientific software developer
> Enthought, Inc.
> jroc...@enthought.com
> 1-512-536-1057
> http://www.enthought.com
>
>
>


Re: [Numpy-discussion] subclassing array in c

2012-03-07 Thread Val Kalatsky
Seeing the backtrace would be helpful.
Can you do whatever leads to the segfault
from python run from gdb?
Val

On Wed, Mar 7, 2012 at 7:04 PM, Christoph Gohle
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>
> I have been struggling for quite some time now. Desperate as I am, now I
> need help.
>
> I was trying to subclass ndarrays in a c extension (see code below) and do
> constantly get segfaults. I have been checking my INCREF and DECREF stuff
> up and down but can't find the error. Probably I got something completely
> wrong... anybody able to help?
>
> Thanks,
> Christoph
> - -
> #include 
> #include 
>
> #include 
>
> static PyObject *SpamError;
>
> typedef struct {
>  PyArrayObject base;
>  PyDictObject* unit;
> } UnitArrayObject;
>
> PyTypeObject unitArrayObjectType;
>
> static int
> checkunit(PyObject *unit1, PyObject *unit2) {
>  return PyObject_Compare(unit1, unit2);
> }
>
> static PyObject *
> unitArray_new(PyTypeObject *cls, PyObject *args, PyObject *kwargs) {
>PyObject *data = NULL,
> *unit = NULL;
>  PyArray_Descr* dtype = NULL;
>  PyObject *res = NULL, *tmp = NULL;
>
>if (!PyArg_ParseTuple(args, "OO|O&", &data, &unit,
> PyArray_DescrConverter, &dtype)) {
>Py_XDECREF(dtype);
>return NULL;
>}
>
>  res = PyArray_FromAny(data, dtype, 0, 0, NPY_ENSURECOPY, NULL);
>if (res == NULL) {
>Py_XDECREF(dtype);
>//TODO: raise exception?
>return NULL;
>}
>
>if (PyObject_IsInstance(data, (PyObject*)cls)) {
>if (unit!=NULL &&
> !checkunit((PyObject*)((UnitArrayObject*)data)->unit,unit)) {
>Py_XDECREF(res);
>//TODO: raise exception
>return NULL;
>}
>} else {
>if (PyObject_IsTrue(unit)) {
>  tmp = res;
>res = PyArray_View((PyArrayObject*)res, NULL,
> &unitArrayObjectType);
>  if (tmp!=res) {
>Py_XDECREF(tmp);
>  }
>  ((UnitArrayObject*)res)->unit = (PyDictObject*)unit;
>  Py_INCREF(unit);
>  if (unit!=NULL) {
>  }
>}
>}
>return res;
> }
>
> static PyObject*
> unitArray__array_finalize__(PyObject* new, PyObject* args) {
>PyObject *attr = NULL, *tmp = NULL;
>  PyObject *parent = NULL;
>
>  if (!PyArg_ParseTuple(args, "O", &parent)) {
>return NULL;
>  }
>   if (parent!=NULL) {
> attr = PyObject_GetAttrString(parent, "unit");
> if (attr == NULL) {
>//parent has no 'unit' so we make a new empty one
>   attr = PyDict_New();
>   PyErr_Clear();
> }
>   }
>  tmp = (PyObject*)((UnitArrayObject*)new)->unit;
>((UnitArrayObject*)new)->unit = (PyDictObject*)attr;
>
>  Py_INCREF(Py_None);
>  return Py_None;
> }
>
> static PyObject*
> unitArray__array_wrap__(PyObject *self, PyObject *args) {
>PyObject *array = NULL, *context = NULL;
>
>if (!PyArg_ParseTuple(args, "OO", array, context)) {
>//TODO: raise exception
>return NULL;
>}
>
>printf("%s",PyString_AsString(PyObject_Str(context)));
>
>  Py_INCREF(array);
>  return array;
> }
>
>
> static PyMethodDef unitArrayMethods[] = {
>  {"__array_finalize__", unitArray__array_finalize__, METH_VARARGS, "array
> finalize method"},
>  {"__array_wrap__", unitArray__array_wrap__, METH_VARARGS, "array wrap
> method"},
>  {NULL, NULL, 0, NULL}
> };
>
> static PyMemberDef unitArrayMembers[] = {
>  {"unit", T_OBJECT, offsetof(UnitArrayObject, unit),  0, "dictionary
> containing unit info."},
>  {NULL, 0, 0, 0, NULL}
> };
>
> PyTypeObject unitArrayObjectType = {
>PyObject_HEAD_INIT(NULL)
>0,  /* ob_size*/
>"spam.UnitArray",   /* tp_name*/
>sizeof(UnitArrayObject),/* tp_basicsize   */
>0,  /* tp_itemsize*/
>0,  /* tp_dealloc */
>0,  /* tp_print   */
>0,  /* tp_getattr */
>0,  /* tp_setattr */
>0,  /* tp_compare */
>0,  /* tp_repr*/
>0,  /* tp_as_number   */
>0,  /* tp_as_sequence */
>0,  /* tp_as_mapping  */
>0,  /* tp_hash*/
>0,  /* tp_call*/
>0,  /* tp_str */
>0,  /* tp_getattro*/
>0,  /* tp_setattro*/
>0,  /* tp_as_buffer   */
>Py_TPFLAGS_DEFAULT,   

Re: [Numpy-discussion] subclassing array in c

2012-03-07 Thread Val Kalatsky
Tried it on my Ubuntu 10.10 box, no problem:

1) Saved as spampub.c
2) Compiled with (setup.py attached): python setup.py build_ext -i
3) Tested from ipython:
In [1]: import spampub
In [2]: ua=spampub.UnitArray([0,1,2,3.0],'liter')
In [3]: ua
Out[3]: UnitArray([ 0.,  1.,  2.,  3.])
In [4]: ua.unit
Out[4]: 'liter'
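
For comparison, the same unit-carrying idea as a pure-Python ndarray subclass (a sketch only, not the C extension under discussion):

```python
import numpy as np

class UnitArray(np.ndarray):
    # Minimal Python-level sketch of an array subclass carrying a 'unit'.
    def __new__(cls, data, unit=None):
        obj = np.asarray(data).view(cls)
        obj.unit = unit
        return obj

    def __array_finalize__(self, parent):
        # Called on views and copies; inherit the unit from the parent.
        self.unit = getattr(parent, 'unit', None)

ua = UnitArray([0, 1, 2, 3.0], 'liter')
assert ua.unit == 'liter'
assert ua[1:].unit == 'liter'   # views inherit the unit
```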


On Wed, Mar 7, 2012 at 7:15 PM, Val Kalatsky  wrote:

>
> Seeing the backtrace would be helpful.
> Can you do whatever leads to the segfault
> from python run from gdb?
> Val
>
>
> On Wed, Mar 7, 2012 at 7:04 PM, Christoph Gohle <
> christoph.go...@mpq.mpg.de> wrote:
>
>> [original message and code quoted in full; snipped]

Re: [Numpy-discussion] subclassing array in c

2012-03-08 Thread Val Kalatsky
Hi Christoph,

I've just tried
a=[spampub.UnitArray(i,{'s':i}) for i in xrange(1000)]
and everything looks fine on my side.
Probably my test environment is too different to give comparable results:

In [3]: call(["uname", "-a"])
Linux ubuntu 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:32:27 UTC 2010
x86_64 GNU/Linux
In [4]: np.__version__
Out[4]: '1.5.1'

I am afraid I will not be able to help testing it on super-nova versions of
numpy.
Cheers
Val


On Thu, Mar 8, 2012 at 1:49 AM, Christoph Gohle
wrote:

> Dear Val,
>
> I agree that more detail is needed. Sorry for that it was late yesterday.
>
>  I am running Python 2.6.1, numpy development branch
> (numpy-2.0.0.dev_20101104-py2.6-macosx-10.6-universal.egg). maybe I should
> switch to release?
>
> I compile with your setup.py using 'python setup.py build_ext -i' and run
> the commands below in ipython. As you can see, I don't get a crash for a
> single call to the constructor, consistent with your observation. And it
> looks like, one has to use dict in the unit to make it crash.
>
> Can you make sense out of that?
>
>
> In [1]: import spampub
>
> In [4]: spampub.UnitArray(1,{'s':1})
> Out[4]: UnitArray(1)
>
> In [6]: a=[spampub.UnitArray(i,{'s':i}) for i in xrange(1000)]
> Segmentation fault
>
> backtrace is the following:
>
> Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
> Exception Codes: KERN_INVALID_ADDRESS at 0x0021
> Crashed Thread:  0  Dispatch queue: com.apple.main-thread
>
> Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
> 0   org.python.python 0x0001000b0b8e PyObject_GC_Track +
> 1473
> 1   org.python.python 0x0001000b1255 _PyObject_GC_Malloc
> + 191
> 2   org.python.python 0x0001000b12d0 _PyObject_GC_New + 21
> 3   org.python.python 0x00010003a856 PyDict_New + 116
> 4   org.python.python 0x00010003a99c _PyDict_NewPresized
> + 24
> 5   org.python.python 0x0001000880cb PyEval_EvalFrameEx +
> 11033
> 6   org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 7   org.python.python 0x00010008735d PyEval_EvalFrameEx +
> 7595
> 8   org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 9   org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 10  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 11  org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 12  org.python.python 0x0001000892e1 PyEval_EvalFrameEx +
> 15663
> 13  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 14  org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 15  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 16  org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 17  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 18  org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 19  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 20  org.python.python 0x00010008935e PyEval_EvalFrameEx +
> 15788
> 21  org.python.python 0x00010008acce PyEval_EvalCodeEx +
> 1803
> 22  org.python.python 0x00010008ad61 PyEval_EvalCode + 54
> 23  org.python.python 0x0001000a265a Py_CompileString + 78
> 24  org.python.python     0x0001000a2723 PyRun_FileExFlags +
> 150
> 25  org.python.python 0x0001000a423d
> PyRun_SimpleFileExFlags + 704
> 26  org.python.python 0x0001000b0286 Py_Main + 2718
> 27  org.python.python.app 0x00010e6c start + 52
>
> On 08.03.2012 at 02:36, Val Kalatsky wrote:
>
>
> [earlier replies quoted in full; snipped]
>

Re: [Numpy-discussion] about sympy

2012-03-10 Thread Val Kalatsky
Can you?
The question should be: Why sympy does not have Fresnel integrals?
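
sin(x**2) has no elementary antiderivative; its integral is the Fresnel integral S. A quick numerical sketch with plain NumPy (helper name hypothetical):

```python
import numpy as np

def fresnel_sin(x, n=200001):
    # S(x) = integral of sin(t**2) from 0 to x, composite trapezoidal rule.
    t = np.linspace(0.0, x, n)
    y = np.sin(t ** 2)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

# The improper integral over [0, inf) converges to sqrt(pi/8) ~ 0.6267;
# at a finite cutoff the oscillatory tail contributes only O(1/x).
assert abs(fresnel_sin(30.0) - np.sqrt(np.pi / 8)) < 0.02
```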

On Sun, Mar 11, 2012 at 1:06 AM, aa  wrote:

> why sympy cannot integrate sin(x**2)??


Re: [Numpy-discussion] float96 on windows32 is float64?

2012-03-15 Thread Val Kalatsky
I just happened to have an xp64 VM running:
My version of numpy (1.6.1) does not have float128 (see more below what I
get in ipython session).
If you need to test something else please let me know.
Val
---

Enthought Python Distribution -- www.enthought.com

Python 2.7.2 |EPD 7.2-2 (64-bit)| (default, Jul  3 2011, 15:34:33) [MSC
v.1500 6
4 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.

IPython 0.12 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help  -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

Welcome to pylab, a matplotlib-based Python environment [backend: WXAgg].
For more information, type 'help(pylab)'.

In [1]: import numpy as np

In [2]: np.__version__
Out[2]: '1.6.1'

In [3]: np.flo
np.floatnp.float32  np.float_   np.floor
np.float16  np.float64  np.floating np.floor_divide

In [3]: np.flo

On Thu, Mar 15, 2012 at 11:25 PM, Matthew Brett wrote:

> Hi,
>
> On Thu, Mar 15, 2012 at 9:17 PM, David Cournapeau 
> wrote:
> >
> >
> > On Thu, Mar 15, 2012 at 11:10 PM, Matthew Brett  >
> > wrote:
> >>
> >> Hi,
> >>
> >> Am I right in thinking that float96 on windows 32 bit is a float64
> >> padded to 96 bits?
> >
> >
> > Yes
> >
> >>
> >>  If so, is it useful?
> >
> >
> > Yes: this is what allows you to use dtype to parse complex binary files
> > directly in numpy without having to care so much about those details. And
> > that's how it is defined on windows in any case (C standard only forces
> you
> > to have sizeof(long double) >= sizeof(double)).
>
> I propose then to rename this one to float64_96 .
>
> The nexp value in finfo(np.float96) is incorrect I believe, I'll make
> a ticket for it.
>
> >>  Has anyone got a windows64
> >> box to check float128 ?
> >
> >
> > Too lazy to check on my vm, but I am pretty sure it is 16 bytes on
> windows
> > 64.
>
> Should you have time to do that, could you confirm it's also a padded
> float64 and that nexp is still (incorrectly) 15?   That would be a
> great help,
>
> Thanks,
>
> Matthew


Re: [Numpy-discussion] float96 on windows32 is float64?

2012-03-15 Thread Val Kalatsky
It does look like a joke.
Here is the output of print np.finfo(np.longdouble):

In [2]: np.__version__
Out[2]: '1.6.1'

In [3]: np.flo
np.floatnp.float32  np.float_   np.floor
np.float16  np.float64  np.floating np.floor_divide

In [3]: print np.finfo(np.longdouble)
Machine parameters for float64
-
precision= 15   resolution= 1e-15
machep=   -52   eps=2.22044604925e-16
negep =   -53   epsneg= 1.11022302463e-16
minexp= -1022   tiny=   2.22507385851e-308
maxexp=  1024   max=1.79769313486e+308
nexp  =11   min=-max
-
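
The same comparison can be scripted directly; on platforms where long double is just a padded double, finfo reports float64 precision even though the itemsize is larger (the values below vary by platform):

```python
import numpy as np

ld = np.finfo(np.longdouble)
d = np.finfo(np.float64)
print(np.dtype(np.longdouble).itemsize, ld.nmant, ld.nexp)
# A padded double shows nmant == 52 and nexp == 11, exactly like float64;
# a true 80-bit extended double shows nmant == 63 and nexp == 15.
assert np.dtype(np.longdouble).itemsize >= np.dtype(np.float64).itemsize
assert ld.nmant >= d.nmant and ld.nexp >= d.nexp
```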


On Thu, Mar 15, 2012 at 11:38 PM, Matthew Brett wrote:

> Hi,
>
> On Thu, Mar 15, 2012 at 9:33 PM, Val Kalatsky  wrote:
> >
> > I just happened to have an xp64 VM running:
> > My version of numpy (1.6.1) does not have float128 (see more below what I
> > get in ipython session).
> > If you need to test something else please let me know.
>
> Thanks a lot - that's helpful.  What do you get for:
>
> print np.finfo(np.longdouble)
>
> ?
>
> Best,
>
> Matthew


Re: [Numpy-discussion] About np array and recarray

2012-03-21 Thread Val Kalatsky
Will this do what you need to accomplish?

import datetime
import numpy as np

np.array([(datetime.datetime.strptime(i[0], "%Y-%m-%d").date(), i[1])
          for i in a], dtype=[('date', 'object'), ('count', 'int')])

Val
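
The list-of-lists failure in the question comes from structured-array construction expecting one tuple per record; a minimal sketch of the workaround:

```python
import numpy as np

a = [['2000-01-01', 2], ['2000-01-02', 3]]
dt = [('date', 'object'), ('count', 'int')]
# np.array(a, dtype=dt) fails: inner records must be tuples, not lists.
rec = np.array([tuple(row) for row in a], dtype=dt)
assert rec['count'].sum() == 5
assert rec['date'][0] == '2000-01-01'
```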

On Wed, Mar 21, 2012 at 11:48 PM, Yan Tang  wrote:

> Hi,
>
> I am really confused on the np array or record array, and cannot
> understand how it works.
>
> What I want to do is that I have a normal python two dimensional
> array/list:
>
> a = [['2000-01-01', 2],['2000-01-02', 3]]
>
> I want to convert it to a recarray with this dtype [('date', 'object'),
> ('count', 'int')].  I tried multiple ways and none of them works.  And some
> of the tests show pretty odd behavior.
>
> This is good, and it is almost what i want:
>
> >>> import numpy as np
> >>> a = [('2000-01-01', 2), ('2000-01-02', 3)]
> >>> np.array(a, dtype=[('date', 'object'), ('count', 'int')])
> array([('2000-01-01', 2), ('2000-01-02', 3)],
>   dtype=[('date', '|O8'), ('count', '
> Why this doesn't work?!
>
> >>> a = [['2000-01-01', 2],['2000-01-02', 3]]
> >>> np.array(a, dtype=[('date', 'object'), ('count', 'int')])
> Traceback (most recent call last):
>   File "", line 1, in 
> ValueError: tried to set void-array with object members using buffer.
>
> Why can this cause segmentation fault?!
>
> >>> a = [['2000-01-01', 2],['2000-01-02', 3]]
> >>> np.ndarray((len(a),), buffer=np.array(a), dtype=[('date', 'object'),
> ('count', 'int')])
> Segmentation fault (And python quit!)
>
> Python version 2.6.5
>
> On this reference page,
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html
>
> >>> x = np.array([(1,2),(3,4)])
> >>> x
> array([[1, 2],
>[3, 4]])
> >>> np.array([[1, 2], [3, 4]])
> array([[1, 2],
>[3, 4]])
>
> Can anyone help me about this?
>
> Thanks.
>
>


Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-02 Thread Val Kalatsky
Both results are correct.
There are 2 factors that make the results look different:
1) The order: the 2nd eigenvector of the numpy solution corresponds to the
1st eigenvector of your solution,
note that the vectors are written in columns.
2) The phase: an eigenvector can be multiplied by an arbitrary phase factor
with absolute value = 1.
As you can see this factor is -1 for the 2nd eigenvector
and -0.99887305445887753-0.047461785427773337j for the other one.
Val
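
A sketch of fixing the phase explicitly (making the largest-magnitude component of each eigenvector real and positive) so that two solvers' outputs can be compared directly:

```python
import numpy as np

A = np.array([[0.6 + 0.0j, -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])
w, V = np.linalg.eig(A)

# Each column of V is defined only up to a unit phase factor exp(1j*phi).
# Fix the gauge: make the largest-magnitude component real and positive.
for k in range(V.shape[1]):
    j = np.argmax(np.abs(V[:, k]))
    V[:, k] *= np.conj(V[j, k]) / np.abs(V[j, k])
    assert abs(V[j, k].imag) < 1e-12 and V[j, k].real > 0

assert np.allclose(np.dot(A, V), V * w)  # still eigenvectors after rephasing
```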

2012/4/2 Hongbin Zhang 

>  Dear Python-users,
>
> I am currently very confused about the Scipy routine to obtain the
> eigenvectors of a complex matrix.
> In attached you find two files to diagonalize a 2X2 complex Hermitian
> matrix, however, on my computer,
>
> If I run python, I got:
>
> [[ 0.80322132+0.j  0.59500941+0.02827207j]
>  [-0.59500941+0.02827207j  0.80322132+0.j]]
>
> If I compile the fortran code, I got:
>
>  ( -0.595009410289, -0.028272068905) (  0.802316135182,  0.038122316497)
>  ( -0.803221321796,  0.) ( -0.595680709955,  0.)
>
> From the scipy webpage, it is said that numpy.linalg.eig() provides
> nothing but
> an interface to lapack zheevd subroutine, which is used in my fortran code.
>
>
> Would somebody be kind to tell me how to get consistent results?
>
> Many thanks in advance.
>
> Best wishes,
>
> Hongbin
>
>
>
>
> Ad hoc, ad loc
> and quid pro quo
>
>
> ---   Jeremy Hilary Boob
>
>


Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-02 Thread Val Kalatsky
BTW this extra degree of freedom can be used to "rotate" the eigenvectors
along the unit circle (multiplication by exp(j*phi)). To those of physical
inclinations
it should remind of gauge fixing (vector potential in EM/QM).
These "rotations" can be used to make one (any) non-zero component of each
eigenvector be positive real number.
Finally to the point: it seems that numpy.linalg.eig uses these "rotations"
to turn the
diagonal elements in the eigenvector matrix to real positive numbers,
that's why the numpy solutions looks neat.
Val

PS Probably nobody cares to know, but the phase factor I gave in my 1st
email should be negated:
0.99887305445887753+0.047461785427773337j
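
That diagonal-normalizing rotation can be sketched as follows (assuming no diagonal element of the eigenvector matrix is zero):

```python
import numpy as np

A = np.array([[0.6 + 0.0j, -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])
w, V = np.linalg.eig(A)

# Divide column k by the unit phase of V[k, k]; valid only if V[k, k] != 0.
phases = V.diagonal() / np.abs(V.diagonal())
V2 = V / phases                              # broadcasting rescales columns
assert np.allclose(np.dot(A, V2), V2 * w)    # eigenvectors preserved
assert np.allclose(V2.diagonal().imag, 0.0)  # diagonal now real...
assert np.all(V2.diagonal().real > 0)        # ...and positive
```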

On Mon, Apr 2, 2012 at 8:53 PM, Matthew Brett wrote:

> Hi,
>
> On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky  wrote:
> > Both results are correct.
> > There are 2 factors that make the results look different:
> > 1) The order: the 2nd eigenvector of the numpy solution corresponds to
> the
> > 1st eigenvector of your solution,
> > note that the vectors are written in columns.
> > 2) The phase: an eigenvector can be multiplied by an arbitrary phase
> factor
> > with absolute value = 1.
> > As you can see this factor is -1 for the 2nd eigenvector
> > and -0.99887305445887753-0.047461785427773337j for the other one.
>
> Thanks for this answer; for my own benefit:
>
> Definition: A . v = L . v  where A is the input matrix, L is an
> eigenvalue of A and v is an eigenvector of A.
>
> http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
>
> In [63]: A = [[0.6+0.0j,
> -1.97537668-0.09386068j],[-1.97537668+0.09386068j, -0.6+0.0j]]
>
> In [64]: L, v = np.linalg.eig(A)
>
> In [66]: np.allclose(np.dot(A, v), L * v)
> Out[66]: True
>
> Best,
>
> Matthew


Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-03 Thread Val Kalatsky
Interesting. I happen to know a little bit about Berry's phase
http://keck.ucsf.edu/~kalatsky/publications/PRL1998_BerryPhaseForLargeSpins.pdf
http://keck.ucsf.edu/~kalatsky/publications/PRA1999_SpectraOfLargeSpins-General.pdf
The latter one knocks out all point groups.
Probably you want to do something different, I cared about eigenvalues only
(BTW my Hamiltonians were carefully crafted).
Cheers
Val

PS I doubt anybody on this list cares to hear more about Berry's phase,
should take this discussion off-line


2012/4/3 Hongbin Zhang 

>  Hej Val,
>
> Thank you very much for your replies.
>
> Yes, I know that both eigenvectors are correct while they are indeed
> related
> to each other by unitary transformations (unitary matrices).
>
> Actually, what I am trying to do is to evaluate the Berry phase which is
> closely related to the gauge chosen. It is okay to apply an arbitrary
> phase to the eigenvectors, while to get the (meaningful) physical quantity
> the phase should be consistent for all the other eigenvectors.
>
> To my understanding, if I run both Fortran and python on the same computer,
> they should have the same phase (that is the arbitrary phase is
> computer-dependent). Maybe some additional "rotations" have been performed
> in
> python,
> but should this be written/commented somewhere in the man page?
>
> I will try to fix this by performing additional rotation to make the
> diagonal
> elements real and check whether this is the solution or not.
>
> Thank you all again, and of course more insightful suggestions are
> welcome.
>
> Regards,
>
>
> Hongbin
>
>
>
> Ad hoc, ad loc
> and quid pro quo
>
>
>   ---   Jeremy Hilary Boob
>
>
> --
> Date: Mon, 2 Apr 2012 22:19:55 -0500
> From: kalat...@gmail.com
> To: numpy-discussion@scipy.org
> Subject: Re: [Numpy-discussion] One question about the numpy.linalg.eig()
> routine
>
>
> BTW this extra degree of freedom can be used to "rotate" the eigenvectors
> along the unit circle (multiplication by exp(j*phi)). To those of physical
> inclinations
> it should remind of gauge fixing (vector potential in EM/QM).
> These "rotations" can be used to make one (any) non-zero component of each
> eigenvector be positive real number.
> Finally to the point: it seems that numpy.linalg.eig uses these
> "rotations" to turn the
> diagonal elements in the eigenvector matrix to real positive numbers,
> that's why the numpy solutions looks neat.
> Val
>
> PS Probably nobody cares to know, but the phase factor I gave in my 1st
> email should be negated:
> 0.99887305445887753+0.047461785427773337j
>
> On Mon, Apr 2, 2012 at 8:53 PM, Matthew Brett wrote:
>
> Hi,
>
> On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky  wrote:
> > Both results are correct.
> > There are 2 factors that make the results look different:
> > 1) The order: the 2nd eigenvector of the numpy solution corresponds to
> the
> > 1st eigenvector of your solution,
> > note that the vectors are written in columns.
> > 2) The phase: an eigenvector can be multiplied by an arbitrary phase
> factor
> > with absolute value = 1.
> > As you can see this factor is -1 for the 2nd eigenvector
> > and -0.99887305445887753-0.047461785427773337j for the other one.
>
> Thanks for this answer; for my own benefit:
>
> Definition: A . v = L . v  where A is the input matrix, L is an
> eigenvalue of A and v is an eigenvector of A.
>
> http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
>
> In [63]: A = [[0.6+0.0j,
> -1.97537668-0.09386068j],[-1.97537668+0.09386068j, -0.6+0.0j]]
>
> In [64]: L, v = np.linalg.eig(A)
>
> In [66]: np.allclose(np.dot(A, v), L * v)
> Out[66]: True
>
> Best,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
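For readers of the archive: Val's gauge-fixing recipe (rotate each eigenvector so that its diagonal entry becomes real and positive) can be sketched as below, using the matrix from Matthew's example. The phase step assumes the diagonal entries are non-zero; otherwise pick another component.

```python
import numpy as np

A = np.array([[0.6 + 0.0j, -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])
L, v = np.linalg.eig(A)

# Multiply each eigenvector (a column of v) by a unit phase factor so that
# its diagonal entry v[k, k] becomes a positive real number.
d = v.diagonal()
v_fixed = v / (d / np.abs(d))

# Still a valid eigendecomposition, and the diagonal is now real positive.
assert np.allclose(np.dot(A, v_fixed), L * v_fixed)
assert np.allclose(v_fixed.diagonal().imag, 0.0)
```

Scaling a column by a unit phase leaves A.v = L.v intact, which is why this is a free "gauge" choice.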


Re: [Numpy-discussion] Slice specified axis

2012-04-05 Thread Val Kalatsky
The only slicing short-cut I can think of is the Ellipsis object, but it's
not going to help you much here.
The alternatives that come to my mind are (1) manipulation of shape
directly and (2) building a string and running eval on it.
Your solution is better than (1), and (2) is a horrible hack, so your
solution wins again.
Cheers
Val

On Thu, Apr 5, 2012 at 2:52 PM, Tony Yu  wrote:

> Is there a way to slice an nd-array along a specified axis? It's easy to
> slice along a fixed axis, e.g.:
>
> axis = 0:
> >>> array[start:end]
>
> axis = 1:
> >>> array[:, start:end]
> ...
>
> But I need to do this inside of a function that accepts arrays of any
> dimension, and the user can operate on any axis of the array. My current
> solution looks like the following:
>
> >>> aslice = lambda axis, s, e: (slice(None),) * axis + (slice(s, e),)
> >>> array[aslice(axis, start, end)]
>
> which works, but I'm guessing that numpy has a more readable way of doing
> this that I've overlooked.
>
> Thanks,
> -Tony
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
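Tony's lambda generalizes cleanly to a small named helper built on tuple indexing; the sketch below (the helper name is mine) also shows `numpy.take`, which covers the same case with an explicit index array, at the cost of returning a copy rather than a view.

```python
import numpy as np

def axis_slice(arr, start, end, axis=0):
    """Return arr sliced with start:end applied to the given axis."""
    index = [slice(None)] * arr.ndim
    index[axis] = slice(start, end)
    return arr[tuple(index)]

a = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(axis_slice(a, 1, 3, axis=2), a[:, :, 1:3])

# np.take does the same job with explicit indices, but returns a copy,
# not a view into the original array.
assert np.array_equal(a.take(np.arange(1, 3), axis=2), a[:, :, 1:3])
```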


Re: [Numpy-discussion] Quaternion data type

2012-05-05 Thread Val Kalatsky
Hi Tod,

Would you consider bundling the quaternion dtype with your package?
I think everybody would win: your package would become stronger and
Martin's dtype would become easily available.
Thanks
Val

On Sat, May 5, 2012 at 6:27 AM, Tom Aldcroft
wrote:

> On Fri, May 4, 2012 at 11:44 PM, Ilan Schnell 
> wrote:
> > Hi Chuck,
> >
> > thanks for the prompt reply.  I was curious because
> > someone was interested in adding http://pypi.python.org/pypi/Quaternion
> > to EPD, but Martin and Mark's implementation of quaternions
> > looks much better.
>
> Hi -
>
> I'm a co-author of the above mentioned Quaternion package.  I agree
> the numpy_quaternion version would be better, but if there is no
> expectation that it will move forward I can offer to improve our
> Quaternion.  A few months ago I played around with making it accept
> arbitrary array inputs (with similar shape of course) to essentially
> vectorize the transformations.  We never got around to putting this in
> a release because of a perceived lack of interest / priorities... If
> this would be useful then let me know.
>
> Best,
> Tom
>
> > - Ilan
> >
> >
> > On Fri, May 4, 2012 at 5:36 PM, Charles R Harris
> >  wrote:
> >> Hi Ilan
> >>
> >> On Fri, May 4, 2012 at 3:38 PM, Ilan Schnell 
> wrote:
> >>>
> >>> Hello everyone,
> >>>
> >>> what is the plan for Quaternion data types in numpy?
> >>> I saw that during last years SciPy spring
> >>> https://github.com/martinling/numpy_quaternion
> >>> was started, but not updated or released since then.
> >>>
> >>
> >> That was Martin Ling, link and thread here . I'm not sure what happened
> with
> >> this but I suspect we are waiting for extension types to be fixed up in
> >> master. Mark had some thoughts along those lines.
> >>
> >> Chuck
> >>
> >>
> >> ___
> >> NumPy-Discussion mailing list
> >> NumPy-Discussion@scipy.org
> >> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >>
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.gamma returns 0 for small shape parameters

2012-05-28 Thread Val Kalatsky
You'll need some patience to get non-zeros, especially for k=1e-5:

In [84]: np.sum(np.random.gamma(1e-5,size=1000000)!=0.0)
Out[84]: 7259
that's less than 1% non-zero. For k=1e-4 it's ~7%.

Val


On Mon, May 28, 2012 at 10:33 PM, Uri Laserson wrote:

> I am trying to sample from a Dirichlet distribution, where some of the
> shape parameters are very small.  To do so, the algorithm samples each
> component individually from a Gamma(k,1) distribution where k is the shape
> parameter for that component of the Dirichlet.  In principle, this should
> always return a positive number (as the Dirichlet is defined).  However, if
> k is very small, it will return zero:
>
> In [157]: np.random.gamma(1e-1)
> Out[157]: 4.863866491339177e-06
>
> In [158]: np.random.gamma(1e-2)
> Out[158]: 2.424451829710714e-57
>
> In [159]: np.random.gamma(1e-3)
> Out[159]: 5.1909861689757784e-197
>
> In [160]: np.random.gamma(1e-4)
> Out[160]: 0.0
>
> In [161]: np.random.gamma(1e-5)
> Out[161]: 0.0
>
> What is the best way to deal with this?
>
> Thanks!
> Uri
>
>
>
>
> ...
> Uri Laserson
> Graduate Student, Biomedical Engineering
> Harvard-MIT Division of Health Sciences and Technology
> M +1 917 742 8019
> laser...@mit.edu
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
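A workaround not spelled out in the thread, sketched here under the assumption that working in log space is acceptable: if X ~ Gamma(k+1) and U ~ Uniform(0,1), then X·U^(1/k) ~ Gamma(k), so the sample can be built in log space and only exponentiated after stabilization. The helper name is mine.

```python
import numpy as np

def dirichlet_small(alpha, rng):
    """Sample one Dirichlet vector even when some alpha_i are tiny.

    Uses the identity Gamma(a) =d Gamma(a+1) * U**(1/a) and keeps the
    per-component samples in log space, so they never underflow to
    exactly zero before the normalization step.
    """
    alpha = np.asarray(alpha, dtype=float)
    log_g = (np.log(rng.gamma(alpha + 1.0))          # log Gamma(a+1) draws
             + np.log(rng.random(alpha.shape)) / alpha)
    log_g -= log_g.max()                             # stabilize before exp
    g = np.exp(log_g)
    return g / g.sum()

rng = np.random.default_rng(42)
x = dirichlet_small([1e-5, 1e-4, 1.0], rng)
assert abs(x.sum() - 1.0) < 1e-12 and np.all(x >= 0.0)
```

The subtraction of the maximum log value is the usual log-sum-exp trick: the largest component becomes exp(0) = 1, so the normalizing sum is never zero.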


Re: [Numpy-discussion] Bus error when using flat on sliced, memmap'd array

2012-05-30 Thread Val Kalatsky
Confirmed on Ubuntu, np.__version__ 1.5.1 and 1.6.1 (backtraces are
below).
Something seems to be broken before it comes to memcpy
and/or _aligned_contig_to_strided_size1.

Val

-

np.__version__ 1.6.1

Program received signal SIGSEGV, Segmentation fault.
0x75b63cdf in _aligned_contig_to_strided_size1 ()
   from
/home/vkalatsky/src/epd8/lib/python2.7/site-packages/numpy/core/multiarray.so
(gdb) bt
#0  0x75b63cdf in _aligned_contig_to_strided_size1 ()
   from
/home/vkalatsky/src/epd8/lib/python2.7/site-packages/numpy/core/multiarray.so
#1  0x75bc2c31 in PyArray_CopyAnyIntoOrdered ()
   from
/home/vkalatsky/src/epd8/lib/python2.7/site-packages/numpy/core/multiarray.so
#2  0x75bc3329 in array_dealloc () from
/home/vkalatsky/src/epd8/lib/python2.7/site-packages/numpy/core/multiarray.so
#3  0x771b7bbb in meth_dealloc (m=0xb6fdd0) at
Objects/methodobject.c:134
#4  0x7720df99 in PyEval_EvalFrameEx (f=0xc3f310, throwflag=) at Python/ceval.c:2712
#5  0x77214722 in PyEval_EvalCodeEx (co=0x791d30, globals=, locals=, args=0xab7fd0,
argcount=5, kws=0xab7ff8, kwcount=0, defs=0x0, defcount=0, closure=0x0)
at Python/ceval.c:3253
#6  0x772126b6 in call_function (f=0xab7e20, throwflag=) at Python/ceval.c:4109
#7  PyEval_EvalFrameEx (f=0xab7e20, throwflag=) at
Python/ceval.c:2666
#8  0x77214722 in PyEval_EvalCodeEx (co=0x7988b0, globals=, locals=, args=0x3,
argcount=1, kws=0x3, kwcount=0, defs=0x79c518, defcount=3, closure=0x0)
at Python/ceval.c:3253
#9  0x772126b6 in call_function (f=0x6d0820, throwflag=) at Python/ceval.c:4109
#10 PyEval_EvalFrameEx (f=0x6d0820, throwflag=) at
Python/ceval.c:2666
#11 0x77214722 in PyEval_EvalCodeEx (co=0x77eb41b0,
globals=, locals=, args=0x0,
argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at
Python/ceval.c:3253
#12 0x77214772 in PyEval_EvalCode (co=0x77ff5000, globals=0x2,
locals=0x897cb1) at Python/ceval.c:667
#13 0x7722e432 in run_mod (mod=,
filename=, globals=0x640140, locals=0x640140,
flags=, arena=) at
Python/pythonrun.c:1346
#14 0x7722e506 in PyRun_FileExFlags (fp=0x6cb390,
filename=0x7fffe599 "cf_test.py", start=257, globals=0x640140,
locals=0x640140, closeit=1, flags=0x7fffe1a0) at
Python/pythonrun.c:1332
#15 0x7722fa67 in PyRun_SimpleFileExFlags (fp=, filename=0x7fffe599 "cf_test.py", closeit=1,
flags=0x7fffe1a0) at Python/pythonrun.c:936
#16 0x772401e2 in Py_Main (argc=1, argv=0x7fffe2c8) at
Modules/main.c:689
#17 0x7652cd8e in __libc_start_main (main=,
argc=, ubp_av=,
init=, fini=,
rtld_fini=, stack_end=0x7fffe2b8)
at libc-start.c:226
#18 0x004006f9 in _start ()

---

np.__version__ 1.5.1

Program received signal SIGSEGV, Segmentation fault.
memcpy () at ../sysdeps/x86_64/memcpy.S:67
67 ../sysdeps/x86_64/memcpy.S: No such file or directory.
in ../sysdeps/x86_64/memcpy.S
(gdb) bt
#0  memcpy () at ../sysdeps/x86_64/memcpy.S:67
#1  0x75bf2460 in PyArray_CopyAnyInto () from
/home/vkalatsky/epd70/lib/python2.7/site-packages/numpy/core/multiarray.so
#2  0x75bf2a69 in array_dealloc () from
/home/vkalatsky/epd70/lib/python2.7/site-packages/numpy/core/multiarray.so
#3  0x771b889b in meth_dealloc (m=0xbdd170) at
Objects/methodobject.c:134
#4  0x7720e8b9 in PyEval_EvalFrameEx (f=0xc12450, throwflag=) at Python/ceval.c:2711
#5  0x77215042 in PyEval_EvalCodeEx (co=0x81e630, globals=, locals=, args=0x7bd820,
argcount=5, kws=0x7bd848, kwcount=0, defs=0x0, defcount=0, closure=0x0)
at Python/ceval.c:3252
#6  0x77212fd6 in call_function (f=0x7bd670, throwflag=) at Python/ceval.c:4108
#7  PyEval_EvalFrameEx (f=0x7bd670, throwflag=) at
Python/ceval.c:2665
#8  0x77215042 in PyEval_EvalCodeEx (co=0x85f1b0, globals=, locals=, args=0x3,
argcount=1, kws=0x3, kwcount=0, defs=0x85e928, defcount=3, closure=0x0)
at Python/ceval.c:3252
#9  0x77212fd6 in call_function (f=0x747ea0, throwflag=) at Python/ceval.c:4108
#10 PyEval_EvalFrameEx (f=0x747ea0, throwflag=) at
Python/ceval.c:2665
#11 0x77215042 in PyEval_EvalCodeEx (co=0x691b30, globals=, locals=, args=0x0,
argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at
Python/ceval.c:3252
#12 0x77215092 in PyEval_EvalCode (co=0x77ff3000,
globals=0xb0, locals=0x1) at Python/ceval.c:666
#13 0x7722eb82 in run_mod (mod=,
filename=, globals=0x640140, locals=0x640140,
flags=, arena=) at
Python/pythonrun.c:1346
#14 0x7722ec56 in PyRun_FileExFlags (fp=0x743250,
filename=0x7fffe5bd "cf_test.py", start=257, globals=0x640140,
locals=0x640140, closeit=1, flags=0x7fffe1c0) at
Python/pythonrun.c:1332
#15 0x772301b7 in PyRun_SimpleFileExFlags (fp=, filename=0x7fffe5bd "cf_test.py", 

Re: [Numpy-discussion] fast access and normalizing of ndarray slices

2012-05-30 Thread Val Kalatsky
What do you mean by "normalized it"?
Could you give the output of your procedure for the sample input data?
Val

On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf  wrote:

> Dear all,
>
> I have an ndarray which consists of many arrays stacked behind each other
> (only conceptually, in truth it's a normal 1d float64 array).
> I have a second array which tells me the start of the individual data sets
> in the 1d float64 array and another one which tells me the length.
> Example:
>
> data_array = (conceptually) [[1,2], [1,2,3,4], [1,2,3]] = in reality
> [1,2,1,2,3,4,1,2,3] (dtype=float64)
> start_pointer = [0, 2, 6]
> length_data = [2, 4, 3]
>
> I now want to normalize each of the individual data sets. I wrote a simple
> for loop over the start_pointer and length data grabbed the data and
> normalized it and wrote it back to the big array. That's slow. Is there an
> elegant numpy way to do that? Do I have to go the cython way?
>
> Cheers
>   Wolfgang
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fast access and normalizing of ndarray slices

2012-05-31 Thread Val Kalatsky
Hi Wolfgang,

I thought maybe there is a trick for your specific operation.
Your array stacking is a simple case of the group-by operation and
normalization is aggregation followed by update.
I believe group-by and aggregation are on the NumPy todo-list.
You may have to write a small extension module to speed up your operations.
Val

On Thu, May 31, 2012 at 8:27 AM, Wolfgang Kerzendorf
wrote:

> Hey Val,
>
> Well it doesn't matter what I do, but specifically I do factor =
> sum(data_array[start_point:start_point+length_data]) and then
> data_array[start_point:start_point+length_data] /= factor, and that for
> every start_point and length_data.
>
> How to do this fast?
>
> Cheers
>Wolfgang
> On 2012-05-31, at 1:43 AM, Val Kalatsky wrote:
>
> What do you mean by "normalized it"?
> Could you give the output of your procedure for the sample input data.
> Val
>
> On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf <
> wkerzend...@gmail.com> wrote:
>
>> Dear all,
>>
>> I have an ndarray which consists of many arrays stacked behind each other
>> (only conceptually, in truth it's a normal 1d float64 array).
>> I have a second array which tells me the start of the individual data
>> sets in the 1d float64 array and another one which tells me the length.
>> Example:
>>
>> data_array = (conceptually) [[1,2], [1,2,3,4], [1,2,3]] = in reality
>> [1,2,1,2,3,4,1,2,3] (dtype=float64)
>> start_pointer = [0, 2, 6]
>> length_data = [2, 4, 3]
>>
>> I now want to normalize each of the individual data sets. I wrote a
>> simple for loop over the start_pointer and length data grabbed the data and
>> normalized it and wrote it back to the big array. That's slow. Is there an
>> elegant numpy way to do that? Do I have to go the cython way?
>>
>> Cheers
>>   Wolfgang
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
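For the record, the group-wise normalization Wolfgang describes can be done without a Python-level loop using `np.add.reduceat` plus `np.repeat`; a sketch on his sample data:

```python
import numpy as np

data_array = np.array([1, 2, 1, 2, 3, 4, 1, 2, 3], dtype=np.float64)
start_pointer = np.array([0, 2, 6])
length_data = np.array([2, 4, 3])

# One call sums each conceptual sub-array; np.repeat then expands the
# per-group sums back to element length, so a single division normalizes
# every group in place.
sums = np.add.reduceat(data_array, start_pointer)
data_array /= np.repeat(sums, length_data)

# Each group now sums to 1.
assert np.allclose(np.add.reduceat(data_array, start_pointer), 1.0)
```

`reduceat` is essentially the group-by aggregation Val mentions, restricted to contiguous segments given by their start indices.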


Re: [Numpy-discussion] np.unique for one bi-dimensional array

2012-07-24 Thread Val Kalatsky
There are various ways to repack the pair of arrays into one array.
The most universal is probably a structured array (it can repack more
than a pair):

x = np.array(list(zip(a, b)), dtype=[('a', int), ('b', int)])
After repacking you can use unique and other numpy methods:

xu = np.unique(x)
zip(xu['a'], xu['b'], np.bincount(np.searchsorted(xu, x)))

[(4, 3, 2), (4, 4, 2), (4, 5, 1), (5, 4, 1)]

Val

On Tue, Jul 24, 2012 at 5:09 PM, Giuseppe Amatulli <
giuseppe.amatu...@gmail.com> wrote:

> Hi,
>
> I would like to identify unique pairs of numbers in two arrays (or in one
> bi-dimensional array) and count the observations
>
> a_clean=array([4,4,5,4,4,4])
> b_clean=array([3,5,4,4,3,4])
>
> and obtain
> (4,3,2)
> (4,5,1)
> (5,4,1)
> (4,4,2)
>
> I solved it with two loops, but of course there will be a faster solution.
> Any ideas?
> What about using np.unique on one bi-dimensional array?
>
> In bash I usually use the uniq command.
>
> thanks in advance
> Giuseppe
>
> --
> Giuseppe Amatulli
> Web: www.spatial-ecology.net
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
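On NumPy versions newer than this thread (`return_counts` arrived in 1.9, the `axis` argument in 1.13), the structured-array detour is unnecessary; a sketch on Giuseppe's data:

```python
import numpy as np

a_clean = np.array([4, 4, 5, 4, 4, 4])
b_clean = np.array([3, 5, 4, 4, 3, 4])

# Stack the pairs as rows, then ask unique for row-wise uniques and counts.
pairs = np.stack([a_clean, b_clean], axis=1)          # shape (6, 2)
uniq, counts = np.unique(pairs, axis=0, return_counts=True)

# Rows come back sorted lexicographically, matching the thread's result:
# (4,3) x2, (4,4) x2, (4,5) x1, (5,4) x1.
assert uniq.tolist() == [[4, 3], [4, 4], [4, 5], [5, 4]]
assert counts.tolist() == [2, 2, 1, 1]
```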


Re: [Numpy-discussion] Reading String type to find long "magic word"

2013-04-19 Thread Val Kalatsky
Here's a seed for your function:

s = 'ThesampletextthatcouldbereadedthesameinbothordersArozaupalanalapuazorA'
f = np.array(list(s)).view('int8').astype(float)
f -= f.mean()
maybe_here = np.argmax(np.convolve(f,f))/2
magic = 10
print s[maybe_here - magic:maybe_here + magic + 1]


Let us know how to assign significance to the result and how to find the
optimal 'magic'.

Val


On Fri, Apr 19, 2013 at 6:01 PM, Bakhtiyor Zokhidov <
bakhtiyor_zokhi...@mail.ru> wrote:

> Hello everybody,
>
> I just have one long string
> type:ThesampletextthatcouldbereadedthesameinbothordersArozaupalanalapuazorA
>
> The result I want to get is ArozaupalanalapuazorA - a word that reads the
> same forwards as backwards ...
>
> Is there any function that deals with this problem in Python?
>
> Thanks for answer
>
>
> --
> Bakhti
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
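A plain-Python alternative to the convolution seed: find the longest palindromic substring directly by expanding around each center (O(n²), no 'magic' window width needed). The function name is mine.

```python
# The sample string from the question; the embedded palindrome is
# case-sensitive ('A'...'A'), so no lowercasing is needed here.
s = 'ThesampletextthatcouldbereadedthesameinbothordersArozaupalanalapuazorA'

def longest_palindrome(text):
    """Longest palindromic substring by expanding around each center, O(n^2)."""
    best = ''
    for i in range(len(text)):
        for lo, hi in ((i, i), (i, i + 1)):      # odd- and even-length centers
            while lo >= 0 and hi < len(text) and text[lo] == text[hi]:
                lo -= 1
                hi += 1
            if hi - lo - 1 > len(best):
                best = text[lo + 1:hi]
    return best

print(longest_palindrome(s))   # -> ArozaupalanalapuazorA
```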


[Numpy-discussion] Ufuncs and flexible types, CAPI

2011-12-30 Thread Val Kalatsky
Hi folks,

First post, may not follow the standards, please bear with me.

Need to define a ufunc that takes care of various types.
Fixed - no problem, userdef - no problem, flexible - problem.
It appears that the standard ufunc loop does not provide means to
deliver the size of variable-size items.
Questions and suggestions:

1) Please no laughing: I have to code for NumPy 1.3.0.
Perhaps this issue has been resolved, then the discussion becomes moot.
If so please direct me to the right link.

2) A reasonable approach here would be to use callbacks and to give the
user (read: programmer) a chance to intervene at least twice: OnInit and
OnFail (OnFinish may not be unreasonable as well).
be unreasonable as well).

OnInit: before starting the type resolution the user is given a chance to
do something (e.g. check for
that pesky type and take control then return a flag indicating a stop)
before the resolution starts
OnFail: the resolution took place and did not succeed, the user is given a
chance to fix it.
In most cases these callbacks are NULLs.

I could patch numpy with a generic method that does it, but it's a shame
not to use the good ufunc machinery.

Thanks for tips and suggestions.

Val Kalatsky
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Ufuncs and flexible types, CAPI

2012-01-10 Thread Val Kalatsky
Hi Samuel,

Thanks for the reply.

I hoped somebody would prove me wrong about the ufuncs' limitation: no
flexible-type support.
I also wanted to bring up a discussion on changing the ufunc API.
I think adding another parameter that delivers pointers to the arrays to
the loops would not lead to any undesirable consequences.

Yep, 1.3.0 is old, but 1.7 has the same loop prototype (with a minor
cosmetic change):
(char **args, intp *dimensions, intp *steps, void *func)  ->  (char **args,
intp *dimensions, intp *steps, void *NPY_UNUSED(func))
it probably has not changed since its conception.

Thanks
Val

On Tue, Jan 10, 2012 at 10:29 AM, Samuel John  wrote:

> [sorry for duplicate - I used the wrong mail address]
>
> I am afraid, I didn't quite get the question.
> What is the scenario? What is the benefit that would weight out the
> performance hit of checking whether there is a callback or not. This has to
> be evaluated quite a lot.
>
> Oh well ... and 1.3.0 is pretty old :-)
>
> cheers,
> Samuel
>
> On 31.12.2011, at 07:48, Val Kalatsky wrote:
>
> >
> > Hi folks,
> >
> > First post, may not follow the standards, please bear with me.
> >
> > Need to define a ufunc that takes care of various type.
> > Fixed - no problem, userdef - no problem, flexible - problem.
> > It appears that the standard ufunc loop does not provide means to
> > deliver the size of variable size items.
> > Questions and suggestions:
> >
> > 1) Please no laughing: I have to code for NumPy 1.3.0.
> > Perhaps this issue has been resolved, then the discussion becomes moot.
> > If so please direct me to the right link.
> >
> > 2) A reasonable approach here would be to use callbacks and to give the
> user (read programmer)
> > a chance to intervene at least twice: OnInit and OnFail (OnFinish may
> not be unreasonable as well).
> >
> > OnInit: before starting the type resolution the user is given a chance
> to do something (e.g. check for
> > that pesky type and take control then return a flag indicating a stop)
> before the resolution starts
> > OnFail: the resolution took place and did not succeed, the user is given
> a chance to fix it.
> > In most of the case these callbacks are NULLs.
> >
> > I could patch numpy with a generic method that does it, but it's a shame
> not to use the good ufunc machine.
> >
> > Thanks for tips and suggestions.
> >
> > Val Kalatsky
> >
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Index update

2012-01-10 Thread Val Kalatsky
A - np.digitize(A, S)
Should do the trick; just make sure that S is sorted and that A and S do
not overlap. If they do, remove those items from A using set operations.

On Tue, Jan 10, 2012 at 2:14 PM, Mads Ipsen  wrote:

> **
> Hi,
>
> Suppose you have N items, say N = 10.
>
> Now a subset of these items are selected given by a list A of indices.
> Lets say that items A = [2,5,7] are selected. Assume now that you delete
> some of the items given by the indices S = [1,4,8]. This means that the
> list of indices A must be updated, since items have been deleted. For this
> particular case the updated selection list A becomes A = [1,3,5].
>
> Is there some smart numpy way of doing this index update of the selected
> items in A without looping? Typically N is large.
>
> Best regards,
>
> Mads
>
> --
> +-+
> | Mads Ipsen  |
> +--+--+
> | Gåsebæksvej 7, 4. tv |  |
> | DK-2500 Valby| phone:  +45-29716388 |
> | Denmark  | email:  mads.ip...@gmail.com |
> +--+--+
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
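Val's one-liner works because `np.digitize(A, S)` counts, for each selected index, how many deleted indices lie below it, which is exactly the shift each index needs. A check on Mads's example (assuming S is sorted and disjoint from A):

```python
import numpy as np

A = np.array([2, 5, 7])   # indices of the selected items
S = np.array([1, 4, 8])   # indices of the deleted items, sorted, disjoint from A

# np.digitize(A, S) = number of entries of S below each entry of A,
# i.e. how far each selected index shifts down after the deletions.
A_updated = A - np.digitize(A, S)
print(A_updated)          # -> [1 3 5]
```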


Re: [Numpy-discussion] bug in numpy.mean() ?

2012-01-24 Thread Val Kalatsky
Just what Bruce said.

You can run the following to confirm:
np.mean(data - data.mean())

If for some reason you do not want to convert to float64 you can add the
result of the previous line to the "bad" mean:
bad_mean = data.mean()
good_mean = bad_mean + np.mean(data - bad_mean)

Val

On Tue, Jan 24, 2012 at 12:33 PM, K.-Michael Aye wrote:

> I know I know, that's pretty outrageous to even suggest, but please
> bear with me, I am stumped as you may be:
>
> 2-D data file here:
> http://dl.dropbox.com/u/139035/data.npy
>
> Then:
> In [3]: data.mean()
> Out[3]: 3067.024383998
>
> In [4]: data.max()
> Out[4]: 3052.4343
>
> In [5]: data.shape
> Out[5]: (1000, 1000)
>
> In [6]: data.min()
> Out[6]: 3040.498
>
> In [7]: data.dtype
> Out[7]: dtype('float32')
>
>
> A mean value calculated per loop over the data gives me 3045.747251076416
> I first thought I still misunderstand how data.mean() works, per axis
> and so on, but did the same with a flattenend version with the same
> results.
>
> Am I really soo tired that I can't see what I am doing wrong here?
> For completion, the data was read by a osgeo.gdal dataset method called
> ReadAsArray()
> My numpy.__version__ gives me 1.6.1 and my whole setup is based on
> Enthought's EPD.
>
> Best regards,
> Michael
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
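The two remedies from the thread, sketched on synthetic float32 data (note that NumPy gained pairwise summation after the 1.6 era, so on current versions the naive float32 mean is far less wrong than in Michael's report):

```python
import numpy as np

rng = np.random.default_rng(0)
data = (3045.0 + rng.random((1000, 1000))).astype(np.float32)

good = data.mean(dtype=np.float64)     # remedy 1: accumulate in float64
naive = data.mean()                    # float32 accumulation
# Remedy 2 (Val's trick): correct the float32 mean with the mean of the
# residuals, avoiding a full float64 copy of the data.
corrected = naive + np.mean(data - naive)

assert data.min() <= corrected <= data.max()
assert abs(corrected - good) < 1e-3
```

The residual trick works because the residuals are O(1) around zero, where float32 has plenty of relative precision, while the raw values sit near 3045 where each float32 ulp is about 2e-4.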


Re: [Numpy-discussion] array metadata

2012-01-25 Thread Val Kalatsky
I believe there are no provisions made for that in ndarray.
But you can subclass ndarray.
Val

On Wed, Jan 25, 2012 at 12:10 PM, Emmanuel Mayssat wrote:

> Is there a way to store metadata for an array?
> For example, date the samples were collected, name of the operator, etc.
>
> Regards,
> --
> Emmanuel
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
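The subclass route Val mentions follows the standard `__array_finalize__` pattern from the NumPy subclassing docs; a minimal sketch (the class and field names are mine):

```python
import numpy as np

class MetaArray(np.ndarray):
    """ndarray subclass carrying a metadata dict (illustrative sketch)."""

    def __new__(cls, input_array, meta=None):
        obj = np.asarray(input_array).view(cls)
        obj.meta = meta or {}
        return obj

    def __array_finalize__(self, obj):
        # Called on views, copies, and ufunc results: propagate the metadata.
        if obj is None:
            return
        self.meta = getattr(obj, 'meta', {})

a = MetaArray([1.0, 2.0, 3.0], meta={'operator': 'EM', 'date': '2012-01-25'})
b = a * 2                     # metadata survives via __array_finalize__
print(b.meta['operator'])     # -> EM
```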


Re: [Numpy-discussion] need advice on installing NumPy onto a Windows 7 with Python2.7 (32-bit)

2012-01-26 Thread Val Kalatsky
To avoid all the hassle I suggest getting EPD:
http://enthought.com/products/epd.php
You'd get way more than just NumPy, which may or may not be what you need.
I have installed various NumPy's on linux only and from source only, which
did require compilation (gcc), so I am not much help for your setup.
On the other hand, I've done multiple EPD installations on various
platforms and never had problems.

On Thu, Jan 26, 2012 at 11:09 PM, William McLendon wrote:

> Hi,
>
> I am trying to install NumPy (using
> numpy-1.6.1-win32-superpack-python2.7) on a Windows 7 machine that has
> 32-bit Python 2.7 installed on it using the latest installer
> (python-2.7.2.msi).  Python is installed into the default location,
> C:\Python27, and as far as I can tell the registry knows about it -- or at
> least the windows uninstaller in the control panel does...
>
> The installation fails because the NumPy installer cannot find the Python
> installation.  I am then prompted with a screen that should allow me to
> type in the location of my python installation, but the text-boxes where I
> should type this do not allow input so I'm kind of stuck.
>
> I did look into trying to build from source, but I don't have a C compiler
> on this system so setup.py died a horrible death.  I'd prefer to avoid
> having to install Visual C++ Express on this system.
>
> Does anyone have any suggestions that might be helpful?
>
> Thanks!
>   -William
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] matrix indexing

2012-02-07 Thread Val Kalatsky
Aronne made good suggestions.
Here is another weapon for your arsenal:
1) I assume that the shape of your array is irrelevant (reshape if needed)
2) Depending on the structure of your data np.unique can be handy:
arr_unique, idx = np.unique(arr1d, return_inverse=True)
then search arr_unique instead of arr1d.
3) Caveat: np.unique is a major memory hogger, be prepared to waste ~1GB.
Val

On Tue, Feb 7, 2012 at 10:34 PM, Aronne Merrelli
wrote:

>
>
> On Mon, Feb 6, 2012 at 11:44 AM, Naresh Pai  wrote:
>
>> I have two large matrices, say, ABC and DEF, each with a shape of 7000 by
>> 4500. I have another list, say, elem, containing 850 values from ABC. I am
>> interested in finding out the corresponding values in DEF where ABC has
>> elem and store them *separately*. The code that I am using is:
>>
>> for i in range(len(elem)):
>>  DEF_distr = DEF[ABC==elem[i]]
>>
>> DEF_distr gets used for further processing before it gets cleared from
>> memory and the next round of the above loop begins. The loop above
>> currently takes about 20 minutes! I think the bottle neck is where elem is
>> getting searched repeatedly in ABC. So I am looking for a solution where
>> all elem can get processed in a single call and the indices of ABC be
>> stored in another variable (separately). I would appreciate if you suggest
>> any faster method for getting DEF_distr.
>>
>>
> You'll need to mention some details about the contents of ABC/DEF in order
> to get the best answer (what range of values, do they have a certain
> structure, etc). I made the assumption that ABC and elem have integers (I'm
> not sure it makes sense to search for ABC==elem[n] unless they are both
> integers), and then used a sort followed by searchsorted. This has a side
> effect of reordering the elements in DEF_distr. I don't know if that
> matters. You can skip the .copy() calls if you don't care that ABC/DEF are
> sorted.
>
> ABC_1D = ABC.copy().ravel()
> ABC_1D_sorter = np.argsort(ABC_1D)
> ABC_1D = ABC_1D[ABC_1D_sorter]
> DEF_1D = DEF.copy().ravel()
> DEF_1D = DEF_1D[ABC_1D_sorter]
> ind1 = np.searchsorted(ABC_1D, elem, side='left')
> ind2 = np.searchsorted(ABC_1D, elem, side='right')
> DEF_distr = []
> for n in range(len(elem)):
> DEF_distr.append( DEF_1D[ind1[n]:ind2[n]] )
>
>
> I tried this on the big memory workstation, and for the 7Kx4K size I get
> about 100 seconds for the simple method and 10 seconds for this more
> complicated sort-based method - if you are getting 20 minutes for that,
> maybe there is a memory problem, or a different part of the code that is
> the bottleneck?
>
> Hope that helps,
> Aronne
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
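Combining Val's `np.unique` suggestion with Aronne's sort-and-searchsorted grouping gives a single pass that never rescans ABC per element; a sketch on small stand-in arrays (the shapes and values here are mine, not Naresh's):

```python
import numpy as np

rng = np.random.default_rng(1)
ABC = rng.integers(0, 500, size=(700, 450))      # stand-in for the 7000x4500 arrays
DEF = rng.random((700, 450))
elem = rng.choice(np.unique(ABC), size=85, replace=False)

# Encode ABC once as integer codes into its sorted unique values, then
# group DEF by code with one stable argsort instead of one full scan per
# element of elem.
uniq, inv = np.unique(ABC.ravel(), return_inverse=True)
order = np.argsort(inv, kind='stable')
inv_sorted = inv[order]
DEF_sorted = DEF.ravel()[order]

codes = np.searchsorted(uniq, elem)              # elem values occur in ABC by construction
starts = np.searchsorted(inv_sorted, codes, side='left')
ends = np.searchsorted(inv_sorted, codes, side='right')
DEF_distr = [DEF_sorted[s:e] for s, e in zip(starts, ends)]

# Spot-check against the direct boolean mask for one element.
expected = DEF.ravel()[ABC.ravel() == elem[0]]
assert np.allclose(np.sort(DEF_distr[0]), np.sort(expected))
```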