Re: [Numpy-discussion] 100 Numpy exercises

2014-05-27 Thread Nicolas Rougier

On 27 May 2014, at 21:09, Chris Barker chris.bar...@noaa.gov wrote:

 On Mon, May 26, 2014 at 9:57 PM, Nicolas Rougier nicolas.roug...@inria.fr 
 wrote:
 
 I've updated the numpy exercises collection and made it available on github 
 at:
 
 https://github.com/rougier/numpy-100
 
 
 very useful resource -- thanks!
 
 a couple tiny notes:
 
 1) In the first section, the phrases Create a ... and Declare a... are 
 both used -- I suggest that create is better than declare -- you never 
 declare things in Python -- but in any case, consistent terminology is best.
 

Thanks, corrected.

 2) What? I don't get master level for using stride_tricks? ;-)

Thanks, I did not remember the author of expert #2. Just fixed it.

Any other tricky stride_trick tricks ? I promised to put them in the master 
section.


Nicolas
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 100 Numpy exercises

2014-05-27 Thread Nicolas Rougier

Thanks, you just inaugurated the master section.

Nicolas

On 27 May 2014, at 21:48, Jaime Fernández del Río jaime.f...@gmail.com wrote:

 On Tue, May 27, 2014 at 12:27 PM, Nicolas Rougier nicolas.roug...@inria.fr 
 wrote:
 Any other tricky stride_trick tricks ? I promised to put them in the master 
 section.
 
 
 It doesn't use stride_tricks, and seberg doesn't quite like it, but this made 
 the rounds in StackOverflow a couple of years ago:
 
 http://stackoverflow.com/questions/16970982/find-unique-rows-in-numpy-array/16973510#16973510
 
 It may not work properly on floats, but I think it is a very cool use of 
 dtypes. Then again I'm obviously biased...
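The trick in that StackOverflow answer boils down to viewing each row as one opaque item so that np.unique can compare whole rows at once; a minimal sketch (my own condensed version, with the float caveat from the message applying equally here):

```python
import numpy as np

a = np.array([[1, 2], [3, 4], [1, 2]])
# View each row as a single 'void' item of row-sized bytes, so np.unique
# compares entire rows by their raw bytes (beware -0.0 and NaN for floats)
row = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
b = np.ascontiguousarray(a).view(row)
_, idx = np.unique(b, return_index=True)
unique_rows = a[np.sort(idx)]   # [[1, 2], [3, 4]]
```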
 
 Jaime
  
 -- 
 (\__/)
 ( O.o)
 (  ) This is Bunny. Copy Bunny into your signature and help him with his 
 plans for world domination.


[Numpy-discussion] 100 Numpy exercises

2014-05-26 Thread Nicolas Rougier

Hi all,

I've updated the numpy exercises collection and made it available on github at:

https://github.com/rougier/numpy-100


These exercises mainly come from this mailing list and from Stack Overflow. 
If you have other examples in mind, do not hesitate to make a pull request. 
The master and archmaster sections still need to be populated...

Once finished, I'll make an ipython notebook as well.



Nicolas


Re: [Numpy-discussion] python array

2014-03-13 Thread Nicolas Rougier

Seems to be related to the masked values:

print r2010[:3,:3]
[[-- -- --]
 [-- -- --]
 [-- -- --]]

print abs(r2010)[:3,:3]
[[-- -- --]
 [-- -- --]
 [-- -- --]]


print r2010[ r2010[:3,:3] > 0 ]
[-- -- -- -- -- -- -- -- --]

print r2010[ abs(r2010)[:3,:3] > 0 ]
[]
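A hedged sketch of the safer pattern for such counts, using compressed() so masked entries can never leak into the comparison (illustrative values, not the r2010 data from the netcdf file):

```python
import numpy as np
import numpy.ma as ma

# A small masked array: masked_less masks the -1.0 entry
r = ma.masked_less(np.array([0.1, 2.0, -1.0, 0.7]), 0.0)
# compressed() drops masked entries *before* comparing, so the count
# reflects only real data
big = r.compressed()
count = (big > .5).sum()   # counts 2.0 and 0.7
```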

Nicolas



On 13 Mar 2014, at 16:52, Sudheer Joseph sudheer.jos...@yahoo.com wrote:

 Dear experts,
  I am encountering strange behaviour from a Python data 
 array, as below. I have been trying to use the data from a netcdf 
 file (attached herewith) to do a calculation using the code below. If I take 
 the absolute value of the array and look for values > .5, I get a different 
 count than with the original array. The fact is that this particular case does 
 not have any negative values in the array (other files can have negative 
 values, which is why the condition is there). I see no reason why the count of 
 values > .5 should differ for bt, and expected it to be the same as that of 
 r2010. If anyone has a guess at what is behind this behaviour, please help.
 
 
 In [14]: from netCDF4 import Dataset as nc
 
 In [15]: nf=nc('r2010.nc')
 In [16]: r2010=nf.variables['R2010'][:]
 In [17]: bt=abs(r2010)
 In [18]: bt[bt>=.5].shape
 Out[18]: (2872,)
 In [19]: r2010[r2010>.5].shape
 Out[19]: (36738,)
 
 
 In [20]: bt.min()
 Out[20]: 0.0027588337040836768
 
 In [21]: bt.max()
 Out[21]: 3.5078965479057089
 In [22]: r2010.max()
 Out[22]: 3.5078965479057089
 In [23]: r2010.min()
 Out[23]: 0.0027588337040836768
 
 
  
 ***
 Sudheer Joseph 
 Indian National Centre for Ocean Information Services
 Ministry of Earth Sciences, Govt. of India
 POST BOX NO: 21, IDA Jeedeemetla P.O.
 Via Pragathi Nagar,Kukatpally, Hyderabad; Pin:5000 55
 Tel:+91-40-23886047(O),Fax:+91-40-23895011(O),
 Tel:+91-40-23044600(R),Tel:+91-40-9440832534(Mobile)
 E-mail:sjo.in...@gmail.com;sudheer.jos...@yahoo.com
 Web- http://oppamthadathil.tripod.com
 ***r2010.nc



[Numpy-discussion] dtype promotion

2014-03-03 Thread Nicolas Rougier

Hi all,

I'm using numpy 1.8.0 (osx 10.9, python 2.7.6) and I can't understand dtype 
promotion in the following case:

>>> Z = np.zeros((2,2), dtype=np.float32) + 1
>>> print Z.dtype
float32

>>> Z = np.zeros((2,2), dtype=np.float32) + (1,1)
>>> print Z.dtype
float64


Is this the expected behavior ?
What is the difference between the two lines ?



Nicolas


Re: [Numpy-discussion] dtype promotion

2014-03-03 Thread Nicolas Rougier


I never noticed this kind of cast before (1.8.0), it's just a bit surprising.

It was convenient to write translations (for a bunch of points) such as:

Z = np.ones((n,2),dtype=np.float32) + (300,300)

but I can live with Z += 300,300
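For what it's worth, a sketch of the behaviour as I understand it (scalar versus array-like promotion; the exact rules are an assumption about this NumPy generation, not something stated in the thread):

```python
import numpy as np

Z = np.zeros((2, 2), dtype=np.float32)
# A Python int scalar uses value-based casting: the float32 dtype wins
assert (Z + 1).dtype == np.float32
# A tuple is converted to an integer *array* first, and
# float32 + integer-array promotion yields float64
assert (Z + (1, 1)).dtype == np.float64
# Making the operand explicitly float32 avoids the upcast
assert (Z + np.array((300, 300), np.float32)).dtype == np.float32
# In-place addition keeps the original dtype, hence the += workaround
Z += 300, 300
assert Z.dtype == np.float32
```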


Nicolas


On 03 Mar 2014, at 23:02, Benjamin Root ben.r...@ou.edu wrote:

 IIRC, this is dependent on whether you are using 32bit versus 64bit numpy. 
 All regular integer numbers can fit in 32 bits (is that right?), but the 1.1 
 is treated as a float32 if on a 32 bit NumPy or as float64 if on a 64 bit 
 NumPy.
 
 That's my stab at it.
 
 Ben Root
 
 
 On Mon, Mar 3, 2014 at 4:06 PM, Nicolas Rougier nicolas.roug...@inria.fr 
 wrote:
 
 Hi all,
 
 I'm using numpy 1.8.0 (osx 10.9, python 2.7.6) and I can't understand dtype 
 promotion in the following case:
 
  >>> Z = np.zeros((2,2), dtype=np.float32) + 1
  >>> print Z.dtype
 float32
 
  >>> Z = np.zeros((2,2), dtype=np.float32) + (1,1)
  >>> print Z.dtype
 float64
 
 
 Is this the expected behavior ?
 What is the difference between the two lines ?
 
 
 
 Nicolas


[Numpy-discussion] Bug in resize of structured array (with initial size = 0)

2014-01-10 Thread Nicolas Rougier

Hi,

I've tried to resize a record array that was first empty (on purpose, I need it)
and I got the following error (while it works for a regular array).


Traceback (most recent call last):
  File "test_resize.py", line 10, in <module>
print np.resize(V,2)
  File 
"/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/fromnumeric.py",
 line 1053, in resize
if not Na: return mu.zeros(new_shape, a.dtype.char)
TypeError: Empty data-type


I'm using numpy 1.8.0, python 2.7.6, osx 10.9.1.
Can anyone confirm before I submit an issue ?


Here is the script:

V = np.zeros(0, dtype=np.float32)
print V.dtype
print np.resize(V,2)

V = np.zeros(0, dtype=[('a', np.float32, 1)])
print V.dtype
print np.resize(V,2)
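Until the bug is fixed, one workaround sketch (my own suggestion, not from the thread) is to allocate the target array explicitly and copy the overlapping part, sidestepping np.resize entirely:

```python
import numpy as np

# Structured array that starts empty, as in the report
V = np.zeros(0, dtype=[('a', np.float32)])
# Build the resized array by hand instead of calling np.resize
new = np.zeros(2, dtype=V.dtype)
n = min(len(V), len(new))
new[:n] = V[:n]          # copy whatever already exists (nothing here)
```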




[Numpy-discussion] ArrayList object

2014-01-04 Thread Nicolas Rougier

Hi all,

I've been coding an ArrayList object based on a regular numpy array. This object 
allows one to dynamically append/insert/delete/access items. I found it quite 
convenient since it lets you manipulate an array as if it were a list with 
elements of different sizes but with the same underlying type (= array dtype).

# Creation from a nested list
L = ArrayList([ [0], [1,2], [3,4,5], [6,7,8,9] ])

# Creation from an array + common item size
L = ArrayList(np.ones(1000), 3)

# Empty list
L = ArrayList(dtype=int)

# Creation from an array + individual item sizes
L = ArrayList(np.ones(10), 1+np.arange(4))

# Access to elements:
print L[0], L[1], L[2], L[3]
[0] [1 2] [3 4 5] [6 7 8 9]

# Operations on elements
L[:2] += 1
print L.data
[1 2 3 3 4 5 6 7 8 9]


Source code is available from: https://github.com/rougier/array-list

I wonder if there is any interest in having such an object within core numpy 
(np.list ?) ?


Nicolas







Re: [Numpy-discussion] Structured array dtype

2013-08-31 Thread Nicolas Rougier


Thanks for the explanation.
In fact, since the 4 components of 'b' are contiguous in memory, I wanted to 
find a way to express that fact in the dtype.

Z = np.zeros(10, [('a', np.float32, 3), ('b', np.float32, 4)])
Z['b'].strides
(28,4)

Z = np.zeros((10,4), np.float32)
Z.strides
(16,4)

Z = np.zeros(10, (np.float32,4))
Z.strides
(16,4)
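One way to get a standalone contiguous (10, 4) block out of the field is simply a copy, since the in-record field view is necessarily strided; a small sketch (my own illustration of the strides shown above):

```python
import numpy as np

Z = np.zeros(10, [('a', np.float32, 3), ('b', np.float32, 4)])
Zb = Z['b']                      # a strided view into the records
assert Zb.strides == (28, 4)     # 28 = full record size, 4 = float32 size
# ascontiguousarray copies the field out into the standalone layout
B = np.ascontiguousarray(Zb)
assert B.strides == (16, 4)      # now 4 packed float32 per row
```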




Nicolas



On Aug 31, 2013, at 7:51 AM, Stéfan van der Walt ste...@sun.ac.za wrote:

 Hi Nicolas
 
 On Fri, 30 Aug 2013 17:26:51 +0200, Nicolas Rougier wrote:
 Z = np.zeros(10, [('a', np.float32, 3), ('b', np.float32, 4)])
 
 Z['a'].dtype
 dtype('float32')
 
 Z.dtype['a']
 dtype(('f4', (3,)))
 
 
 Does that mean that dtype['a'] is the dtype of field 'a' when in Z, while 
 Z['a'].dtype is the dtype of field 'a' when extracted or my way of 
 thinking is totally wrong ?
 
 Apologies if this is a duplicate response; I'm sending it offline.
 
 In case 1, you are indexing into the array, and querying its dtype.  In case
 two, you are indexing into a dtype.
 
 I.e., in case two, you are doing this:
 
 In [18]: dtype = np.dtype([('a', float, 3), ('b', int)])
 
 In [19]: dtype['a']
 Out[19]: dtype(('f8', (3,)))
 
 What bothers me the most is that I cannot do:
 
 Z['a'].view(Z.dtype['a'])
 ValueError: new type not compatible with array.
 
 That's quite a tricky operation to perform, since it has to take into account
 the underlying strides of the old array as well as calculate a shape for the
 new array.  It should be possible to make it work using something similar to
 `np.lib.stride_tricks.as_strided`, but my quick attempt failed because of the
 following:
 
  In [13]: class Foo:
     ...:     __array_interface__ = Z.__array_interface__
     ...:
 
 In [14]: f = Foo()
 
 In [15]: np.asarray(f)
 Out[15]:
  array([...],   (void item reprs lost in the archive)
   dtype='|V28')
 
 This does not seem right.
 
 Stéfan
 


[Numpy-discussion] Structured array dtype

2013-08-30 Thread Nicolas Rougier

Hi,

I'm a bit lost with the following example (numpy 1.7.1, osx 10.8):


>>> Z = np.zeros(10, [('a', np.float32, 3), ('b', np.float32, 4)])

>>> Z['a'].dtype
dtype('float32')

>>> Z.dtype['a']
dtype(('f4', (3,)))


Does that mean that dtype['a'] is the dtype of field 'a' when in Z, while 
Z['a'].dtype is the dtype of field 'a' when extracted, or is my way of 
thinking totally wrong ?


What bothers me the most is that I cannot do:

>>> Z['a'].view(Z.dtype['a'])
ValueError: new type not compatible with array.


Nicolas


[Numpy-discussion] Using as_strided to avoid copy on repeat (with 2-dimensional array)

2013-08-12 Thread Nicolas Rougier

Hi,

I have a (n,2) shaped array representing points and I would like to double each 
point as in:

A = np.arange(10*2).reshape(10,2)
B = np.repeat( A, 2, axis=0 )

Is it possible to do the same using 'as_strided' to avoid a copy (and still get 
the same output shape for B) ?


I found this reference:
http://stackoverflow.com/questions/5564098/repeat-numpy-array-without-replicating-data

but did not manage to adapt it to my case.


Bonus question (if previous 'as_strided' trick is possible), would this work or 
do I need a different construct ?

A = np.arange(10*2).reshape(10,2)
B = np.repeat( A[::2], 2, axis=0 )
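For the record, one possible as_strided construction (a sketch of my own, with the usual caveat that the zero-stride view shares memory, and that flattening it back to shape (20, 2) forces a copy since no strides can describe that layout over the original buffer):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

A = np.arange(10 * 2).reshape(10, 2)
s0, s1 = A.strides
# Insert a middle axis of length 2 with zero stride: each row of A
# appears twice, with no data copied
B = as_strided(A, shape=(10, 2, 2), strides=(s0, 0, s1))
# B[i, 0] and B[i, 1] are the same row of A

# Bonus question: the same construct works on the view A[::2], whose
# stride along axis 0 is simply 2*s0
C = A[::2]
D = as_strided(C, shape=(5, 2, 2), strides=(C.strides[0], 0, C.strides[1]))
```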



Nicolas


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nicolas Rougier

 
 Sure, that's clearly what's going on, but numpy shouldn't let you
 silently shoot yourself in the foot like that. Re-using input as
 output is a very common operation, and usually supported fine.
 Probably we should silently make a copy of any input(s) that overlap
 with the output? For high-dimensional dot, buffering temprary
 subspaces would still be more memory efficient than anything users
 could reasonably accomplish by hand.



Also, from a user's point of view it is difficult to sort out which functions 
currently allow 'out=a' or 'out=b', since nothing in the 'dot' documentation 
warned me about such a problem.
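For the record, a sketch of the safe pattern (a distinct output buffer) versus the aliasing case that triggered the bug (illustrative arrays of my own, not from the report):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0).reshape(3, 3)
expected = A.dot(B)
# Safe: 'out' does not overlap either input
out = np.empty((3, 3))
np.dot(A, B, out=out)
# The problematic pattern in affected versions was np.dot(A, B, out=A),
# i.e. silently reusing an input as the output buffer
```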


Nicolas





Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nicolas Rougier


 Can you file a bug in the bug tracker so this won't get lost?

Done.


[Numpy-discussion] Possible conversion bug with record array

2013-05-22 Thread Nicolas Rougier


Hi all,

I got a weird output from the following script:

import numpy as np

U = np.zeros(1, dtype=[('x', np.float32, (4,4))])

U[0] = np.eye(4)
print U[0]
# output:  ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 
1.875], [0.0, 0.0, 0.0, 0.0]],)

U[0] = np.eye(4, dtype=np.float32)
print U[0]
# output:  ([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 
[0.0, 0.0, 0.0, 1.0]],)


The first output is obviously wrong. Can anyone confirm ?
(using numpy 1.7.1 on osx 10.8.3)


Nicolas


Re: [Numpy-discussion] Possible conversion bug with record array

2013-05-22 Thread Nicolas Rougier


Thanks, I filed a new issue on the bug tracker.

Nicolas




On May 22, 2013, at 8:15 PM, eat e.antero.ta...@gmail.com wrote:

 Hi,
 
 FWIW, apparently bug related to dtype of np.eye(.)
 
 
 On Wed, May 22, 2013 at 8:07 PM, Nicolas Rougier nicolas.roug...@inria.fr 
 wrote:
 
 
 Hi all,
 
 I got a weird output from the following script:
 
 import numpy as np
 
 U = np.zeros(1, dtype=[('x', np.float32, (4,4))])
 
 U[0] = np.eye(4)
 print U[0]
 # output:  ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 
 1.875], [0.0, 0.0, 0.0, 0.0]],)
 
 U[0] = np.eye(4, dtype=np.float32)
 print U[0]
 # output:  ([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 
 0.0], [0.0, 0.0, 0.0, 1.0]],)
 
 
 The first output is obviously wrong. Can anyone confirm ?
 (using numpy 1.7.1 on osx 10.8.3)
 In []: sys.version
 Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'
 In []: np.__version__
 Out[]: '1.6.0'
 In []: U= np.zeros(1, dtype= [('x', np.float32, (4, 4))])
 
 In []: U[0]= np.eye(4)
 In []: U
 Out[]: 
 array([ ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 
 1.875], [0.0, 0.0, 0.0, 0.0]],)], 
   dtype=[('x', 'f4', (4, 4))])
 
 In []: U[0]= np.eye(4, dtype= np.float32)
 In []: U
 Out[]: 
 array([ ([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 
 [0.0, 0.0, 0.0, 1.0]],)], 
   dtype=[('x', 'f4', (4, 4))])
 
 and 
 In []: sys.version
 Out[]: '3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit 
 (AMD64)]'
 In []: np.__version__
 Out[]: '1.7.0rc1'
 In []: U= np.zeros(1, dtype= [('x', np.float32, (4, 4))])
 
 In []: U[0]= np.eye(4)
 In []: U
 Out[17]: 
 array([ ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 
 1.875], [0.0, 0.0, 0.0, 0.0]],)], 
   dtype=[('x', 'f4', (4, 4))])
 
 U[0]= np.eye(4, dtype= np.float32)
 
 In []: U
 Out[]: 
 array([ ([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 
 [0.0, 0.0, 0.0, 1.0]],)], 
   dtype=[('x', 'f4', (4, 4))]) 
 
 
 My 2 cents,
 -eat
 
 
 Nicolas


[Numpy-discussion] Numpy beginner tutorial

2013-05-07 Thread Nicolas Rougier


Hello everybody,


I've written a numpy beginner tutorial that is available from:

http://www.loria.fr/~rougier/teaching/numpy/numpy.html

It has been designed around cellular automata to try to make it fun.


While writing it, I tried to compile a set of exercises and make them 
progressively harder. For advanced levels, I thought the easiest way would be 
to extract simple questions (but more importantly answers) from this very 
mailing list in order to gather them on a single page. The goal would be both 
to offer a quick reference for new (and old) users and also to provide a set of 
exercises for those who teach. However, it's a bit harder than I thought since 
the mailing list is huge.

I made a separate page for this:

http://www.loria.fr/~rougier/teaching/numpy.100/index.html
(Sources are http://www.loria.fr/~rougier/teaching/numpy.100/index.rst)

(The level names came from an old game: Dungeon Master)


In order to extract questions/answers, I would need some help, if you have 
some free time to spare...

If you remember having asked or answered a (short) problem, could you send a 
link to the relevant post (the one with the answer), or, better, directly write 
the formatted entry. Here is an example:


#. Find indices of non-zero elements from [1,2,0,0,4,0]

   .. code:: python

  # Author: Somebody

  print np.nonzero([1,2,0,0,4,0])


If you can provide the (assumed) level of the answer, that would be even better.


Nicolas



Re: [Numpy-discussion] GSOC 2013

2013-03-04 Thread Nicolas Rougier

 This made me think of a serious performance limitation of structured dtypes: a
 structured dtype is always packed, which may lead to terrible byte alignment
 for common types.  For instance, `dtype([('a', 'u1'), ('b',
 'u8')]).itemsize == 9`,
 meaning that the 8-byte integer is not aligned as an equivalent C-struct's
 would be, leading to all sorts of horrors at the cache and register level.
 Python's ctypes does the right thing here, and can be mined for ideas.   For
 instance, the equivalent ctypes Structure adds pad bytes so the 8-byte integer
 is on the correct boundary:
 
class Aligned(ctypes.Structure):
_fields_ = [('a', ctypes.c_uint8),
('b', ctypes.c_uint64)]
 
 print ctypes.sizeof(Aligned()) # -> 16
 
 I'd be surprised if someone hasn't already proposed fixing this, although
 perhaps this would be outside the scope of a GSOC project.  I'm willing to
 wager that the performance improvements would be easily measureable.


I've been confronted with this very problem and ended up coding a group class 
which is a split structured array (each field is stored as a single array) 
offering the same interface as a regular structured array.
http://www.loria.fr/~rougier/coding/software/numpy_group.py
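For the packing issue itself, note that np.dtype also accepts align=True, which inserts the same pad bytes as the ctypes Structure quoted above; a short sketch:

```python
import ctypes
import numpy as np

packed = np.dtype([('a', 'u1'), ('b', 'u8')])
aligned = np.dtype([('a', 'u1'), ('b', 'u8')], align=True)

class Aligned(ctypes.Structure):
    _fields_ = [('a', ctypes.c_uint8), ('b', ctypes.c_uint64)]

assert packed.itemsize == 9                        # no padding at all
assert aligned.itemsize == ctypes.sizeof(Aligned)  # padded like the C struct
```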


Nicolas



Re: [Numpy-discussion] Array indexing and repeated indices

2013-03-01 Thread Nicolas Rougier
 
 bincount takes a weights argument which should do exactly what you are
 looking for.

Fantastic ! Thanks !


Nicolas


[Numpy-discussion] Array indexing and repeated indices

2013-02-28 Thread Nicolas Rougier

Hi,

I'm trying to increment an array using indexing and a second array for 
increment values (since it might be a little tedious to explain, see below for 
a short example). 

Using direct indexing, the values in the example are incremented by 1 only, 
while I want to achieve the alternative behavior. My question is whether there 
is such a function in numpy, or if there is a better way to achieve the same 
result ?
(I would like to avoid the while statement)

I found and adapted the alternative solution from: 
http://stackoverflow.com/questions/2004364/increment-numpy-array-with-repeated-indices
 but from what I've understood it only handles a fixed increment.


Nicolas


# 

import numpy as np

n,p = 5,100
nodes = np.zeros( n, [('value', 'f4', 1)] )
links = np.zeros( p, [('source', 'i4', 1),
  ('target', 'i4', 1)])
links['source'] = np.random.randint(0, n, p)
links['target'] = np.random.randint(0, n, p)

targets = links['target'] # Indices can be repeated
K = np.ones(len(targets)) # Note K could be anything

# Direct indexing
nodes['value'] = 0
nodes['value'][targets] += K
print nodes

# Alternative indexing
nodes['value'] = 0
B = np.bincount(targets)
while B.any():
    I = np.argwhere(B >= 1)
    nodes['value'][I] += K[I]
    B = np.maximum(B-1, 0)
print nodes
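As noted in the reply earlier in this digest, np.bincount's weights argument replaces the whole while loop; a minimal sketch with small illustrative values:

```python
import numpy as np

n = 5
targets = np.array([0, 1, 1, 3, 3, 3])        # indices, with repeats
K = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # per-link increments
# Each K value is accumulated into its target slot, repeats included
values = np.bincount(targets, weights=K, minlength=n)
# values -> [1., 5., 0., 15., 0.]
```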



[Numpy-discussion] View on sliced array without copy

2013-02-12 Thread Nicolas Rougier


Hi,

I'm trying to get a view on a sliced array without a copy, but I'm not able to 
do it, as illustrated below:


dtype = np.dtype( [('color', 'f4', 4)] )
Z = np.zeros(100, dtype=dtype)

print Z.view('f4').base is Z# True
print Z[:50].base is Z  # True

print (Z.view('f4'))[:50].base is Z # False
print Z[:50].view('f4').base is Z   # False


Did I do something wrong or is it expected behavior ?


Nicolas



Re: [Numpy-discussion] View on sliced array without copy

2013-02-12 Thread Nicolas Rougier

I did not know that. Thanks for the clear explanation.

Nicolas

On Feb 12, 2013, at 19:25 , Jaime Fernández del Río wrote:

 On Tue, Feb 12, 2013 at 9:53 AM, Nicolas Rougier nicolas.roug...@inria.fr 
 wrote:
 Did I do something wrong or is it expected behavior ?
  
 Try:
 
 print (Z.view('f4'))[:50].base.base is Z # True
 print Z[:50].view('f4').base.base is Z   # True
 
 This weird behaviour is fixed in the  just-released numpy 1.7. From the notes 
 of the release:
 
 The ``.base`` attribute on ndarrays, which is used on views to ensure that the
 underlying array owning the memory is not deallocated prematurely, now
 collapses out references when you have a view-of-a-view. For example::
 
 a = np.arange(10)
 b = a[1:]
 c = b[1:]
 
 In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``. In numpy 1.7,
 ``c.base`` is ``a``.
  
 Jaime
 
 -- 
 (\__/)
 ( O.o)
 (  ) This is Bunny. Copy Bunny into your signature and help him with his 
 plans for world domination.


Re: [Numpy-discussion] dtype reduction [SOLVED]

2013-01-15 Thread Nicolas Rougier


I ended up coding the dtype reduction; it's not foolproof, but it might be useful 
for others as well.

Nicolas



import numpy as np
from operator import mul

def dtype_reduce(dtype, level=0, depth=0):

Try to reduce dtype up to a given level when it is possible

dtype =  [ ('vertex',  [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]),
   ('normal',  [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]),
   ('color',   [('r', 'f4'), ('g', 'f4'), ('b', 'f4'), ('a', 
'f4')])]

level 0: ['color,vertex,normal,', 10, 'float32']
level 1: [['color', 4, 'float32']
  ['normal', 3, 'float32']
  ['vertex', 3, 'float32']]

dtype = np.dtype(dtype)
fields = dtype.fields

# No fields
if fields is None:
if dtype.shape:
count = reduce(mul, dtype.shape)
else:
count = 1
size = dtype.itemsize/count
if dtype.subdtype:
name = str( dtype.subdtype[0] )
else:
name = str( dtype )
return ['', count, name]
else:
items = []
name = ''
# Get reduced fields
for key,value in fields.items():
l =  dtype_reduce(value[0], level, depth+1)
if type(l[0]) is str:
items.append( [key, l[1], l[2]] )
else:
items.append( l )
name += key+','

# Check if we can reduce item list
ctype = None
count = 0
for i,item in enumerate(items):
# One item is a list, we cannot reduce
if type(item[0]) is not str:
return items
else:
if i==0:
ctype = item[2]
count += item[1]
else:
if item[2] != ctype:
return items
count += item[1]
        if depth >= level:
return [name, count, ctype]
else:
return items

if __name__ == '__main__':

# Fully reductible
dtype =  [ ('vertex',  [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]),
   ('normal',  [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]),
   ('color',   [('r', 'f4'), ('g', 'f4'), ('b', 'f4'), ('a', 
'f4')])]
print 'level 0:'
print dtype_reduce(dtype,level=0)
print 'level 1:'
print dtype_reduce(dtype,level=1)
print

# Not fully reductible
dtype =  [ ('vertex',  [('x', 'i4'), ('y', 'i4'), ('z', 'i4')]),
   ('normal',  [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]),
   ('color',   [('r', 'f4'), ('g', 'f4'), ('b', 'f4'), ('a', 
'f4')])]
print 'level 0:'
print dtype_reduce(dtype,level=0)
print

# Not reductible at all
dtype =  [ ('vertex',  [('x', 'f4'), ('y', 'f4'), ('z', 'i4')]),
   ('normal',  [('x', 'f4'), ('y', 'f4'), ('z', 'i4')]),
   ('color',   [('r', 'f4'), ('g', 'f4'), ('b', 'i4'), ('a', 
'f4')])]
print 'level 0:'
print dtype_reduce(dtype,level=0)





On Dec 27, 2012, at 9:11 , Nicolas Rougier wrote:

 
 Yep, I'm trying to construct dtype2 programmatically and was hoping for some 
 function giving me a canonical expression of the dtype. I've started 
 playing with fields but it's just a bit harder than I thought (lots of 
 different cases and recursion).
 
 Thanks for the answer.
 
 
 Nicolas
 
 On Dec 27, 2012, at 1:32 , Nathaniel Smith wrote:
 
 On Wed, Dec 26, 2012 at 8:09 PM, Nicolas Rougier
 nicolas.roug...@inria.fr wrote:
 
 
 Hi all,
 
 
 I'm looking for a way to reduce dtype1 into dtype2 (when it is possible 
 of course).
 Is there some easy way to do that by any chance ?
 
 
 dtype1 = np.dtype( [ ('vertex',  [('x', 'f4'),
 ('y', 'f4'),
 ('z', 'f4')]),
   ('normal',  [('x', 'f4'),
('y', 'f4'),
('z', 'f4')]),
   ('color',   [('r', 'f4'),
('g', 'f4'),
('b', 'f4'),
('a', 'f4')]) ] )
 
 dtype2 = np.dtype( [ ('vertex',  'f4', 3),
('normal',  'f4', 3),
('color',   'f4', 4)] )
 
 
 If you have an array whose dtype is dtype1, and you want to convert it
 into an array with dtype2, then you just do
 my_dtype2_array = my_dtype1_array.view(dtype2)
 
 If you have dtype1 and you want to programmaticaly construct dtype2,
 then that's a little more fiddly and depends on what exactly you're
 trying to do, but start by poking around with dtype1.names and
 dtype1.fields, which contain information on how dtype1 is put together
 in the form of regular python structures.
 
 -n

Re: [Numpy-discussion] Manipulate neighboring points in 2D array

2012-12-30 Thread Nicolas Rougier


You might want to have a look at :

http://code.google.com/p/glumpy/source/browse/demos/gray-scott.py

which implements a Gray-Scott reaction-diffusion system.

The 'convolution_matrix(src, dst, kernel, toric)' function builds a sparse matrix 
such that multiplying an array by this matrix results in the convolution. This 
is very fast if your kernel is small (like for cellular automata) and if you 
intend to repeat the convolution several times. Note that you only need to 
build the matrix once.

Example:

>>> S = np.ones((3,3))
>>> K = np.ones((3,3))
>>> M = convolution_matrix(S, S, K, True)
>>> print (M*S.ravel()).reshape(S.shape)
[[ 9.  9.  9.]
 [ 9.  9.  9.]
 [ 9.  9.  9.]]
>>> M = convolution_matrix(S, S, K, False)
>>> print (M*S.ravel()).reshape(S.shape)
[[ 4.  6.  4.]
 [ 6.  9.  6.]
 [ 4.  6.  4.]]

the 'dst' parameter won't be useful in your case so you have to set it to 'src'.
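The glumpy source linked above has the real implementation; what follows is my own toy sketch of the idea (not the glumpy API) using scipy.sparse, reproducing the toric and non-toric outputs shown above:

```python
import numpy as np
from scipy import sparse

def conv_matrix(shape, kernel, toric=True):
    """Sparse M such that (M @ A.ravel()).reshape(shape) convolves A with kernel."""
    h, w = shape
    kh, kw = kernel.shape
    rows, cols, vals = [], [], []
    for i in range(h):
        for j in range(w):
            r = i * w + j
            for di in range(kh):
                for dj in range(kw):
                    ii, jj = i + di - kh // 2, j + dj - kw // 2
                    if toric:                       # wrap around the edges
                        ii, jj = ii % h, jj % w
                    elif not (0 <= ii < h and 0 <= jj < w):
                        continue                    # drop out-of-bounds taps
                    rows.append(r)
                    cols.append(ii * w + jj)
                    vals.append(kernel[di, dj])
    return sparse.coo_matrix((vals, (rows, cols)), shape=(h * w, h * w)).tocsr()

S = np.ones((3, 3))
M = conv_matrix(S.shape, np.ones((3, 3)), toric=True)
# (M @ S.ravel()).reshape(3, 3) gives all 9s, matching the example above
```

Building M is O(h*w*kh*kw) once; each subsequent convolution is a single sparse matrix-vector product, which is the point of the approach for repeated updates.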




Nicolas



On Dec 30, 2012, at 12:21 , deb wrote:

 Thanks Zach for your interest
 
 I was thinking about ndimage.generic_filter when I wrote about generic filter.
 For generic_filter I used trivial function that returns .sum() but I can't 
 seem to make the code any faster than it is.
 
 This is the code: 
 http://code.activestate.com/recipes/578390-snowflake-simulation-using-reiter-cellular-automat/
 As commenter suggested I thought to try and make it in numpy
 
 Interestingly, the first thing I tried before turning to numpy was to change 
 the range() loops to xrange(), as xrange is considered faster and more 
 efficient, but the result was that the code ran twice as slow.
 
 Anyway I gave up, and concluded that my numpy skills are far below what I 
 expected :D
 
 
 It's possible that some generic filter operations can be cast in
 terms of pure-numpy operations, or composed out of existing filters
 available in scipy.ndimage. If you can describe the filter operation
 you wish to perform, perhaps someone can make some suggestions.
 
 Alternately, scipy.ndimage.generic_filter can take an arbitrary
 python function. Though it's not really fast...
 


Re: [Numpy-discussion] dtype reduction

2012-12-27 Thread Nicolas Rougier

Yep, I'm trying to construct dtype2 programmatically and was hoping for some 
function giving me a canonical expression of the dtype. I've started playing 
with fields but it's just a bit harder than I thought (lots of different cases 
and recursion).

Thanks for the answer.


Nicolas

On Dec 27, 2012, at 1:32 , Nathaniel Smith wrote:

 On Wed, Dec 26, 2012 at 8:09 PM, Nicolas Rougier
 nicolas.roug...@inria.fr wrote:
 
 
 Hi all,
 
 
 I'm looking for a way to reduce dtype1 into dtype2 (when it is possible of 
 course).
 Is there some easy way to do that by any chance ?
 
 
 dtype1 = np.dtype( [ ('vertex',  [('x', 'f4'),
  ('y', 'f4'),
  ('z', 'f4')]),
('normal',  [('x', 'f4'),
 ('y', 'f4'),
 ('z', 'f4')]),
('color',   [('r', 'f4'),
 ('g', 'f4'),
 ('b', 'f4'),
 ('a', 'f4')]) ] )
 
 dtype2 = np.dtype( [ ('vertex',  'f4', 3),
 ('normal',  'f4', 3),
 ('color',   'f4', 4)] )
 
 
 If you have an array whose dtype is dtype1, and you want to convert it
 into an array with dtype2, then you just do
  my_dtype2_array = my_dtype1_array.view(dtype2)
 
 If you have dtype1 and you want to programmaticaly construct dtype2,
 then that's a little more fiddly and depends on what exactly you're
 trying to do, but start by poking around with dtype1.names and
 dtype1.fields, which contain information on how dtype1 is put together
 in the form of regular python structures.
 
 -n


[Numpy-discussion] dtype reduction

2012-12-26 Thread Nicolas Rougier


Hi all,


I'm looking for a way to reduce dtype1 into dtype2 (when it is possible of 
course).
Is there some easy way to do that by any chance ?


dtype1 = np.dtype( [ ('vertex',  [('x', 'f4'),
  ('y', 'f4'),
  ('z', 'f4')]),
('normal',  [('x', 'f4'),
 ('y', 'f4'),
 ('z', 'f4')]),
('color',   [('r', 'f4'),
 ('g', 'f4'),
 ('b', 'f4'),
 ('a', 'f4')]) ] )

dtype2 = np.dtype( [ ('vertex',  'f4', 3),
 ('normal',  'f4', 3),
 ('color',   'f4', 4)] )


Nicolas
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [numpy] ENH: Initial implementation of a 'neighbor' calculation (#303)

2012-10-12 Thread Nicolas Rougier

Sorry, I'm away from the lab and did not have a chance to test it yet.
I will do so next week.


Nicolas


On Oct 11, 2012, at 15:48 , Nathaniel Smith wrote:

 On Thu, Oct 11, 2012 at 10:50 AM, Nicolas Rougier
 nicolas.roug...@inria.fr wrote:
 I missed the original post but I personally find this addition especially 
 useful for my work in computational neuroscience.
 
 I did something vaguely similar in a small framework (http://dana.loria.fr/, 
 you can look more specifically at http://dana.loria.fr/doc/connection.html 
 for details). Examples are available from: http://dana.loria.fr/examples.html
 
 The actual computation can be made in several ways depending on the 
 properties of the kernel but the idea is to compute an array K such that 
 given an array A and a kernel k, A*K holds the expected result. This 
  also works with sparse arrays, for example when the kernel is very small. I 
 suspect the PR will be quite efficient compared to what I did.
 
 Would the current PR be useful to you if merged as-is? A common
 pitfall with these sorts of contributions is that we realize only
 after merging it that there is some tiny detail of the API that makes
 it not-quite-usable for some people with related problems, so it'd be
 awesome if you could take a closer look.
 
 -n

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [numpy] ENH: Initial implementation of a 'neighbor' calculation (#303)

2012-10-11 Thread Nicolas Rougier


I missed the original post but I personally find this addition especially 
useful for my work in computational neuroscience.

I did something vaguely similar in a small framework (http://dana.loria.fr/, 
you can look more specifically at http://dana.loria.fr/doc/connection.html for 
details). Examples are available from: http://dana.loria.fr/examples.html

The actual computation can be made in several ways depending on the properties 
of the kernel but the idea is to compute an array K such that given an array 
A and a kernel k, A*K holds the expected result. This also works with sparse 
arrays, for example when the kernel is very small. I suspect the PR will be quite 
efficient compared to what I did.


Nicolas


On Oct 10, 2012, at 18:55 , Cera, Tim wrote:

 On Wed, Oct 10, 2012 at 1:58 AM, Travis E. Oliphant 
 notificati...@github.com wrote:
 I'm not sure what to make of no comments on this PR. This seems like a useful 
 addition. @timcera are you still interested in having this PR merged?
 
 
 Yes.  
 
 I mentioned in PR comments that the lack of discussion is because my code 
 engenders speechless awe in anyone who looks at it.  :-)   Of course 
 speechless awe can come from two different reasons!  Hopefully it is because 
 my code is so awesome.
 
 Seriously, I really wanted some input, especially after I found #31.
  
 
 On Wed, Oct 10, 2012 at 7:24 AM, Eric Moore notificati...@github.com wrote:
 This seems to be trying to solve a very similar problem to #31
 
 Internally I implemented something like a rolling window, but I don't return 
 the windows.  Instead the contents of the windows are used for the calculation of 
 each window's 'central' cell in the results array.
 
 After seeing the rolling window function I thought it might be nice to bring 
 that out into a callable function, so that similar functionality would be 
 available.  That particular function isn't useful to me directly, but perhaps 
 others? 
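For readers of the archive: the rolling-window helper being discussed is commonly sketched in 1-D with stride tricks, along these lines (a sketch, not Tim's actual implementation; assumes a contiguous input):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(a, window):
    """Return a (len(a)-window+1, window) view of 1-D array `a`, no copy."""
    shape = (a.shape[0] - window + 1, window)
    strides = (a.strides[0], a.strides[0])  # step one element per row
    return as_strided(a, shape=shape, strides=strides)

x = np.arange(6)
w = rolling_window(x, 3)
# each row is one window; reductions give per-window statistics
means = w.mean(axis=1)
```

Since the result is a view, reductions over axis 1 compute per-window statistics without materializing the windows.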
 
 Kindest regards,
 Tim

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Matrices and arrays of vectors

2012-02-24 Thread Nicolas Rougier

You should use a (M,N,2) array to store your vectors:

import math
import numpy
import numpy.random

# Rotation angle
theta = math.pi/6.0

# Grid shape
M = 10
N = 10

# Establish the rotation matrix
c = math.cos(theta)
s = math.sin(theta)
rotation = numpy.array([[c,  s],
[-1.0*s, c]])

# Fudge some data to work with
data = numpy.random.uniform(-1.0, 1.0, (M,N,2))

numpy.dot(data,rotation)
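A quick sanity check on that recipe (a sketch; einsum spells out which axes dot contracts):

```python
import numpy as np

theta = np.pi / 6.0
c, s = np.cos(theta), np.sin(theta)
rotation = np.array([[c, s],
                     [-s, c]])

data = np.random.uniform(-1.0, 1.0, (10, 10, 2))

# dot contracts the last axis of `data` with the first axis of
# `rotation`: every 2-vector v becomes v @ rotation (row-vector form).
rotated = np.dot(data, rotation)

# The same contraction, written out explicitly:
expected = np.einsum('mnj,jk->mnk', data, rotation)
assert np.allclose(rotated, expected)

# A rotation preserves vector lengths.
assert np.allclose(np.linalg.norm(rotated, axis=-1),
                   np.linalg.norm(data, axis=-1))
```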



Nicolas


On Feb 24, 2012, at 13:11 , Bob Dowling wrote:

 import math
 import numpy
 import numpy.random
 
 # Rotation angle
 theta = math.pi/6.0
 
 # Grid shape
 M = 10
 N = 10
 
 # Establish the rotation matrix
 c = math.cos(theta)
 s = math.sin(theta)
 rotation = numpy.matrix([[c, s], [-1.0*s, c]])
 
 # Fudge some data to work with
 data = numpy.random.uniform(-1.0, 1.0, 2*M*N).reshape(2,M,N)
 
 # THIS DOES NOT WORK.
 # BUT WHAT DOES?
 rotated_data = rotation*data

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [ANN] glumpy 0.2.0

2011-09-17 Thread Nicolas Rougier

Thanks.
I just uploaded it to pypi.

Nicolas

On Sep 16, 2011, at 22:21 , Samuel John wrote:

 Hi Nicolas,
 
 that looks great. 
 Could you make this available such that `pip install glumpy` would work?
 
 cheers,
 Samuel
 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] [ANN] glumpy 0.2.0

2011-09-16 Thread Nicolas Rougier

Hi folks,

I am pleased to announce a new release of glumpy, a small python library for 
the (very) fast visualization of numpy arrays (mainly two-dimensional), designed 
with efficiency in mind. If you want to draw nice figures for inclusion in a 
scientific article, you’d better use matplotlib, but if you want to have a sense 
of what’s going on in your simulation while it is running, then maybe glumpy can 
help you.


Download and screenshots at: http://code.google.com/p/glumpy/

Nicolas
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recommndations for an easy GUI

2011-06-28 Thread Nicolas Rougier


Have a look at glumpy: http://code.google.com/p/glumpy/
It's quite simple and very fast for images (it's based on OpenGL/shaders).

Nicolas 


On Jun 28, 2011, at 6:38 AM, Nadav Horesh wrote:

 I have an application which generates and displays RGB images at a rate of 
 several frames per second (5-15). Currently I use Tkinter+PIL, but I have a 
 problem that it slows down the rate significantly. I am looking for a fast 
 and easy alternative.
  
 Platform: Linux
 I prefer tools that would work also with python3
  
 I looked at the following:
 1. matplotlib's imshow: Easy but too slow.
 2. pyqwt: Easy, fast, does not support rgb images (only grayscale)
 3. pygame: Nice but lacking widgets like buttons etc.
 4. PyQT: Can do, I would like something simpler (I'll rather focus on 
 computations, not GUI)
  
Nadav.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Viewer for 2D Numpy arrays (GUI)

2010-09-17 Thread Nicolas Rougier


Maybe glumpy may be of some help:
http://code.google.com/p/glumpy/


Nicolas


On Fri, 2010-09-17 at 09:03 +0200, Massimo Di Stefano wrote:
 Hi,
 
 
  have you already tried Spyderlib:
 
 
 http://code.google.com/p/spyderlib/
 
 
  a matlab-like environment based on pyqt;
  you can store objects like arrays in a QtTable
  and modify/save them.
 
 
 Ciao,
 
 
 Massimo.
 
 Il giorno 17/set/2010, alle ore 08.53, Mayank P Jain ha scritto:
 
   I thought about these options but what I need is an excel-like
   interface that displays the values for each cell and lets one modify
   and save the files. 
   
   This would be a convenient way of saving large files in less space and,
   at the same time, seeing them, and it would remove those additional steps
   of writing them out and reading them in each time my program runs.
  
  Regards
  Mayank P Jain
  
  V R TechNiche
  Transportation Modeler/Planner
  Phone: +91 901 356 0583
  
  
  
  2010/9/17 Nadav Horesh nad...@visionsense.com
  View 2D arrays:
   Most convenient: Matplotlib (imshow)
   As surface plot: Mayavi (Matplotlib has surface plots, but
  it is slow for large arrays)
  
  Modify files:
   I think the IDE spyder could help (and you can use
  mayavi/matplotlib within)
  
   Nadav
  
  
   -Original message-
   From: numpy-discussion-boun...@scipy.org on behalf of Mayank P Jain
   Sent: Fri 17-September-10 08:16
   To: numpy-discussion
   Subject: [Numpy-discussion] Viewer for 2D Numpy arrays (GUI)
  
  
  Currently I am exporting them to csv files, but I wonder
  if there is a
  viewer that can be used with native numpy array files to
  view and preferably
  modify the 2D arrays.
  Any help would be appreciated.
  
  
  Regards
  Mayank P Jain
  
  V R TechNiche
  Transportation Modeler/Planner
  
  Phone: +91 901 356 0583
  
  
  
  
  
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Help on vectorization of mesh normals computation

2010-06-17 Thread Nicolas Rougier

Hello,

I'm trying to find a way to compute the normals of a mesh (vertices + indices) 
using only vectorized computations and I wonder if anyone has already done that. 
Here is my code so far:

# Mesh generation + indices for triangles
n = 24
vertices = numpy.zeros(((n*n),3), dtype=numpy.float32)
normals  = numpy.zeros(((n*n),3), dtype=numpy.float32)
indices  = numpy.zeros((2*(n-1)*(n-1),3), dtype=numpy.int)
for xi in range(n):
for yi in range(n):
i = yi*n + xi
vertices[i] = xi/float(n-1), yi/float(n-1), 0
j=0
for yi in range(n-1):
for xi in range(n-1):
i = yi*n + xi
indices[j+0] = i,  i+1,i+n
indices[j+1] = i+n,i+1,i+n+1
j += 2

# Normal computations
normals[:] = 0,0,0
P1 = vertices[indices[:,0]]
P2 = vertices[indices[:,1]]
P3 = vertices[indices[:,2]]
N = numpy.cross(P2-P3,P3-P1)
for i in range(indices.shape[0]):
normals[indices[i]] += N[i]


I would like to get rid of the last 'for' loop but did not find the proper way 
to do it. Anyone got an idea ?
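For the record, the accumulation loop at the end can be vectorized with np.add.at, numpy's unbuffered indexed accumulation (note: np.add.at only appeared in numpy 1.8, after this thread). A sketch with the mesh construction compacted but the same vertex/index layout:

```python
import numpy as np

n = 24
# n*n grid vertices in the unit square, z = 0 (index i = yi*n + xi)
g = np.arange(n) / (n - 1.0)
y, x = np.meshgrid(g, g, indexing='ij')
vertices = np.dstack([x, y, np.zeros_like(x)]).reshape(-1, 3).astype(np.float32)

# two triangles per grid cell
i = (np.arange(n - 1)[:, None] * n + np.arange(n - 1)).ravel()
indices = np.vstack([np.column_stack([i, i + 1, i + n]),
                     np.column_stack([i + n, i + 1, i + n + 1])])

P1, P2, P3 = (vertices[indices[:, k]] for k in range(3))
N = np.cross(P2 - P3, P3 - P1)

normals = np.zeros_like(vertices)
# Unbuffered accumulation: repeated vertex indices each contribute,
# unlike `normals[indices] += N`, which silently drops duplicates.
np.add.at(normals, indices.ravel(), np.repeat(N, 3, axis=0))
```

Each face normal N[t] is repeated three times so it is added to all three vertices of triangle t, exactly like the python loop.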


Nicolas
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-15 Thread Nicolas Rougier


Thanks and in fact, I already wasted quite some time on it, and your last version 
will help me a lot. Unfortunately, I'm not a specialist in lattice Boltzmann 
methods at all, so I'm not able to answer your questions (my initial idea was to 
convert the matlab script to have a running example as a starting 
point). Also, I found today some computers in the lab to test the matlab 
version and it seems to run as advertised on the site. I now need to run both 
versions side by side and check where the differences are. I will post the 
sources as soon as I get it to run properly.

Thanks again.


Nicolas





On Mar 15, 2010, at 22:32 , Friedrich Romstedt wrote:

 Ok, so I send yet another version.  Maybe Bruce is right, but I didn't
 care, because we have fret enough.  Now it not only computes
 something, but also displays something :-(
 
 Nicolas, maybe you can now waste some of your time with it?  I was
 curious, both to understand and to get it working, but I failed.  I
 doubt especially the section Microscopic boundary conditions,
 because commenting it out makes things, well, worse.  Leaving the
 other sections out is also not recommendable, but at least not that
 destructive.  I do not understand why in the microscopic boundary
 section only directions 6 and 7 come into play and not 3.  Also I do
 not understand why they occur in *all* output direction expressions.
 
 Furthermore, the fluid, albeit behaving also at the inlet quite
 strange, bounces back the outlet ...
 
 I disabled the obstacle so far, and plotted the 4 direction
 (downwards), and the resulting ux and uy flows.
 
 I give up so far.
 
 Friedrich
 lbmethod.10-03-14.Friedrich.py

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-14 Thread Nicolas Rougier

Thanks a lot for the translation.

I think the opp relates to the opp array declared at the top, and the 
circshift can be made using numpy roll. I modified your translation to include 
them. 
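For reference, matlab's circshift(A, [dy, dx]) maps onto numpy.roll applied once per axis (a minimal check):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

# matlab: circshift(a, [1, 0]) -- shift rows down by one, wrapping around
b = np.roll(a, 1, axis=0)
assert (b[0] == a[-1]).all()

# matlab: circshift(a, [1, 2]) -- roll each axis in turn
c = np.roll(np.roll(a, 1, axis=0), 2, axis=1)
assert c[0, 0] == a[-1, -2]
```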

The results should be something like 
http://www.lbmethod.org/openlb/gif/karman.avi (I think) but unfortunately, it 
does not seem to do anything at the moment; I need to investigate further where 
the problem is.

Nicolas



New version:


'''
Channel flow past a cylindrical obstacle, using a LB method
Copyright (C) 2006 Jonas Latt
Address: Rue General Dufour 24,  1211 Geneva 4, Switzerland
E-mail: jonas.l...@cui.unige.ch
'''
import numpy
import matplotlib
matplotlib.use('MacOSX')
import matplotlib.pyplot as plt

# General flow constants
lx = 250
ly = 51
obst_x = lx/5.+1  # position of the cylinder; (exact
obst_y = ly/2.+1  # y-symmetry is avoided)
obst_r = ly/10.+1 # radius of the cylinder
uMax   = 0.02 # maximum velocity of Poiseuille inflow
Re = 100 # Reynolds number
nu = uMax * 2.*obst_r / Re# kinematic viscosity
omega  = 1. / (3*nu+1./2.)   # relaxation parameter
maxT   = 4000# total number of iterations
tPlot  = 5   # cycles

# D2Q9 Lattice constants
t  = numpy.array([4/9., 1/9.,1/9.,1/9.,1/9., 1/36.,1/36.,1/36.,1/36.])
cx = numpy.array([  0,   1,  0, -1,  0,1,  -1,  -1,   1])
cy = numpy.array([  0,   0,  1,  0, -1,1,   1,  -1,  -1])
opp = numpy.array([ 0,   3,  4,  1,  2,7,   8,   5,   6])
col = numpy.arange(1, ly - 1)

y,x = numpy.meshgrid(numpy.arange(ly),numpy.arange(lx))
obst = (x-obst_x)**2 + (y-obst_y)**2 <= obst_r**2
obst[:,0] = obst[:,ly-1] = 1
bbRegion = numpy.nonzero(obst)

# Initial condition: (rho=0, u=0) == fIn(i) = t(i)
fIn = t[:, numpy.newaxis, numpy.newaxis].\
repeat(lx, axis = 1).\
repeat(ly, axis = 2)

# Main loop (time cycles)
for cycle in range(maxT):

   # Macroscopic variables
   rho = fIn.sum(axis = 0)
   ux = (cx[:,numpy.newaxis,numpy.newaxis] * fIn).sum(axis = 0) / rho
   uy = (cy[:,numpy.newaxis,numpy.newaxis] * fIn).sum(axis = 0) / rho

   # Macroscopic (Dirichlet) boundary condition
   L = ly-2.0
   y = col-1.5
   ux[:, 1, col] = 4 * uMax / (L ** 2) * (y * L - y ** 2)
   uy[:, 1, col] = 0
   rho[0, col] = 1 / (1 - ux[0, col]) * \
   (fIn[[1, 3, 5]][:, 0][:, col].sum(axis = 0) + \
2 * fIn[[4, 7, 8]][:, 1][:, col].sum(axis = 0))
   rho[lx - 1, col] = rho[lx - 2, col]
   uy[lx - 1, col] = 0
   ux[lx - 1, col] = ux[:, lx - 2, col]

   fEq = numpy.zeros((9, lx, ly))
   fOut = numpy.zeros((9, lx, ly))
   for i in xrange(0, 9):
   cu = 3 * (cx[i] * ux + cy[i] * uy)
   fEq[i] = rho * t[i] * (1 + cu + 0.5 * cu ** 2 - \
   1.5 * (ux ** 2 + uy ** 2))
    fOut[i] = fIn[i] - omega * (fIn[i] - fEq[i])

   # Microscopic boundary conditions
   for i in xrange(0, 9):
  # Left boundary:
  fOut[i, 1, col] = fEq[i,0,col] + 18 * t[i] * cx[i] * cy[i] * \
  (fIn[7,0,col] - fIn[6,0,col] - fEq[7,0,col] + fEq[6,0,col])
  fOut[i,lx-1,col] = fEq[i,lx-1,col] + 18 * t[i] * cx[i] * cy[i] * \
  (fIn[5,lx-1,col] - fIn[8,lx-1,col] - fEq[5,lx-1,col] + 
fEq[8,lx-1,col])
  fOut[i,bbRegion] = fIn[opp[i],bbRegion]

   # Streaming step
   for i in xrange(0,9):
  fIn = numpy.roll(fIn, cx[i], 1)
  fIn = numpy.roll(fIn, cy[i], 2)
   
   if not cycle%tPlot:
  u = numpy.sqrt(ux**2+uy**2)
  #u[bbRegion] = numpy.nan
  print u.min(), u.max()
  #plt.imshow(u)
  #plt.show()




On Mar 13, 2010, at 16:59 , Friedrich Romstedt wrote:

 2010/3/13 Nicolas Rougier nicolas.roug...@loria.fr:
 I'm trying to translate a small matlab program for the simulation in a 2D 
 flow in a channel past a cylinder and since I do not have matlab access, I 
 would like to know if someone can help me, especially on array indexing. The 
 matlab source code is available at: 
 http://www.lbmethod.org/openlb/lb.examples.html and below is what I've done 
 so far in my translation effort.
 
 In the matlab code, there is a ux array of shape (1,lx,ly) and I do not 
 understand syntax: ux(:,1,col) with col = [2:(ly-1)]. If someone knows, 
 that would help me a lot...
 
 It means that you select all in the 0-axis all indices, in the 1-axis
 the index 0 (matlab: index 1), and in the 2-axis the indices given by
 the list {col}.  {col} is in our case an ndarray of .ndim = 1.
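In numpy terms, that matlab selection translates to the following (shapes chosen to match the script's ux; col shifted to zero-based indexing):

```python
import numpy as np

lx, ly = 250, 51
ux = np.zeros((1, lx, ly))
col = np.arange(1, ly - 1)   # matlab col = 2:(ly-1), shifted to 0-based

# matlab ux(:,1,col): everything on axis 0, index 0 on axis 1,
# and the indices in col on axis 2
sub = ux[:, 0, col]
assert sub.shape == (1, ly - 2)
```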
 
 I attach a modified version of your script which is running, computing
 *something*.  If you could provide information about matlab functions
 opp() and circshift() that would be helpful.  I marked sections I
 changed with CHANGED, todos with TODO and lonely notes with NOTE and
 so on.
 
 Friedrich
 lbmethod.py

[Numpy-discussion] Some help on matlab to numpy translation

2010-03-13 Thread Nicolas Rougier

Hello,

I'm trying to translate a small matlab program for the simulation in a 2D flow 
in a channel past a cylinder and since I do not have matlab access, I would 
like to know if someone can help me, especially on array indexing. The matlab 
source code is available at: http://www.lbmethod.org/openlb/lb.examples.html 
and below is what I've done so far in my translation effort. 

In the matlab code, there is a ux array of shape (1,lx,ly) and I do not 
understand syntax: ux(:,1,col) with col = [2:(ly-1)]. If someone knows, 
that would help me a lot...


Nicolas



'''
Channel flow past a cylindrical obstacle, using a LB method
Copyright (C) 2006 Jonas Latt
Address: Rue General Dufour 24,  1211 Geneva 4, Switzerland 
E-mail: jonas.l...@cui.unige.ch
'''
import numpy

# GENERAL FLOW CONSTANTS
lx = 250
ly = 51
obst_x = lx/5.+1  # position of the cylinder; (exact
obst_y = ly/2.+1  # y-symmetry is avoided)
obst_r = ly/10.+1 # radius of the cylinder
uMax   = 0.02 # maximum velocity of Poiseuille inflow
Re = 100 # Reynolds number
nu = uMax * 2.*obst_r / Re# kinematic viscosity
omega  = 1. / (3*nu+1./2.)   # relaxation parameter
maxT   = 4000# total number of iterations
tPlot  = 5   # cycles

# D2Q9 LATTICE CONSTANTS
# matlab: t  = [4/9, 1/9,1/9,1/9,1/9, 1/36,1/36,1/36,1/36];
# matlab: cx = [  0,   1,  0, -1,  0,1,  -1,  -1,   1];
# matlab: cy = [  0,   0,  1,  0, -1,1,   1,  -1,  -1];
# matlab: opp = [ 1,   4,  5,  2,  3,8,   9,   6,   7];
# matlab: col = [2:(ly-1)];
t  = numpy.array([4/9., 1/9.,1/9.,1/9.,1/9., 1/36.,1/36.,1/36.,1/36.])
cx = numpy.array([  0,   1,  0, -1,  0,1,  -1,  -1,   1])
cy = numpy.array([  0,   0,  1,  0, -1,1,   1,  -1,  -1])
opp = numpy.array([ 1,   4,  5,  2,  3,8,   9,   6,   7])
col = numpy.arange(2,(ly-1))

# matlab: [y,x] = meshgrid(1:ly,1:lx);
# matlab: obst = (x-obst_x).^2 + (y-obst_y).^2 <= obst_r.^2;
# matlab: obst(:,[1,ly]) = 1;
# matlab: bbRegion = find(obst);
y,x = numpy.meshgrid(numpy.arange(ly),numpy.arange(lx))
obst = (x-obst_x)**2 + (y-obst_y)**2 <= obst_r**2
obst[:,0] = obst[:,ly-1] = 1
bbRegion = numpy.nonzero(obst)


# INITIAL CONDITION: (rho=0, u=0) == fIn(i) = t(i)
# matlab: fIn = reshape( t' * ones(1,lx*ly), 9, lx, ly);
fIn = numpy.ones((lx,ly,9))
fIn[:,:] = t

# MAIN LOOP (TIME CYCLES)
# matlab: for cycle = 1:maxT
for cycle in range(maxT):

# MACROSCOPIC VARIABLES
# matlab: rho = sum(fIn);
# matlab: ux  = reshape ( ...
# matlab: (cx * reshape(fIn,9,lx*ly)), 1,lx,ly) ./rho;
# matlab: uy  = reshape ( ...
# matlab: (cy * reshape(fIn,9,lx*ly)), 1,lx,ly) ./rho;
rho = fIn.sum(-1)
ux  = (cx*fIn).sum(-1)/rho
uy  = (cy*fIn).sum(-1)/rho

# MACROSCOPIC (DIRICHLET) BOUNDARY CONDITIONS
# Inlet: Poiseuille profile
# matlab: L = ly-2; y = col-1.5;
# matlab: ux(:,1,col) = 4 * uMax / (L*L) * (y.*L-y.*y);
# matlab: uy(:,1,col) = 0;
# matlab: rho(:,1,col) = 1 ./ (1-ux(:,1,col)) .* ( ...
# matlab: sum(fIn([1,3,5],1,col)) + ...
# matlab: 2*sum(fIn([4,7,8],1,col)) );
L = ly-2.0
y = col-1.5
# Is that right ?
ux[0,1:-1] = 4 * uMax / (L*L) * (y.*L-y.*y)
# Is that right ?
uy[0,1:-1] = 0
rho[0,1:-1] = 1./(1-ux[1,1:-1])*(sum(fIn[   ([1,3,5],1,col)) + 
2*sum(fIn([4,7,8],1,col)))

# Outlet: Zero gradient on rho/ux
# matlab: rho(:,lx,col) = rho(:,lx-1,col);
# matlab: uy(:,lx,col)  = 0;
# matlab: ux(:,lx,col)  = ux(:,lx-1,col);

# % Outlet: Zero gradient on rho/ux
# rho(:,lx,col) = rho(:,lx-1,col);
# uy(:,lx,col)  = 0;
# ux(:,lx,col)  = ux(:,lx-1,col);


# % COLLISION STEP
# for i=1:9
#cu = 3*(cx(i)*ux+cy(i)*uy);
#fEq(i,:,:)  = rho .* t(i) .* ( 1 + cu + 1/2*(cu.*cu) - 
3/2*(ux.^2+uy.^2) );
#fOut(i,:,:) = fIn(i,:,:) - omega .* (fIn(i,:,:)-fEq(i,:,:));
# end

# % MICROSCOPIC BOUNDARY CONDITIONS
# for i=1:9
#  % Left boundary
#  fOut(i,1,col) = fEq(i,1,col) + 18*t(i)*cx(i)*cy(i)* ( fIn(8,1,col) - 
 fIn(7,1,col)-fEq(8,1,col)+fEq(7,1,col) );
#  % Right boundary
#  fOut(i,lx,col) = fEq(i,lx,col) + 18*t(i)*cx(i)*cy(i)* ( 
fIn(6,lx,col) - fIn(9,lx,col)-fEq(6,lx,col)+fEq(9,lx,col) );
#  % Bounce back region
#  fOut(i,bbRegion) = fIn(opp(i),bbRegion);

# % STREAMING STEP
# for i=1:9
#fIn(i,:,:) = circshift(fOut(i,:,:), [0,cx(i),cy(i)]);

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-13 Thread Nicolas Rougier

Thanks.
I agree that the use of ':' is a bit weird. 

Nicolas


On Mar 13, 2010, at 11:45 , Fabrice Silva wrote:

 Le samedi 13 mars 2010 à 10:20 +0100, Nicolas Rougier a écrit :
 Hello,
 I'm trying to translate a small matlab program for the simulation in a
 2D flow in a channel past a cylinder and since I do not have matlab
 access, I would like to know if someone can help me, especially on
 array indexing. The matlab source code is available at:
 http://www.lbmethod.org/openlb/lb.examples.html and below is what I've
 done so far in my translation effort. 
 
 In the matlab code, there is a ux array of shape (1,lx,ly) and I do
 not understand syntax: ux(:,1,col) with col = [2:(ly-1)]. If
 someone knows, that would help me a lot...
 
 
  As ux's shape is (1,lx,ly), ux(:,1,col) is equal to ux(1,1,col), which
  is a vector with the elements [ux(1,1,2), ... ux(1,1,ly-1)].
  Using : just after the reshape seems a little bit silly...
 -- 
 Fabrice Silva
 LMA UPR CNRS 7051
 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] glumpy, fast opengl visualization

2010-01-25 Thread Nicolas Rougier


Hello,

This is an update about glumpy, a fast OpenGL-based numpy visualization tool.
I modified the code such that the only dependencies are PyOpenGL and
IPython (for interactive sessions). You will also need matplotlib and
scipy for some demos.

Sources: hg clone http://glumpy.googlecode.com/hg/ glumpy
No installation required, you can run all demos inplace.

Homepage: http://code.google.com/p/glumpy/




Nicolas


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] nan_to_num and bool arrays

2009-12-11 Thread Nicolas Rougier

Hello,

Using both numpy 1.3.0 and 1.4.0rc1 I got the following exception using
nan_to_num on a bool array, is that the expected behavior ?


>>> import numpy
>>> Z = numpy.zeros((3,3),dtype=bool)
>>> numpy.nan_to_num(Z)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line
374, in nan_to_num
    maxf, minf = _getmaxmin(y.dtype.type)
  File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line
307, in _getmaxmin
    f = getlimits.finfo(t)
  File "/usr/lib/python2.6/dist-packages/numpy/core/getlimits.py", line
103, in __new__
    raise ValueError, "data type %r not inexact" % (dtype)
ValueError: data type <type 'numpy.bool_'> not inexact



Nicolas

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nan_to_num and bool arrays

2009-12-11 Thread Nicolas Rougier

I've created a ticket (#1327).

Nicolas

On Dec 11, 2009, at 17:21 , Keith Goodman wrote:

 On Fri, Dec 11, 2009 at 12:50 AM, Nicolas Rougier
 nicolas.roug...@loria.fr wrote:
 
 Hello,
 
 Using both numpy 1.3.0 and 1.4.0rc1 I got the following exception using
 nan_to_num on a bool array, is that the expected behavior ?
 
 
 >>> import numpy
 >>> Z = numpy.zeros((3,3),dtype=bool)
 >>> numpy.nan_to_num(Z)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line
 374, in nan_to_num
     maxf, minf = _getmaxmin(y.dtype.type)
   File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line
 307, in _getmaxmin
     f = getlimits.finfo(t)
   File "/usr/lib/python2.6/dist-packages/numpy/core/getlimits.py", line
 103, in __new__
     raise ValueError, "data type %r not inexact" % (dtype)
 ValueError: data type <type 'numpy.bool_'> not inexact
 
 I guess a check for bool could be added at the top of nan_to_num. If
 the input x is a bool then nan_to_num would just return x unchanged.
 Or perhaps
 
 maxf, minf = _getmaxmin(y.dtype.type)
 
 could return False, True.
 
 Best bet is probably to file a ticket. And then pray.
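Keith's suggested guard can be sketched as a small wrapper (hypothetical helper name; newer numpy versions do a pass-through like this internally):

```python
import numpy as np

def nan_to_num_safe(x):
    """nan_to_num that passes non-inexact (bool/integer) arrays through."""
    x = np.asanyarray(x)
    if not np.issubdtype(x.dtype, np.inexact):
        return x               # bools/ints cannot hold NaN: nothing to do
    return np.nan_to_num(x)

Z = np.zeros((3, 3), dtype=bool)
out = nan_to_num_safe(Z)       # returned unchanged instead of raising
```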

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] glumpy: fast OpenGL numpy visualization + matplotlib integration

2009-09-28 Thread Nicolas Rougier

Hi all,

glumpy is a fast OpenGL visualization tool for numpy arrays coded on  
top of pyglet (http://www.pyglet.org/). The package contains many  
demos showing basic usage as well as integration with matplotlib. As a  
reference, the animation script available from matplotlib distribution  
runs at around 500 fps using glumpy instead of 30 fps on my machine.

Package/screenshots/explanations at: 
http://www.loria.fr/~rougier/coding/glumpy.html
(it does not require installation so you can run demos from within the  
glumpy directory).


Nicolas
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Matplotlib-users] glumpy: fast OpenGL numpy visualization + matplotlib integration

2009-09-28 Thread Nicolas Rougier



Well, I've started working on a pyglet backend but it is  
currently painfully slow, mainly because I do not know enough of the  
matplotlib internal machinery to really benefit from it. In the case  
of glumpy, the use of texture objects for representing 2d arrays is a  
real speed boost since interpolation/colormap/heightmap is done on the  
GPU.


Concerning matplotlib examples, the use of glumpy should be actually  
two lines of code:


from pylab import *
from glumpy import imshow, show

but I did not package it this way yet (that is easy however).

I guess the main question is whether people are interested in glumpy  
as a quick & dirty debug tool on top of matplotlib or whether  
they prefer a full-fledged and fast pyglet/OpenGL backend (which is  
really harder).


Nicolas



On 28 Sep, 2009, at 18:05 , Gökhan Sever wrote:




On Mon, Sep 28, 2009 at 9:06 AM, Nicolas Rougier nicolas.roug...@loria.fr 
 wrote:


Hi all,

glumpy is a fast OpenGL visualization tool for numpy arrays coded on
top of pyglet (http://www.pyglet.org/). The package contains many
demos showing basic usage as well as integration with matplotlib. As a
reference, the animation script available from matplotlib distribution
runs at around 500 fps using glumpy instead of 30 fps on my machine.

Package/screenshots/explanations at: 
http://www.loria.fr/~rougier/coding/glumpy.html
(it does not require installation so you can run demos from within the
glumpy directory).


Nicolas

Hi Nicolas,

 This is technically called an OpenGL backend, isn't it? It is nice that  
 it integrates with matplotlib; however, 300 lines of code is indeed  
 a lot of lines for an ordinary user. Do you think this could be  
 further integrated into matplotlib with a wrapper to simplify its  
 usage?



--
Gökhan
___
Matplotlib-users mailing list
matplotlib-us...@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/matplotlib-users


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Recipe: extract a sub-array using given shape, centered on given position

2009-08-19 Thread Nicolas Rougier

Hi,

I've coded a function to extract a contiguous sub-array from  
another one using a given shape and centered on a given position. I  
did not find an equivalent within numpy so I hope I did not miss it.  
The only interest of the function is to guarantee that the resulting  
sub-array will have the required shape. If some values are out of  
bounds, the result array is padded with a fill value. Hope it can be  
useful to someone.

Nicolas

Code:
-

import numpy

def extract(Z, shape, position, fill=numpy.NaN):
    '''Extract a sub-array from Z using given shape and centered on
    position. If some part of the sub-array is out of Z bounds, result
    is padded with fill value.

    **Parameters**
    `Z` : array_like
        Input array.

    `shape` : tuple
        Shape of the output array

    `position` : tuple
        Position within Z

    `fill` : scalar
        Fill value

    **Returns**
    `out` : array_like
        Z slice with given shape and center

    **Examples**

    >>> Z = numpy.arange(0,16).reshape((4,4))
    >>> extract(Z, shape=(3,3), position=(0,0))
    [[ NaN  NaN  NaN]
     [ NaN   0.   1.]
     [ NaN   4.   5.]]

    Schema:

        +---+
        | 0   0   0 | <= extract (Z, shape=(3,3), position=(0,0))
        |   +---+
        | 0 | 0   1 | 2   3 | <= Z
        |   |   |   |
        | 0 | 4   5 | 6   7 |
        +---|---+   |
        | 8   9  10  11 |
        |   |
        | 12 13  14  15 |
        +---+

    >>> Z = numpy.arange(0,16).reshape((4,4))
    >>> extract(Z, shape=(3,3), position=(3,3))
    [[ 10.  11.  NaN]
     [ 14.  15.  NaN]
     [ NaN  NaN  NaN]]

    Schema:

        +---+
        | 0   1   2   3 | <= Z
        |   |
        | 4   5   6   7 |
        |   +---+
        | 8   9 |10  11 | 0 | <= extract (Z, shape=(3,3), position=(3,3))
        |   |   |   |
        | 12 13 |14  15 | 0 |
        +---+   |
        | 0   0   0 |
        +---+
    '''
    #assert(len(position) == len(Z.shape))
    #if len(shape) < len(Z.shape):
    #    shape = shape + Z.shape[len(Z.shape)-len(shape):]

    R = numpy.ones(shape, dtype=Z.dtype)*fill
    P  = numpy.array(list(position)).astype(int)
    Rs = numpy.array(list(R.shape)).astype(int)
    Zs = numpy.array(list(Z.shape)).astype(int)

    R_start = numpy.zeros((len(shape),)).astype(int)
    R_stop  = numpy.array(list(shape)).astype(int)
    Z_start = (P-Rs//2)
    Z_stop  = (P+Rs//2)+Rs%2

    R_start = (R_start - numpy.minimum(Z_start,0)).tolist()
    Z_start = (numpy.maximum(Z_start,0)).tolist()
    R_stop = (R_stop - numpy.maximum(Z_stop-Zs,0)).tolist()
    Z_stop = (numpy.minimum(Z_stop,Zs)).tolist()

    r = [slice(start,stop) for start,stop in zip(R_start,R_stop)]
    z = [slice(start,stop) for start,stop in zip(Z_start,Z_stop)]

    R[r] = Z[z]

    return R

Z = numpy.arange(0,16).reshape((4,4))
print Z
print
print extract(Z, shape=(3,3), position=(0,0))
print
print extract(Z, shape=(3,3), position=(3,3))
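For reference, the same guarantee can be sketched much more compactly with `numpy.pad` (added in NumPy 1.7, after this recipe was posted). This is a hedged sketch, not the original author's code; `extract_pad` is a hypothetical name, and the input is cast to float so a NaN fill works for integer arrays too:

```python
import numpy as np

def extract_pad(Z, shape, position, fill=np.nan):
    # Pad Z on both sides of every axis by the half-extent of the
    # requested shape, then take a plain slice around the shifted position.
    pads = [(s // 2, s // 2 + s % 2) for s in shape]
    Zp = np.pad(Z.astype(float), pads, mode='constant', constant_values=fill)
    index = tuple(slice(p, p + s) for p, s in zip(position, shape))
    return Zp[index]

Z = np.arange(16).reshape(4, 4)
print(extract_pad(Z, (3, 3), (0, 0)))   # NaN border on the top/left, as above
```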



Re: [Numpy-discussion] Memory layout of record arrays

2009-07-31 Thread Nicolas Rougier


I've cooked up a very rudimentary implementation of what I would like to
have; however, I've got a small problem concerning array shape. The idea
is to have a parent array (a group) that can be instantiated like a
regular array and that builds internally all the relevant contiguous
child arrays. I would like those child arrays not to be reshapable (at
least not without telling their parent first). I naively overloaded
the shape property but got a lot of problems with infinite recursion. Is
there a way to do that?

Nicolas



I put the code below (around 100 lines):

import numpy as np

class array(np.ndarray):

    def __new__(subtype, shape=(1,1), dtype=np.float32, order=None,
                group=None):
        obj = np.ndarray.__new__(subtype, shape=shape, dtype=dtype,
                                 order=order)
        obj._group = group
        return obj

    def _get_group(self):
        return self._group or self
    def _set_group(self, group):
        if self.size == group.size:
            self.shape = group.shape
            self._group = group
        else:
            raise ValueError, \
                'shape mismatch: objects cannot be broadcast to a single shape'
    group = property(_get_group, _set_group,
                     doc='''Group to which this array belongs''')

    def _get_shape(self):
        return self.ctypes.shape
    def _set_shape(self, shape):
        if self.group == None:
            self.ctypes.shape = shape
        else:
            raise AttributeError, \
                '''Cannot reshape a child array ('group' is not None)'''
    #shape = property(_get_shape, _set_shape, doc='''c-types shape''')


class group(object):
    def __init__(self, shape, dtype=None, order=None):
        object.__setattr__(self, '_arrays', {})
        self._shape = shape
        self._dtype = dtype
        if len(dtype) == 0:
            self._dtype = np.dtype([('f0', dtype)])
        for i in range(len(self._dtype)):
            name, dtype = self._dtype[i]
            self._arrays[name] = array(shape=shape, dtype=dtype, order=order)

    def __getattr__(self, key):
        if key in self._arrays.keys():
            return self._arrays[key]
        else:
            return object.__getattribute__(self, key)
    def __setattr__(self, key, value):
        if key in self._arrays.keys():
            self._arrays[key][...] = value
        else:
            object.__setattr__(self, key, value)

    def __getitem__(self, key):
        return self._arrays[key]
    def __setitem__(self, key, value):
        self._arrays[key][...] = value
    def __len__(self):
        return len(self._arrays[self._arrays.keys()[0]])

    def _get_shape(self):
        return self._shape
    def _set_shape(self, shape):
        for key in self._arrays.keys():
            self._arrays[key].shape = shape
        self._shape = shape
    shape = property(_get_shape, _set_shape)

    def _get_dtype(self):
        return self._dtype
    def _set_dtype(self):
        raise AttributeError, \
            '''attribute 'dtype' of 'group' objects is not writable'''
    dtype = property(_get_dtype, _set_dtype)

    def _get_size(self):
        return self._arrays[self._arrays.keys()[0]].size
    def _set_size(self):
        raise AttributeError, \
            '''attribute 'size' of 'group' objects is not writable'''
    size = property(_get_size, _set_size)

    def __repr__(self):
        s = 'group(\n'
        for i in range(len(self._dtype)):
            name, dtype = self._dtype[i]
            t = "'%s': " % name
            a = repr(self._arrays[name]).replace('\n', '\n' + ' '*len(t))
            s += t + a + '\n'
        s += ')'
        return s

if __name__ == '__main__':

    G = group((3,3), dtype=[('r',np.float32),('g',np.int32),('b',np.bool)])
    G['r'] = G.g = G.b = 0
    print G
    print
    G.r.shape = (9,1)
    print G.r.shape
    print G.shape







On Thu, 2009-07-30 at 20:01 +0200, Nicolas Rougier wrote:
 
 Thanks for the quick answer. It makes sense.
 I will have to find some other way to do it then.
 
 Nicolas
 
 
 On 30 Jul, 2009, at 18:52 , David Cournapeau wrote:
 
  On Fri, Jul 31, 2009 at 12:53 AM, Nicolas
  Rougiernicolas.roug...@loria.fr wrote:
 
 
  Hello,
 
  I've been using record arrays to create arrays with different types
  and since I'm doing a lot of computation on each of the different
  fields, the default memory layout does not serve my computations.
  Ideally, I would like to have record arrays where each field is a
  contiguous block of memory.
 
  I don't think you can do it with record arrays: one of the fundamental
  design choice of numpy array layout is that the data pointer points to
  one block of N items, where each item is described by the dtype. To
  have contiguous layout for each member of the structured dtype, you
  need two arrays with the corresponding dtype.
 
  cheers,
 
  David

[Numpy-discussion] Memory layout of record arrays

2009-07-30 Thread Nicolas Rougier


Hello,

I've been using record arrays to create arrays with different types  
and since I'm doing a lot of computation on each of the different  
fields, the default memory layout does not serve my computations.  
Ideally, I would like to have record arrays where each field is a  
contiguous block of memory. I searched the dtype documentation but did  
not find anything about it. In the meantime,  I wrote a class that  
emulates this behavior, but it does not inherit from nd.array and I  
need inheritance.


To sum up, I would like to have:

>>> Z = np.zeros(3, dtype=[('x',np.float32), ('y',np.int)])

>>> print Z['x'].flags['C_CONTIGUOUS']
False   <- should be True

>>> print Z['y'].flags['C_CONTIGUOUS']
False   <- should be True

>>> Z.shape == Z['x'].shape == Z['y'].shape
True

Is there any obvious solution that I missed ?


Nicolas


Re: [Numpy-discussion] Memory layout of record arrays

2009-07-30 Thread Nicolas Rougier


Thanks for the quick answer. It makes sense.
I will have to find some other way to do it then.

Nicolas


On 30 Jul, 2009, at 18:52 , David Cournapeau wrote:

 On Fri, Jul 31, 2009 at 12:53 AM, Nicolas
 Rougiernicolas.roug...@loria.fr wrote:


 Hello,

 I've been using record arrays to create arrays with different types
 and since I'm doing a lot of computation on each of the different
 fields, the default memory layout does not serve my computations.
 Ideally, I would like to have record arrays where each field is a
 contiguous block of memory.

 I don't think you can do it with record arrays: one of the fundamental
 design choice of numpy array layout is that the data pointer points to
 one block of N items, where each item is described by the dtype. To
 have contiguous layout for each member of the structured dtype, you
 need two arrays with the corresponding dtype.

 cheers,

 David
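David's point can be checked directly: a field view of a structured array is strided over the packed records, while a separate plain array per field (a "structure of arrays") is contiguous. A small illustration, not from the original exchange:

```python
import numpy as np

# Interleaved layout: both fields share one block of packed records,
# so a single-field view steps over the other field's bytes.
Z = np.zeros(3, dtype=[('x', np.float32), ('y', np.int32)])
print(Z['x'].flags['C_CONTIGUOUS'])   # False: stride is 8 bytes, itemsize 4

# Structure-of-arrays: one plain array per field, each contiguous.
x = np.zeros(3, dtype=np.float32)
y = np.zeros(3, dtype=np.int32)
print(x.flags['C_CONTIGUOUS'])        # True
```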



Re: [Numpy-discussion] sparse matrix dot product

2009-05-29 Thread Nicolas Rougier

Hi, 

I tried to post results but the file is too big, anyway, here is the
benchmark program if you want to run it:

Nicolas

-

import time
import numpy
from scipy import sparse

def benchmark(xtype='numpy.array', xdensity=0.1,
              ytype='numpy.array', ydensity=1.0, n=1000):

    x = numpy.zeros((n,n), dtype=numpy.float64)
    xi = int(n*n*xdensity)
    x.reshape(n*n)[0:xi] = numpy.random.random((xi,))
    y = numpy.zeros((n,1), dtype=numpy.float64)
    yi = int(n*ydensity)
    y.reshape(n)[0:yi] = numpy.random.random((yi,))
    x = eval('%s(x)' % xtype)
    y = eval('%s(y)' % ytype)
    t0 = time.clock()
    if xtype == 'numpy.array' and ytype == 'numpy.array':
        for i in range(1000):
            z = numpy.dot(x,y)
    else:
        for i in range(1000):
            z = x*y
    tf = time.clock() - t0
    text = ''
    text += (xtype + ' '*20)[0:20]
    text += (ytype + ' '*20)[0:20]
    text += '%4dx%4d   %4dx%4d   %.2f   %.2f   %.2f' % \
            (n, n, n, 1, xdensity, ydensity, tf)
    return text


xtypes = ['numpy.array',
          'numpy.matrix',
          'sparse.lil_matrix',
          'sparse.csr_matrix',
          'sparse.csc_matrix']
ytypes = ['numpy.array',
          'numpy.matrix',
          'sparse.lil_matrix',
          'sparse.csr_matrix',
          'sparse.csc_matrix']
xdensities = [0.01, 0.10, 0.25, 0.50, 1.00]
ydensities = [1.00]

print '===  ===  ===  ===  ===  ===  ==='
print 'X type   Y type   X size   Y size   X density   Y density   Time'
print '---  ---  ---  ---  ---  ---  ---'
n = 100
for xdensity in xdensities:
    for ydensity in ydensities:
        for xtype in xtypes:
            for ytype in ytypes:
                print benchmark(xtype, xdensity, ytype, ydensity, n)
    print '---  ---  ---  ---  ---  ---  ---'
n = 1000
for xdensity in xdensities:
    for ydensity in ydensities:
        for xtype in xtypes:
            for ytype in ytypes:
                print benchmark(xtype, xdensity, ytype, ydensity, n)
    print '---  ---  ---  ---  ---  ---  ---'
print '===  ===  ===  ===  ===  ===  ==='
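A note for anyone rerunning this benchmark today: `time.clock` was removed in Python 3.8. A minimal sketch of the same timing loop (one sparse/dense pairing only) using the portable `timeit.default_timer`:

```python
import timeit
import numpy as np
from scipy import sparse

x = sparse.csr_matrix(np.eye(100))   # sparse operand
y = np.ones((100, 1))                # dense operand

t0 = timeit.default_timer()
for _ in range(1000):
    z = x * y                        # sparse * dense matrix product
elapsed = timeit.default_timer() - t0
print('%.4f seconds' % elapsed)
```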






Re: [Numpy-discussion] Benchmark on record arrays

2009-05-29 Thread Nicolas Rougier


Thanks for the clear answer; it definitely helps.


Nicolas


On Thu, 2009-05-28 at 19:25 +0200, Francesc Alted wrote:
 A Wednesday 27 May 2009 17:31:20 Nicolas Rougier escrigué:
  Hi,
 
  I've written a very simple benchmark on recarrays:
 
  import numpy, time
 
  Z = numpy.zeros((100,100), dtype=numpy.float64)
  Z_fast = numpy.zeros((100,100), dtype=[('x',numpy.float64),
  ('y',numpy.int32)])
  Z_slow = numpy.zeros((100,100), dtype=[('x',numpy.float64),
  ('y',numpy.bool)])
 
  t = time.clock()
  for i in range(10000): Z*Z
  print time.clock()-t
 
  t = time.clock()
  for i in range(10000): Z_fast['x']*Z_fast['x']
  print time.clock()-t
 
  t = time.clock()
  for i in range(10000): Z_slow['x']*Z_slow['x']
  print time.clock()-t
 
 
  And got the following results:
  0.23
  0.37
  3.96
 
  Am I right in thinking that the last case is quite slow because of some
  memory misalignment between float64 and bool or is there some machinery
  behind that makes things slow in this case ? Should this be mentioned
  somewhere in the recarray documentation ?
 
 Yes, I can reproduce your results, and I must admit that a 10x slowdown is a 
 lot.  However, I think this mostly affects small record arrays (i.e. those 
 that fit in CPU cache), and mainly benchmarks (precisely because they fit 
 well in cache).  You can simulate a more real-life scenario by defining a 
 large recarray that does not fit in the CPU's cache.  For example:
 
 In [17]: Z = np.zeros((1000,1000), dtype=np.float64)  # 8 MB object
 
 In [18]: Z_fast = np.zeros((1000,1000), dtype=[('x',np.float64),
 ('y',np.int64)])   # 16 MB object
 
 In [19]: Z_slow = np.zeros((1000,1000), dtype=[('x',np.float64),
 ('y',np.bool)])  # 9 MB object
 
 In [20]: x_fast = Z_fast['x']
 In [21]: timeit x_fast * x_fast
 100 loops, best of 3: 5.48 ms per loop
 
 In [22]: x_slow = Z_slow['x']
 
 In [23]: timeit x_slow * x_slow
 100 loops, best of 3: 14.4 ms per loop
 
 So, the slowdown is less than 3x, which is a more reasonable figure.  If you 
 need optimal speed for operating with unaligned columns, you can use numexpr. 
  
 Here it is an example of what you can expect from it:
 
 In [24]: import numexpr as nx
   
 In [25]: timeit nx.evaluate('x_slow * x_slow')
 100 loops, best of 3: 11.1 ms per loop
 
 So, the slowdown is just 2x instead of 3x, which is near optimal for the 
 unaligned case.
 
 Numexpr also seems to help for small recarrays that fit in cache (i.e. for 
 benchmarking purposes ;) :
 
 # Create a 160 KB object
 In [26]: Z_fast = np.zeros((100,100), dtype=[('x',np.float64),('y',np.int64)])
 # Create a 110 KB object
 In [27]: Z_slow = np.zeros((100,100), dtype=[('x',np.float64),('y',np.bool)])
 
 In [28]: x_fast = Z_fast['x']
 
 In [29]: timeit x_fast * x_fast
 10000 loops, best of 3: 20.7 µs per loop
 
 In [30]: x_slow = Z_slow['x']
 
 In [31]: timeit x_slow * x_slow
 10000 loops, best of 3: 149 µs per loop
 
 In [32]: timeit nx.evaluate('x_slow * x_slow')
 10000 loops, best of 3: 45.3 µs per loop
 
 Hope that helps,
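If numexpr is not an option, another workaround (a sketch of my own, not from the thread) is to pay for one aligned copy up front and run the repeated arithmetic on that copy; `x_view` and `x_copy` are hypothetical names:

```python
import numpy as np

Z_slow = np.zeros((1000, 1000), dtype=[('x', np.float64), ('y', np.bool_)])

x_view = Z_slow['x']                   # strided view over 9-byte records
x_copy = np.ascontiguousarray(x_view)  # one-off aligned, contiguous copy

# Repeated operations on the copy now run at plain-ndarray speed.
r = x_copy * x_copy
print(x_copy.flags['C_CONTIGUOUS'])    # True
```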
 



[Numpy-discussion] sparse matrix dot product

2009-05-28 Thread Nicolas Rougier

Hi,

I'm now testing dot product and using the following:

import numpy as np, scipy.sparse as sp

A = np.matrix(np.zeros((5,10)))
B = np.zeros((10,1))
print (A*B).shape
print np.dot(A,B).shape

A = sp.csr_matrix(np.zeros((5,10)))
B = sp.csr_matrix((10,1))
print (A*B).shape
print np.dot(A,B).shape

A = sp.csr_matrix(np.zeros((5,10)))
B = np.zeros((10,1))
print (A*B).shape
print np.dot(A,B).shape


I got:

(5, 1)
(5, 1)
(5, 1)
(5, 1)
(5, 1)
(10, 1)


Obviously, the last computation is not a dot product, but I got no
warning at all. Is that the expected behavior ?


By the way, I wrote a speed benchmark for the dot product using the
different flavors of sparse matrices, and I wonder if it should go
somewhere in the documentation (in any case, if anyone is interested, I
can post the benchmark program and results).

Nicolas
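For what it's worth, the surprising last result appears to come from np.dot not recognising the sparse operand and treating it as a scalar-like object. A robust pattern (a sketch, not from the original message) is to always use the sparse matrix's own product:

```python
import numpy as np
import scipy.sparse as sp

A = sp.csr_matrix(np.ones((5, 10)))
B = np.ones((10, 1))

# The sparse class's own product gives a true matrix product;
# np.dot(A, B) would not, since np.dot does not understand sparse input.
C = A.dot(B)          # or A * B
print(C.shape)        # (5, 1)
```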






[Numpy-discussion] Benchmark on record arrays

2009-05-27 Thread Nicolas Rougier

Hi,

I've written a very simple benchmark on recarrays:

import numpy, time

Z = numpy.zeros((100,100), dtype=numpy.float64)
Z_fast = numpy.zeros((100,100), dtype=[('x',numpy.float64),
('y',numpy.int32)])
Z_slow = numpy.zeros((100,100), dtype=[('x',numpy.float64),
('y',numpy.bool)])

t = time.clock()
for i in range(10000): Z*Z
print time.clock()-t

t = time.clock()
for i in range(10000): Z_fast['x']*Z_fast['x']
print time.clock()-t

t = time.clock()
for i in range(10000): Z_slow['x']*Z_slow['x']
print time.clock()-t


And got the following results:
0.23
0.37
3.96

Am I right in thinking that the last case is quite slow because of some
memory misalignment between float64 and bool or is there some machinery
behind that makes things slow in this case ? Should this be mentioned
somewhere in the recarray documentation ?


Nicolas




Re: [Numpy-discussion] Benchmark on record arrays

2009-05-27 Thread Nicolas Rougier



No, I don't have permission to edit.

Nicolas


On 27 May, 2009, at 18:01 , Charles R Harris wrote:




On Wed, May 27, 2009 at 9:31 AM, Nicolas Rougier nicolas.roug...@loria.fr 
 wrote:


Hi,

I've written a very simple benchmark on recarrays:

import numpy, time

Z = numpy.zeros((100,100), dtype=numpy.float64)
Z_fast = numpy.zeros((100,100), dtype=[('x',numpy.float64),
('y',numpy.int32)])
Z_slow = numpy.zeros((100,100), dtype=[('x',numpy.float64),
('y',numpy.bool)])

t = time.clock()
for i in range(10000): Z*Z
print time.clock()-t

t = time.clock()
for i in range(10000): Z_fast['x']*Z_fast['x']
print time.clock()-t

t = time.clock()
for i in range(10000): Z_slow['x']*Z_slow['x']
print time.clock()-t


And got the following results:
0.23
0.37
3.96

Am I right in thinking that the last case is quite slow because of  
some
memory misalignment between float64 and bool or is there some  
machinery

behind that makes things slow in this case ?

Probably. Record arrays are stored like packed c structures and need  
to be unpacked by copying the bytes to aligned data types.


Should this be mentioned somewhere in the recarray documentation ?

A note would be appropriate, yes. You should be able to do that, do  
you have edit permissions for the documentation?


Chuck
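Chuck's packed-struct description can be verified directly from the dtype. A quick check, not part of the original exchange:

```python
import numpy as np

# float64 followed by bool packs into 9 bytes per record, so the 'x'
# field of consecutive records is 9 bytes apart: not 8-byte aligned.
dt = np.dtype([('x', np.float64), ('y', np.bool_)])
print(dt.itemsize)        # 9

Z = np.zeros(4, dtype=dt)
print(Z['x'].strides)     # (9,)
```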




[Numpy-discussion] arrray/matrix nonzero() type

2009-05-27 Thread Nicolas Rougier

Hi again,

I  have a problem with the nonzero() function for matrix.

The following test program:

import numpy, scipy.sparse

Z = numpy.zeros((10,10))
Z[0,0] = Z[1,1] = 1

i = Z.nonzero()
print i
Zc = scipy.sparse.coo_matrix((Z[i],i))

Z = numpy.matrix(Z)
i = Z.nonzero()
print i
Zc = scipy.sparse.coo_matrix((Z[i],i))


gives me:

(array([0, 1]), array([0, 1]))
(matrix([[0, 1]]), matrix([[0, 1]]))
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    Zc = scipy.sparse.coo_matrix((Z[i],i))
  File "/Volumes/Data/Local/lib/python2.6/site-packages/scipy/sparse/coo.py", line 179, in __init__
    self._check()
  File "/Volumes/Data/Local/lib/python2.6/site-packages/scipy/sparse/coo.py", line 194, in _check
    nnz = self.nnz
  File "/Volumes/Data/Local/lib/python2.6/site-packages/scipy/sparse/coo.py", line 187, in getnnz
    raise ValueError('row, column, and data arrays must have rank 1')
ValueError: row, column, and data arrays must have rank 1


Is that the intended behavior ? How can I use nonzero with matrix to  
build the coo one ?

Nicolas
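One workaround (a sketch of my own, not from the thread) is to drop back to a plain ndarray before calling nonzero, so the index arrays come out rank-1 as coo_matrix expects:

```python
import numpy as np
import scipy.sparse

Z = np.matrix(np.zeros((10, 10)))
Z[0, 0] = Z[1, 1] = 1

Za = np.asarray(Z)          # plain ndarray view of the matrix
i = Za.nonzero()            # tuple of rank-1 index arrays
Zc = scipy.sparse.coo_matrix((Za[i], i), shape=Z.shape)
print(Zc.nnz)               # 2
```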


[Numpy-discussion] Segmentation fault on large arrays

2009-05-26 Thread Nicolas Rougier

Hello,

I've come across what is probably a bug in size check for large arrays:

>>> import numpy
>>> z1 = numpy.zeros((255*256,256*256))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: dimensions too large.
>>> z2 = numpy.zeros((256*256,256*256))
>>> z2.shape
(65536, 65536)
>>> z2[0] = 0
Segmentation fault


Note that z1's size is smaller than z2's, yet z2 is not told that its
dimensions are too large. This has been tested with numpy 1.3.0.


Nicolas
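The report is consistent with a 32-bit overflow in the old dimension check (255*256*65536 exceeds 2^31 and is flagged, while 256*256*65536 wraps past 2^32), long since fixed in later NumPy releases. Independent of the fix, a defensive pattern is to compute the byte count in 64-bit arithmetic before allocating; a sketch:

```python
import numpy as np

shape = (256 * 256, 256 * 256)
# Force 64-bit arithmetic so the element count cannot silently wrap.
nbytes = np.prod(shape, dtype=np.int64) * np.dtype(np.float64).itemsize
print(nbytes)   # 34359738368 bytes, i.e. 32 GiB: refuse early if too big
```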



[Numpy-discussion] Tensordot and memory consumption

2009-02-17 Thread Nicolas Rougier

Hello,

I'm using tensordot in some computation and while I've been amazed by  
the speed, I'm now trying to reduce memory consumption in some very  
particular cases:

Let S be a 2 dims array of size (s1,s2)
Let D be a 2 dims array of size (d1,d2)
Let W be a 4 dims array of size (d1,d2,s1,s2)

Currently, I'm computing D as tensordot(W,S,2) and it works really  
fine and fast.

However, in some cases, W is based solely on a single 2 dims array K and
each of the W[i,j] is a slice of K. I would like to know if there is a
way to build W such that memory footprint is reduced ? Or maybe there is
other ways to perform the same computation ? For example, I know that
when S and D are of same shape, I can use 2d convolution, but I would
need the general case.

Since I imagine my explanations are not so clear, I have attached a small
example.


Nicolas

---

import numpy

n = 3

# Source
S = numpy.random.random((n,n))

# Destination (at this stage, only D shape matters)
D = numpy.zeros((2*n,2*n))

# Kernel
K = numpy.zeros((S.shape[0]*n+1,S.shape[1]*n+1))
K[S.shape[0],S.shape[1]] = 1

# Kernel decomposition for computing tensordot
W = numpy.zeros(D.shape + S.shape)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        x = int(i/float(W.shape[0])*float(S.shape[0]))
        y = int(j/float(W.shape[1])*float(S.shape[1]))
        W[i,j] = K[n-x:n-x+S.shape[0], n-y:n-y+S.shape[1]]

D = numpy.tensordot(W,S,2)
print S
print
print D
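When D has the same shape as S, the slices of K that make up W are just the sliding (n, n) windows of the kernel, and those windows can be exposed as a zero-copy view with stride tricks instead of materialising n*n copies. A sketch (my own, not from the original message; as_strided must be used with care, since a wrong shape or stride reads out of bounds):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

n = 3
S = np.random.random((n, n))
K = np.random.random((2 * n - 1, 2 * n - 1))

# W[i, j] is the window K[i:i+n, j:j+n], sharing K's memory.
s0, s1 = K.strides
W = as_strided(K, shape=(n, n, n, n), strides=(s0, s1, s0, s1))

D = np.tensordot(W, S, 2)   # D[i, j] = sum(K[i:i+n, j:j+n] * S)
```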




Re: [Numpy-discussion] python user defined type

2008-07-11 Thread Nicolas Rougier


My Unit class is supposed to represent a neuron that can be linked to
any other unit. The neuron itself is merely a (float) potential that can
vary along time under the influence of other units and learning. I
gather these units into groups which are in fact 2D matrix of units.
Currently, I implemented the Unit and Group and I talk with numpy
through an attribute of groups which represent all available potentials.
Finally my group is like a simple 2d matrix of float but I need to have
an underlying object to perform computation on each Unit at each time
step.

Currently I'm able to write something like:

>>> group = Unit()*[2,2]
>>> group.potentials = numpy.zeros([2,2])
>>> print group.potentials
[[ 0.  0.]
 [ 0.  0.]]
>>> group[0,0].potential = 1
>>> print group.potentials
[[ 1.  0.]
 [ 0.  0.]]



Nicolas



On Thu, 2008-07-10 at 16:30 -0700, Christopher Barker wrote:
 Nicolas Rougier wrote:
  Concerning the dtype argument during array creation, I thought it was
  here for somehow controlling the type of array elements. For example, if
  I use a regular numpy array (let's say a float array), I cannot set an
  item to a string value (it raises a ValueError: setting an array element
  with a sequence).
 
 Yes, but numpy is designed primarily for numeric types: ints, floats, 
 etc. It can also be used for custom types that are essentially like C 
 structs (see recarray). The key is that a dtype describes a data type in 
 terms of bytes and what they represent -- it can not be a python type.
 
 The only way to use arbitrary python types is a object array, which 
 you've discovered, but then numpy doesn't know anything about the 
 objects, other than that they are python objects.
 
  So what would be the best way to use numpy arrays with foreign types
  (or is it possible at all) ?  I've designed the real Unit in C++ and
  exported it to python (via boost and shared pointers) and I would like
  to create array of such Units
 
 If your type is a C++ class, then it may be possible, with some C 
 hackery, to get numpy to understand it, but you're getting beyond my 
 depth here -- also I doubt that you'd get the full features like array 
 math and all anyway -- that's all set up for basic numeric types.
 
 Maybe others will have some idea, but I think you're pushing what numpy 
 is capable of.
 
  (in fact, I also created an array-like
  class but I would prefer to use directly the real array interface to
  benefit from the great work of numpy instead of re-inventing the
  wheel). 
 
 What operations do you expect to perform with these arrays of Units?
 
 -Chris
 



Re: [Numpy-discussion] python user defined type

2008-07-10 Thread Nicolas Rougier


Thanks for the precise explanation that make things clearer. In fact,
the example I gave was mainly to illustrate my question in the quickest
way.

Concerning the dtype argument during array creation, I thought it was
here for somehow controlling the type of array elements. For example, if
I use a regular numpy array (let's say a float array), I cannot set an
item to a string value (it raises a ValueError: setting an array element
with a sequence).

So what would be the best way to use numpy arrays with foreign types
(or is it possible at all) ?  I've designed the real Unit in C++ and
exported it to python (via boost and shared pointers) and I would like
to create array of such Units (in fact, I also created an array-like
class but I would prefer to use directly the real array interface to
benefit from the great work of numpy instead of re-inventing the
wheel). 

Ideally, I would like to be able to write 

z = N.array (a, dtype=Unit)

and would expect numpy to make a copy of the array by calling my type
with each element of a. Then, if my type accepts the argument during
creation, everything's fine, else it raises an error.

Nicolas 



On Thu, 2008-07-10 at 13:43 -0700, Christopher Barker wrote:
 Nicolas Rougier wrote:
  I would like to create numpy array with my own (python) datatype, so I
  tried the obvious solution:
  
  from numpy import *
  class Unit(object):
  def __init__(self,value=0.0):
  self.value = value
  def __float__(self):
  return self.value
  def __repr__(self):
  return str(self.value)
 
  a = array (array([[1,2],[3,4]]), dtype=Unit)
 
 the dtype argument is designed to take a numpy type object, not an 
 arbitrary class -- what you want in this case is dtype=numpy.object, 
 which is what you did before. I'm a bit surprised this didn't raise an 
 error, but it looks like you got an object array, and the objects you 
 gave it are python integers.  All python classes are objects as far as 
 numpy is concerned. The numpy dtypes are a description of a given number 
 of bytes in memory -- python classes are stored as a pointer to the 
 python object.
 
 (and you really don't want to use import *)
 
  Also, the commented line (a[0,0] = 0) makes the item to become an int
  while I need to set the value of the item instead, is that possible ?
 
 a[0,0] = Unit(0)
 
 You're setting the [0,0]th element to an object, you need to give it the 
 object you want, the literal 0 is a python integer with the value zero.
 
 numpy arrays of python objects act a whole lot like other python 
 containers. What would you expect from :
 
 a = [1,2,3,4]
 
 a list of integers, no?
 
 or
 
 a = [Unit(1), Unit(2)]
 
 #  a list of Units..
 
 then
 
 # a[0] = 3
 
 now a list with the integer 3 in the zeroth position, and a Unit in the 1st.
 
 You did it right the first time:
 
 a = array([[Unit(1), Unit(2)], [Unit(3), Unit(4)]])
 
 though you need to be careful building object arrays of nested lists -- 
 numpy won't necessarily figure out how to de-nest it. You might want to do:
 
 >>> import numpy as np
 >>> a = np.empty((2,2), np.object)
 >>> a
 array([[None, None],
        [None, None]], dtype=object)
 >>> a[:,:] = [[Unit(1), Unit(2)], [Unit(3), Unit(4)]]
 >>> a
 array([[1, 2],
        [3, 4]], dtype=object)
 
 One more note:
 
 class Unit(object):
  def __init__(self,value=0.0):
  self.value = value
  def __float__(self):
  return self.value
  def __repr__(self):
  return str(self.value)
 
 
 __repr__ really should be something like:
      def __repr__(self):
          return "Unit(%g)" % self.value
 
 eval(repr(Object)) == Object, ideally, plus it'll be easier to debug if 
 you can see what it is.
 
 -Chris
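Putting the suggestions together, a minimal self-contained version of the class (a sketch in modern Python, not the original poster's code):

```python
class Unit(object):
    def __init__(self, value=0.0):
        self.value = value
    def __float__(self):
        return float(self.value)
    def __repr__(self):
        # eval(repr(u)) rebuilds an equivalent Unit, as suggested above.
        return "Unit(%g)" % self.value

u = Unit(3.5)
print(repr(u))                            # Unit(3.5)
print(float(eval(repr(u))) == float(u))   # True
```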
 
 
 
