Re: [Numpy-discussion] scipy curve_fit variable list of optimisation parameters

2016-08-03 Thread Siegfried Gonzi

On 3 Aug 2016, at 13:00, numpy-discussion-requ...@scipy.org wrote:

> Message: 3
> Date: Tue, 2 Aug 2016 22:50:42 +0100
> From: Evgeni Burovski 
> To: Discussion of Numerical Python 
> Subject: Re: [Numpy-discussion] scipy curve_fit variable list of
>   optimisation parameters
> Message-ID:
>   
> Content-Type: text/plain; charset=UTF-8
> 
> 
> You can use `leastsq` or `least_squares` directly: they both accept an
> array of parameters.
> 
> BTW, since all of these functions are actually in scipy, you might
> want to redirect this discussion to the scipy-user mailing list.


Hi all

I found the solution in the following thread:

http://stackoverflow.com/questions/28969611/multiple-arguments-in-python

One has to call curve_fit with 'p0' (the initial guess; its length tells 
curve_fit how many parameters to fit).

I changed func2 to (note the *):

def func2( x, *a ):

    # Bessel function J0 applied elementwise to the M x N design matrix x
    tmp = scipy.special.j0( x )

    # weighted sum of the N columns with the parameter vector a
    return np.dot( tmp, np.asarray(a) )


and call it:

N = number of optimisation parameters

popt, pcov = scipy.optimize.curve_fit( func2, x, yi, p0=[1.0]*N )
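For reference, Evgeni's suggestion of calling least_squares directly would 
look roughly like this (a sketch with made-up data for x and yi; it needs 
SciPy >= 0.17 for least_squares):

==
import numpy as np
import scipy.special
from scipy.optimize import least_squares

# made-up stand-ins for the M x N design matrix and the M measurements
M, N = 20, 4
r = np.linspace(0.1, 5.0, M)
x = np.outer(r, np.arange(1, N + 1))          # column i holds (i+1) * r
yi = np.dot(scipy.special.j0(x), np.array([2.0, -1.0, 0.5, 0.3]))

def residuals(a, x, yi):
    # residual vector built from the full parameter array `a`
    return np.dot(scipy.special.j0(x), a) - yi

result = least_squares(residuals, np.ones(N), args=(x, yi))
popt = result.x                               # fitted A(i), length N
==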



Regards,
Siegfried Gonzi
Met Office, Exeter, UK
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] scipy curve_fit variable list of optimisation parameters

2016-08-02 Thread Siegfried Gonzi
Hi all

Does anyone know how to invoke curve_fit with a variable number of parameters, 
e.g. a1 to a10 without writing it out,

e.g.

def func2( x, a1, a2, a3, a4 ):

    # Bessel function J0 applied elementwise to the M x N array x
    tmp = scipy.special.j0( x )

    return np.dot( tmp, np.array( [a1, a2, a3, a4] ) )


### yi = M measurements (e.g. M = 20)
### x  = M (= 20) rows of N (= 4) columns
popt, pcov = scipy.optimize.curve_fit( func2, x, yi )

I'd like to get *1 single vector* (in this case of size 4) of optimised A(i) 
values.

The model I am trying to fit (F(r) is a vector of 20 model values):

F(r) = SUM_{i=1..N} [ A(i) * J0(i * r) ]
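Since the model is linear in the A(i), it can also be solved in one step with 
ordinary linear least squares; a sketch with made-up data for x and yi (column 
i of x holding i*r):

==
import numpy as np
import scipy.special

# made-up stand-ins: x[j, i] = (i+1) * r[j], yi = the M measurements
M, N = 20, 4
r = np.linspace(0.1, 5.0, M)
x = np.outer(r, np.arange(1, N + 1))
yi = np.dot(scipy.special.j0(x), np.array([2.0, -1.0, 0.5, 0.3]))

# the design matrix is just J0 evaluated on x, so lstsq gives A directly
A_fit, residues, rank, sv = np.linalg.lstsq(scipy.special.j0(x), yi)
==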


Thanks,
Siegfried Gonzi




-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] f2py and string arrays (need help)

2014-05-28 Thread Siegfried Gonzi
Hi all

Given the following pseudo code:

==
SUBROUTINE READ_B( FILENAME, IX, IY, IZ, NX, OUT_ARRAY, OUT_CAT )

  IMPLICIT NONE

  INTEGER*4, INTENT(IN)  :: IX, IY, IZ, NX
  REAL*4,    INTENT(OUT) :: OUT_ARRAY(NX, IX, IY, IZ)

  CHARACTER, DIMENSION(NX, 40), INTENT(OUT) :: OUT_CAT

  CHARACTER(LEN=40) :: CATEGORY
  INTEGER           :: I

  ! (declarations and opening of IU_FILE, and of IOS, DATA, LON, LAT,
  !  are omitted in this pseudo code)

  DO I = 1, NX

     READ( IU_FILE, IOSTAT=IOS ) DATA, CATEGORY, LON, LAT,

     !!! CATEGORY = 'IJVG=$'
     !!! or CATEGORY = 'CHEM=$'
     OUT_CAT(I,:) = CATEGORY(:)

  ENDDO

END SUBROUTINE READ_B
==

I'd like to fill 'out_cat' with the names of the fields. As you can guess, my 
code does not work properly.

How can I do it with f2py? I don't even know if my code is legal Fortran 
90 at all.

Thanks,
Siegfried



-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Easter Egg or what I am missing here?

2014-05-21 Thread Siegfried Gonzi
On 22/05/2014 00:37, numpy-discussion-requ...@scipy.org wrote:
> Message: 4
> Date: Wed, 21 May 2014 18:32:30 -0400
> From: Warren Weckesser
> Subject: Re: [Numpy-discussion] Easter Egg or what I am missing here?
> To: Discussion of Numerical Python
> Content-Type: text/plain; charset=UTF-8
>
> On 5/21/14, Siegfried Gonzi wrote:
>> Please would anyone tell me the following is an undocumented bug
>> otherwise I will lose faith in everything:
>>
>> ==
>> import numpy as np
>>
>> years = [2004,2005,2006,2007]
>>
>> dates = [20040501,20050601,20060801,20071001]
>>
>> for x in years:
>>
>>     print 'year ',x
>>
>>     xy = np.array([x*1.0e-4 for x in dates]).astype(np.int)
>>
>>     print 'year ',x
>> ==
>>
>> Or is this a recipe to blow up a power plant?
>>
> This is a "wart" of Python 2.x.  The dummy variable used in a list
> comprehension remains defined with its final value in the enclosing
> scope.  For example, this is Python 2.7:
>
> >>> x = 100
> >>> w = [x*x for x in range(4)]
> >>> x
> 3
>
>
> This behavior has been changed in Python 3.  Here's the same sequence
> in Python 3.4:
>
> >>> x = 100
> >>> w = [x*x for x in range(4)]
> >>> x
> 100
>
>
> Guido van Rossum gives a summary of this issue near the end of this
> blog: http://python-history.blogspot.com/2010/06/from-list-comprehensions-to-generator.html
>
> Warren
>
>
>

[I still do not know how to properly use the reply function here. I 
apologise.]

Hi all, and thanks to everyone who responded.

I would have expected my code to behave the way you said Python 3.4 does.

I would never have thought that 'x' was being changed during execution. It 
took me nearly two hours to figure out what was going on (it was a lengthy 
piece of code and not easy to spot).
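For the record, simply giving the comprehension its own variable name 
sidesteps the Python 2 behaviour; a minimal sketch of the fixed loop:

==
import numpy as np

years = [2004, 2005, 2006, 2007]
dates = [20040501, 20050601, 20060801, 20071001]

for year in years:
    # a distinct name inside the comprehension, so nothing is shadowed
    xy = np.array([d * 1.0e-4 for d in dates]).astype(int)
    # 'year' keeps its value here, under Python 2 as well as Python 3
==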

Siegfried

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Easter Egg or what I am missing here?

2014-05-21 Thread Siegfried Gonzi
Would anyone please tell me whether the following is an undocumented bug, 
otherwise I will lose faith in everything:

==
import numpy as np


years = [2004,2005,2006,2007]

dates = [20040501,20050601,20060801,20071001]

for x in years:

    print 'year ',x

    xy = np.array([x*1.0e-4 for x in dates]).astype(np.int)

    print 'year ',x
==

Or is this a recipe to blow up a power plant?

Thanks,
Siegfried

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] IDL vs Python parallel computing

2014-05-08 Thread Siegfried Gonzi
On 08/05/2014 04:00, numpy-discussion-requ...@scipy.org wrote:
>
> Message: 2
> Date: Wed, 7 May 2014 19:25:32 +0100
> From: Nathaniel Smith 
> Subject: Re: [Numpy-discussion] IDL vs Python parallel computing
> To: Discussion of Numerical Python 
> Message-ID:
>   
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 7, 2014 at 7:11 PM, Sturla Molden  wrote:
> That said, reading data stored in text files is usually a CPU-bound 
> operation, and if someone wrote the code to make numpy's text file 
> readers multithreaded, and did so in a maintainable way, then we'd 
> probably accept the patch. The only reason this hasn't happened is 
> that no-one's done it.

To add to the confusion, here is what IDL offers:

http://www.exelisvis.com/Support/HelpArticlesDetail/TabId/219/ArtMID/900/ArticleID/3252/3252.aspx

I no longer use IDL (and was never really interested in it, as it is a 
horrible language) except for some legacy code. Nowadays I am mostly on 
Python.





-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] IDL vs Python parallel computing

2014-05-07 Thread Siegfried Gonzi
On 08/05/2014 04:00, numpy-discussion-requ...@scipy.org wrote:
>
> Message: 1
> Date: Wed, 07 May 2014 20:11:13 +0200
> From: Sturla Molden 
> Subject: Re: [Numpy-discussion] IDL vs Python parallel computing
> To: numpy-discussion@scipy.org
> Message-ID: 
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 03/05/14 23:56, Siegfried Gonzi wrote:
>   > I noticed IDL uses at least 400% (4 processors or cores) out of the box
>   > for simple things like reading and processing files, calculating the
>   > mean etc.
>
> The DMA controller is working at its own pace, regardless of what the
> CPU is doing. You cannot get data faster off the disk by burning the
> CPU. If you are seeing 100 % CPU usage while doing file i/o there is
> something very bad going on. If you did this to an i/o intensive server
> it would go up in a ball of smoke... The purpose of high-performance
> asynchronous i/o systems such as epoll, kqueue, IOCP is actually to keep
> the CPU usage to a minimum.


It is probably not so much about reading in files. I just noticed it (via the 
top command) for simple things like processing, say, 4-dimensional fields 
(longitude, latitude, altitude, time), calculating column means or moment 
statistics over grid boxes, writing the fields out again, and things like that.

But it never uses more than 400%.

I haven't done any thorough testing of where and why the 400% really kicks in, 
or whether IDL is cheating here or not.
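For what it is worth, NumPy itself will not spread that kind of per-grid-box 
reduction over several cores, but the standard-library multiprocessing module 
can; a rough sketch with made-up array sizes and a made-up column_mean helper 
(my own illustration, not what IDL does internally):

==
import numpy as np
from multiprocessing import Pool

def column_mean(chunk):
    # mean over the altitude axis for one latitude band
    return chunk.mean(axis=2)

if __name__ == '__main__':
    # made-up field: longitude x latitude x altitude x time
    field = np.random.random((72, 46, 10, 24))
    chunks = np.array_split(field, 4, axis=1)      # split along latitude
    pool = Pool(processes=4)
    result = np.concatenate(pool.map(column_mean, chunks), axis=1)
    pool.close()
    pool.join()
==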






-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] IDL vs Python parallel computing

2014-05-03 Thread Siegfried Gonzi
Hi all

I noticed IDL uses at least 400% (4 processors or cores) out of the box 
for simple things like reading and processing files, calculating the 
mean etc.

I have never seen this happening with numpy except for the linear algebra 
stuff (e.g. LAPACK).

Any comments?

Thanks,
Siegfried



-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] string replace

2014-04-20 Thread Siegfried Gonzi
Hi all

I know this is not numpy-related, but a colleague insists the following is 
supposed to work. It doesn't:

==
line_left = './erfo/restart.ST010.EN0001-EN0090.MMDDhh'
enafix = 'ST000.EN0001-EN0092'
line_left = line_left.replace('STYYY.EN-EN', enafix)
print 'line_left',line_left
==

[the right answer would be: './erfo/restart.ST000.EN0001-EN0092.MMDDhh']

I think it would work in Fortran, but it doesn't in Python. What would be the 
easiest way to replace this kind of pattern in a variable-length string?
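str.replace only substitutes literal substrings, so the template 'STYYY.EN-EN' 
never matches anything; a regular expression does what the colleague 
presumably had in mind. A sketch, with the digit widths read off the example 
filename:

==
import re

line_left = './erfo/restart.ST010.EN0001-EN0090.MMDDhh'
enafix = 'ST000.EN0001-EN0092'

# match 'ST' + 3 digits + '.EN' + 4 digits + '-EN' + 4 digits, whatever the digits
line_left = re.sub(r'ST\d{3}\.EN\d{4}-EN\d{4}', enafix, line_left)

# line_left is now './erfo/restart.ST000.EN0001-EN0092.MMDDhh'
==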


Thanks,
Siegfried



-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Need help with np.ma.median and np.apply_along_axis

2013-12-20 Thread Siegfried Gonzi
Please have a look at version1 and version2 below. What are my other options 
here? Do I need to go the Cython route? Thanks, Siegfried

==
My array is as follows (shown here with dummy values; and yes, arrays of this 
kind do exist: 150 observations x 8 years x 366 days x 24 hours x 7 model 
levels):

data = np.random.random((150,8,366,24,7))


My function "np.apply_along_axis(my_moment,4,data)" takes 15 minutes.

I thought making use of masked arrays, "my_moment_fast(data, axis=4)", would 
speed things up, but

1. it blows up the memory consumption to 6 GB at times,

and

2. it also takes... I do not know how long, as I killed it after 20 minutes 
(it hangs at the median print statement).

The calculation of the median is the bottleneck here.

==
import numpy as np


def my_moment(data_in, nan_val=-999.0):

    tmp = data_in[np.where(data_in != nan_val)]

    erg = np.array([np.min(tmp), np.mean(tmp), np.median(tmp),
                    np.max(tmp), np.std(tmp), np.size(tmp)])

    return erg


def my_moment_fast(data_in, nan_val=-999.0, axis=4):

    print 'min max', np.min(data_in), np.max(data_in)

    mx = np.ma.masked_where((data_in <= 0.0) & (data_in <= nan_val), data_in)

    print 'shape mx', np.shape(mx), np.min(mx), np.max(mx)

    print 'min'
    tmp_min = np.ma.min(mx, axis=axis)
    print 'max'
    tmp_max = np.ma.max(mx, axis=axis)
    print 'mean'
    tmp_mean = np.ma.mean(mx, axis=axis)
    print 'median'
    #tmp_median = np.ma.sort(mx, axis=axis)
    tmp_median = np.ma.median(mx, axis=axis)
    print 'std'
    tmp_std = np.ma.std(mx, axis=axis)
    print 'N'
    tmp_N = np.ones(np.shape(mx))
    tmp_N[mx.mask] = 0.0e0
    tmp_N = np.ma.sum(tmp_N, axis=axis)

    print 'shape min', np.shape(tmp_min), np.min(tmp_min), np.max(tmp_min)
    print 'shape max', np.shape(tmp_max), np.min(tmp_max), np.max(tmp_max)
    print 'shape mean', np.shape(tmp_mean), np.min(tmp_mean), np.max(tmp_mean)
    print 'shape median', np.shape(tmp_median), np.min(tmp_median), np.max(tmp_median)
    print 'shape std', np.shape(tmp_std), np.min(tmp_std), np.max(tmp_std)
    print 'shape N', np.shape(tmp_N), np.min(tmp_N), np.max(tmp_N), np.shape(mx.mask)

    return np.array([tmp_min, tmp_mean, tmp_median, tmp_max, tmp_std, tmp_N])



data = np.random.random((150,8,366,24,7))
data[134,5,300,:,2] = -999.0
data[14,3,300,:,0] = -999.0

version1 = my_moment_fast(data,axis=4)
exit()
version2 = np.apply_along_axis(my_moment,4,data)
==

What am I doing wrong here? I haven't tested it against Fortran, and I have 
no idea whether sorting to fetch the median would be faster.
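If a masked array is not essential, one alternative worth timing is to map 
the -999 sentinel to NaN once and then use the nan-aware reductions 
(np.nanmedian needs NumPy >= 1.9; a sketch, not benchmarked against the 
versions above):

==
import numpy as np

def my_moment_nan(data_in, nan_val=-999.0, axis=4):
    # one float copy with the sentinel replaced by NaN, then nan-aware reductions
    tmp = np.where(data_in == nan_val, np.nan, data_in)
    return np.array([np.nanmin(tmp, axis=axis),
                     np.nanmean(tmp, axis=axis),
                     np.nanmedian(tmp, axis=axis),
                     np.nanmax(tmp, axis=axis),
                     np.nanstd(tmp, axis=axis),
                     np.sum(~np.isnan(tmp), axis=axis)])

data = np.random.random((10, 8, 366, 24, 7))   # smaller dummy array than above
data[3, 5, 300, :, 2] = -999.0
version3 = my_moment_nan(data, axis=4)
==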

Thanks,
Siegfried
==

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Python parallel programming on Mac OS X Maverick

2013-10-28 Thread Siegfried Gonzi
Hi all

Quick question: what is the minimum RAM requirement for doing parallel 
programming with Python/Numpy on Mac OS X Mavericks?

I am about to buy a Macbook Pro 15" and I'd like to know if 8GB RAM (with SSD 
flash storage) for the Haswell quad core will be enough. I have never done any 
parallel programming with Python/Numpy but plan to get to grips with it on my 
new MacBook Pro, where the memory is now soldered on and non-replaceable.

Apple has a 14-day no-quibbles refund policy, but I am not sure I can work 
out what I need within 14 days.


Thanks,
Siegfried
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 85, Issue 19

2013-10-07 Thread Siegfried Gonzi

On 7 Oct 2013, at 21:16, numpy-discussion-requ...@scipy.org wrote:
> 
> 
> Message: 5
> Date: Mon, 7 Oct 2013 13:38:53 -0500
> From: Aronne Merrelli 
> Subject: Re: [Numpy-discussion] Equivalent to IDL's help function
> 
> 
> There isn't anything quite the same. (I think what you are really asking
> for is a way to print the essential info about one variable name, at least
> that is how I would use the IDL "help" procedure). In IPython, I use the
> whos magic to do something similar, although it just prints this info for
> all variables or all variables of one class, rather than just one variable.
> I do not think there is a way to do it for just one variable.
> 
> Here are some examples - you can see this works quite well but it will
> become unwieldy if your interactive namespace becomes large:
> 
> In [1]: x = 1; y = 2; z = 3.3; d = {'one':1, 'two':2, 'three':3}
> 
> In [2]: whos
> Variable   Type       Data/Info
> -------------------------------
> d          dict       n=3
> x          int        1
> y          int        2
> z          float      3.3
>
> In [3]: whos dict
> Variable   Type       Data/Info
> -------------------------------
> d          dict       n=3
>
> In [4]: xa = np.arange(111); ya = np.ones((22,4))
>
> In [5]: whos ndarray
> Variable   Type       Data/Info
> -------------------------------
> xa         ndarray    111: 111 elems, type `int64`, 888 bytes
> ya         ndarray    22x4: 88 elems, type `float64`, 704 bytes
> 
> 

Hi

[I hope I am not screwing up the digest reply function here].

I am after a "whos" that would work inside a script. It is not very often 
that I develop code at the command line.

I am definitely not one of the best programmers out there, but I used "help" 
a lot in my IDL scripts and code. Our research group is migrating away from 
IDL towards Python.

I think Python's help is not the same as IDL's. I know copying things from 
other languages is not always a good idea, but one cannot deny that IDL's 
help comes in rather handy while developing and testing code.
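A small helper along these lines gives an IDL-HELP-style one-liner from 
inside a script (a sketch only; the name 'info' and its output format are 
made up, and it uses the Python 2 print of this thread):

==
import numpy as np

def info(name, value):
    # rough, script-friendly stand-in for IDL's HELP
    if isinstance(value, np.ndarray):
        print '%-12s NDARRAY  %s %s' % (name, value.dtype, value.shape)
    else:
        print '%-12s %-8s %r' % (name, type(value).__name__.upper(), value)

a = np.zeros((23, 23), dtype=np.float32)
info('a', a)     # a            NDARRAY  float32 (23, 23)
info('x', 23)    # x            INT      23
==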

 

 

> 
> 
> On Mon, Oct 7, 2013 at 12:15 PM, Siegfried Gonzi
> wrote:
> 
>> Hi all
>> 
>> What is the equivalent to IDL its help function, e.g.
>> 
>> ==
>> IDL> a = make_array(23,23,)
>> 
>> IDL> help,a
>> 
>> will result in:
>> 
>> A   FLOAT = Array[23, 23]
>> 
>> or
>> 
>> IDL> a = create_struct('idl',23)
>> 
>> IDL> help,a
>> 
>> gives:
>> 
>> A   STRUCT= ->  Array[1]
>> 
>> ==
>> 
>> I have been looking for it ever since using numpy. It would make my life
>> so much easier.
>> 
>> 
>> Thanks, Siegfried
>> 
>> 
>> --
>> The University of Edinburgh is a charitable body, registered in
>> Scotland, with registration number SC005336.
>> 
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>> 
> 

[Numpy-discussion] Equivalent to IDL's help function

2013-10-07 Thread Siegfried Gonzi
Hi all

What is the equivalent of IDL's help function, e.g.

==
IDL> a = make_array(23,23,)

IDL> help,a 

will result in:

A   FLOAT = Array[23, 23]

or

IDL> a = create_struct('idl',23)

IDL> help,a

gives:

A   STRUCT= ->  Array[1]

==

I have been looking for it ever since I started using numpy. It would make 
my life so much easier.


Thanks, Siegfried


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy where function on different sized arrays

2012-11-25 Thread Siegfried Gonzi

On 25 Nov 2012, at 00:29, numpy-discussion-requ...@scipy.org wrote:
> 
> Message: 3
> Date: Sat, 24 Nov 2012 23:23:36 +0100
> From: Da?id 
> Subject: Re: [Numpy-discussion] numpy where function on different
>   sized   arrays
> To: Discussion of Numerical Python 
> Message-ID:
>   
> Content-Type: text/plain; charset=ISO-8859-1
> 
> A pure Python approach could be:
> 
> for i, x in enumerate(a):
>     for j, y in enumerate(x):
>         if y in b:
>             idx.append((i,j))
> 
> Of course, it is slow if the arrays are large, but it is very
> readable, and probably very fast if cythonised.


Thanks for all the answers. In this particular case speed is not important (A 
is 360x720, and b and c have fewer than 10 elements each). However, I have 
stumbled across similar comparison problems in IDL a couple of times where 
speed was crucial.

My own solution or attempt was this:

==
def fu(A, b, c):
    # replace every occurrence of b[i] in A with the corresponding c[i]
    for x, y in zip(b, c):
        indx = np.where(A == x)
        A[indx] = y
    return A
==
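A vectorised alternative that avoids the Python loop over b (a sketch; the 
function name and the small demo arrays are made up):

==
import numpy as np

def replace_values(A, b, c):
    # every A entry equal to b[i] becomes c[i]; other entries are left alone
    A = np.array(A, copy=True)
    b = np.asarray(b)
    c = np.asarray(c)
    order = np.argsort(b)                            # searchsorted needs sorted keys
    mask = np.in1d(A.ravel(), b).reshape(A.shape)    # entries holding one of the b values
    idx = np.searchsorted(b[order], A[mask])
    A[mask] = c[order][idx]
    return A

A = np.array([[1, 5, 3], [5, 2, 9]])
A2 = replace_values(A, b=[5, 9], c=[50, 90])
# A2 -> [[ 1 50  3]
#        [50  2 90]]
==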

> 
> 
> 

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy where function on different size

2012-11-24 Thread Siegfried Gonzi
> Message: 6
> Date: Sat, 24 Nov 2012 20:36:45 +
> From: Siegfried Gonzi 
> Subject: [Numpy-discussion] numpy where function on different size
> Hi all
> This must have been answered in the past but my google search
> capabilities are not the best.
> Given an array A say of dimension 40x60 and given another array/vector B
> of dimension 20 (the values in B occur only once).
> What I would like to do is the following which of course does not work
> (by the way doesn't work in IDL either):
> indx=where(A == B)
> I understand A and B are both of different dimensions. So my question:
> what would the fastest or proper way to accomplish this (I found a
> solution but think is rather awkward and not very scipy/numpy-tonic
> tough).


I should clarify: where(A==B, C, A)

C is of equal dimension to B. Basically: everywhere A equals e.g. B[0], 
replace it with C[0]; or wherever A equals B[4], replace every occurrence in 
A with C[4].


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy where function on different sized arrays

2012-11-24 Thread Siegfried Gonzi
Hi all
 
This must have been answered in the past but my google search capabilities are 
not the best.
 
Given an array A say of dimension 40x60 and given another array/vector B of 
dimension 20 (the values in B occur only once).
 
What I would like to do is the following, which of course does not work (by 
the way, it doesn't work in IDL either):
 
indx=where(A == B)
 
I understand A and B have different dimensions. So my question: what would be 
the fastest or most proper way to accomplish this? (I found a solution, but I 
think it is rather awkward and not very 'numpythonic', though.)
 
Thanks
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion