Re: [Numpy-discussion] The NumPy Fortran-ordering quiz

2006-10-20 Thread A. M. Archibald
On 18/10/06, Travis Oliphant [EMAIL PROTECTED] wrote:

 If there are any cases satisfying these rules where a copy does not have
 to occur then let me know.

For example, zeros((4,4))[:,1].reshape((2,2)) need not be copied.

I filed a bug in trac and supplied a patch to multiarray.c that avoids
copies in PyArray_NewShape unless absolutely necessary.
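
For the record, a quick way to check whether such a reshape came back as a view
(a minimal sketch; it uses may_share_memory, which exists in current NumPy, and
whether the copy is actually avoided depends on the NumPy version / patch):

import numpy as np

a = np.zeros((4, 4))
col = a[:, 1]             # column view: regularly strided, non-contiguous
r = col.reshape((2, 2))   # (2, 2) can be expressed with strides over the same data

# True if reshape returned a view, False if it had to copy
print(np.may_share_memory(a, r))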

A. M. Archibald



[Numpy-discussion] Numpy-scalars vs Numpy 0-d arrays: copy or not copy?

2006-10-20 Thread Sebastien Bardeau
Hi!

I am confused with Numpy behavior with its scalar or 0-d arrays objects:

>>> numpy.__version__
'1.0rc2'
>>> a = numpy.array((1,2,3))
>>> b = a[:2]
>>> b += 1
>>> b
array([2, 3])
>>> a
array([2, 3, 3])
>>> type(b)
<type 'numpy.ndarray'>

To this point all is ok for me: subarrays share (by default) memory with 
their parent array. But:

>>> c = a[2]
>>> c += 1
>>> c
4
>>> a
array([2, 3, 3])
>>> type(c)
<type 'numpy.int32'>
>>> id(c)
169457808
>>> c += 1
>>> id(c)
169737448

That's really confusing, because slices (from the __getslice__ method) are
not copies (they share memory), while items (single elements from
__getitem__) are copied into one of the scalar types provided by Numpy.
I can understand that numpy scalars do not provide inplace operations
(like Python standard scalars, they are immutable), so I'd like to use
0-d numpy.ndarrays. But:

>>> d = numpy.array(a[2], copy=False)
>>> d += 1
>>> d
array(4)
>>> a
array([2, 3, 3])
>>> type(d)
<type 'numpy.ndarray'>
>>> d.shape
()
>>> id(d)
169621280
>>> d += 1
>>> id(d)
169621280

This is not a solution because d has been a copy since construction time...
My question is: is there a way to get a single element of an array into 
a 0-d array which shares memory with its parent array?

Thx for your help,

Sebastien





Re: [Numpy-discussion] Numpy-scalars vs Numpy 0-d arrays: copy or not copy?

2006-10-20 Thread Francesc Altet
On Friday, 20 October 2006 at 11:42, Sebastien Bardeau wrote:
[snip]
 I can understand that numpy scalars do not provide inplace operations
 (like Python standard scalars, they are immutable), so I'd like to use
 0-d numpy.ndarrays. But:

 >>> d = numpy.array(a[2], copy=False)
 >>> d += 1
 >>> d
 array(4)
 >>> a
 array([2, 3, 3])
 >>> type(d)
 <type 'numpy.ndarray'>
 >>> d.shape
 ()
 >>> id(d)
 169621280
 >>> d += 1
 >>> id(d)
 169621280

 This is not a solution because d has been a copy since construction time...
 My question is: is there a way to get a single element of an array into
 a 0-d array which shares memory with its parent array?

One possible solution (there can be more) is using ndarray:

In [47]: a = numpy.array([1,2,3], dtype='i4')
In [48]: n = 1    # the position that you want to share
In [49]: b = numpy.ndarray(buffer=a[n:n+1], shape=(), dtype='i4')
In [50]: a
Out[50]: array([1, 2, 3])
In [51]: b
Out[51]: array(2)
In [52]: b += 1
In [53]: b
Out[53]: array(3)
In [54]: a
Out[54]: array([1, 3, 3])
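
As a quick sanity check that b really is a view into a (a minimal sketch; it
assumes numpy.may_share_memory is available in the NumPy version at hand):

import numpy

a = numpy.array([1, 2, 3], dtype='i4')
b = numpy.ndarray(buffer=a[1:2], shape=(), dtype='i4')

print(numpy.may_share_memory(a, b))   # True: b is a 0-d view into a's buffer
b += 1
print(a)                              # array([1, 3, 3]): the parent array is updated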


Cheers,

-- 
0,0   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 -



Re: [Numpy-discussion] Can't compile numpy 1.0rc3 on OSX 10.3.9

2006-10-20 Thread Markus Rosenstihl

On 20.10.2006 at 02:53, Jay Parlar wrote:

 Hi!
 I try to compile numpy rc3 on Panther and get the following errors.
 (I start the build with python2.3 setup.py build to be sure to use the
 Python shipped with OS X. I didn't manage to compile Python 2.5 yet
 either, with similar errors.)
 Does anybody have an idea?
 gcc-3.3
 XCode 1.5
 November gcc updater is installed


 I couldn't get numpy building with Python 2.5 on 10.3.9 (although I
 had different compile errors). The solution that ended up working for
 me was Python 2.4. There's a bug in the released version of Python 2.5
 that's preventing it from working with numpy, should be fixed in the
 next release.

 You can find a .dmg for Python 2.4 here:
 http://pythonmac.org/packages/py24-fat/index.html

 Jay P.



I have that installed already, but I get some bus errors with it.
Furthermore it is built with gcc4, and I need to compile an extra
module (PyTables), which I fear will not work, hence I try to compile it
myself. Python 2.5 doesn't compile either (libSystemStubs exists only on
Tiger). The linking works when I remove -lSystemStubs, and it compiled
cleanly. Numpy rc3 was also compiling now with Python 2.5, but the tests
failed:

Python 2.5 (r25:51908, Oct 20 2006, 11:40:08)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1671)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test(10)
   Found 5 tests for numpy.distutils.misc_util
   Found 4 tests for numpy.lib.getlimits
   Found 31 tests for numpy.core.numerictypes
   Found 32 tests for numpy.linalg
   Found 13 tests for numpy.core.umath
   Found 4 tests for numpy.core.scalarmath
   Found 9 tests for numpy.lib.arraysetops
   Found 42 tests for numpy.lib.type_check
   Found 183 tests for numpy.core.multiarray
   Found 3 tests for numpy.fft.helper
   Found 36 tests for numpy.core.ma
   Found 1 tests for numpy.lib.ufunclike
   Found 12 tests for numpy.lib.twodim_base
   Found 10 tests for numpy.core.defmatrix
   Found 4 tests for numpy.ctypeslib
   Found 41 tests for numpy.lib.function_base
   Found 2 tests for numpy.lib.polynomial
   Found 8 tests for numpy.core.records
   Found 28 tests for numpy.core.numeric
   Found 4 tests for numpy.lib.index_tricks
   Found 47 tests for numpy.lib.shape_base
   Found 0 tests for __main__
 
Warning: invalid value encountered in  
divide
..Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
..Warning: invalid value encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
..Warning: invalid value encountered in divide
..Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
Warning: invalid value encountered in divide
.Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
 
..Warning: overflow  
encountered in exp
F... 
..Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
.Warning: invalid value encountered in sqrt
Warning: invalid value encountered in log
Warning: invalid value encountered in log10
..Warning: invalid value encountered in sqrt
Warning: invalid value encountered in sqrt
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log10
Warning: divide by zero encountered in log10
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccosh
Warning: invalid value encountered in arccosh
Warning: divide by zero encountered in arctanh
Warning: divide by zero encountered in arctanh
Warning: invalid value encountered in divide
Warning: invalid value encountered in true_divide
Warning: invalid value encountered in floor_divide
Warning: invalid value encountered in remainder
Warning: invalid value encountered in fmod

[Numpy-discussion] Helper function to unroll an array

2006-10-20 Thread Gael Varoquaux
Hi,

There is an operation I do a lot; I would call it unrolling an array.
The best way to describe it is probably to give the code:

def unroll(M):
    """Flattens the array M and returns a 2D array with the first columns
    being the indices of M, and the last column the flattened M."""
    # assumes `from numpy import *` (hstack, indices)
    return hstack((indices(M.shape).reshape(-1, M.ndim), M.reshape(-1, 1)))

Example:

>>> M
array([[ 0.73530097,  0.3553424 ,  0.3719772 ],
       [ 0.83353373,  0.74622133,  0.14748905],
       [ 0.72023762,  0.32306969,  0.19142366]])

>>> unroll(M)
array([[ 0.,  0.,  0.73530097],
       [ 0.,  1.,  0.3553424 ],
       [ 1.,  1.,  0.3719772 ],
       [ 2.,  2.,  0.83353373],
       [ 2.,  0.,  0.74622133],
       [ 1.,  2.,  0.14748905],
       [ 0.,  1.,  0.72023762],
       [ 2.,  0.,  0.32306969],
       [ 1.,  2.,  0.19142366]])


The docstring sucks. The function is trivial (when you know numpy a bit).
Maybe this function already exists in numpy; if so, I couldn't find it.
Otherwise I propose it for inclusion.

Cheers,

Gaël



Re: [Numpy-discussion] Numpy-scalars vs Numpy 0-d arrays: copy or not copy?

2006-10-20 Thread Stefan van der Walt
On Fri, Oct 20, 2006 at 11:42:26AM +0200, Sebastien Bardeau wrote:
>>> a = numpy.array((1,2,3))
>>> b = a[:2]

Here you index by a slice.

>>> c = a[2]

Whereas here you index by a scalar.

So you want to do

b = a[[2]]
b += 1

or in the general case

b = a[slice(2,3)]
b += 1
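
For what it's worth, a minimal sketch of the slice form (note that the
fancy-index spelling a[[2]] generally returns a copy rather than a view, so it
is the slice form that actually shares memory with a):

import numpy

a = numpy.array([1, 2, 3])
b = a[2:3]        # same as a[slice(2, 3)]: a length-one view into a
b += 1
print(a)          # array([1, 2, 4]) -- the change shows up in the parent array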

Regards
Stéfan



Re: [Numpy-discussion] Numpy-scalars vs Numpy 0-d arrays: copy or not copy?

2006-10-20 Thread Tim Hochberg
Francesc Altet wrote:
 On Friday, 20 October 2006 at 11:42, Sebastien Bardeau wrote:
 [snip]
   
 I can understand that numpy scalars do not provide inplace operations
 (like Python standard scalars, they are immutable), so I'd like to use
 0-d numpy.ndarrays. But:

 >>> d = numpy.array(a[2], copy=False)
 >>> d += 1
 >>> d
 array(4)
 >>> a
 array([2, 3, 3])
 >>> type(d)
 <type 'numpy.ndarray'>
 >>> d.shape
 ()
 >>> id(d)
 169621280
 >>> d += 1
 >>> id(d)
 169621280

 This is not a solution because d has been a copy since construction time...
 My question is: is there a way to get a single element of an array into
 a 0-d array which shares memory with its parent array?
 

 One possible solution (there can be more) is using ndarray:
[SNIP]

Here's a slightly more concise version of the same idea:

b = a[n:n+1].reshape([])
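
Spelled out (a small sketch; reshape(()) with an empty tuple is an equivalent
spelling):

import numpy

a = numpy.array([2, 3, 3])
n = 2
b = a[n:n+1].reshape(())   # 0-d view onto a single element of a
b += 1
print(a)                   # array([2, 3, 4])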


-tim





Re: [Numpy-discussion] Numpy-scalars vs Numpy 0-d arrays: copy or not copy?

2006-10-20 Thread Sebastien Bardeau
Oops, sorry, there were two mistakes with the 'hasslice' flag. This seems
to work for me now.

    def __getitem__(self, index):  # Index may be either an int or a tuple
        # Index length:
        if type(index) == int:  # A single element through first dimension
            ilen = 1
            index = (index,)    # A tuple
        else:
            ilen = len(index)
        # Array rank:
        arank = len(self.shape)
        # Check if there is a slice:
        hasslice = False
        for i in index:
            if type(i) == slice:
                hasslice = True
        # Array is already a 0-d array:
        if arank == 0 and index == (0,):
            return self
        elif arank == 0:
            raise IndexError, "0-d array has only one element at index 0."
        # This will return a single element as a 0-d array:
        elif arank == ilen and not hasslice:
            # This ugly thing returns a numpy 0-D array AND NOT a numpy scalar!
            # (Numpy scalars do not share their data with the parent array)
            newindex = list(index)
            newindex[0] = slice(index[0], index[0]+1, None)
            newindex = tuple(newindex)
            return self[newindex].reshape(())
        # This will return an n-D subarray (n >= 1):
        else:
            return self[index]


Sebastien Bardeau wrote:
 One possible solution (there can be more) is using ndarray:

 In [47]: a = numpy.array([1,2,3], dtype='i4')
 In [48]: n = 1    # the position that you want to share
 In [49]: b = numpy.ndarray(buffer=a[n:n+1], shape=(), dtype='i4')
   
 
 Ok, thanks. Actually that was also the solution I found. But this is much
 more complicated when arrays are N-dimensional with N > 1, and above all
 if the user asks for a slice in one or more dimensions. Here is how I
 redefine the __getitem__ method for my arrays. Remember that the goal is
 to return a 0-d array rather than a numpy scalar when I extract a single
 element out of an N-dimensional (N >= 1) array:

def __getitem__(self, index):  # Index may be either an int or a tuple
    # Index length:
    if type(index) == int:  # A single element through first dimension
        ilen = 1
        index = (index,)    # A tuple
    else:
        ilen = len(index)
    # Array rank:
    arank = len(self.shape)
    # Check if there is a slice:
    for i in index:
        if type(i) == slice:
            hasslice = True
        else:
            hasslice = False
    # Array is already a 0-d array:
    if arank == 0 and index == (0,):
        return self[()]
    elif arank == 0:
        raise IndexError, "0-d array has only one element at index 0."
    # This will return a single element as a 0-d array:
    elif arank == ilen and hasslice:
        # This ugly thing returns a numpy 0-D array AND NOT a numpy scalar!
        # (Numpy scalars do not share their data with the parent array)
        newindex = list(index)
        newindex[0] = slice(index[0], index[0]+1, None)
        newindex = tuple(newindex)
        return self[newindex].reshape(())
    # This will return an n-D subarray (n >= 1):
    else:
        return self[index]

  Well... I do not think this is very nice. Does someone have another
 idea? My question in my first post was: is there a way to get a single
 element of an array into a 0-d array which shares memory with its parent
 array?

  Sebastien


-- 
-
   Sebastien Bardeau
  L3AB - CNRS UMR 5804
 2 rue de l'observatoire
 BP 89
F - 33270 Floirac
Tel: (+33) 5 57 77 61 46
-




Re: [Numpy-discussion] histogram complete makeover

2006-10-20 Thread David Huard
Thanks for the comments. Here is the code for the new histogram, tests
included. I'll wait for comments or suggestions before submitting a patch
(numpy / scipy)?

Cheers,
David

2006/10/18, Tim Hochberg [EMAIL PROTECTED]:
 My $0.02: If histogram is going to get a makeover, particularly one that
 makes it more complex than at present, it should probably be moved to
 SciPy. Failing that, it should be moved to a submodule of numpy with
 similar statistical tools. Preferably with consistent interfaces for all
 of the functions.

# License: Scipy compatible
# Author: David Huard, 2006
from numpy import *
def histogram(a, bins=10, range=None, normed=False, weights=None, axis=None):
    """histogram(a, bins=10, range=None, normed=False, weights=None, axis=None)
                                                                  -> H, dict

    Return the distribution of sample.

    Parameters
    ----------
    a:       Array sample.
    bins:    Number of bins, or
             an array of bin edges, in which case the range is not used.
    range:   Lower and upper bin edges, default: [min, max].
    normed:  Boolean, if False, return the number of samples in each bin,
             if True, return a frequency distribution.
    weights: Sample weights.
    axis:    Specifies the dimension along which the histogram is computed.
             Defaults to None, which aggregates the entire sample array.

    Output
    ------
    H:       The number of samples in each bin.
             If normed is True, H is a frequency distribution.
    dict{
        'edges':      The bin edges, including the rightmost edge.
        'upper':      Upper outliers.
        'lower':      Lower outliers.
        'bincenters': Center of bins.
    }

    Examples

    x = random.rand(100,10)
    H, Dict = histogram(x, bins=10, range=[0,1], normed=True)
    H2, Dict = histogram(x, bins=10, range=[0,1], normed=True, axis=0)

    See also: histogramnd
    """
    a = asarray(a)
    if axis is None:
        a = atleast_1d(a.ravel())
        axis = 0

    # Bin edges.
    if not iterable(bins):
        if range is None:
            range = (a.min(), a.max())
        mn, mx = [mi+0.0 for mi in range]
        if mn == mx:
            mn -= 0.5
            mx += 0.5
        edges = linspace(mn, mx, bins+1, endpoint=True)
    else:
        edges = asarray(bins, float)

    dedges = diff(edges)
    decimal = int(-log10(dedges.min())+6)
    bincenters = edges[:-1] + dedges/2.

    # apply_along_axis accepts only one array input, but we need to pass the
    # weights along with the sample. The strategy here is to concatenate the
    # weights array along axis, so the passed array contains [sample, weights].
    # The array is then split back in __hist1d.
    if weights is not None:
        aw = concatenate((a, weights), axis)
        weighted = True
    else:
        aw = a
        weighted = False

    count = apply_along_axis(__hist1d, axis, aw, edges, decimal, weighted)

    # Outlier count
    upper = count.take(array([-1]), axis)
    lower = count.take(array([0]), axis)

    # Non-outlier count
    core = a.ndim*[slice(None)]
    core[axis] = slice(1, -1)
    hist = count[core]

    if normed:
        normalize = lambda x: atleast_1d(x/(x*dedges).sum())
        hist = apply_along_axis(normalize, axis, hist)

    return hist, {'edges':edges, 'lower':lower, 'upper':upper,
                  'bincenters':bincenters}


def __hist1d(aw, edges, decimal, weighted):
    """Internal routine to compute the 1d histogram.
    aw: sample, [weights]
    edges: bin edges
    decimal: approximation to put values lying on the rightmost edge in the
             last bin.
    weighted: Means that the weights are appended to array a.
    Return the bin count or frequency if normed.
    """
    nbin = edges.shape[0]+1
    if weighted:
        count = zeros(nbin, dtype=float)
        a, w = hsplit(aw, 2)
        w = w/w.mean()
    else:
        a = aw
        count = zeros(nbin, dtype=int)
        w = None

    binindex = digitize(a, edges)

    # Values that fall on an edge are put in the right bin.
    # For the rightmost bin, we want values equal to the right
    # edge to be counted in the last bin, and not as an outlier.
    on_edge = where(around(a, decimal) == around(edges[-1], decimal))[0]
    binindex[on_edge] -= 1

    # Count the number of identical

[Numpy-discussion] slicing suggestion

2006-10-20 Thread JJ
Hello.
I have a suggestion that might make slicing using
matrices more user-friendly.  I often have a matrix of
row or column numbers that I wish to use as a slice. 
If K is a matrix of row numbers (n x 1) and M is an n x m
matrix, then I would use ans = M[K.A.ravel(),:] to
obtain the matrix I want.  It turns out that I use
.A.ravel() quite a lot in my code, as I usually work
with matrices rather than arrays.  My suggestion is to
create a new attribute, such as .AR, so that the
following could be used: M[K.AR,:].  I believe this
would be more concise, easier to read, and well used. 
If slices are made in both directions of the matrix,
then the .A.ravel() becomes even more unwieldy.  Does
anyone else like this idea?
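
To make the idea concrete, here is a rough sketch of what the proposed
attribute could look like as a property on a matrix subclass (MatrixAR is a
hypothetical name used only for illustration; no such attribute exists on
numpy.matrix today):

import numpy as np

M = np.matrix(np.arange(12).reshape(4, 3))
K = np.matrix([[0], [2], [3]])          # n x 1 matrix of row numbers

rows = M[K.A.ravel(), :]                # the current idiom described above

class MatrixAR(np.matrix):
    @property
    def AR(self):
        # return the underlying data as a flattened 1-D array
        return self.A.ravel()

K2 = MatrixAR([[0], [2], [3]])
rows2 = M[K2.AR, :]                     # the proposed, more concise spelling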

John 

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



[Numpy-discussion] Model and experiment fitting.

2006-10-20 Thread Sebastian Żurek
Hi!

This is probably a silly question but I'm getting confused with a 
certain problem: a comparison between experimental data points (2D 
points set) and a model (2D points set - no analytical form).

The physical model produces (by sophisticated simulations done by an
external program) some 2D points data, and one of my tasks is to compare
those calculated data with the experimental ones.

The experimental and modeled data have the form of 2D curves, built of n
2D points, i.e.:

expDat=[[x1,x2,x3,..xn],[y1,y2,y3,...,yn]]
simDat=[[X1,X2,X3,...,Xn],[Y1,Y2,Y3,...,Yn]]

The task of determining, let's say, a root mean squared error (RMSE)
is trivial if x1==X1, x2==X2, etc.

In general (which is the common situation), xk differs from Xk (k=0..n), and
one may not simply compare successive Yk and yk (k=0..n) to determine
the goodness of fit. The distance h = Xk - X(k-1) is constant, but the similar
distance m(k) = xk - x(k-1) depends on the k-th point and is not constant,
although the data array lengths for simulation and experiment are the same.

My first idea was to do some interpolation to obtain the missing
points, but I did it 'by hand' (which, BTW, gave quite rewarding
results), and I suppose there is some numpy method to do it for me,
isn't there?

I would like to do something like:

gfit(expDat, simDat, 'measure_type')

which I hope will return a number determining the goodness of fit
(mean squared error, root mean squared error, ...) of two sets of
discrete 2D data points.

Is there something like that in any numerical python modules (numpy, 
pylab) I could use?


I can imagine fitting the data with some polynomial or whatever,
and then comparing the fitted data, but my goal is to operate on
data as raw as possible.

Thanks for your comments!

Sebastian




Re: [Numpy-discussion] Model and experiment fitting.

2006-10-20 Thread Robert Kern
Sebastian Żurek wrote:
 Hi!
 
 This is probably a silly question but I'm getting confused with a 
 certain problem: a comparison between experimental data points (2D 
 points set) and a model (2D points set - no analytical form).
 
 The physical model produces (by a sophisticated simulations done by an 
 external program) some 2D points data and  one of my task is to compare 
 those calculated data with an experimental one.
 
 The experimental and modeled data have form of 2D curves, build of n 
 2D-points, i.e.:
 
 expDat=[[x1,x2,x3,..xn],[y1,y2,y3,...,yn]]
 simDat=[[X1,X2,X3,...,Xn],[Y1,Y2,Y3,...,Yn]]
 
 The task of determining, let's say, a root mean squarred error (RMSe)
 is trivial if x1==X1, x2==X2, etc.
 
 In general, which is a common situation xk differs from Xk (k=0..n) and 
 one may not simply compare succeeding Yk and yk (k=0..n) to determine 
 the goodness-of-fit. The distance h=Xk-X(k-1) is constant, but similar
 distance m(k)=xk-x(k-1) depends on k-th point and is not a constant 
 value, although the data array lengths for simulation and experiment are 
 the same.

Your description is a bit vague. Do you mean that you have some model
function f that maps X values to Y values?

   f(x) -> y

If that is the case, is there some reason that you cannot run your simulation
using the same X points as your experimental data?

OTOH, is there some other independent variable (say Z) that *is* common between
your experimental and simulated data?

   f(z) -> (x, y)

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth.
   -- Umberto Eco




Re: [Numpy-discussion] Model and experiment fitting.

2006-10-20 Thread A. M. Archibald
On 20/10/06, Sebastian Żurek [EMAIL PROTECTED] wrote:


 Is there something like that in any numerical python modules (numpy,
 pylab) I could use?

In scipy there are some very convenient spline fitting tools which
will allow you to fit a nice smooth spline through the simulation data
points (or near, if they have some uncertainty); you can then easily
look at the RMS difference in the y values. You can also, less easily,
look at the distance from the curve allowing for some uncertainty in
the x values.
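
A rough sketch of that spline approach with scipy.interpolate (splrep/splev),
using toy data in place of the expDat/simDat curves from the original post:

import numpy as np
from scipy import interpolate

# toy stand-ins: same number of points, but different abscissae
x_exp = np.linspace(0.0, 1.0, 50)
y_exp = np.sin(2 * np.pi * x_exp) + 0.05 * np.random.randn(50)
x_sim = np.linspace(0.005, 1.005, 50)
y_sim = np.sin(2 * np.pi * x_sim)

# spline through the simulated curve, evaluated at the experimental x values
tck = interpolate.splrep(x_sim, y_sim, s=0)
y_model = interpolate.splev(x_exp, tck)

rmse = np.sqrt(np.mean((y_exp - y_model) ** 2))
print(rmse)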

I suppose you could also fit a curve through the experimental points
and compare the two curves in some way.

 I can imagine, I can fit the data with some polynomial or whatever,
 and than compare the fitted data, but my goal is to operate on
 as raw data as it's possible.

If you want to avoid using an a priori model, Numerical Recipes
discusses some possible approaches ("Do two-dimensional distributions
differ?" at http://www.nrbook.com/a/bookcpdf.html is one), but it's not
clear how to turn the problem you describe into a solvable one - some
assumption about how the models vary between sampled x values appears
to be necessary, and that amounts to interpolation.

A. M. Archibald


[Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Brian Granger
Hi,

I am running numpy on AIX, compiling with xlc.  Revision 1.0rc2 works
fine and passes all tests.  But 1.0rc3 and more recent revisions give the
following on import:

Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in add
Warning: invalid value encountered in not_equal
Warning: invalid value encountered in absolute
Warning: invalid value encountered in less
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in equal
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in add
Warning: invalid value encountered in not_equal
Warning: invalid value encountered in absolute
Warning: invalid value encountered in less
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in equal
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
[lots more of this]

The odd thing is that all tests pass.  I have looked, but can't find
where this Warning is coming from  in the code.  Any thoughts on where
this is coming from?  What can I do to help debug this?  I am not sure
what revision introduced this issue.

Thanks

Brian



Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Tim Hochberg
Brian Granger wrote:
 Hi,

 i am running numpy on aix compiling with xlc.  Revision 1.0rc2 works
 fine and passes all tests.  But 1.0rc3 and more recent give the
 following on import:

 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in add
 Warning: invalid value encountered in not_equal
 Warning: invalid value encountered in absolute
 Warning: invalid value encountered in less
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in equal
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in add
 Warning: invalid value encountered in not_equal
 Warning: invalid value encountered in absolute
 Warning: invalid value encountered in less
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in equal
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 Warning: invalid value encountered in multiply
 [lots more of this]

 The odd thing is that all tests pass.  I have looked, but can't find
 where this Warning is coming from  in the code.  Any thoughts on where
 this is coming from?  What can I do to help debug this?  I am not sure
 what revision introduced this issue.
   
The reason that you are seeing this now is that the default error state 
has been tightened up. There were some issues with tests failing as a 
result of this, but I believe I fixed those already, and you're seeing 
this on import, not when running the tests, correct? The first thing to 
do is to figure out where the invalids are occurring, and the natural way 
to do that is to set the error state to 'raise', but you can't set the 
error state until you have imported numpy, so that's not going to help here.

I think the first thing that I would try is to throw in a 
seterr(all='raise', under='ignore') right after the call to _setdef in 
numeric.py. If you're lucky, this will point out where the invalids are 
popping up. As a sanity check, you could instead make this 
seterr(all='ignore'), which should make all the warnings go away, but 
won't tell you anything about why there are warnings to begin with.
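
For reference, a small standalone sketch of toggling and restoring the error
state with seterr (outside of numeric.py):

import numpy

old = numpy.seterr(all='raise', under='ignore')  # stop at the first bad operation
try:
    numpy.log(numpy.zeros(3))                    # divide-by-zero inside log -> FloatingPointError
except FloatingPointError:
    print('caught the error; the traceback points at the offending line')
numpy.seterr(**old)                              # restore the previous error state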

Regards,

-tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Brian Granger
Also, when I use seterr(all='ignore') the tests fail:

==
FAIL: Ticket #112
--
Traceback (most recent call last):
  File 
/usr/common/homes/g/granger/usr/local/lib/python/numpy/core/tests/test_regression.py,
line 219, in check_longfloat_repr
assert(str(a)[1:9] == str(a[0])[:8])
AssertionError

--
Ran 516 tests in 0.823s

FAILED (failures=1)

Thanks for helping out on this.

On 10/20/06, Tim Hochberg [EMAIL PROTECTED] wrote:
 Brian Granger wrote:
  Hi,
 
  i am running numpy on aix compiling with xlc.  Revision 1.0rc2 works
  fine and passes all tests.  But 1.0rc3 and more recent give the
  following on import:
 
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in add
  Warning: invalid value encountered in not_equal
  Warning: invalid value encountered in absolute
  Warning: invalid value encountered in less
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in equal
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in add
  Warning: invalid value encountered in not_equal
  Warning: invalid value encountered in absolute
  Warning: invalid value encountered in less
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in equal
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  Warning: invalid value encountered in multiply
  [lots more of this]
 
  The odd thing is that all tests pass.  I have looked, but can't find
  where this Warning is coming from  in the code.  Any thoughts on where
  this is coming from?  What can I do to help debug this?  I am not sure
  what revision introduced this issue.
 
 The reason that you are seeing this now is that the default error state
 has been tightened up. There were some issues with tests failing as a
 result of this, but I believe I fixed those already and you're seeing
 this on import, not when running the tests correct? The first thing to
 do is figure out where the invalids are occurring, and the natural way
 to do that is to set the error state to raise, but you can't set the
 error state till you import it, so that's not going to help here.

 I think the first thing that I would try is to throw in a
 seterr(all='raise', under='ignore') right after the call to _setdef in
 numeric.py. If you're lucky, this will point out where the invalids are
 popping up. As a sanity check, you could instead make this
 seterr(all='ignore'), which should make all the warnings go away, but
 won't tell you anything about why there are warnings to begin with.

 Regards,

 -tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Brian Granger
When I set seterr(all='warn') I see the following:

In [1]: import numpy
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/ufunclike.py:46:
RuntimeWarning: invalid value encountered in log
  _log2 = umath.log(2)
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/scimath.py:19:
RuntimeWarning: invalid value encountered in log
  _ln2 = nx.log(2.0)
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:64:
RuntimeWarning: invalid value encountered in add
  two = one + one
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:65:
RuntimeWarning: invalid value encountered in subtract
  zero = one - one
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:71:
RuntimeWarning: invalid value encountered in add
  a = a + a
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:72:
RuntimeWarning: invalid value encountered in add
  temp = a + one
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:73:
RuntimeWarning: invalid value encountered in subtract
  temp1 = temp - a
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:74:
RuntimeWarning: invalid value encountered in subtract
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:74:
RuntimeWarning: invalid value encountered in not_equal
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:80:
RuntimeWarning: invalid value encountered in add
  b = b + b
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:81:
RuntimeWarning: invalid value encountered in add
  temp = a + b
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:82:
RuntimeWarning: invalid value encountered in subtract
  itemp = int_conv(temp-a)
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:83:
RuntimeWarning: invalid value encountered in not_equal
  if any(itemp != 0):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:95:
RuntimeWarning: invalid value encountered in multiply
  b = b * beta
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:96:
RuntimeWarning: invalid value encountered in add
  temp = b + one
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:97:
RuntimeWarning: invalid value encountered in subtract
  temp1 = temp - b
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:98:
RuntimeWarning: invalid value encountered in subtract
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:98:
RuntimeWarning: invalid value encountered in not_equal
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:103:
RuntimeWarning: invalid value encountered in divide
  betah = beta / two
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:106:
RuntimeWarning: invalid value encountered in add
  a = a + a
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:107:
RuntimeWarning: invalid value encountered in add
  temp = a + one
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:108:
RuntimeWarning: invalid value encountered in subtract
  temp1 = temp - a
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:109:
RuntimeWarning: invalid value encountered in subtract
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:109:
RuntimeWarning: invalid value encountered in not_equal
  if any(temp1 - one != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:113:
RuntimeWarning: invalid value encountered in add
  temp = a + betah
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:115:
RuntimeWarning: invalid value encountered in subtract
  if any(temp-a != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:115:
RuntimeWarning: invalid value encountered in not_equal
  if any(temp-a != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:117:
RuntimeWarning: invalid value encountered in add
  tempa = a + beta
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:118:
RuntimeWarning: invalid value encountered in add
  temp = tempa + betah
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:119:
RuntimeWarning: invalid value encountered in subtract
  if irnd==0 and any(temp-tempa != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:119:
RuntimeWarning: invalid value encountered in not_equal
  if irnd==0 and any(temp-tempa != zero):
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:124:
RuntimeWarning: invalid value encountered in divide
  betain = one / beta
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/machar.py:127:
RuntimeWarning: invalid value encountered in multiply
  a = a * betain

Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Tim Hochberg
Brian Granger wrote:
 Also, when I use seterr(all='ignore') the the tests fail:

 ==
 FAIL: Ticket #112
 --
 Traceback (most recent call last):
   File 
 /usr/common/homes/g/granger/usr/local/lib/python/numpy/core/tests/test_regression.py,
 line 219, in check_longfloat_repr
 assert(str(a)[1:9] == str(a[0])[:8])
 AssertionError

 --
 Ran 516 tests in 0.823s

 FAILED (failures=1)

 Thanks for helping out on this.
   
How recent is your version? I just fixed a problem that was causing this same 
failure yesterday -- if your checkout is older than that, you may want to 
get the most recent stuff from SVN and see if that fixes this.

-tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Brian Granger
I have been doing these recent tests with 1.0rc3.  I am building from
trunk right now and we will see how that goes.  Thanks for your help.

Brian

On 10/20/06, Tim Hochberg [EMAIL PROTECTED] wrote:
 Brian Granger wrote:
  Also, when I use seterr(all='ignore') the the tests fail:
 
  ==
  FAIL: Ticket #112
  --
  Traceback (most recent call last):
File 
  /usr/common/homes/g/granger/usr/local/lib/python/numpy/core/tests/test_regression.py,
  line 219, in check_longfloat_repr
  assert(str(a)[1:9] == str(a[0])[:8])
  AssertionError
 
  --
  Ran 516 tests in 0.823s
 
  FAILED (failures=1)
 
  Thanks for helping out on this.
 
 How recent is your version? I just a problem that was causing this same
 failure yesterday -- if you checkout is older than that, you may want to
 get the most recent stuff from SVN and see if that fixes this.

 -tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Tim Hochberg
Brian Granger wrote:
 When I set seterr(all='warn') I see the following:

 In [1]: import numpy
 /usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/ufunclike.py:46:
 RuntimeWarning: invalid value encountered in log
   _log2 = umath.log(2)
 /usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/scimath.py:19:
 RuntimeWarning: invalid value encountered in log
   _ln2 = nx.log(2.0)
   
[etc, etc]

Wow! That looks pretty bad. What do you get if you try just 
numpy.log(2) or numpy.log(2.0)? Is it producing sane results for 
scalars at all? I suppose another possibility is that the error 
reporting is broken on AIX for some reason.

Hmmm.

I'm betting that it is. The macro UFUNC_CHECK_STATUS is very platform 
dependent. There is a version for AIX (ufuncobject.h line 301), but 
perhaps it's broken on your particular configuration and as a result is 
spitting out all kinds of bogus errors. This is only coming to light now 
because the default error checking level got cranked up.

I gotta call it a night and I'll be out tomorrow, so I won't be much 
more help, but here's something that you might look into: have you 
compiled numarray successfully? If you haven't, you might want to try it.  
It uses the same default error checking that numpy is now using. If you 
have, you might want to look for the equivalent of UFUNC_CHECK_STATUS 
(it might even have the same name) and splice it into numpy and see if 
it fixes your problems.

Of course, if numpy.log(2) is spitting out something bogus, there's 
something much worse going on, but I suspect you would have noticed that 
by now.

Good luck,

-tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Tim Hochberg
Brian Granger wrote:
 Tim,

 I just tried everything with r3375.  I set seterr(all='warn') and the
 tests passed.  But all the floating point warnings are still there.
 With seterr(all='ignore') the warnings go away and all the tests pass.
 Should I worry about the warnings?
   
Maybe. I just sent you some email on this. But my guess is that the code 
that checks for FP errors is broken on your particular system. Mainly I 
suspect this because I think you would have noticed by now if everything 
was as broken as the warnings seem to indicate. Assuming that's the 
case (and this will probably become clear if you test a bunch of 
computations that give correct, non-INF/NAN results but still spit 
out warnings), you have two choices: try to fix the warnings code or 
disable the warnings. The former would be preferable since then you 
could actually use the warnings code, but it may be a pain in the neck 
unless you can find some place to steal the relevant code from.
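
A tiny check along those lines (a sketch): if simple results like these come
out correct while warnings are still emitted, the FP-status check, not the
arithmetic, is the likely culprit:

import numpy

numpy.seterr(all='warn')
x = numpy.log(numpy.array([2.0]))            # should be ~0.6931 and perfectly valid
y = numpy.array([1.0]) + numpy.array([1.0])  # likewise
print(numpy.isfinite(x).all() and numpy.isfinite(y).all())  # True -> any warning above is spurious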

-tim


 thanks

 Brian



 On 10/20/06, Tim Hochberg [EMAIL PROTECTED] wrote:
   
 Brian Granger wrote:
 
 Also, when I use seterr(all='ignore') the the tests fail:

 ==
 FAIL: Ticket #112
 --
 Traceback (most recent call last):
   File 
 /usr/common/homes/g/granger/usr/local/lib/python/numpy/core/tests/test_regression.py,
 line 219, in check_longfloat_repr
 assert(str(a)[1:9] == str(a[0])[:8])
 AssertionError

 --
 Ran 516 tests in 0.823s

 FAILED (failures=1)

 Thanks for helping out on this.

   
 How recent is your version? I just a problem that was causing this same
 failure yesterday -- if you checkout is older than that, you may want to
 get the most recent stuff from SVN and see if that fixes this.

 -tim




Re: [Numpy-discussion] Problem introduced after 1.0rc2 on AIX with xlc

2006-10-20 Thread Brian Granger
Thanks, I will investigate more on these things and get back to you
early in the week.  But for now numpy seems to be functioning pretty
normally (log(2) gives the correct answer).  Thanks again.
It would be great to figure this stuff out before 1.0, but we might
not have time.


Brian

On 10/20/06, Tim Hochberg [EMAIL PROTECTED] wrote:
 Brian Granger wrote:
  When I set seterr(all='warn') I see the following:
 
  In [1]: import numpy
  /usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/ufunclike.py:46:
  RuntimeWarning: invalid value encountered in log
_log2 = umath.log(2)
  /usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/scimath.py:19:
  RuntimeWarning: invalid value encountered in log
_ln2 = nx.log(2.0)
 
 [etc, etc]

 Wow! That looks pretty bad. What do you get if you try just
 numpy.log(2) or numpy.log(2.0)? Is it producing sane results for
 scalars at all? I suppose another possibility is that the error
 reporting is broken on AIX for some reason.

 Hmmm.

 I'm betting that is is. The macro UFUNC_CHECK_STATUS is very platform
 dependent. There is a version from AIX (ufuncobject.h line 301), but
 perhaps it's broken on your particular configuration and as a result is
 spitting out all kinds of bogus errors. This is only coming to light now
 because the default error checking level got cranked up.

 I gotta call it a night and I'll be out tomorrow, so I won't be much
 more help, but here's something that you might look into: have you
 compiled numarray sucessfully? If you haven't you might want to try it.
 It uses the same default error checking that numpy is now using. If you
 have, you might want to look for the equivalent of UFUNC_CHECK_STATUS
 (it might even have the same name) and splice it into numpy and see if
 it fixes your problems.

 Of course, if numpy.log(2) is spitting out something bogus, there's
 something much worse going on, but I suspect you would have noticed that
 by now.

 Good luck,

 -tim

