Re: [Numpy-discussion] New release of pycdf package ported to NumPy

2007-02-21 Thread George Nurser
Hi Andre,
I've downloaded pycdf and it works very nicely with numpy; thanks
very much for all your effort.

One small problem; I'm probably being stupid, but I cannot see how to
set a _FillValue as Float32.

regards, George Nurser.

On 12/02/07, Andre Gosselin [EMAIL PROTECTED] wrote:
 A small error slipped into the pycdf-0.6-3 release. Attribute deletion
 through del(), and dimension length inquiry through len() were missing
 in that release.

 A new pycdf-0.6.3b fixes those problems. I have withdrawn pycdf-0.6.3
 from Sourceforge.net . Those people who have already downloaded this
 release can safely continue to use it, if they do not mind missing the
 del() and len() features.

 Regards,
 Andre Gosselin


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] installation documentation

2007-02-21 Thread Toon Knapen
Hi all,

Is there detailed info on the installation process available?

I'm asking because in addition to installing numpy on linux-x86, I'm 
also trying to install numpy on aix-power and windows-x86. So before 
bombarding the ml with questions, I would like to get my hands on all 
doc available (I already have the 'Guide to NumPy' of Sept. 15, 2006, but 
cannot find all the info I'm looking for in there either).

Thanks in advance,

Toon Knapen


[Numpy-discussion] Managing Rolling Data

2007-02-21 Thread Alexander Michael
I'm new to numpy and looking for advice on setting up and managing
array data for my particular problem. I'm collecting observations of P
properties for N objects over a rolling horizon of H sample times. I
could conceptually store the data in three-dimensional array with
shape (N,P,H) that would allow me to easily (and efficiently with
strided slices) compute the statistics over both N and H that I am
interested in. This is great, but the rub is that H, an interval of T,
is a rolling horizon. T is too large to fit in memory, so I need to
load up H, perform my calculations, pop the oldest N x P slice and
push the newest N x P slice into the data cube. What's the best way to
do this that will maintain fast computations along the one-dimensional
slices over N and H? Is there a commonly accepted idiom?

Fundamentally, I see two solutions. The first would be to essentially
perform a memcpy to propagate the data. The second would be to manage
the N x P slices as H discontiguous memory blocks and merely reorder
the pointers with each new sample. Can I do either of these with
numpy?
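
For what it's worth, the first (memcpy-like) approach is nearly a
one-liner in numpy. A sketch with made-up sizes, and with the time axis
placed first so the shift is a single contiguous copy:

```python
import numpy as np

N, P, H = 4, 3, 10                 # illustrative sizes
cube = np.zeros((H, N, P))         # time axis first: the shift is contiguous

def roll_in(cube, new_slice):
    """Drop the oldest N x P slice and append the newest (memcpy-style)."""
    cube[:-1] = cube[1:]           # numpy handles the overlapping copy safely
    cube[-1] = new_slice

for t in range(H + 5):
    roll_in(cube, np.full((N, P), float(t)))
print(cube[0, 0, 0], cube[-1, 0, 0])   # 5.0 14.0 -- oldest and newest
```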

Thanks,
Alex


Re: [Numpy-discussion] Greek Letters

2007-02-21 Thread Chris Barker
Andrew Straw wrote:
  Here's one that seems like
 it might work, but I haven't tried it yet: 
 http://software.jessies.org/terminator

Now if only there was a decent terminal emulator for Windows that didn't 
use cygwin...

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] [Matplotlib-users] [matplotlib-devel] Unifying numpy, scipy, and matplotlib docstring formats

2007-02-21 Thread Chris Barker
There's probably a better forum for this conversation, but...

Barry Wark wrote:
 Perhaps we should consider two use cases: interactive use ala Matlab
 and larger code bases.

A couple of key points: yes, interactive use is different from larger 
code bases, but I think it's a Bad Idea to promote totally different 
coding styles for these cases, for a couple of reasons:

1) One usually is doing both at once. I was a long-time, every-day 
Matlab user, and hardly did anything of consequence interactively. I 
learned very quickly that it made a whole lot more sense to write a 
five-line script that I could save, edit, etc. than to do stuff 
interactively. Once I got something working, parts of that five-line 
script might get cut-and-pasted into real code.

I do still test one or two lines interactively, but even then, I want 
the style to be something I can put in my code.

2) consistency in docs and examples is important, recommending different 
styles for interactive and programming use is just going to confuse 
people more.

3) even for folks that do a lot of interactive use, they are likely to 
write larger scale code at some point, and then they would need to learn 
something new.

 In the first case, being able to import * saves
 a lot of typing

No, it saves a little typing, if you're using an OO style anyway.

  and the namespace pollution problem isn't a big deal.

Yes, it can be. A good interactive environment will be able to do things 
like method and command completion -- namespace pollution keeps that 
from working well.

 Returning to the OP's questions, why couldn't both cases be helped by
 creating a meta-package for numpy, scipy, and matplotlib? For the
 sake of argument, lets call the package plab. Existing code could be
 affected by changing the individual packages, but a package that
 essentially does
 
 from pylab import *
 from numpy import *
 from scipy import *


The issue with this is that you've now hidden where things are coming 
from. People seeing examples using that package will have no idea where 
things come from.

and by the way, the current pylab, as delivered with MPL, pretty much 
does this already. I think we need to move away from that, rather than 
putting even more into pylab.

Matthew Brett wrote:
  The downside about making numpy / python like matlab is that
 you soon realize that you really have to think about your problems
 differently, and write code in a different way.

Good point. Part of good Pythonic code is namespaces and OO style. 
New users might as well learn the whole pile at once.

That all being said, it would be nice to establish a standard convention 
for how to import the key packages. I use:

import numpy as N
import matplotlib as MPL

But I don't really care that much, if we can come to any kind of 
community consensus, I'll follow it. The goal would be for all docs, 
Wiki entries, examples on the mailing lists, etc. to use the same style.

-Chris






-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[EMAIL PROTECTED]


[Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread WolfgangZillig
Hi,

I'm quite new to numpy/scipy, so please excuse me if my problem is too obvious.

example code:

import numpy as n
print n.sin(n.pi)
print n.cos(n.pi/2.0)

results in:
1.22460635382e-016
6.12303176911e-017

I've expected something around 0. Can anybody explain what I am doing 
wrong here?



Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Charles R Harris

On 2/21/07, WolfgangZillig [EMAIL PROTECTED] wrote:


Hi,

I'm quite new to numpy/scipy so please excuse if my problem is too
obvious.

example code:

import numpy as n
print n.sin(n.pi)
print n.cos(n.pi/2.0)

results in:
1.22460635382e-016
6.12303176911e-017

I've expected something around 0. Can anybody explain what I am doing
wrong here?



They *are* around zero. You are seeing rounding error, probably both in the
value of pi and in the approximations used to evaluate sin and cos, most
likely some rational minimax fit. I recall teaching PDEs to students who
used Mathematica, and they had the same sort of problem because terms
involving sin(pi) didn't go exactly to zero. It made their Fourier series not
quite match the textbook answers. If you want these sorts of terms to be exact
you will have to use some sort of symbolic program.
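
A side note: near x = pi, sin(x) is approximately pi - x, so sin(numpy.pi)
effectively reports how far the float64 value numpy.pi sits from the true pi.
A quick check (the exact digits depend on the platform's math library):

```python
import numpy as np

# sin() near its zero: the result is essentially the representation error of pi
err = np.sin(np.pi)
print(err)                 # about 1.2246e-16 on most platforms
print(0 < err < 1e-15)     # True: tiny, but not exactly zero
```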

Chuck


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Zachary Pincus
Your results are indeed around zero.

  >>> numpy.allclose(0, 1.22460635382e-016)
  True

It's not exactly zero because floating-point math is in general not
exact. You'll need to check out a reference on numerical floating-point
computation for more details, but in general you should not expect exact
results, due to the limited precision of any fixed-width digital
representation of floats.

A corollary: in general, do not compare two floating-point values for
equality -- use something like numpy.allclose. (Exception: equality is
expected if the exact sequence of operations used to generate the two
numbers was identical.)
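
For instance:

```python
import numpy as np

a = np.sin(np.pi)            # tiny but nonzero, ~1.2e-16
print(a == 0.0)              # False: exact comparison fails
print(np.allclose(a, 0.0))   # True: comparison within tolerance
```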

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

On Feb 21, 2007, at 10:11 AM, WolfgangZillig wrote:

 Hi,

 I'm quite new to numpy/scipy so please excuse if my problem is too  
 obvious.

 example code:

 import numpy as n
 print n.sin(n.pi)
 print n.cos(n.pi/2.0)

 results in:
 1.22460635382e-016
 6.12303176911e-017

 I've expected something around 0. Can anybody explain what I am doing
 wrong here?



Re: [Numpy-discussion] Managing Rolling Data

2007-02-21 Thread Mike Ressler
On 2/21/07, Alexander Michael [EMAIL PROTECTED] wrote:
 ... T is too large to fit in memory, so I need to
 load up H, perform my calculations, pop the oldest N x P slice and
 push the newest N x P slice into the data cube. What's the best way to
 do this that will maintain fast computations along the one-dimensional
 slices over N and H? Is there a commonly accepted idiom?

Would loading your data via memmap, then slicing it, do your job
(using numpy.memmap)? I work on 12 GB files with 4 GB of memory, but
it is transparent to me since the OS takes care of moving data in and
out of memory. May not be the fastest solution possible, but for me it
is a case where dev time is more significant than run time.
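
A minimal sketch of that approach (the file name and shapes here are made
up for illustration; in practice T would be far larger than memory):

```python
import numpy as np
import os, tempfile

T, N, P, H = 100, 4, 3, 10
path = os.path.join(tempfile.mkdtemp(), 'samples.dat')

# Create the on-disk array once (tiny here, purely for illustration).
fp = np.memmap(path, dtype=np.float64, mode='w+', shape=(T, N, P))
fp[:] = np.arange(T * N * P, dtype=np.float64).reshape(T, N, P)
fp.flush()

# Re-open read-only; the OS pages data in and out as slices are touched.
data = np.memmap(path, dtype=np.float64, mode='r', shape=(T, N, P))
window = data[37:37 + H]     # an H-deep rolling window, no full load
print(window.shape)          # (10, 4, 3)
```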

Mike


-- 
[EMAIL PROTECTED]


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread David Goldsmith
As far as a computer is concerned, those numbers *are* around zero; 
growing up with Matlab, e.g., one quickly learns to recognize these 
numbers for what they are.  One way to return zero for numbers like 
these is

if numpy.allclose(x, 0): return 0 (or 0*x, to ensure that the 0 is the 
same type as x),

but caveat emptor: sometimes, of course, 1e-16 really is supposed to be 
1e-16, not just the best the algorithm can do to get to zero.  Also 
(help me out here, guys): I thought there was something like 
zeroifclose(x) which does the above, or does numpy only have 
real_if_close, to return a real when an imaginary part is close to zero?

DG

WolfgangZillig wrote:
 Hi,

 I'm quite new to numpy/scipy so please excuse if my problem is too obvious.

 example code:

 import numpy as n
 print n.sin(n.pi)
 print n.cos(n.pi/2.0)

 results in:
 1.22460635382e-016
 6.12303176911e-017

 I've expected something around 0. Can anybody explain what I am doing 
 wrong here?



Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread David Goldsmith
Anne Archibald wrote:
 On 21/02/07, Zachary Pincus [EMAIL PROTECTED] wrote:

   
 A corollary: in general, do not compare two floating-point values for
 equality -- use something like numpy.allclose. (Exception: equality is
 expected if the exact sequence of operations used to generate the two
 numbers was identical.)
 

 I really can't agree that blindly using allclose() is a good idea. For
 example, allclose(plancks_constant, 0) is True with the default
 tolerances, and yet that difference leads to quantum mechanics... you
 really have to know how much difference you expect and how big your
 numbers are going to be.

 Anne
   
Precisely! ;-)

Last time we had a posting like this, didn't one of the respondents 
include a link to something within the numpy Web docs that talks about 
floating point precision?
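
To make Anne's point concrete (the constant's value here is approximate
and just for illustration):

```python
import numpy as np

h = 6.626e-34                            # Planck's constant, J*s (approx.)
print(np.allclose(h, 0.0))               # True! default atol=1e-8 swallows it
print(np.allclose(h, 0.0, atol=1e-40))   # False once atol matches the scale
```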

DG


Re: [Numpy-discussion] installation documentation

2007-02-21 Thread Travis Oliphant
Toon Knapen wrote:

Hi all,

Is there detailed info on the installation process available?

I'm asking because in addition to installing numpy on linux-x86, I'm 
also trying to install numpy on aix-power and windows-x86. So before 
bombarding the ml with questions, I would like to get my hands on all 
doc available (I already have the 'Guide to NumPy' of Sept. 15, 2006, but 
cannot find all the info I'm looking for in there either).

If you don't care about using vendor-specific math libraries then it's 
as easy as

python setup.py install

on linux and aix (it's also that easy on Windows if you have the right 
compiler and/or configuration file set up).

If you care about vendor-specific libraries, then you have to either put 
them in a known place or write a site.cfg file.  (We should document 
where the known places are...)
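
For what it's worth, a sketch of a site.cfg (section and key names follow
numpy's site.cfg.example as I remember it; the paths are hypothetical and
need adjusting for your system):

```ini
# site.cfg -- placed next to numpy's setup.py before building
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

[atlas]
library_dirs = /usr/local/atlas/lib
atlas_libs = lapack, f77blas, cblas, atlas
```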

The installation process uses distutils, so information about 
distutils that you can glean from other places is also relevant.

Ask on the mailing list for more specifics.

-Travis





Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Robert Kern
Christopher Barker wrote:
 I wonder if there are any C math libs that do a better job than you'd 
 expect from standard FP? (short of unlimited precision ones)

With respect to π and the zeros of sin() and cos()? Not really. If
numpy.sin(numpy.pi) were to give you 0.0, it would be *wrong*. numpy.sin() is
supposed to give you the most accurate result representable in double-precision
for the input you gave it. numpy.pi is not π.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread David Goldsmith
Robert Kern wrote:
 Christopher Barker wrote:
   
 I wonder if there are any C math libs that do a better job than you'd 
 expect from standard FP? (short of unlimited precision ones)
 

 With respect to π and the zeros of sin() and cos()? Not really. If
 numpy.sin(numpy.pi) were to give you 0.0, it would be *wrong*. numpy.sin() is
 supposed to give you the most accurate result representable in 
 double-precision
 for the input you gave it. numpy.pi is not π.

   
True, but since the name numpy.pi is not the same thing as the literal 
3.1415926535897931, there's no reason in principle why numpy couldn't 
symbolically represent the exact value; of course, since numpy does not 
otherwise support symbolic computation, such an adoption would be 
frivolous and rather silly (but then the Pythons were never afraid of 
being silly). :-)


DG


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Christopher Barker
Robert Kern wrote:
 Christopher Barker wrote:
 I wonder if there are any C math libs that do a better job than you'd 
 expect from standard FP? (short of unlimited precision ones)
 
 With respect to π and the zeros of sin() and cos()? Not really. If
 numpy.sin(numpy.pi) were to give you 0.0, it would be *wrong*. numpy.sin() is
 supposed to give you the most accurate result representable in 
 double-precision
 for the input you gave it.

But does it?

 numpy.pi is not π.

More precisely, it's the best IEEE 754 64-bit FP approximation of pi.

Right. I think that was the trick that HP used -- they somehow stored 
and worked with pi with more digits. The things you can do if you're 
making dedicated hardware.

I do wonder if there would be some way to use the extended precision 
built in to Intel FP hardware -- i.e. have a pi that you can pass in 
that has the full 80 bits that can be used internally. I don't know if 
the trig functions can be done with extended precision though.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[EMAIL PROTECTED]



Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Robert Kern
Christopher Barker wrote:
 Robert Kern wrote:
 Christopher Barker wrote:
 I wonder if there are any C math libs that do a better job than you'd 
 expect from standard FP? (short of unlimited precision ones)
 With respect to π and the zeros of sin() and cos()? Not really.

I'll back off on this a little bit. There are some approaches that will work;
they're not floating point, but they're not really symbolic computation 
either.

  http://keithbriggs.info/xrc.html

 If
 numpy.sin(numpy.pi) were to give you 0.0, it would be *wrong*. numpy.sin() is
 supposed to give you the most accurate result representable in 
 double-precision
 for the input you gave it.
 
 But does it?

Not quite, it seems, but 0 is even farther from the correct answer, apparently:

[sage-2.0-intelmac-i386-Darwin]$ ./sage
----------------------------------------------------------------------
| SAGE Version 2.0, Release Date: 2007-01-28                         |
| Type notebook() for the GUI, and license() for information.        |
----------------------------------------------------------------------

sage: npi = RealNumber(3.1415926535897931, min_prec=53)
sage: npi.sin()
0.00013634246772254977
sage: import numpy
sage: numpy.sin(numpy.pi)
1.22460635382e-16

 numpy.pi is not π.
 
 More precisely, it's the best IEEE754 64 bit FP approximation of pi.
 
 Right. I think that was the trick that HP used -- they somehow stored 
 and worked with pi with more digits. The things you can do if you're 
 making dedicated hardware.
 
 I do wonder if there would be some way to use the extended precision 
 built in to Intel FP hardware -- i.e. have a pi that you can pass in 
 that has the full 80 bits that can be used internally. I don't know if 
 the trig functions can be done with extended precision though.

Well, you can always use long double if it is implemented on your platform. You
will have to construct a value for π yourself, though. I'm afraid that we don't
really make that easy.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Anne Archibald
On 21/02/07, Robert Kern [EMAIL PROTECTED] wrote:

 Well, you can always use long double if it is implemented on your platform. 
 You
 will have to construct a value for π yourself, though. I'm afraid that we 
 don't
 really make that easy.

If the trig functions are implemented at all, you can probably use
atan2(0, -1) and get a decent approximation; alternatively, if you want
to use sin(pi)=0 as a definition you could use scipy's bisection
routine (which is datatype-independent, IIRC) to find pi. Or you could
probably use a rational approximation to pi, then convert numerator
and denominator to long doubles, or better yet compare the rational
number with your long double approximation of pi.

But no, not particularly easy.
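
A sketch of the atan2 route (note that atan2 takes (y, x), so atan2(0, -1)
gives pi; on builds where numpy.longdouble is really just a double, the
difference below simply prints 0.0):

```python
import numpy as np

# Compute pi in long double: atan2(0, -1) = pi, carried out in extended precision
pi_ld = np.arctan2(np.longdouble(0.0), np.longdouble(-1.0))
print(pi_ld)
print(pi_ld - np.longdouble(np.pi))  # the float64 truncation error of pi, if any
```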

Anne


Re: [Numpy-discussion] Managing Rolling Data

2007-02-21 Thread Timothy Hochberg

If none of the suggested methods turn out to be efficient enough due to
copying overhead, here's a way to reduce the copying overhead by trading
memory (and a bit of complexity) for copying overhead. The general thrust is
to allocate M extra slices of memory and then shift the data every M time
slices instead of every time slice.

First you would allocate a block of memory N*P*(H+M) in size.

buffer = zeros([H+M,N,P], float)

Then you'd look at the first H time slices.

data = buffer[:H]

To pop one piece of data off the stack you'd simply shift data to look at a
different place in the buffer. The first time, you'd have something like
this:

data = buffer[1:1+H]

Every M time steps you need to recopy the data. I expect that this should
reduce your copying overhead a bunch since you're not copying as frequently.
It's pretty tunable too. You'd want to wrap some convenience functions
around stuff to automate the copying and popping, but that should be easy
enough.

I haven't tried this though, so caveat emptor.
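
To make the bookkeeping concrete, here's a rough sketch (equally untested;
the class and the sizes are just illustrative):

```python
import numpy as np

N, P, H, M = 4, 3, 10, 5                 # illustrative sizes

class RollingCube:
    """Keep the newest H (N x P) slices; recopy only every M pushes."""
    def __init__(self):
        self.buf = np.zeros((H + M, N, P))
        self.start = 0                   # index of the oldest retained slice

    def push(self, new_slice):
        if self.start == M:              # spare slots exhausted: one big shift
            self.buf[:H] = self.buf[M:M + H]
            self.start = 0
        self.buf[self.start + H] = new_slice
        self.start += 1

    def window(self):
        return self.buf[self.start:self.start + H]

rc = RollingCube()
for t in range(H + 12):
    rc.push(np.full((N, P), float(t)))
w = rc.window()
print(w[0, 0, 0], w[-1, 0, 0])           # 12.0 21.0 -- oldest and newest
```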

-tim


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Charles R Harris

On 2/21/07, Robert Kern [EMAIL PROTECTED] wrote:


Christopher Barker wrote:
 Robert Kern wrote:
 Christopher Barker wrote:
 I wonder if there are any C math libs that do a better job than you'd
 expect from standard FP? (short of unlimited precision ones)
 With respect to π and the zeros of sin() and cos()? Not really.



snip

Well, you can always use long double if it is implemented on your platform.

You
will have to construct a value for π yourself, though. I'm afraid that we
don't
really make that easy.

--



pi = 3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944
5923078164 0628620899 8628034825 3421170679 8214808651 ...

I don't know what that looks like when converted to long double. Lessee,

In [1]: import numpy

In [2]: pi = numpy.float128(3.1415926535897932384626433832795028841971)

In [3]: numpy.pi - pi
Out[3]: 0.0

In [7]: '%25.20f'%numpy.pi
Out[7]: '   3.14159265358979311600'

In [8]: '%25.20f'%pi
Out[8]: '   3.14159265358979311600'

I think we have a bug. Or else extended arithmetic isn't supported on this
machine.

Chuck


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Timothy Hochberg

On 2/21/07, Charles R Harris [EMAIL PROTECTED] wrote:




On 2/21/07, Robert Kern [EMAIL PROTECTED] wrote:

 Christopher Barker wrote:
  Robert Kern wrote:
  Christopher Barker wrote:
  I wonder if there are any C math libs that do a better job than
 you'd
  expect from standard FP? (short of unlimited precision ones)
  With respect to π and the zeros of sin() and cos()? Not really.


snip

Well, you can always use long double if it is implemented on your
 platform. You
 will have to construct a value for π yourself, though. I'm afraid that
 we don't
 really make that easy.

 --


pi = 3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944
5923078164 0628620899 8628034825 3421170679 8214808651 ...

I don't know what that looks like when converted to long double. Lessee,

In [1]: import numpy

In [2]: pi = numpy.float128(3.1415926535897932384626433832795028841971)



I think this is where you go wrong. Your string of digits is first parsed as
a python float and *then* converted to a long double. In the intermediate
stage it gets truncated, and you don't get the precision back.


In [3]: numpy.pi - pi

Out[3]: 0.0

In [7]: '%25.20f'%numpy.pi
Out[7]: '   3.14159265358979311600 '

In [8]: '%25.20f'%pi
Out[8]: '   3.14159265358979311600'

I think we have a bug. Or else extended arithmetic isn't supported on this
machine.

Chuck








--

//=][=\\

[EMAIL PROTECTED]


Re: [Numpy-discussion] Distributing prebuilt numpy and other extensions

2007-02-21 Thread Russell E. Owen
In article [EMAIL PROTECTED],
 Zachary Pincus [EMAIL PROTECTED] wrote:

 Hello folks,
 
 I've developed some command-line tools for biologists using python/ 
 numpy and some custom C and Fortran extensions, and I'm trying to  
 figure out how to easily distribute them...
 
 For people using linux, I figure a source distribution is no problem  
 at all. (Right?)
 On the other hand, for Mac users (whose computers by default don't  
 have the dev tools, and even then would need to get a fortran  
 compiler elsewhere) I'd like to figure out something a bit easier.
 
 I'd like to somehow provide an installer (double-clickable or python  
 script) that does a version check and then installs an appropriate  
 version of prebuilt binaries for numpy and my C and Fortran  
 extensions. Is this possible within the bounds of the python or numpy  
 distutils? Would setuptools be a better way to go? Preferably it  
 would be a dead easy, one-step thing...
 
 Or is this whole idea problematic, and better to stick with source  
 distribution in all cases?

As Robert Kern said, using bdist_mpkg is a nice easy way to create a 
double-clickable Mac installer for python code. It builds an installer 
package using the normal setup.py file for your stuff.

Lots of packages built this way are available at:
http://pythonmac.org/packages

But if you want one installer that installs everything, then you have to 
figure out what to do if the user already has some of your python 
packages installed (e.g. numpy). Overwrite the existing package? Somehow 
install it in parallel and have the user pick which version to use?

None of this is automated in bdist_mpkg. It is set up to install one 
python package at a time. So...

For your project I suspect you would be better off using easy_install
http://peak.telecommunity.com/DevCenter/EasyInstall
and packaging your project as a python egg. easy_install is 
cross-platform, handles dependencies automatically and can install from 
source or precompiled binaries. That said, I've not actually used it 
except to install existing eggs, though I'd like to find some time to 
learn it.

-- Russell



Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Charles R Harris

On 2/21/07, Timothy Hochberg [EMAIL PROTECTED] wrote:




On 2/21/07, Charles R Harris [EMAIL PROTECTED] wrote:



 On 2/21/07, Robert Kern  [EMAIL PROTECTED] wrote:
 
  Christopher Barker wrote:
   Robert Kern wrote:
   Christopher Barker wrote:
   I wonder if there are any C math libs that do a better job than
  you'd
   expect from standard FP? (short of unlimited precision ones)
   With respect to π and the zeros of sin() and cos()? Not really.


 snip

 Well, you can always use long double if it is implemented on your
  platform. You
  will have to construct a value for π yourself, though. I'm afraid that
  we don't
  really make that easy.
 
  --


  pi = 3.1415926535 8979323846 2643383279 5028841971 6939937510
  5820974944 5923078164 0628620899 8628034825 3421170679 8214808651 ...

  I don't know what that looks like when converted to long double. Lessee,

 In [1]: import numpy

 In [2]: pi = numpy.float128(3.1415926535897932384626433832795028841971)


I think this is where you go wrong. Your string of digits is first a
python float and *then* is converted to a long double. In the intermediate
stage it gets truncated and you don't get the precision back.



True. But there is missing functionality here.

In [4]: pi = numpy.float128('3.1415926535897932384626433832795028841971')
---
exceptions.TypeError Traceback (most recent
call last)

/home/charris/workspace/microsat/daemon/ipython console

TypeError: a float is required

It's somewhat pointless to have a data type that you can't properly
initialize. I think the string value should work; it works for python types.

Chuck


Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-21 Thread Charles R Harris

On 2/21/07, Charles R Harris [EMAIL PROTECTED] wrote:




On 2/21/07, Timothy Hochberg [EMAIL PROTECTED] wrote:



 On 2/21/07, Charles R Harris  [EMAIL PROTECTED] wrote:
 
 
 
  On 2/21/07, Robert Kern  [EMAIL PROTECTED] wrote:
  
   Christopher Barker wrote:
Robert Kern wrote:
Christopher Barker wrote:
I wonder if there are any C math libs that do a better job than
   you'd
expect from standard FP? (short of unlimited precision ones)
With respect to π and the zeros of sin() and cos()? Not really.
 
 
  snip
 
  Well, you can always use long double if it is implemented on your
   platform. You
   will have to construct a value for π yourself, though. I'm afraid
   that we don't
   really make that easy.
  
   --
 
 
  pi = 3.1415926535 8979323846 2643383279 5028841971 6939937510
  5820974944 5923078164 0628620899 8628034825 3421170679 8214808651 ...

  I don't know what that looks like when converted to long double. Lessee,
 
  In [1]: import numpy
 
  In [2]: pi = numpy.float128(3.1415926535897932384626433832795028841971
  )
 

 I think this is where you go wrong. Your string of digits is first a
 python float and *then* is converted to a long double. In the intermediate
 stage it gets truncated and you don't get the precision back.


True. But there is missing functionality here.

In [4]: pi = numpy.float128('3.1415926535897932384626433832795028841971')
---

exceptions.TypeError Traceback (most
recent call last)

/home/charris/workspace/microsat/daemon/ipython console

TypeError: a float is required

It's somewhat pointless to have a data type that you can't properly
initialize. I think the string value should work, it works for python types.




OTOH, the old way does give extended precision for pi

In [8]: numpy.arctan2(numpy.float128(1), numpy.float128(1))*4
Out[8]: 3.14159265358979323851

Chuck