Re: [Numpy-discussion] [SciPy-dev] Doc-day

2007-12-29 Thread Travis E. Oliphant
Charles R Harris wrote:
>
>
> On Dec 29, 2007 7:59 PM, Fernando Perez <[EMAIL PROTECTED]> wrote:
>
> On Dec 29, 2007 6:51 PM, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
> > If not, we should
> > definitely decide on the structure of the docstrings and stick
> to it.
>
> +100
>
>  
> I have raised the topic of documentation formats on the list several 
> times, and the discussions have always petered out. So I made a 
> decision, posted what *worked* with epydoc, and also generated a 
> significant amount of documentation. Now that is tossed out with 
> hardly a mention. I would like to propose that anyone who submits a 
> new documentation standard also submit code to parse the result, 
> compare it to the existing practice, discuss why it is an 
> improvement, and generate documentation to show how it works. I did 
> all of those things, I expect nothing less from others if we are going 
> to reinvent the wheel. And the end result should be available as soon 
> as the formatting changes are made, not at some indefinite point in 
> the future.

Hey Chuck,

I'm the guilty party in the docstring revisions, and I'm very sorry that 
my actions seemed to undermine your contributions (which are very much 
appreciated).

The changes to the docstring format that I pushed are not much different 
from what was already in place.  However, they make the documentation 
standard for SciPy and NumPy consistent with the ETS standard.  In 
addition, I think the new format conserves horizontal real estate and 
looks better when just printed as plain text (which is mostly how 
docstrings get read).

I ran epydoc (on the example.py file) and it still produces output that 
is very readable, even if it is not currently as good-looking as the 
previously rendered result.  A relatively simple pre-processor should 
be able to produce fancier output with epydoc.
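
For concreteness, here is a rough sketch of the kind of pre-processor I 
mean (purely illustrative; the section names and the reST field-list 
target are my assumptions, not a finished tool):

import re

# Section headers from the docstring guide that use the underlined style.
SECTIONS = ("Parameters", "Returns", "Raises", "See Also", "Notes", "Examples")

def preprocess(docstring):
    # Turn "Parameters" followed by a line of dashes into ":Parameters:",
    # which epydoc renders as a field instead of plain text.
    pattern = re.compile(r"^(%s)\n-+$" % "|".join(SECTIONS), re.MULTILINE)
    return pattern.sub(lambda m: ":%s:" % m.group(1), docstring)

Something along these lines, run over the docstrings before handing them 
to epydoc, should recover most of the fancier rendering.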

As we were going to have a Doc-Day, and several people were going to be 
adding documentation, I wanted to cement the docstring standard, and I 
did not want it to be based on a "technological" reason (i.e., we have 
all this stuff in the docstring so that epydoc can produce pretty 
output).  My experience is that >90% of docstring reading is done in 
plain text, so that should drive the decision, not the availability 
of a tool to render it.

We had to cement a decision, so I did it.  I am very sorry for stepping 
on your toes.  I should have looked at who committed the most recent 
changes to the documentation information pages and taken more time to 
work things out with you privately (but it was very early Friday 
morning, 2-4 am-ish).  I also did not want to have another 
mailing-list discussion on docstring formats because, as you've 
witnessed, that is not usually helpful on something which is primarily in 
the eye of the beholder.

I'll take responsibility for getting something in place to render the 
docstrings more beautifully.  If anybody is interested in helping with 
that, please let me know.

Sheepishly yours,

-Travis O.





Re: [Numpy-discussion] [SciPy-dev] Doc-day

2007-12-29 Thread Charles R Harris
On Dec 29, 2007 7:59 PM, Fernando Perez <[EMAIL PROTECTED]> wrote:

> On Dec 29, 2007 6:51 PM, Charles R Harris <[EMAIL PROTECTED]>
> wrote:
>
> > If not, we should
> > definitely decide on the structure of the docstrings and stick to it.
>
> +100
>

I have raised the topic of documentation formats on the list several times,
and the discussions have always petered out. So I made a decision, posted
what *worked* with epydoc, and also generated a significant amount of
documentation. Now that is tossed out with hardly a mention. I would like to
propose that anyone who submits a new documentation standard also
submit code to parse the result, compare it to the existing
practice, discuss why it is an improvement, and generate documentation to
show how it works. I did all of those things; I expect nothing less from
others if we are going to reinvent the wheel. And the end result should be
available as soon as the formatting changes are made, not at some
indefinite point in the future.

Chuck


Re: [Numpy-discussion] [SciPy-dev] Doc-day

2007-12-29 Thread Fernando Perez
On Dec 29, 2007 6:51 PM, Charles R Harris <[EMAIL PROTECTED]> wrote:

> If not, we should
> definitely decide on the structure of the docstrings and stick to it.

+100

I'm about to commit docstrings for scimath (what I started yesterday).
After some fixes to the numpy build, I can now work in-place and get
things like

tlon[~]> nosetests --with-doctest numpy.lib.scimath
...
----------------------------------------------------------------------
Ran 7 tests in 0.660s

OK


Which is nice.  For now I'm adhering to what was in the guide:

>>> log?
Type:   function
Base Class: <type 'function'>
Namespace:  Interactive
File:   /home/installers/src/scipy/numpy/numpy/lib/scimath.py
Definition: log(x)
Docstring:
Return the natural logarithm of x.

If x contains negative inputs, the answer is computed and returned in the
complex domain.

Parameters
----------
x : array_like

Returns
-------
array_like

Examples
--------
>>> import math

>>> log(math.exp(1))
1.0

Negative arguments are correctly handled (recall that for negative
arguments, the identity exp(log(z))==z does not hold anymore):

>>> log(-math.exp(1)) == (1+1j*math.pi)
True


But it's easy enough to fix these if a change is made.  Let's
settle this *soon* please, so that if changes need to be made there are
just a few, and we can move on.

At this point we just need *a* decision.  It doesn't need to be
perfect, but any docstring is better than our canonical:

>>> np.cumsum?
Type:   function
Base Class: <type 'function'>
Namespace:  Interactive
File:
/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/core/fromnumeric.py
Definition: np.cumsum(a, axis=None, dtype=None, out=None)
Docstring:
Sum the array over the given axis.

Blah, Blah.


I got that 'Blah blah' while lecturing to 20 developers and scientists
at a workshop at NCAR (the National Center for Atmospheric Research in
Boulder), projected on a 20 foot-wide screen.  A bit embarrassing, I
must admit.

So please, no more code without full docstrings in numpy/scipy.

Cheers,


f


Re: [Numpy-discussion] [SciPy-dev] Doc-day

2007-12-29 Thread Charles R Harris
On Dec 28, 2007 5:26 AM, Stefan van der Walt <[EMAIL PROTECTED]> wrote:

> On Thu, Dec 27, 2007 at 09:27:09PM -0800, Jarrod Millman wrote:
> > On Dec 27, 2007 7:42 PM, Travis E. Oliphant <[EMAIL PROTECTED]>
> wrote:
> > > Doc-day will start tomorrow (in about 12 hours).  It will be Friday
> for
> > > much of America and be moving into Saturday for Europe and Asia.  Join
> > > in on the irc.freenode.net  (channel scipy) to coordinate effort.  I
> > > imagine people will be in and out.  I plan on being available in IRC
> from
> > > about 9:30 am CST to 6:00 pm CST and then possibly later.
> > >
> > > If you are available at different times in different parts of the
> > > world,  jump in and pick something to work on.
> >
> > Since this is our first doc-day, it will be fairly informal.  Travis
> > is going to be trying to get some estimate of which packages need the
> > most work.  But if there is some area of NumPy or SciPy you are
> > familiar with, please go ahead and pitch in.  Here is the current
> > NumPy/ SciPy coding standard including docstring standards:
> > http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines
>
> I have some questions regarding Travis' latest modifications to the
> documentation guidelines:
>
> The following section was removed, why?
>
> """
> A reST-documented module should define::
>
>  __docformat__ = 'restructuredtext en'
>
> at the top level in accordance with `PEP 258
> <http://www.python.org/dev/peps/pep-0258/>`__.  Note that the
> ``__docformat__`` variable in a package's ``__init__.py`` file does
> not apply to objects defined in subpackages and submodules.
> """
>
> We had a long discussion on the mailing list on the pros and cons of
> "*Parameters*:" vs. "Parameters:".  I see now that it has been changed
> to
>
>  Parameters
>  ----------
>

If we are going to avoid ReST and html definition lists, i.e.

Parameters:

then we should probably avoid ReST and headings

Parameters
----------

because that will likely be moved about. In fact, we should probably throw
ReST overboard if we are going to avoid ReST'isms. I personally think the
underline is ugly (hi Travis), but I suppose that's beside the point. If we
stay with ReST it would be nice to retain the links in the see also section.
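
Just so we are all looking at the same thing, the two forms with some
content filled in look roughly like this (illustrative only):

    Parameters:
        x : array_like
            Input value.

versus

    Parameters
    ----------
    x : array_like
        Input value.

The first is the definition-list form; the second is the new
underlined-heading form.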

I'm not at home at the moment, so I don't know how the new style is rendered
by epydoc, or even if we are still going to use epydoc. If not, we should
definitely decide on the structure of the docstrings and stick to it. With a
bit more pain in the appearance, we could also format for doxygen, which is
pretty standard and would also work for the C code. I suspect that's
not going to fly, though ;)

Chuck


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2007-12-29 Thread Robert Kern
Bruce Sherwood wrote:
> There is also the question of 
> whether it would pay for numpy to make what is probably an exceedingly 
> fast check and do much faster calculations of sqrt(scalar) and other 
> such mathematical functions.

There is no question that it would pay. It takes time and effort to implement,
though.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2007-12-29 Thread Bruce Sherwood
Okay, I've implemented the scheme below that was proposed by Scott 
Daniels on the VPython mailing list, and it solves my problem. It's also 
much faster than using numpy directly: even with the "def" and "if" 
overhead, sqrt(scalar) is over 3 times faster than the numpy sqrt, and 
sqrt(array) is very nearly as fast as the numpy sqrt.
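
In case anyone wants to reproduce the comparison, this is roughly how the
timing can be done (a sketch only; absolute numbers will of course vary by
machine):

import timeit

setup = '''
from math import sqrt as mathsqrt
from numpy import sqrt as numpysqrt

def sqrt(x):
    if type(x) is float: return mathsqrt(x)
    return numpysqrt(x)
'''

# wrapped sqrt on a Python float vs. numpy's sqrt on the same scalar
print(timeit.Timer("sqrt(5.5)", setup).timeit())
print(timeit.Timer("numpysqrt(5.5)", setup).timeit())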

Thanks to those who made suggestions. There remains the question of why 
operator overloading of the kind I've described worked with Numeric and 
Boost but not with numpy and Boost. There is also the question of 
whether it would pay for numpy to make what is probably an exceedingly 
fast check and do much faster calculations of sqrt(scalar) and other 
such mathematical functions.

Bruce Sherwood

Bruce Sherwood wrote:
> I found by timing measurements that a faster scheme with less penalty 
> for the case of sqrt(array) looks like this:
>
> def sqrt(x):
>  if type(x) is float: return mathsqrt(x)
>  return numpysqrt(x)
>
> Bruce Sherwood wrote:
>   
>> Roman Yakovenko wrote:
>> 
>>> On Dec 29, 2007 7:47 AM, Bruce Sherwood <[EMAIL PROTECTED]> wrote:
>>>  
>>>   
 I realized belatedly that I should upgrade from Boost 1.33 to 1.34.
 Alas, that didn't cure my problem.
 
 
>>> Can you post small and complete example of what you are trying to 
>>> achieve?
>>>   
>>>   
>> I don't have a "small and complete" example available, but I'll 
>> summarize from earlier posts. VPython (vpython.org) has its own vector 
>> class to mimic the properties of 3D vectors used in physics, in the 
>> service of easy creation of 3D animations. There is a beta version 
>> which imports numpy and uses it internally; the older production 
>> version uses Numeric. Boost python and thread libraries are used to 
>> connect the C++ VPython code to Python.
>>
>> There is operator overloading that includes scalar*vector and 
>> vector*scalar, both producing vector. With Numeric, sqrt produced a 
>> float, which was a scalar for the operator overloading. With numpy, 
>> sqrt produces a numpy.float64 which is caught by vector*scalar but not 
>> by scalar*vector, which means that scalar*vector produces an ndarray 
>> rather than a vector, which leads to a big performance hit in existing 
>> VPython programs. The overloading and Boost code is the same in the 
>> VPython/Numeric and VPython/numpy versions. I don't know whether the 
>> problem is with numpy or with Boost or with the combination of the two.
>>
>> Here is the relevant part of the vector class:
>>
>> inline vector
>> operator*( const double s) const throw()
>> { return vector( s*x, s*y, s*z); }
>>
>> and here is the free function for right multiplication:
>>
>> inline vector
>> operator*( const double& s, const vector& v)
>> { return vector( s*v.x, s*v.y, s*v.z); }
>>
>> Maybe the problem is in the Boost definitions:
>>
>> py::class_<vector>("vector", py::init< py::optional<double, double, double> >())
>>    .def( self * double())
>>    .def( double() * self)
>>
>> Left multiplication is fine, but right multiplication isn't.
>>
>> A colleague suggested the following Boost declarations but cautioned 
>> that he wasn't sure of the syntax for referring to operator, and 
>> indeed this doesn't compile:
>>
>> .def( "__mul__", &vector::operator*(double), "Multiply vector times 
>> scalar")
>> .def( "__rmul__", &operator*(const double&, const vector&), "Multiply 
>> scalar times vector")
>>
>> I would really appreciate a Boost or numpy expert being able to tell 
>> me what's wrong (if anything) with these forms. However, I may have a 
>> useful workaround as I described in a post to the numpy discussion 
>> list. A colleague suggested that I do something like this for sqrt and 
>> other such mathematical functions:
>>
>> def sqrt(x):
>>   try: return mathsqrt(x)
>>   except TypeError: return numpysqrt(x)
>>
>> That is, first try the simple case of a scalar argument, handled by 
>> the math module sqrt, and only use the numpy sqrt routine in the case 
>> of an array argument. Even with the overhead of the try/except 
>> machinery, one gets much faster square roots for scalar arguments this 
>> way than with the numpy sqrt.
>>
>> Bruce Sherwood
>>
>> 


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2007-12-29 Thread Bruce Sherwood
I found by timing measurements that a faster scheme with less penalty 
for the case of sqrt(array) looks like this:

from math import sqrt as mathsqrt      # scalar square root from the math module
from numpy import sqrt as numpysqrt    # array-capable sqrt from numpy

def sqrt(x):
    if type(x) is float: return mathsqrt(x)
    return numpysqrt(x)

Bruce Sherwood wrote:
> Roman Yakovenko wrote:
>> On Dec 29, 2007 7:47 AM, Bruce Sherwood <[EMAIL PROTECTED]> wrote:
>>  
>>> I realized belatedly that I should upgrade from Boost 1.33 to 1.34.
>>> Alas, that didn't cure my problem.
>>> 
>> Can you post small and complete example of what you are trying to 
>> achieve?
>>   
> I don't have a "small and complete" example available, but I'll 
> summarize from earlier posts. VPython (vpython.org) has its own vector 
> class to mimic the properties of 3D vectors used in physics, in the 
> service of easy creation of 3D animations. There is a beta version 
> which imports numpy and uses it internally; the older production 
> version uses Numeric. Boost python and thread libraries are used to 
> connect the C++ VPython code to Python.
>
> There is operator overloading that includes scalar*vector and 
> vector*scalar, both producing vector. With Numeric, sqrt produced a 
> float, which was a scalar for the operator overloading. With numpy, 
> sqrt produces a numpy.float64 which is caught by vector*scalar but not 
> by scalar*vector, which means that scalar*vector produces an ndarray 
> rather than a vector, which leads to a big performance hit in existing 
> VPython programs. The overloading and Boost code is the same in the 
> VPython/Numeric and VPython/numpy versions. I don't know whether the 
> problem is with numpy or with Boost or with the combination of the two.
>
> Here is the relevant part of the vector class:
>
> inline vector
> operator*( const double s) const throw()
> { return vector( s*x, s*y, s*z); }
>
> and here is the free function for right multiplication:
>
> inline vector
> operator*( const double& s, const vector& v)
> { return vector( s*v.x, s*v.y, s*v.z); }
>
> Maybe the problem is in the Boost definitions:
>
> py::class_<vector>("vector", py::init< py::optional<double, double, double> >())
>    .def( self * double())
>    .def( double() * self)
>
> Left multiplication is fine, but right multiplication isn't.
>
> A colleague suggested the following Boost declarations but cautioned 
> that he wasn't sure of the syntax for referring to operator, and 
> indeed this doesn't compile:
>
> .def( "__mul__", &vector::operator*(double), "Multiply vector times 
> scalar")
> .def( "__rmul__", &operator*(const double&, const vector&), "Multiply 
> scalar times vector")
>
> I would really appreciate a Boost or numpy expert being able to tell 
> me what's wrong (if anything) with these forms. However, I may have a 
> useful workaround as I described in a post to the numpy discussion 
> list. A colleague suggested that I do something like this for sqrt and 
> other such mathematical functions:
>
> def sqrt(x):
>   try: return mathsqrt(x)
>   except TypeError: return numpysqrt(x)
>
> That is, first try the simple case of a scalar argument, handled by 
> the math module sqrt, and only use the numpy sqrt routine in the case 
> of an array argument. Even with the overhead of the try/except 
> machinery, one gets much faster square roots for scalar arguments this 
> way than with the numpy sqrt.
>
> Bruce Sherwood
>


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2007-12-29 Thread Bruce Sherwood
Roman Yakovenko wrote:
> On Dec 29, 2007 7:47 AM, Bruce Sherwood <[EMAIL PROTECTED]> wrote:
>   
>> I realized belatedly that I should upgrade from Boost 1.33 to 1.34.
>> Alas, that didn't cure my problem.
>> 
> Can you post small and complete example of what you are trying to achieve?
>   
I don't have a "small and complete" example available, but I'll 
summarize from earlier posts. VPython (vpython.org) has its own vector 
class to mimic the properties of 3D vectors used in physics, in the 
service of easy creation of 3D animations. There is a beta version which 
imports numpy and uses it internally; the older production version uses 
Numeric. Boost python and thread libraries are used to connect the C++ 
VPython code to Python.

There is operator overloading that includes scalar*vector and 
vector*scalar, both producing vector. With Numeric, sqrt produced a 
float, which was a scalar for the operator overloading. With numpy, sqrt 
produces a numpy.float64 which is caught by vector*scalar but not by 
scalar*vector, which means that scalar*vector produces an ndarray rather 
than a vector, which leads to a big performance hit in existing VPython 
programs. The overloading and Boost code is the same in the 
VPython/Numeric and VPython/numpy versions. I don't know whether the 
problem is with numpy or with Boost or with the combination of the two.

Here is the relevant part of the vector class:

inline vector
operator*( const double s) const throw()
{ return vector( s*x, s*y, s*z); }

and here is the free function for right multiplication:

inline vector
operator*( const double& s, const vector& v)
{ return vector( s*v.x, s*v.y, s*v.z); }

Maybe the problem is in the Boost definitions:

py::class_<vector>("vector", py::init< py::optional<double, double, double> >())
    .def( self * double())
    .def( double() * self)

Left multiplication is fine, but right multiplication isn't.
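
The asymmetry can be seen without Boost at all.  Here is a minimal
pure-Python sketch (Vec is just a stand-in for illustration, not VPython's
actual C++ vector class; the exact conversion path for the Boost-wrapped
type may differ, but the effect is the same kind of thing):

import numpy as np

class Vec(object):
    # A 3-vector that also happens to support the sequence protocol,
    # much as an array-like extension type effectively does.
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __len__(self):
        return 3
    def __getitem__(self, i):
        return (self.x, self.y, self.z)[i]
    def __mul__(self, s):              # vector * scalar -> Vec
        return Vec(s*self.x, s*self.y, s*self.z)
    __rmul__ = __mul__                 # scalar * vector -> Vec, if it is ever called

v = Vec(1.0, 2.0, 3.0)
print(type(v * np.sqrt(5.5)))   # Vec: our __mul__ handles it
print(type(np.sqrt(5.5) * v))   # ndarray: float64.__mul__ converts v to an array first

With a plain Python float on the left, Python falls back to Vec.__rmul__ and
a Vec comes back; with a numpy.float64 on the left, numpy's own
multiplication runs first and never gives __rmul__ a chance.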

A colleague suggested the following Boost declarations but cautioned 
that he wasn't sure of the syntax for referring to operator, and indeed 
this doesn't compile:

.def( "__mul__", &vector::operator*(double), "Multiply vector times scalar")
.def( "__rmul__", &operator*(const double&, const vector&), "Multiply 
scalar times vector")

I would really appreciate a Boost or numpy expert being able to tell me 
what's wrong (if anything) with these forms. However, I may have a 
useful workaround as I described in a post to the numpy discussion list. 
A colleague suggested that I do something like this for sqrt and other 
such mathematical functions:

def sqrt(x):
    try: return mathsqrt(x)
    except TypeError: return numpysqrt(x)

That is, first try the simple case of a scalar argument, handled by the 
math module sqrt, and only use the numpy sqrt routine in the case of an 
array argument. Even with the overhead of the try/except machinery, one 
gets must faster square roots for scalar arguments this way than with 
the numpy sqrt.

Bruce Sherwood



Re: [Numpy-discussion] numpy installed but can't use

2007-12-29 Thread Alan G Isaac
On Sat, 29 Dec 2007, dikshie apparently wrote:
> so import numpy and from numpy import *
> are different ? 

http://docs.python.org/tut/node8.html
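
In short, a minimal illustration:

import numpy            # names stay behind the prefix: numpy.sqrt, numpy.array
from numpy import *     # copies numpy's public names into your namespace: sqrt, array, ...

Note that the second form also shadows builtins such as sum.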

hth,
Alan Isaac



