Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Ondrej Certik
2009/4/13 Stéfan van der Walt ste...@sun.ac.za:
 2009/4/13 Eric Firing efir...@hawaii.edu:

 Stéfan, or other git-users,

 One feature of hg that I use frequently is hg serve, the builtin web
 server.  I use it for two purposes: for temporary local publishing
 (e.g., in place of using ssh--sometimes it is quicker and easier), and
 for providing access to the very nice hgwebdir.cgi browsing capability
 on local repos.  I have looked through git commands etc., and have not
 found an equivalent; am I missing something?  The browsing capability of
 hgwebdir.cgi is much better than any gui interface I have seen for git
 or for hg.  I realize there is a gitweb.cgi, but having that cgi is not
 the same as being able to publish locally with a single command, and
 then browse.

 The command is

 git instaweb --httpd=webrick

Ah, that's nice, thanks for sharing it. I didn't know about it. Works
fine for me.

Ondrej
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.test() errors r6862

2009-04-13 Thread Nils Wagner
On Sun, 12 Apr 2009 12:56:46 -0600
  Charles R Harris charlesr.har...@gmail.com wrote:
 On Sun, Apr 12, 2009 at 12:52 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:
 


 On Sun, Apr 12, 2009 at 12:17 PM, Nils Wagner 
 nwag...@iam.uni-stuttgart.de wrote:

 ======================================================================
 ERROR: test suite
 ----------------------------------------------------------------------
 Traceback (most recent call last):
   File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/suite.py", line 154, in run
     self.setUp()
   File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/suite.py", line 180, in setUp
     if not self:
   File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/suite.py", line 65, in __nonzero__
     test = self.test_generator.next()
   File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/loader.py", line 221, in generate
     for test in g():
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/test_format.py", line 440, in test_memmap_roundtrip
     shape=arr.shape, fortran_order=fortran_order)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/format.py", line 484, in open_memmap
     mode=mode, offset=offset)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/memmap.py", line 231, in __new__
     mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=offset)
 error: [Errno 22] Invalid argument

 ======================================================================
 ERROR: test_mmap (test_io.TestSaveLoad)
 ----------------------------------------------------------------------
 Traceback (most recent call last):
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 169, in knownfailer
     return f(*args, **kwargs)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/test_io.py", line 92, in test_mmap
     self.roundtrip(a, file_on_disk=True, load_kwds={'mmap_mode': 'r'})
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/test_io.py", line 100, in roundtrip
     RoundtripTest.roundtrip(self, np.save, *args, **kwargs)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/test_io.py", line 67, in roundtrip
     arr_reloaded = np.load(load_file, **load_kwds)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/io.py", line 193, in load
     return format.open_memmap(file, mode=mmap_mode)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/format.py", line 484, in open_memmap
     mode=mode, offset=offset)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/memmap.py", line 231, in __new__
     mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=offset)
 error: [Errno 22] Invalid argument

 ----------------------------------------------------------------------
 Ran 1889 tests in 12.656s

 FAILED (KNOWNFAIL=1, errors=2)


 Hmm, I'll guess that the problem is this:

 *offset* must be a multiple of the ALLOCATIONGRANULARITY.

 This conflicts with the current intent of offset. Looks like we need to fix
 up the patch to use the nearest multiple of ALLOCATIONGRANULARITY and then
 offset the usual way. Or, since ALLOCATIONGRANULARITY is likely to be
 platform dependent, maybe we should just revert the patch.
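Chuck's suggested fix (mapping at the nearest lower multiple of ALLOCATIONGRANULARITY and then adding the remainder back inside the mapped buffer) could be sketched like this; `aligned_mmap_params` is a hypothetical helper name, not the actual numpy.memmap code:

```python
import mmap

def aligned_mmap_params(offset):
    """Split a byte offset into an mmap-legal offset plus a remainder.

    mmap.mmap() requires its offset argument to be a multiple of
    mmap.ALLOCATIONGRANULARITY; the caller maps at the aligned offset
    and then indexes `remainder` extra bytes into the mapped buffer.
    (Hypothetical helper for illustration only.)
    """
    gran = mmap.ALLOCATIONGRANULARITY
    aligned = (offset // gran) * gran  # round down to a legal offset
    return aligned, offset - aligned
```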

 
 Can you import mmap, then do a dir(mmap) and see if ALLOCATIONGRANULARITY
 is available?
 
 TIA,
 
 Chuck

Hi Chuck,

>>> import mmap
>>> dir(mmap)
['ACCESS_COPY', 'ACCESS_READ', 'ACCESS_WRITE',
 'ALLOCATIONGRANULARITY', 'MAP_ANON', 'MAP_ANONYMOUS',
 'MAP_DENYWRITE', 'MAP_EXECUTABLE', 'MAP_PRIVATE',
 'MAP_SHARED', 'PAGESIZE', 'PROT_EXEC', 'PROT_READ',
 'PROT_WRITE', '__doc__', '__file__', '__name__',
 '__package__', 'error', 'mmap']


All tests pass with '1.4.0.dev6864'. Thank you very much.

Cheers,
 Nils


[Numpy-discussion] survey of freely available software for the solution of linear algebra problems

2009-04-13 Thread Nils Wagner
FWIW,

From: Jack Dongarra donga...@cs.utk.edu
Date: Tue, 7 Apr 2009 12:00:01 -0400
Subject: Survey of linear algebra software

We have updated the survey of freely available software 
for the solution of
linear algebra problems. Send us comments if you see a 
problem.

http://www.netlib.org/utk/people/JackDongarra/la-sw.html

Regards,

Jack and Hatem
  


Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Eric Firing
Stéfan van der Walt wrote:
 2009/4/13 Eric Firing efir...@hawaii.edu:
 Stéfan, or other git-users,

 One feature of hg that I use frequently is hg serve, the builtin web
 server.  I use it for two purposes: for temporary local publishing
 (e.g., in place of using ssh--sometimes it is quicker and easier), and
 for providing access to the very nice hgwebdir.cgi browsing capability
 on local repos.  I have looked through git commands etc., and have not
 found an equivalent; am I missing something?  The browsing capability of
 hgwebdir.cgi is much better than any gui interface I have seen for git
 or for hg.  I realize there is a gitweb.cgi, but having that cgi is not
 the same as being able to publish locally with a single command, and
 then browse.
 
 The command is
 
 git instaweb --httpd=webrick

Well, sort of--but one must already have one of the three supported web 
servers installed, so it is a bit heavier-weight.  The gitserve program 
is closer to hg serve because all it requires is git and python (and 
*nix, unlike hg serve).

Eric

 
 Cheers
 Stéfan



Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Martin Geisler
Ondrej Certik ond...@certik.cz writes:

 Plus with git, you can fetch the remote repository with all the
 branches and browse them locally in your remote branches, when you are
 offline. And merge them with your own branches. In mercurial, it seems
 the only way to see what changes are there and which branch and which
 commits I want to merge is to use 'hg in', but that requires an
 internet connection, so it's basically like svn.

No, no, ... I think you're misunderstanding how the history graph works
in Mercurial. It works like it does in Git -- changesets are in a
parent-child relationship. So if I cloned a repository at changeset T:

  ... --- T

I'm free to make new commits:

  ... --- T --- A --- B

And you can do the same:

  ... --- T --- X --- Y --- Z

I can pull in your changesets without disrupting my own work:

            X --- Y --- Z
           /
  ... --- T --- A --- B

Your changesets will be attached to the graph at the point where you
made them, the same for my changesets. I don't have to merge at this
point, instead I can continue with my work even though the repository
has multiple heads (changesets with no children). So I make C:

            X --- Y --- Z
           /
  ... --- T --- A --- B --- C

I might now decide that I like your X, Y changesets but not Z, so I
merge Y into my line of work to create D:

            X --- Y --- Z
           /       \
  ... --- T         `--- D
           \            /
            A --- B --- C

or I might decide that I don't need your changesets and discard them by
cloning or by the strip command from mq (one or the other):

  hg clone -r C repo repo-with-C
  hg strip X

The result is a repository that has only the history up to C:

  ... --- T --- A --- B --- C

So I think it's nonsense to say that Mercurial is like Subversion here:
you pull in changesets when online and you can make new commits, merge
commits, delete commits while offline.
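The rule behind "multiple heads" is easy to state precisely: a head is a changeset that no other changeset lists as a parent. A toy sketch of the graph above (the letters are the hypothetical changeset ids from the diagrams):

```python
# Toy model of the changeset graph above: map each changeset to its
# parents. 'T' is the common ancestor; A..C and X..Z are two lines of
# development made independently after cloning at T.
parents = {
    'T': [], 'A': ['T'], 'B': ['A'], 'C': ['B'],
    'X': ['T'], 'Y': ['X'], 'Z': ['Y'],
}
# A head is a changeset that never appears as anyone's parent.
non_heads = {p for ps in parents.values() for p in ps}
heads = sorted(set(parents) - non_heads)
print(heads)  # ['C', 'Z']: two heads coexist, no merge is forced
```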

Git has the advantage that it names these branches in a nice way, and
you can work with these names across the network. The bookmarks
extension for Mercurial is inspired by this and lets you attach local
names to heads in the graph.

 I don't know if mercurial has improved in this lately, but at least
 for me, that's a major problem with mercurial.

There has been no improvement lately since Mercurial has worked like
this all the time :-)

-- 
Martin Geisler

VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multiparty Computation) to Python. See: http://viff.dk/.




Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread David Cournapeau
Martin Geisler wrote:
 Ondrej Certik ond...@certik.cz writes:

 Plus with git, you can fetch the remote repository with all the
 branches and browse them locally in your remote branches, when you are
 offline. And merge them with your own branches. In mercurial, it seems
 the only way to see what changes are there and which branch and which
  commits I want to merge is to use 'hg in', but that requires an
  internet connection, so it's basically like svn.

 No, no, ... I think you're misunderstanding how the history graph works
 in Mercurial. It works like it does in Git -- changesets are in a
 parent-child relationship. So if I cloned a repository at changeset T:

   ... --- T

 I'm free to make new commits:

   ... --- T --- A --- B

 And you can do the same:

   ... --- T --- X --- Y --- Z

 I can pull in your changesets without disrupting my own work:

             X --- Y --- Z
            /
   ... --- T --- A --- B

 Your changesets will be attached to the graph at the point where you
 made them, the same for my changesets. I don't have to merge at this
 point, instead I can continue with my work even though the repository
 has multiple heads (changesets with no children). So I make C:

             X --- Y --- Z
            /
   ... --- T --- A --- B --- C

 I might now decide that I like your X, Y changesets but not Z, so I
 merge Y into my line of work to create D:

             X --- Y --- Z
            /       \
   ... --- T         `--- D
            \            /
             A --- B --- C

 or I might decide that I don't need your changesets and discard them by
 cloning or by the strip command from mq (one or the other):

   hg clone -r C repo repo-with-C
   hg strip X

 The result is a repository that has only the history up to C:

   ... --- T --- A --- B --- C

 So I think it's nonsense to say that Mercurial is like Subversion here:
 you pull in changesets when online and you can make new commits, merge
 commits, delete commits while offline.

 Git has the advantage that it names these branches in a nice way, and
 you can work with these names across the network. The bookmarks
 extension for Mercurial is inspired by this and lets you attach local
 names to heads in the graph.

Thanks a lot for this information, that's really useful.

I am still a bit confused about how this works at the UI level, though,
w.r.t. one clone/branch. In bzr, there is one branch and at most one
working tree / repository (at least that's how it is generally used). It
looks like hg, while being based on one branch / repository, is more
flexible. One thing which converted me to git from bzr was the UI for
branch comparison. My basic reference case is the release process.
Currently, in numpy, we have the trunk and potentially several release
branches, which would be one head each, if I get the vocabulary right:

           --- A --- B --- C (1.2.x head)
          /
  trunk --- Release 1.2.0 --- Release 1.3.0 --- (mainline head)
          \
           --- D --- E --- F (1.3.x head)

How do you refer to different branches from one clone? For example, if
I want to see the differences between mainline and the 1.3.x branch,
cherry-pick things from mainline into both 1.3.x and 1.2.x, etc.? How
does it work with bookmarks?

Also, do we have to agree on everyone using bookmarks to communicate
with each other (one repository for everything on the main numpy web
repository), or can people still use clones for every branch, putting
things back into bookmarks when they push to the official repo?

cheers,

David


Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Martin Geisler
David Cournapeau da...@ar.media.kyoto-u.ac.jp writes:

Hi David x 2 :-)

I've put David Soria on Cc since he wrote the bookmarks extension,
maybe he can give additional information. The thread can be found here:

  http://thread.gmane.org/gmane.comp.python.numeric.general/29117

 Martin Geisler wrote:

 [...] changesets are in a parent-child relationship. So if I cloned a
 repository at changeset T:

   ... --- T

 I'm free to make new commits:

   ... --- T --- A --- B

 And you can do the same:

   ... --- T --- X --- Y --- Z

 I can pull in your changesets without disrupting my own work:

             X --- Y --- Z
            /
   ... --- T --- A --- B

 Your changesets will be attached to the graph at the point where you
 made them, the same for my changesets. I don't have to merge at this
 point, instead I can continue with my work even though the repository
 has multiple heads (changesets with no children). So I make C:

             X --- Y --- Z
            /
   ... --- T --- A --- B --- C

 I might now decide that I like your X, Y changesets but not Z, so I
 merge Y into my line of work to create D:

             X --- Y --- Z
            /       \
   ... --- T         `--- D
            \            /
             A --- B --- C

 or I might decide that I don't need your changesets and discard them by
 cloning or by the strip command from mq (one or the other):

   hg clone -r C repo repo-with-C
   hg strip X

 The result is a repository that has only the history up to C:

   ... --- T --- A --- B --- C

 So I think it's nonsense to say that Mercurial is like Subversion here:
 you pull in changesets when online and you can make new commits, merge
 commits, delete commits while offline.

 Git has the advantage that it names these branches in a nice way, and
 you can work with these names across the network. The bookmarks
 extension for Mercurial is inspired by this and lets you attach local
 names to heads in the graph.

 Thanks a lot for this information, that's really useful.

Great! I trust that you guys will let me know when/if you get tired of
this discussion :-)

 I am still a bit confused about how this works at the UI level,
 though, w.r.t. one clone/branch. In bzr, there is one branch and at
 most one working tree / repository (at least that's how it is
 generally used). It looks like hg, while being based on one branch /
 repository, is more flexible. One thing which converted me to git
 from bzr was the UI for branch comparison. My basic reference case is
 the release process. Currently, in numpy, we have the trunk and
 potentially several release branches, which would be one head each,
 if I get the vocabulary right:

 --- A --- B --- C (1.2.x head)
   /
 trunk --- Release 1.2.0 --- Release 1.3.0 --- (mainline head)
  \
D --- E --- F --- (1.3.x head)


 How do you refer to different branches from one clone? For example,
 if I want to see the differences between mainline and the 1.3.x branch,
 cherry-pick things from mainline into both 1.3.x and 1.2.x, etc.? How
 does it work with bookmarks?

You write things like

  hg diff -r F -r tip

where 'tip' is a built-in name that always points to the newest revision
in a repository. If you have a bookmark named 'numpy-1.2.x' on F you
could write:

  hg diff -r numpy-1.2.x -r tip

As for cherry-picking the recommended way (in both Git and Mercurial, as
far as I know) is to do the commit on the oldest relevant branch and
then merge this branch into younger branches and finally into the
mainline. This way each branch is a strict subset of the next branch,
and mainline contains all commits on all branches.

But of course one might realise too late that a changeset on mainline
should have been made on, say, the 1.3.x branch. In that case one can
apply the change as a new changeset by exporting it from mainline and
importing it on 1.3.x:

  hg export tip > tip.patch
  hg update -C 1.3.x
  hg import tip.patch

The transplant extension for Mercurial can help with this, probably with
better handling of merges, but I don't have any real experience with it.

 Also, do we have to agree on everyone using bookmarks to communicate
 with each other (one repository for everything on the main numpy web
 repository), or can people still use clones for every branch, putting
 things back into bookmarks when they push to the official repo?

Bookmarks are a convenience layer on top of the basic functionality
in Mercurial. They let you attach a name to a changeset just like a tag,
but unlike tags the names move around: if you make a commit based on a
changeset F carrying the bookmark 'numpy-1.3.x', the child changeset
will now have the bookmark 'numpy-1.3.x'. This way bookmarks become
moveable pointers into the history, similar to how Git heads point to
changesets in the history.

At the moment, bookmarks are even local to a repository, but it is
planned to add support for looking them up remotely and for transferring
them on push/pull. But right now you cannot tell if I'm using bookmarks
to keep track of the heads in my clone.

Re: [Numpy-discussion] [OT] read data from pdf

2009-04-13 Thread João Luís Silva
Neal Becker wrote:
 Anyone know of software that can assist with reading data points from a pdf 
 version of a 2-d line graph?

There are programs to help convert a graphic image to data points, such 
as http://plotdigitizer.sourceforge.net/



Re: [Numpy-discussion] [OT] read data from pdf

2009-04-13 Thread Nadav Horesh

Do you mean manual digitization? You can use g3data after converting the
plot to a bitmap.

  Nadav

-----Original Message-----
From: numpy-discussion-boun...@scipy.org on behalf of Neal Becker
Sent: Mon 13-April-09 13:10
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] [OT] read data from pdf
 
Anyone know of software that can assist with reading data points from a pdf 
version of a 2-d line graph?




Re: [Numpy-discussion] [OT] read data from pdf

2009-04-13 Thread Gary Ruben
My friend has used this successfully:
http://www.datathief.org/
Looks like this will do it too:
http://www.datamaster2003.com/

Gary R.

João Luís Silva wrote:
 Neal Becker wrote:
 Anyone know of software that can assist with reading data points from a pdf 
 version of a 2-d line graph?
 
 There are programs to help convert a graphic image to data points, such 
 as http://plotdigitizer.sourceforge.net/



Re: [Numpy-discussion] [OT] read data from pdf

2009-04-13 Thread Gael Varoquaux
On Mon, Apr 13, 2009 at 07:10:40AM -0400, Neal Becker wrote:
 Anyone know of software that can assist with reading data points from a pdf 
 version of a 2-d line graph?

I know someone who had a lot of success with engauge-digitizer (packaged
in Ubuntu).

Gaël


Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread David Cournapeau
On Mon, Apr 13, 2009 at 6:22 PM, Martin Geisler m...@lazybytes.net wrote:


  hg diff -r F -r tip

 where 'tip' is a built-in name that always points to the newest revision
 in a repository. If you have a bookmark named 'numpy-1.2.x' on F you
 could write:

  hg diff -r numpy-1.2.x -r tip

Ok, so bookmarks are like named branches in that regard. But what if
they happen to be in different repositories? If you have two clones,
how do you refer to one clone from the other? That's where the bzr UI
for branch handling broke down IMHO, so I was wondering about hg.


 As for cherry-picking the recommended way (in both Git and Mercurial, as
 far as I know) is to do the commit on the oldest relevant branch and
 then merge this branch into younger branches and finally into the
 mainline. This way each branch is a strict subset of the next branch,
 and mainline contains all commits on all branches.

 But of course one might realise too late that a changeset on mainline
 should have been made on, say, the 1.3.x branch. In that case one can
 apply the change as a new changeset by exporting it from mainline and
 importing it on 1.3.x:

  hg export tip > tip.patch
  hg update -C 1.3.x
  hg import tip.patch

 The transplant extension for Mercurial can help with this, probably with
 better handling of merges, but I don't have any real experience with it.

Ok.


 Bookmarks are a convenience layer on top of the basic functionality
 in Mercurial. They let you attach a name to a changeset just like a
 tag, but unlike tags the names move around: if you make a commit based
 on a changeset F carrying the bookmark 'numpy-1.3.x', the child
 changeset will now have the bookmark 'numpy-1.3.x'. This way bookmarks
 become moveable pointers into the history, similar to how Git heads
 point to changesets in the history.

 At the moment, bookmarks are even local to a repository, but it is
 planned to add support for looking them up remotely and for transferring
 them on push/pull. But right now you cannot tell if I'm using bookmarks
 to keep track of the heads in my clone.

So it means that we would have to keep clones for each branch on the
server at the moment, right? Can I import these into bookmarks? Say
we have three numpy branches on scipy.org:

numpy-mainline
numpy-1.2.x
numpy-1.3.x

and I want to keep everything in one repository, each branch being a
bookmark. It looks like it is possible internally from my understanding
of how it works in hg, but does the UI support it?

David


Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Martin Geisler
David Cournapeau courn...@gmail.com writes:

 On Mon, Apr 13, 2009 at 6:22 PM, Martin Geisler m...@lazybytes.net wrote:

  hg diff -r F -r tip

 where 'tip' is a built-in name that always points to the newest
 revision in a repository. If you have a bookmark named 'numpy-1.2.x'
 on F you could write:

  hg diff -r numpy-1.2.x -r tip

 Ok, so bookmarks are like named branches in that regard.

Yeah, sort of. Bookmarks are actually more like tags: both concepts
allow you to give a changeset a friendly name.

Named branches are slightly different: they allow you to stamp a
friendly name on a changeset, but not on just one changeset -- all
changesets in the named branch get the stamp. The branch name can be
used in operations like 'hg update', 'hg diff', etc, and will resolve to
the tip-most changeset on the branch.

 But what if they happen to be in different repositories? If you
 have two clones, how do you refer to one clone from the other? That's
 where the bzr UI for branch handling broke down IMHO, so I was
 wondering about hg.

You're right, the UI is not so good. In particular, you cannot use 'hg
diff' to compare repositories. The rdiff extension does this, though:

  http://www.selenic.com/mercurial/wiki/index.cgi/RdiffExtension

Using stock Mercurial you can use 'hg incoming -p' to see the patches of
the incoming (missing) changesets, or you can pull in the changesets and
then look at them -- if you don't like them you can remove them again.

 At the moment, bookmarks are even local to a repository, but it is
 planned to add support for looking them up remotely and for
 transferring them on push/pull. But right now you cannot tell if I'm
 using bookmarks to keep track of the heads in my clone.

 So it means that we would have to keep clones for each branch on the
 server at the moment, right?

Yes, that would be easiest. The bookmarks are stored in a plain text
file (.hg/bookmarks) which can be copied around, but the bookmarks will
only update themselves upon commit when they point to a head changeset.

 Can I import these into bookmarks? Say we have three numpy branches
 on scipy.org:

 numpy-mainline
 numpy-1.2.x
 numpy-1.3.x

 and I want to keep everything in one repository, each branch being a
 bookmark. It looks like it is possible internally from my
 understanding of how it works in hg, but does the UI support it?

Right, it is supported -- you would just pull the changesets from each
clone into one clone and this will merge the acyclic graphs into one
graph with all changesets.

You could do this in just your clone, without requiring the numpy server
to do the same. If you do this you'll probably want to attach three
bookmarks in your clone in order to make it easy for yourself to update
from one head to another.

Or you could use named branches for the numpy-1.2.x and numpy-1.3.x
branches -- then everybody would see the branch names when they clone
the combined repository.

-- 
Martin Geisler

VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multiparty Computation) to Python. See: http://viff.dk/.




Re: [Numpy-discussion] DVCS at PyCon

2009-04-13 Thread Stéfan van der Walt
2009/4/12 Stéfan van der Walt ste...@sun.ac.za:
 I underestimated the
 value of this type of manipulation, and of having a clearly structured
 and easily traversable history.

I read that Bram Cohen of Codeville / patience diff fame doesn't
agree with me, so I'll give his opinion too:


Don't bother with a pretty history.

The history of a branch is hardly ever looked at. Making it look
pretty for the historians is just a waste of time. The beauty of 3-way
merge is that you can always clean stuff up later and never worry
about the past mess ever again. In particular, don't go to great
lengths to make sure that there's a coherent local image of the entire
repository exactly as it appeared on your local machine after every
new feature. There are very rare projects which maintain a level of
reliability and testing which warrant such behavior, and yours isn't
one of them.


http://bramcohen.livejournal.com/52148.html

I look at the SciPy history a lot, so I'm not convinced, but there it is.

Cheers
Stéfan


Re: [Numpy-discussion] Leopard install

2009-04-13 Thread Tommy Grav
On Apr 12, 2009, at 7:02 PM, David Cournapeau wrote:

 On Mon, Apr 13, 2009 at 1:19 AM, Stuart Edwards sedwar...@cinci.rr.com 
  wrote:
 Hi

 I am trying to install numpy 1.3.0 on Leopard 10.5.6 and at the point
 in the install process where I select a destination, my boot disc is
 excluded with the message:


 I think you need to install python from python.org (version 2.5) to
 install the numpy binary,

Alternatively, you can use the ActiveState python binary (v2.5), which
also seems to work.

Cheers
   Tommy


Re: [Numpy-discussion] documentation suggestion

2009-04-13 Thread Pauli Virtanen
Sun, 12 Apr 2009 17:32:31 -0400, Neal Becker wrote:
[clip]
 Done, but can someone check that what I wrote is accurate?  I wrote that
 changes to the ndarray will change the underlying buffer object.  But,
 the buffer protocol allows for read-only buffers.  Not sure what ndarray
 would do if you tried to write in that case.

For read-only buffers, the new array will not be writeable:

>>> x = "abcde"
>>> z = np.ndarray((5,), buffer=x, dtype='1S')
>>> z[0] = 'x'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: array is not writeable

However, you can override this, and misusing it potentially leads to 
nasty things:

>>> x = "abcde"
>>> y = "abcde"
>>> z = np.ndarray((5,), buffer=x, dtype='1S')
>>> z.flags.writeable = True
>>> z[0] = 'x'
>>> x
'xbcde'
>>> y
'xbcde'
>>> hash(x)
-1332677140
>>> hash("abcde")
-1332677140

Hashing breaks, and the assumed immutability of strings causes nonlocal 
effects!
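For completeness, the safe way to get a writeable array over external memory is to hand ndarray a mutable buffer (for instance a bytearray) instead of overriding the flag on an immutable string; a minimal sketch:

```python
import numpy as np

# Back the array with a mutable bytearray instead of an immutable str,
# so no writeable-flag override (and no broken hashing) is needed.
buf = bytearray(b"abcde")
z = np.ndarray((5,), buffer=buf, dtype='S1')
z[0] = b'x'  # writes through to the underlying buffer
print(bytes(buf))  # b'xbcde'
```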

***

However, for some reason, I don't see your contribution in the change 
list:

http://docs.scipy.org/numpy/changes/

Can you check if the change you made went through? What page did you edit?

-- 
Pauli Virtanen



Re: [Numpy-discussion] survey of freely available software for the solution of linear algebra problems

2009-04-13 Thread Russell E. Owen
In article web-118971...@uni-stuttgart.de,
 Nils Wagner nwag...@iam.uni-stuttgart.de wrote:

 http://www.netlib.org/utk/people/JackDongarra/la-sw.html

You might add Eigen:
http://eigen.tuxfamily.org/index.php?title=Main_Page
We are finding it to be a very nice package (though the name is 
unfortunate from the perspective of internet search engines). It is a 
pure C++ template library, which is brilliant. That makes it much easier 
to build a package using Eigen than one using lapack/blas/etc.

-- Russell



Re: [Numpy-discussion] Got: undefined symbol: PyUnicodeUCS2_FromUnicode error

2009-04-13 Thread charlie
Thanks all! I had mixed up the system Python version and my own.
I got the problem solved by configuring the Python environment variables.


On Wed, Apr 8, 2009 at 1:10 AM, Michael Abshoff 
michael.absh...@googlemail.com wrote:

 charlie wrote:
  Hi All,

 Hi Charlie,

  I got the  undefined symbol: PyUnicodeUCS2_FromUnicode error when
  importing numpy.
 
  I have my own non-root version of python 2.5.4 final installed with
  --prefix=$HOME/usr.
  PYTHONHOME=$HOME/usr;
  PYTHONPATH=$PYTHONHOME/lib:$PYTHONHOME/lib/python2.5/site-packages/
  install and import other modules works fine.
  I install numpy-1.3.0rc2 from the svn repository with python setup.py
  install
  then import numpy results in following error:
  Python 2.5 (release25-maint, Jul 23 2008, 17:54:01)
  [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.

 SNIP

  PyUnicodeUCS2_FromUnicode

 numpy was built with a python configured with ucs2 while the python you
 have is built with ucs4.

  import Bio
  import sys
  I am not sure where the trick is. As I checked the previous discussions, I
  found several people raised a similar issue but no one has posted a final
  solution yet. So I can only ask for help here again! Thanks in
  advance!

 To fix this, build python with ucs2, i.e. check configure --help and
 set python to use ucs2.
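A quick way to tell which flavour a given interpreter was built with (a small diagnostic, not part of the build itself):

```python
import sys

# On narrow (UCS2) builds sys.maxunicode is 0xFFFF; on wide (UCS4)
# builds it is 0x10FFFF. The numpy binary must match the running
# python, so check both interpreters this way.
flavour = 'UCS2' if sys.maxunicode == 0xFFFF else 'UCS4'
print(flavour)
```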

  Charlie
 

 Cheers,

 Michael

 
  
 



[Numpy-discussion] polyfit on multiple data points

2009-04-13 Thread Mathew Yeates
Hi,
I understand how to fit the points (x1,y1), (x2,y2), (x3,y3) with a line 
using polyfit. But what if I want to perform this task on every row of 
an array?
For instance

[[x1,x2,x3],
 [s1,s2,s3]]

[[y1,y2,y3,],
 [r1,r2,r3]]

and I want the results to be the coefficients  [a,b,c]  and [d,e,f] where
[a,b,c] fits the points (x1,y1) (x2,y2),(x3,y3) and
[d,e,f] fits the points (s1,r1) (s2,r2),(s3,r3)

I realize I could use apply_along_axis but I'm afraid of the 
performance penalty. Is there a way to do this without resorting to a 
function call for each row?

Mathew


[Numpy-discussion] recommendations on goodness of fit functions?

2009-04-13 Thread Brennan Williams
Hi numpy/scipy users,

I'm looking to add some basic goodness-of-fit functions/plots to my app.

I have a set of simulated y vs time data and a set of observed y vs time 
data.

The time values aren't always the same, i.e. there are often fewer 
observed data points.

Some variables will be in a 0.0...1.0 range, others in a 0.0...1.0e+12 
range.

I'm also hoping to update the calculated goodness of fit value at each 
simulated timestep, the idea being to allow the user to  set a tolerance 
level which if exceeded stops the simulation (which otherwise can keep 
running for many hours/days).
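One possible shape for such a per-timestep misfit (a sketch under 
assumptions: linear interpolation onto the observed times, and 
normalization by the observed range so differently scaled variables are 
comparable; the function name is illustrative):

```python
import numpy as np

def normalized_rms(t_sim, y_sim, t_obs, y_obs):
    # Evaluate the simulated series at the observed times (the time grids
    # differ, and there are usually fewer observed points).
    y_at_obs = np.interp(t_obs, t_sim, y_sim)
    # Normalize by the observed range so a 0..1 variable and a 0..1e12
    # variable give comparable misfit values.
    span = y_obs.max() - y_obs.min()
    if span == 0.0:
        span = 1.0  # guard against a constant observed series
    return np.sqrt(np.mean((y_at_obs - y_obs) ** 2)) / span

# Illustrative data: a linear simulated series and three noisy observations.
t_sim = np.linspace(0.0, 10.0, 101)
y_sim = 2.0 * t_sim
t_obs = np.array([1.0, 4.0, 9.0])
y_obs = np.array([2.1, 7.9, 18.2])
r = normalized_rms(t_sim, y_sim, t_obs, y_obs)
```

The caller would recompute this after each timestep and stop the run once 
the value exceeds the user's tolerance.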

Thanks

Brennan
 



Re: [Numpy-discussion] recommendations on goodness of fit functions?

2009-04-13 Thread Charles R Harris
On Mon, Apr 13, 2009 at 6:28 PM, Brennan Williams 
brennan.willi...@visualreservoir.com wrote:

 Hi numpy/scipy users,

 I'm looking to add some basic goodness-of-fit functions/plots to my app.

 I have a set of simulated y vs time data and a set of observed y vs time
 data.

 The time values aren't always the same, i.e. there are often fewer
 observed data points.

 Some variables will be in a 0.0...1.0 range, others in a 0.0...1.0e+12
 range.

 I'm also hoping to update the calculated goodness of fit value at each
 simulated timestep, the idea being to allow the user to  set a tolerance
 level which if exceeded stops the simulation (which otherwise can keep
 running for many hours/days).


Some questions.

1) What kind of fit are you doing?
2) What is the measurement model?
3) What do you know a priori about the measurement errors?
4) How is the time series modeled?

Chuck


[Numpy-discussion] dimension along axis?

2009-04-13 Thread Grissiom
Hi all,

Is there a convenient way to get the dimension along an axis? Say I have two
ndarrays:

li1 = np.array([2,3,4])
li2 = np.array([[2,3,4],[5,6,7]])

I know my data is in C order, so the two arrays are the same in a sense. But
li1.shape gives (3,) and li2.shape gives (2,3); the 3 appears in a
different position, so it's inconvenient to identify. Is there any way to
get the dimension along an axis? (In this case it would be axis 0.)

-- 
Cheers,
Grissiom


Re: [Numpy-discussion] dimension along axis?

2009-04-13 Thread Grissiom
On Tue, Apr 14, 2009 at 09:47, Charles R Harris
charlesr.har...@gmail.com wrote:

 You mean something like this?

 In [1]: li1 = np.array([2,3,4])

 In [2]: li1[np.newaxis,:].shape
 Out[2]: (1, 3)

 Or maybe like this?

 In [3]: li1 = np.array([[2,3,4]])

 In [4]: li1.shape
 Out[4]: (1, 3)
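 A related convenience, in the same spirit as the examples above, is
 np.atleast_2d, which promotes a 1-D array to shape (1, n) so both
 arrays expose their row length on the same axis:

```python
import numpy as np

# np.atleast_2d leaves 2-D input untouched and reshapes 1-D input to
# (1, n), so shape[1] means "length along the last axis" for both arrays.
li1 = np.atleast_2d(np.array([2, 3, 4]))
li2 = np.array([[2, 3, 4], [5, 6, 7]])
print(li1.shape)   # (1, 3)
print(li2.shape)   # (2, 3)
```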

 Chuck



This is exactly what I want. Thanks ;)

-- 
Cheers,
Grissiom


Re: [Numpy-discussion] polyfit on multiple data points

2009-04-13 Thread josef . pktd
On Mon, Apr 13, 2009 at 5:59 PM, Mathew Yeates myea...@jpl.nasa.gov wrote:
 Hi,
 I understand how to fit  the points (x1,y1) (x2,y2),(x3,y3) with a line
 using polyfit. But, what if I want to perform this task on every row of
 an array?
 For instance

 [[x1,x2,x3],
  [s1,s2,s3]]

 [[y1,y2,y3,],
  [r1,r2,r3]]

 and I want the results to be the coefficients  [a,b,c]  and [d,e,f] where
 [a,b,c] fits the points (x1,y1) (x2,y2),(x3,y3) and
 [d,e,f] fits the points (s1,r1) (s2,r2),(s3,r3)

 I realize I could use apply_along_axis but I'm afraid of the
 performance penalty. Is there a way to do this without resorting to a
 function call for each row?

 Mathew

Which order of polynomial are you fitting to each set of 3 points?

If you want to have independent polynomials, the only other way I can
see is building a block design matrix, essentially stacking all the
independent regressions into one big least squares problem. The
performance will depend on the number of points you have, but I doubt
it will be faster to build the design matrix and solve the large least
squares problem than to run the loop.
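For concreteness, a small sketch of that block design matrix idea, fitting
a line to each row (the data values here are made up for illustration):

```python
import numpy as np

# Two independent cases: fit a + b*t to row i of x against row i of y.
x = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 2.0]])
y = np.array([[2.0, 4.0, 6.0],
              [1.0, 3.0, 5.0]])
n, m = x.shape

# Each case contributes an (m, 2) design block [1, t] on the diagonal;
# everything off the diagonal stays zero, which keeps the fits independent.
A = np.zeros((n * m, 2 * n))
for i in range(n):
    A[i * m:(i + 1) * m, 2 * i] = 1.0
    A[i * m:(i + 1) * m, 2 * i + 1] = x[i]

# One big least squares solve; row i of coef is (intercept, slope) of case i.
coef, *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
coef = coef.reshape(n, 2)
```

For n cases of m points each, A is (n*m, 2*n), so this gets memory-hungry
quickly, which is the performance concern mentioned above.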

For a low order polynomial, it might be possible to get an explicit
solution: easy for the linear case; I never tried it for OLS with two
regressors; maybe sympy would help.

If you really only have 3 points, then with two regressors and a
constant, you would already have an exact solution with a second order
polynomial, 3 equations in 3 unknowns. In this case, you can write an
explicit solution where it might be possible to use only vectorized
array manipulation.

If you need the loop and you have a fixed axis, writing the loop yourself
should be faster than apply_along_axis.

I haven't tried any of this myself, but that's where I would be
looking, if runtime is more expensive than developer time and you
don't need a solution for the general case with more points and higher
order polynomials.

--
Rereading your email, I take it you want a line, i.e. a 1st order
polynomial, and the predicted values of the dependent variable. That's
just a linear regression; you can use the formulas for the slope and
constant, see for example stats.linregress.

Something like the following: assuming x and y are mean-corrected
arrays with your 3 points in columns and rows as different cases,
both (n,3) arrays, then the slope coefficient should be

b = (x*y).sum(1) / (x*x).sum(1)

The constant is something like  a = y.mean(axis=1) - b * x.mean(1)
(using the original, uncorrected means), and the predicted points are
ypred = a[:,np.newaxis] + x*b[:,np.newaxis]
(should be an (n,3) array)

warning: not tested, written from memory
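A runnable version of that sketch, with the mean correction made explicit
(the data are illustrative, chosen so the line fits exactly):

```python
import numpy as np

# n = 2 cases, 3 points per row: fit an independent line to each row.
x = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 2.0]])
y = np.array([[2.0, 4.0, 6.0],
              [1.0, 3.0, 5.0]])

# Mean-correct each row before forming the slope estimate.
xm = x - x.mean(axis=1)[:, np.newaxis]
ym = y - y.mean(axis=1)[:, np.newaxis]

b = (xm * ym).sum(1) / (xm * xm).sum(1)          # per-row slope
a = y.mean(axis=1) - b * x.mean(axis=1)          # per-row intercept
ypred = a[:, np.newaxis] + b[:, np.newaxis] * x  # (n, 3) predicted points
```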

Josef


Re: [Numpy-discussion] Leopard install

2009-04-13 Thread Chris . Barker
David Cournapeau wrote:
 I think you need to install python from python.org (version 2.5) to
 install the numpy binary,

yes, that's it -- system Python is a misnomer. I really should figure 
out how to change that message.

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov



Re: [Numpy-discussion] recommendations on goodness of fit functions?

2009-04-13 Thread Brennan Williams

Charles R Harris wrote:



On Mon, Apr 13, 2009 at 6:28 PM, Brennan Williams
brennan.willi...@visualreservoir.com wrote:


Hi numpy/scipy users,

I'm looking to add some basic goodness-of-fit functions/plots to
my app.

I have a set of simulated y vs time data and a set of observed y
vs time
data.

The time values aren't always the same, i.e. there are often fewer
observed data points.

Some variables will be in a 0.0...1.0 range, others in a
0.0...1.0e+12
range.

I'm also hoping to update the calculated goodness of fit value at each
simulated timestep, the idea being to allow the user to  set a
tolerance
level which if exceeded stops the simulation (which otherwise can keep
running for many hours/days).


Before I try and answer the following, attached is an example of a 
suggested GOF function.



Some questions.

1) What kind of fit are you doing?
2) What is the measurement model?
3) What do you know apriori about the measurement errors?
4) How is the time series modeled?


The simulated data are output by an oil reservoir simulator.

Time series is typically monthly or annual timesteps over anything from 
5-30 years

but it could also be in some cases 10 minute timesteps over 24 hours

The timesteps output by the simulator are controlled by the user and are 
not always even, e.g. for a simulation over 30 years you may
have annual timesteps from year 0 to year 25, then 3-monthly from 
year 26-29, and then monthly for the most recent year.


Not sure about measurement errors - the older the data the higher the 
errors due to changes in oil field measurement technology.
And the error range varies depending on the data type as well, e.g. 
error range for a water meter is likely to be higher than that for an 
oil or gas meter.

I'll try and find out more about that.

Brennan


Chuck






[attachment: goodnessoffit.png]