Re: [Numpy-discussion] very large matrices.

2007-05-13 Thread Dave P. Novakovic
They are very large numbers indeed. Thanks for giving me a wake-up call.
Currently my data is represented as vectors in a vectorset, a typical
sparse representation.

I reduced the problem significantly by removing lots of noise. I'm
basically recording traces of a term's occurrence throughout a corpus
and doing an analysis of the eigenvectors.

I reduced my matrix to 4863 x 4863 by filtering the original corpus.
Now when I attempt the SVD, I'm hitting a memory error in the svd routine.
Is there a hard upper limit on the size of a matrix for these
calculations?

  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 575, in svd
    vt = zeros((n, nvt), t)
MemoryError

Cheers

Dave


On 5/13/07, Anne Archibald [EMAIL PROTECTED] wrote:
 On 12/05/07, Dave P. Novakovic [EMAIL PROTECTED] wrote:

  core 2 duo with 4gb RAM.
 
  I've heard about iterative svd functions. I actually need a complete
  svd, with all eigenvalues (not LSI). I'm actually more interested in
  the individual eigenvectors.
 
  As an example, a single row could probably have about 3000 non-zero
  elements.

 I think you need to think hard about whether your problem can be done
 in another way.

 First of all, the singular values (as returned from the svd) are not
 eigenvalues - eigenvalue decomposition is a much harder problem,
 numerically.
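
 A two-line check of the distinction, as a sketch (for a non-symmetric
 matrix the two decompositions disagree):

     import numpy
     A = numpy.array([[0., 1.], [0., 0.]])
     print numpy.linalg.eigvals(A)            # eigenvalues: [ 0.  0.]
     print numpy.linalg.svd(A, compute_uv=0)  # singular values: [ 1.  0.]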

 Second, your full non-sparse matrix will be 8*75000*75000 bytes, or
 about 42 gibibytes. Put another way, the representation of your data
 alone is ten times the size of the RAM on the machine you're using.

 Third, your matrix has 225 000 000 nonzero entries; assuming a perfect
 sparse representation with no extra bytes (at least two bytes per
 entry is typical, usually more), that's 1.7 GiB.

 Recall that basically any dense matrix decomposition is at least
 O(N^3), so you can expect on the order of 10^14 floating-point
 operations to be required. This is actually the *least* significant
 constraint; pushing data into and out of disk caches will take most
 of your time.
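
 A quick sketch of this arithmetic, using the sizes quoted in the
 thread:

     n = 75000
     print 8.0 * n * n / 2**30    # dense float64 storage: ~41.9 GiB
     nnz = n * 3000               # ~2.25e8 nonzero entries
     print 8.0 * nnz / 2**30      # sparse values alone: ~1.7 GiB
     print float(n)**3            # rough O(N^3) flop count: ~4.2e14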

 Even if you can represent your matrix sparsely (using only a couple of
 gibibytes), you've said you want the full set of eigenvectors, which
 is not likely to be a sparse matrix - so your result is back up to 42
 GiB. And you should expect an eigenvalue algorithm, if it even
 survives massive roundoff problems, to require something like that
 much working space; thus your problem probably has a working size of
 something like 84 GiB.

 SVD is a little easier, if that's what you want, but the full solution
 is twice as large, though if you discard entries corresponding to
 small values it might be quite reasonable. You'll still need some
 fairly specialized code, though. Which form are you looking for?

 Solving your problem in a reasonable amount of time, as described and
 on the hardware you specify, is going to require some very specialized
 algorithms; you could try looking for an out-of-core eigenvalue
 package, but I'd first look to see if there's any way you can simplify
 your problem - getting just one eigenvector, maybe.
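
 For the single-eigenvector route, here is a minimal power-iteration
 sketch; it only needs a matrix-vector product, so the matrix never has
 to be formed densely (the matvec argument is an assumption about how
 the sparse data would be exposed):

     import numpy

     def power_iteration(matvec, n, iters=100):
         # matvec(v) must return A*v for the (symmetric) matrix A
         v = numpy.random.rand(n)
         v /= numpy.sqrt(numpy.dot(v, v))
         for i in range(iters):
             w = matvec(v)
             v = w / numpy.sqrt(numpy.dot(w, w))
         # the Rayleigh quotient estimates the dominant eigenvalue
         return numpy.dot(v, matvec(v)), v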

 Anne



Re: [Numpy-discussion] problems with calculating numpy.float64

2007-05-13 Thread highdraw

Hi out there,

this is the code segment

    if m < maxN and n < maxN and self.activeWide[m+1, n+1]:
        try:
            deltaX = x[m+1] - x[m]
        except TypeError:
            print '-' * 40
            print type(x)
            type_a, type_b = map(type, (x[m + 1], x[m]))
            print type_a, type_b, type_a is type_b
            print '-' * 40
            raise

The if-condition is True! I get an error at the line
deltaX = x[m+1] - x[m]; the variables have these types:


x     : <type 'numpy.ndarray'>
x[m]  : <type 'numpy.float64'>
x[m+1]: <type 'numpy.float64'>
m     : a counting variable, I guess an integer?

I am just trying to subtract x[m] from x[m+1], but I get this error:

Inappropriate argument type.
unsupported operand type(s) for -: 'numpy.float64' and 'numpy.float64'

For more code snippets you can follow this URL:
http://www.python-forum.de/topic-10580.html
It is in German, but there are more code snippets and some
tests from me and other users.

The problem is that both variables are definitely of the same type,
but I can't subtract them. When I reproduce the code in my Python
interpreter there is no problem. I think the numpy imports are OK;
I can't see the error.
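
A minimal diagnostic sketch (my guess, not confirmed: two copies of
the numpy module may be loaded, e.g. after a reload(), so the two
'numpy.float64' types are distinct type objects with the same name):

    # inside the except TypeError clause above:
    import numpy
    a = x[m]
    print type(a) is numpy.float64   # False would mean x was built by
                                     # a different numpy module object
    print id(type(a)), id(numpy.float64)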

Thanks for any help!
Michael







[Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread dmitrey
i.e., for example, from the flat array [1, 2, 3], obtain
array([[ 1.],
       [ 2.],
       [ 3.]])

I have numpy v 1.0.1
Thx, D.



Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread David M. Cooke
On Sun, May 13, 2007 at 02:36:39PM +0300, dmitrey wrote:
 i.e., for example, from the flat array [1, 2, 3], obtain
 array([[ 1.],
        [ 2.],
        [ 3.]])
 
 I have numpy v 1.0.1
 Thx, D.

Use newaxis:

In [1]: a = array([1., 2., 3.])
In [2]: a
Out[2]: array([ 1.,  2.,  3.])
In [3]: a[:,newaxis]
Out[3]: 
array([[ 1.],
       [ 2.],
       [ 3.]])
In [4]: a[newaxis,:]
Out[4]: array([[ 1.,  2.,  3.]])

When newaxis is used as an index, a new axis of dimension 1 is added.

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread Darren Dale
On Sunday 13 May 2007 7:36:39 am dmitrey wrote:
 i.e., for example, from the flat array [1, 2, 3], obtain
 array([[ 1.],
        [ 2.],
        [ 3.]])

a=array([1,2,3])
a.shape=(len(a),1)


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Albert Strasheim
Hello all

On Sat, 12 May 2007, Charles R Harris wrote:

 On 5/12/07, Albert Strasheim [EMAIL PROTECTED] wrote:
 
 I've more or less finished my quick triage effort.
 
 Thanks, Albert. The tickets look much better organized now.

My pleasure. Stefan van der Walt has also gotten in on the act and 
we're now down to 19 open tickets with 1.0.3 as the milestone.

http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release

Regards,

Albert


Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread Stefan van der Walt
On Sun, May 13, 2007 at 07:46:47AM -0400, Darren Dale wrote:
 On Sunday 13 May 2007 7:36:39 am dmitrey wrote:
  i.e., for example, from the flat array [1, 2, 3], obtain
  array([[ 1.],
         [ 2.],
         [ 3.]])
 
 a=array([1,2,3])
 a.shape=(len(a),1)

Or just

a.shape = (-1,1)
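
Or, without modifying a in place, reshape gives the same result (as a
view of the same data where possible):

    b = a.reshape(-1, 1)   # shape (3, 1); a itself is unchanged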

Cheers
Stéfan


Re: [Numpy-discussion] very large matrices.

2007-05-13 Thread Charles R Harris

On 5/13/07, Dave P. Novakovic [EMAIL PROTECTED] wrote:


They are very large numbers indeed. Thanks for giving me a wake-up call.
Currently my data is represented as vectors in a vectorset, a typical
sparse representation.

I reduced the problem significantly by removing lots of noise. I'm
basically recording traces of a term's occurrence throughout a corpus
and doing an analysis of the eigenvectors.

I reduced my matrix to 4863 x 4863 by filtering the original corpus.
Now when I attempt the SVD, I'm hitting a memory error in the svd routine.
Is there a hard upper limit on the size of a matrix for these
calculations?



I get the same error here with linalg.svd(eye(5000)), and the memory is
indeed gone. Hmm, I think it should work, although it is sure pushing the
limits of what I've got: linalg.svd(eye(1000)) works fine. I think 4GB
should be enough if your memory limits are set high enough.
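
Rough arithmetic, as a sketch (the exact LAPACK workspace varies):

    n = 5000
    print 8.0 * n * n / 2**20   # one n x n float64 array: ~190.7 MiB
    # a, u, vt plus LAPACK work arrays come to roughly 1 GiB, so a
    # MemoryError here suggests an address-space or ulimit problem
    # rather than an impossible allocation.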

Are you trying some sort of principal components analysis?

<snip>

Chuck


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread dmitrey
Is it possible to speed up the appearance of numpy 1.0.3 in the Linux
update channels? (As for me, I'm interested in Ubuntu/Kubuntu; currently
they carry v1.0.1.)
I tried to compile numpy 1.0.2, but it failed, as Octave compilation
did, because the C compiler can't create executables. Reinstalling gcc
didn't help, and other C compilers are absent from the update channel
(I saw only tcc, but I'm sure it won't help, and trying to install
other C compilers takes too much effort).

D.


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Stefan van der Walt
On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
 Is it possible to speed up the appearance of numpy 1.0.3 in the Linux
 update channels? (As for me, I'm interested in Ubuntu/Kubuntu; currently
 they carry v1.0.1.)
 I tried to compile numpy 1.0.2, but it failed, as Octave compilation
 did, because the C compiler can't create executables. Reinstalling gcc
 didn't help, and other C compilers are absent from the update channel
 (I saw only tcc, but I'm sure it won't help, and trying to install
 other C compilers takes too much effort).

Many people here are compiling numpy fine under Ubuntu.  Do you have
write permissions to the output directory? What is the compiler error
given?

Cheers
Stéfan


[Numpy-discussion] copy object with multiple subfields, including ndarrays

2007-05-13 Thread dmitrey
hi all,
does anyone know how to copy an instance of a class that contains
multiple subfields? For example:
myObj.field1.subfield2 = 'asdf'
myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')

I tried
from copy import copy
myObjCopy = copy(myObj)
but it seems that it doesn't work correctly

Thx, D.


[Numpy-discussion] dtype hashes are not equal

2007-05-13 Thread Stefan van der Walt
Hi all,

In the numpy.sctypes dictionary, there are two entries for uint32:

In [2]: N.sctypes['uint']
Out[2]: 
[<type 'numpy.uint8'>,
 <type 'numpy.uint16'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint64'>]

Comparing the dtypes of the two types gives the correct answer:

In [3]: sc = N.sctypes['uint']

In [4]: N.dtype(sc[2]) == N.dtype(sc[3])
Out[4]: True

But the hash values for the dtypes (and the types) differ:

In [42]: for T in N.sctypes['uint']:
   ....:     dt = N.dtype(T)
   ....:     print T, dt
   ....:     print '=', hash(T), hash(dt)

<type 'numpy.uint8'> uint8
= -1217082432 -1217078592
<type 'numpy.uint16'> uint16
= -1217082240 -1217078464
<type 'numpy.uint32'> uint32
= -1217081856 -1217078336
<type 'numpy.uint32'> uint32
= -1217082048 -1217078400
<type 'numpy.uint64'> uint64
= -1217081664 -1217078208

Is this expected/correct behaviour?

Cheers
Stéfan


Re: [Numpy-discussion] dtype hashes are not equal

2007-05-13 Thread Robert Kern
Stefan van der Walt wrote:
 Hi all,
 
 In the numpy.sctypes dictionary, there are two entries for uint32:
 
 In [2]: N.sctypes['uint']
 Out[2]: 
 [<type 'numpy.uint8'>,
  <type 'numpy.uint16'>,
  <type 'numpy.uint32'>,
  <type 'numpy.uint32'>,
  <type 'numpy.uint64'>]
 
 Comparing the dtypes of the two types gives the correct answer:
 
 In [3]: sc = N.sctypes['uint']
 
 In [4]: N.dtype(sc[2]) == N.dtype(sc[3])
 Out[4]: True
 
 But the hash values for the dtypes (and the types) differ:
 
 In [42]: for T in N.sctypes['uint']:
    ....:     dt = N.dtype(T)
    ....:     print T, dt
    ....:     print '=', hash(T), hash(dt)

 <type 'numpy.uint8'> uint8
 = -1217082432 -1217078592
 <type 'numpy.uint16'> uint16
 = -1217082240 -1217078464
 <type 'numpy.uint32'> uint32
 = -1217081856 -1217078336
 <type 'numpy.uint32'> uint32
 = -1217082048 -1217078400
 <type 'numpy.uint64'> uint64
 = -1217081664 -1217078208
 
 Is this expected/correct behaviour?

It's expected, but not desired. We haven't implemented the hash function for
dtype objects, so we have the default, which is based on object identity rather
than value. It is something that should be implemented, given time.
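
In the meantime, a workaround sketch is to key containers on a
canonical value such as dtype.str rather than on the dtype (or type)
object itself:

    import numpy as N
    d = {}
    for T in N.sctypes['uint']:
        d[N.dtype(T).str] = T   # e.g. '<u4' for both uint32 entries
    print len(d)                # 4: the duplicate uint32 collapses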

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread dmitrey
Stefan van der Walt wrote:
 On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
   
 Is it possible to speed up the appearance of numpy 1.0.3 in the Linux
 update channels? (As for me, I'm interested in Ubuntu/Kubuntu; currently
 they carry v1.0.1.)
 I tried to compile numpy 1.0.2, but it failed, as Octave compilation
 did, because the C compiler can't create executables. Reinstalling gcc
 didn't help, and other C compilers are absent from the update channel
 (I saw only tcc, but I'm sure it won't help, and trying to install
 other C compilers takes too much effort).
 

 Many people here are compiling numpy fine under Ubuntu.  Do you have
 write permissions to the output directory? What is the compiler error
 given?

 Cheers
 Stéfan
Sorry, I meant compiling Python2.5 and Octave, not numpy & Octave.
Python2.5 is already present (in Ubuntu 7.04), but I tried to compile
and install it from source because the numpy compilation failed with
the following (I have gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4),
compiling as root):

...
...
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall 
-Wstrict-prototypes -fPIC
compile options: '-I/usr/include/python2.5 -Inumpy/core/src 
-Inumpy/core/include -I/usr/include/python2.5 -c'
gcc: _configtest.c
In file included from 
/usr/lib/gcc/x86_64-linux-gnu/4.1.2/include/syslimits.h:7,
from /usr/lib/gcc/x86_64-linux-gnu/4.1.2/include/limits.h:11,
from /usr/include/python2.5/Python.h:18,
from _configtest.c:2:
/usr/lib/gcc/x86_64-linux-gnu/4.1.2/include/limits.h:122:61: error: 
limits.h: No such file or directory
In file included from _configtest.c:2:
/usr/include/python2.5/Python.h:32:19: error: stdio.h: No such file or 
directory
/usr/include/python2.5/Python.h:34:5: error: #error Python.h requires 
that stdio.h define NULL.
/usr/include/python2.5/Python.h:37:20: error: string.h: No such file or 
directory
/usr/include/python2.5/Python.h:39:19: error: errno.h: No such file or 
directory
/usr/include/python2.5/Python.h:41:20: error: stdlib.h: No such file or 
directory
/usr/include/python2.5/Python.h:43:20: error: unistd.h: No such file or 
directory
/usr/include/python2.5/Python.h:55:20: error: assert.h: No such file or 
directory
In file included from /usr/include/python2.5/Python.h:57,
from _configtest.c:2:
/usr/include/python2.5/pyport.h:7:20: error: stdint.h: No such file or 
directory
In file included from /usr/include/python2.5/Python.h:57,
from _configtest.c:2:
/usr/include/python2.5/pyport.h:73: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ 
or ‘__attribute__’ before ‘Py_uintptr_t’
/usr/include/python2.5/pyport.h:74: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ 
or ‘__attribute__’ before ‘Py_intptr_t’
/usr/include/python2.5/pyport.h:97: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ 
or ‘__attribute__’ before ‘Py_ssize_t’
/usr/include/python2.5/pyport.h:204:76: error: math.h: No such file or 
directory
/usr/include/python2.5/pyport.h:211:22: error: sys/time.h: No such file 
or directory
/usr/include/python2.5/pyport.h:212:18: error: time.h: No such file or 
directory
/usr/include/python2.5/pyport.h:230:24: error: sys/select.h: No such 
file or directory
/usr/include/python2.5/pyport.h:269:22: error: sys/stat.h: No such file 
or directory
In file included from /usr/include/python2.5/Python.h:76,
from _configtest.c:2:
/usr/include/python2.5/pymem.h:50: warning: parameter names (without 
types) in function declaration
/usr/include/python2.5/pymem.h:51: error: expected declaration 
specifiers or ‘...’ before ‘size_t’
In file included from /usr/include/python2.5/Python.h:78,
from _configtest.c:2:
/usr/include/python2.5/object.h:104: error: expected 
specifier-qualifier-list before ‘Py_ssize_t’
/usr/include/python2.5/object.h:108: error: expected 
specifier-qualifier-list before ‘Py_ssize_t’
/usr/include/python2.5/object.h:131: error: expected declaration 
specifiers or ‘...’ before ‘*’ token
/usr/include/python2.5/object.h:131: warning: type defaults to ‘int’ in 
declaration of ‘Py_ssize_t’
/usr/include/python2.5/object.h:131: error: ‘Py_ssize_t’ declared as 
function returning a function
/usr/include/python2.5/object.h:131: warning: function declaration isn’t 
a prototype
.
(etc,etc)


and here is a part of the Python2.5 configure log:

gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4)
configure:2093: $? = 0
configure:2095: gcc -V </dev/null >&5
gcc: '-V' option must have argument
configure:2098: $? = 1
configure:2121: checking for C compiler default output file name
configure:2124: gcc conftest.c >&5
/usr/bin/ld: crt1.o: No such file: No such file or directory
collect2: ld returned 1 exit status
configure:2127: $? = 1
configure: failed program was:
| /* confdefs.h. */
|
| #define _GNU_SOURCE 1
| #define _NETBSD_SOURCE 1
| #define __BSD_VISIBLE 1
| #define _BSD_TYPES 1
| #define _XOPEN_SOURCE 600
| #define 

Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Matthieu Brucher

Hi,

you have a problem with your Ubuntu installation, not with numpy.

Matthieu

2007/5/13, dmitrey [EMAIL PROTECTED]:


<snip>

Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Stefan van der Walt
Hi Dmitrey

On Sun, May 13, 2007 at 08:21:15PM +0300, dmitrey wrote:
  Many people here are compiling numpy fine under Ubuntu.  Do you have
  write permissions to the output directory? What is the compiler error
  given?

 Sorry, I meant compiling Python2.5 and Octave, not numpy  Octave
 Python2.5 is already present (in Ubuntu 7.04), but I tried to compile 
 and install it from sources because numpy compilation failed with
 (I have gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4), compiling as
 root)

This isn't really the place to discuss compiling Python or Octave, but
a good first move would be to install the 'build-essential' package.
This will hopefully provide the header files and the compiler you
need.

Cheers
Stéfan


Re: [Numpy-discussion] copy object with multiple subfields, including ndarrays

2007-05-13 Thread Timothy Hochberg

It's always helpful if you can include a self-contained example so it's easy
to figure out exactly what you are getting at. I say that because I'm not
entirely sure of the context here -- it appears that this is not a numpy
related issue at all, but rather a general Python question. If so, I think
what you are looking for is copy.deepcopy. As its name implies, it does a
deep copy of an object, as opposed to the shallow copy that copy.copy does.
If that doesn't do what you want or I misunderstood your question,
please supply some more detail.
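
A self-contained sketch of the difference (the Bag class here is just
a stand-in for your object):

    import numpy
    from copy import copy, deepcopy

    class Bag(object):
        pass

    myObj = Bag()
    myObj.field4 = Bag()
    myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')

    shallow = copy(myObj)
    deep = deepcopy(myObj)
    myObj.field4.subfield8[0, 0] = 99
    print shallow.field4.subfield8[0, 0]   # 99: nested objects shared
    print deep.field4.subfield8[0, 0]      # 1: fully independent copy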


On 5/13/07, dmitrey [EMAIL PROTECTED] wrote:


hi all,
does anyone know how to copy an instance of a class that contains
multiple subfields? For example:
myObj.field1.subfield2 = 'asdf'
myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')

I tried
from copy import copy
myObjCopy = copy(myObj)
but it seems that it doesn't work correctly

Thx, D.





--

//=][=\\

[EMAIL PROTECTED]


Re: [Numpy-discussion] very large matrices.

2007-05-13 Thread Dave P. Novakovic
 Are you trying some sort of principal components analysis?

PCA is indeed one part of the research I'm doing.

Dave


Re: [Numpy-discussion] very large matrices.

2007-05-13 Thread Dave P. Novakovic
There are definitely elements of spectral graph theory in my research
too. I'll summarise:

We are interested in seeing what each eigenvector from the SVD can
represent in a semantic space.
In addition to this we'll be testing it against some algorithms like
concept indexing (which uses a bipartitional k-means-ish method for
dimensionality reduction), and also against Orthogonal Locality
Preserving Indexing, which uses the Laplacian of a similarity matrix
to calculate projections of a document (or term) into a manifold.

These methods have been implemented and tested for document
classification; I'm interested in seeing their applicability to
modelling semantics with a system known as Hyperspace Analogue to
Language (HAL).

I was hoping to apply the SVD to my HAL space built out of the Reuters
corpus, but that was way too big. Instead I'm trying the traces idea I
mentioned before (i.e. contextually grepping a keyword out of the docs
to build a space around it).
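
A toy sketch of the HAL-style co-occurrence step (the window size and
the linear distance weighting here are illustrative choices, not the
exact scheme we use):

    import numpy

    def hal_matrix(tokens, vocab, window=5):
        # vocab maps term -> row/column index
        n = len(vocab)
        C = numpy.zeros((n, n))
        for i, t in enumerate(tokens):
            if t not in vocab:
                continue
            for j in range(max(0, i - window), i):
                u = tokens[j]
                if u in vocab:
                    # closer neighbours get larger weights
                    C[vocab[t], vocab[u]] += window - (i - j) + 1
        return C

    # then, on a small vocabulary: u, s, vt = numpy.linalg.svd(C)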

Cheers

Dave

On 5/14/07, Charles R Harris [EMAIL PROTECTED] wrote:


 On 5/13/07, Dave P. Novakovic [EMAIL PROTECTED] wrote:
   Are you trying some sort of principal components analysis?
 
  PCA is indeed one part of the research I'm doing.

 I had the impression you were trying to build a linear space in which to
 embed a model, like atmospheric folk do when they try to invert spectra to
 obtain thermal profiles. Model based compression would be another aspect of
 this. I wonder if there aren't some algorithms out there for this sort of
 thing.

 Chuck



