They are very large numbers indeed. Thanks for giving me a wake-up call.
Currently my data is represented as vectors in a vectorset, a typical
sparse representation.
I reduced the problem significantly by removing lots of noise. I'm
basically recording traces of a term's occurrence throughout a corpus.
On 12/05/07, Dave P. Novakovic <[EMAIL PROTECTED]> wrote:
> Core 2 Duo with 4 GB RAM.
>
> I've heard about iterative SVD functions. I actually need a complete
> SVD, with all eigenvalues (not LSI). I'm actually more interested in
> the individual eigenvectors.
>
> As an example, a single row could probably have about 3000 non-zero
> elements.
On 5/12/07, Andrew Straw <[EMAIL PROTECTED]> wrote:
Charles R Harris wrote:
>
> I'll pitch in a few donuts (and my eternal gratitude) for an
> example of
> shared memory use using numpy arrays that is cross platform, or at
> least
> works in linux, mac, and windows.
>
>
> I wonder if you could mmap a file and use it as common memory.
Andrew added:
I thought that getting the address from the buffer() of the array and
creating a new one from it in the…
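The mmap idea above can be sketched as follows. This is a minimal illustration under assumed parameters (the file name `shared.dat`, the shape, and the dtype are mine, not the thread's), not a complete cross-platform solution:

```python
import mmap
import numpy as np

# Agreed-on layout; both processes must use the same shape and dtype.
shape, dtype = (1000,), np.float64
nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize

# Writer: create the backing file once.
with open("shared.dat", "wb") as f:
    f.write(b"\x00" * nbytes)

# Any process (writer or reader) maps the same file and views it as an array.
f = open("shared.dat", "r+b")
buf = mmap.mmap(f.fileno(), nbytes)
arr = np.frombuffer(buf, dtype=dtype).reshape(shape)

arr[:3] = [1.0, 2.0, 3.0]   # changes become visible to every process mapping the file
buf.flush()
```

numpy's own `np.memmap` wraps the open/mmap/frombuffer steps into a single call, which may be the simpler route.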
Hey, thanks for the response.
Core 2 Duo with 4 GB RAM.
I've heard about iterative SVD functions. I actually need a complete
SVD, with all eigenvalues (not LSI). I'm actually more interested in
the individual eigenvectors.
As an example, a single row could probably have about 3000 non-zero elements.
On 5/12/07, Andrew Straw <[EMAIL PROTECTED]> wrote:
Ray Schumacher wrote:
>
> After Googling for examples on this, in the Cookbook
> http://www.scipy.org/Cookbook/Multithreading
> MPI and POSH (dead?), I don't think I know the answer...
> We have a data collection app running on dual core processors…
Hi,
I have test data of about 75000 x 75000 dimensions. I need to do an SVD,
or at least an eigendecomposition, on this data. My searching suggests
that the linalg functions in scipy and numpy don't work on sparse
matrices.
I can't even get empty((1,1), dtype=float) to work (memory
errors, or…
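For scale, and to show why iterative methods keep coming up in the replies, here is a quick sketch. The matrix is a small random stand-in for the real 75000 x 75000 term data, and note that `svds` returns only the `k` largest singular triplets, not the complete SVD the poster asked for:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A dense 75000 x 75000 float64 matrix needs ~42 GiB before any work starts:
print(75000 * 75000 * 8 / 2**30)   # ~41.9

# Small stand-in for the sparse term-occurrence matrix (~1% non-zero).
A = sparse_random(500, 400, density=0.01, format="csr", random_state=0)

# svds computes the k largest singular triplets iteratively,
# without ever forming a dense copy of A.
u, s, vt = svds(A, k=10)
print(u.shape, s.shape, vt.shape)   # (500, 10) (10,) (10, 400)
```

For the full spectrum of a matrix this size, some dimensionality reduction or out-of-core approach is unavoidable on a 4 GB machine.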
[EMAIL PROTECTED] wrote:
> Hello out there,
>
> I try to run this Python code snippet after I have imported:
Can you try to come up with a small, self-contained example? I can't replicate
your problem.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
Hello out there,
I try to run this Python code snippet after I have imported:
import numpy as Numeric
import numpy as numpy
Numeric.Int = Numeric.int32
Numeric.Float = Numeric.float64
Code:
if m < maxN and n < maxN and self.activeWide[m+1, n+1]:
    try:
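A self-contained version of the aliasing above, with `maxN` and `activeWide` as hypothetical stand-ins for the poster's objects (the original snippet is cut off, so everything past the `if` is illustrative guesswork):

```python
import numpy as Numeric

# Old Numeric type names mapped onto numpy dtypes, as in the snippet above.
Numeric.Int = Numeric.int32
Numeric.Float = Numeric.float64

# maxN and activeWide are hypothetical stand-ins for the poster's objects.
maxN = 4
activeWide = Numeric.ones((maxN + 1, maxN + 1), Numeric.Int)

m, n = 1, 2
if m < maxN and n < maxN and activeWide[m + 1, n + 1]:
    value = Numeric.Float(activeWide[m + 1, n + 1])
    print(value)   # 1.0
```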
Brian Hawthorne wrote:
> Seems like it might be convenient for IPython to detect if matplotlib is
> installed and if it is then to use pylab mode by default (unless
> specified otherwise with a switch like -nopylab).
That's a bad idea. IPython has some magic, but it shouldn't be that magical.
Just…
Seems like it might be convenient for IPython to detect if matplotlib is
installed and if it is then to use pylab mode by default (unless specified
otherwise with a switch like -nopylab).
Brian
On 5/12/07, Ryan Krauss <[EMAIL PROTECTED]> wrote:
You can add the -pylab switch to the desktop shortcut under Windows.
I had created a Windows IPython installer that automatically creates a
second entry under Start > All Programs > IPython
that includes the -pylab -p scipy option. You can download my
installer from here:
http://www.siue.edu/~rkra
Is the genutils module not included in the standard CPython distribution?
First of all, I'm interested in the best way to do the latter, so that
users don't need to install anything else.
Thx, D.
Fernando Perez wrote:
> On 5/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>> It depends on what you…
On Sat, May 12, 2007 at 12:12:21PM -0600, Fernando Perez wrote:
> Thanks a lot for putting time into this, which is extremely useful to
> newcomers.
I got bored of always explaining the same things to project students :->.
> I think it would be best to start with the -pylab approach from the…
After Googling for examples on this, in the Cookbook
http://www.scipy.org/Cookbook/Multithreading
MPI and POSH (dead?), I don't think I know the answer...
We have a data collection app running on dual core processors; I start
one thread collecting/writing new data directly into a numpy circular buffer…
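A minimal sketch of such a numpy circular buffer (single writer; the class name and sizes are illustrative, not the poster's actual app, and a real multithreaded collector would need a lock around the index update):

```python
import numpy as np

class RingBuffer:
    """Fixed-size circular buffer backed by a numpy array."""

    def __init__(self, size):
        self.data = np.zeros(size)
        self.pos = 0                      # next write index

    def write(self, sample):
        self.data[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.data)

    def last(self, k):
        # Most recent k samples, oldest first; wraps around the end.
        idx = np.arange(self.pos - k, self.pos) % len(self.data)
        return self.data[idx]

rb = RingBuffer(8)
for x in range(12):                       # wraps around once
    rb.write(float(x))
print(rb.last(4))                         # the four newest samples: 8, 9, 10, 11
```

A reader thread can call `last()` at any time without copying the whole buffer, which is the usual appeal of this layout for data collection.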
> I would very much link the Getting Started wiki page (
> http://scipy.org/Getting_Started ) to the front page. But I am not sure
> it is of good enough quality so far. Could people please have a look and
> make comments, or edit the page.
Thank you for doing this. It's pitched very well.
Matthew
Hi all,
I would very much link the Getting Started wiki page (
http://scipy.org/Getting_Started ) to the front page. But I am not sure
it is of good enough quality so far. Could people please have a look and
make comments, or edit the page.
Cheers,
Gaël
I've more or less finished my quick triage effort.
Issues remaining to be resolved for the 1.0.3 release:
http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release
If they can't be fixed for this release, we should move them over to
1.1 or maybe…
Hello all
On Fri, 11 May 2007, David M. Cooke wrote:
> I've added a 1.0.3 milestone and set these to them (or to 1.1, according
> to Travis's comments).
I've reviewed some more tickets and filed everything that looks like it
can be resolved for this release under 1.0.3.
To see which tickets are…
It depends on what you're aiming at. If you want to compare different
implementations of some expressions and need to know their average
execution times, you should use the timeit module. If you want to have
the full execution time of a script, time.time (call it at the beginning and
end, and compute the difference).
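A side-by-side sketch of the two approaches described above (the measured expressions are arbitrary examples):

```python
import time
import timeit

# timeit: average execution time of a small expression over many runs.
avg = timeit.timeit("sum(range(1000))", number=10000) / 10000

# time.time: wall-clock time of one larger block, begin-to-end.
t0 = time.time()
total = sum(range(1000000))
elapsed = time.time() - t0

print(avg, elapsed, total)
```

timeit disables the garbage collector and repeats the statement, so it is less noisy for micro-benchmarks; time.time is the right tool only for whole-script timing.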
On Fri, 11 May 2007, Travis Oliphant wrote:
> Thanks for the ticket reviews, Albert. That is really helpful.
My pleasure.
Found two more issues that look like they could be addressed:
http://projects.scipy.org/scipy/numpy/ticket/422
http://projects.scipy.org/scipy/numpy/ticket/450
Cheers,
A