Let's continue this discussion at
https://github.com/JuliaLang/julia/issues/6708
-viral
On Thursday, May 1, 2014 2:22:42 PM UTC+5:30, Viral Shah wrote:
It could be because of memory usage. I have 1 TB of RAM on the machine I was
using. If you were running into swap, it would certainly take much longer.
I will try the other version as soon as the machine is available for me to use
(some admin issues), and also look into speeding things up if possible.
Hmmm. That is much better than I was getting. Thanks Viral.
Was it much faster for you to create the column-index, row-index, and value
arrays? I would still expect them to be roughly on par in terms of speed.
On Wed, Apr 30, 2014 at 2:36 PM, Viral Shah wrote:
Of course in this case, it's easy to build the CSC arrays directly instead
of the (row, col, val) triples. I updated my gist. The construction of the
sparse matrix using a direct call to SparseMatrixCSC now takes 2.6e-6
seconds! This is still with vec_len=70,000. Here are the timings:
elapsed t
I ran the sprand example, and it took 290 seconds on a machine with enough RAM.
Given that it is creating a matrix with half a billion nonzeros, this doesn’t
sound too bad.
-viral
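For reference, here is a scaled-down sketch of that benchmark (the dimension here is an assumption chosen so it fits in ordinary RAM, not the size used above; on Julia 0.7+ the sparse routines live in the SparseArrays standard library rather than in Base as they did in 2014):

```julia
using SparseArrays  # in Base in 2014-era Julia; a stdlib on modern versions

# Scaled-down version of the sprand benchmark: same density, much
# smaller dimension so it fits comfortably in memory.
n = 10_000
@time A = sprand(n, n, 0.001)   # expected nnz ≈ n^2 * 0.001 = 100_000
```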
On 30-Apr-2014, at 8:48 pm, Ryan Gardner wrote:
The SparseMatrixCSC constructor currently has the signature
SparseMatrixCSC(m, n, colptr, rowval, nzval)
It looks as though this isn't formally documented, but its meaning is pretty
clear if you understand the basics of the CSC format (and remember that all
the indexing is 1-based).
Sorry, here's my code: https://gist.github.com/11431891
I don't see how to use SparseMatrixCSC directly. Doesn't it require that
the arrays already represent the CSC structure?
On Wednesday, April 30, 2014 8:40:20 AM UTC-7, Viral Shah wrote:
You can call SparseMatrixCSC directly, but then you have to do all the
arrangement and sorting yourself. Depending on your application and how the
nonzeros are generated, this may or may not help.
I will investigate this further. I now have all the information I need.
Thanks,
-viral
Octave 3.6 just gave up:
octave:1> tic; sprand(70, 70, .001); toc;
error: memory exhausted or requested size too large for range of Octave's index
type -- trying to return to prompt
-viral
On 30-Apr-2014, at 9:08 pm, Viral Shah wrote:
I've got 16 GB of RAM on this machine. Largely, my question, with
admittedly little knowledge of the internal structure of the sparse arrays,
is why generating the actual SparseMatrixCSC is so much slower than
generating what is essentially another sparse matrix representation
consisting of the index and value arrays.
If you're assembling the matrix in row-sorted column-major order and
there's no duplication, then you can also skip the conversion work by using
the SparseMatrixCSC constructor directly.
On Wednesday, April 30, 2014 1:10:31 AM UTC-7, Viral Shah wrote:
Could you post your code? It will save me writing the same. :-)
Was building the vectors taking all the time, or was it in building the sparse
matrix from the triples? Triples-to-CSC conversion is an expensive operation,
and we have spent a fair amount of time making it fast. Of course, there could
still be room for improvement.
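The triples path being discussed is the `sparse(I, J, V, m, n)` form, which sorts the entries into column-major order and combines duplicates (summing them by default). A small sketch:

```julia
using SparseArrays  # a stdlib on Julia 0.7+

I = [1, 2, 1]              # row indices (unsorted, with a duplicate at (1,1))
J = [1, 1, 1]              # column indices
V = [10.0, 20.0, 5.0]      # values
A = sparse(I, J, V, 2, 2)  # duplicate entries at (1,1) are summed: 10.0 + 5.0
```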
Creating sparse arrays seems exceptionally slow.
I can set up the non-zero data of the array relatively quickly. For
example, the following code takes about 80 seconds on one machine.
vec_len = 70000
row_ind = Uint64[]
col_ind = Uint64[]
value = Float64[]
for j = 1:70
for k = 1:700
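The snippet is cut off here; a hedged completion along the same lines might look as follows. The loop bounds and the way entries are generated are placeholders, since the body of the loops is truncated above; only vec_len = 70,000 is taken from the thread, and `Int` replaces the original's `Uint64` (spelled `UInt64` on modern Julia, and `sparse` wants signed integer indices anyway):

```julia
using SparseArrays

# Placeholder completion of the truncated snippet above: fill the triple
# arrays entry by entry, then convert them to CSC with sparse().
vec_len = 70_000
row_ind = Int[]
col_ind = Int[]
value   = Float64[]
for j = 1:100               # assumed bound; the original's is cut off
    for k = 1:100           # assumed bound
        push!(row_ind, rand(1:vec_len))
        push!(col_ind, rand(1:vec_len))
        push!(value, rand())
    end
end
A = sparse(row_ind, col_ind, value, vec_len, vec_len)
```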