The plan is that the transpose function will return this eventually,
within the 1.0 time frame, but it's not done yet. It will probably not be
a PermutedDimsArray though because it wouldn't do the right thing for the
conjugate transpose of a complex matrix.
On Wednesday, October 12, 2016 at
It's not on purpose. It is just that it hasn't been implemented yet. It
would be great if you could open a pull request with such a method.
You might also want to define a special type for C+λI such that you can
avoid creating a new matrix but it is probably better to experiment with
such a
The documentation should be expanded with more examples. Many of the linear
algebra functions work for arbitrary input types so if you construct a
matrix with rational or integer inputs then many of the functions will
still work. We don't have much support in base for fancy math on such
Maybe I’ll try 0.5 and OpenBLAS for comparison.
>
> On 10 Sep 2016, at 2:34 AM, Andreas Noack <andreasnoackjen...@gmail.com>
> wrote:
>
> Try to time it again with threading disabled. Sometimes the
> threading heuristics can cause unintuitive performance.
>
> On Friday, Sept
It would be helpful if you could provide a self-contained example. Also,
would it be possible to try out the release candidate for 0.5. We have made
a few changes to the ARPACK wrappers so it would be useful to know if it is
only happening on 0.4. Thanks.
On Saturday, September 10, 2016 at
Try to time it again with threading disabled. Sometimes the
threading heuristics can cause unintuitive performance.
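A minimal sketch of the suggested timing comparison (modern syntax shown; the 0.4-era call was `blas_set_num_threads`, and the matrix sizes are arbitrary):

```julia
using LinearAlgebra

# Disable BLAS threading before timing, to rule out the threading heuristics
BLAS.set_num_threads(1)

A = rand(500, 500)
B = rand(500, 500)
A * B          # warm up / compile first
@time A * B    # timing with a single BLAS thread
```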
On Friday, September 9, 2016 at 6:39:13 AM UTC-4, Sheehan Olver wrote:
>
>
> I have the following code that is part of a Householder routine, where
> j::Int64,
> N::Int64,
My memory is too short. I just realized that I implemented a generic
pivoted QR last fall so if you try out the prerelease of Julia 0.5 then
you'll be able to compute the pivoted QR of a BigFloat matrix.
On Wednesday, September 7, 2016 at 9:20:12 AM UTC-4, Andreas Noack wrote:
>
>
We use LAPACK's QR with column pivoting.
See http://www.netlib.org/lapack/lug/node42.html. LAPACK uses blocking for
BLAS3 but that is not necessary for a generic implementation. So the
task is just to sort the columns by norm at each step. If you want to try
an implementation you can look
Evan, this is exactly where you should use I, i.e.
m = m + λ*I
The reason is that eye(m) will first allocate a dense matrix of size(m,1)^2
elements. Then * will do size(m,1)^2 multiplications of lambda and allocate
a new size(m,1)^2 matrix for the result. Finally, size(m,1)^2 additions
will be
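To make the allocation argument concrete, here is a sketch; `eye` was removed from Base after 0.6, so a stand-in is defined locally for the comparison:

```julia
using LinearAlgebra

λ = 0.5
m = rand(1000, 1000)

# eye-style update: materializes a dense identity, then λ*eye, then the sum —
# three size(m,1)^2 allocations in total
eye(n) = Matrix{Float64}(I, n, n)   # stand-in for the removed Base.eye
slow = m + λ * eye(size(m, 1))

# UniformScaling update: no identity matrix is ever materialized
fast = m + λ * I

slow == fast   # same result; `fast` avoided two full-matrix temporaries
```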
Julia is developing over time. Originally, eye was probably implemented to
mimic Matlab. Later we realized that the type system allowed us to define
the much nicer UniformScaling which has the special case
const I = UniformScaling(1)
which is almost always better to use unless you plan to modify
We could deprecate eye. Then the users would get a warning directing them
to use `I` instead.
On Mon, Aug 29, 2016 at 6:29 AM, Júlio Hoffimann
wrote:
> I still think that having a "global variable" named "I" is not robust.
> I've read so many scripts in matlab that do
If we could somehow make `I` more visible, wouldn't you think that
B = I*A
is better than
B = eye(1)*A
?
Small side note: the best we can hope for is probably performance similar
to B = copy(A) because it wouldn't be okay to alias A and B when B has been
constructed from *.
On Mon, Aug
No. We are only exposing `cond` but as you can see
in https://github.com/JuliaLang/julia/blob/master/base/linalg/lu.jl#L235 we
are actually getting `rcond` from LAPACK and then calling `inv`. I can see
the usefulness of working with a number in [0,1] instead of [1,inf) but it
seems superfluous
You can also overwrite eye
Could you elaborate on the "90% of the users won't be aware of these
internal details in their day-to-day coding" part? If we ignore the name
for a while, why is `I` not what you want here? It is as efficient as it
can possibly be.
On Sunday, August 28, 2016 at
ntry like
> this in Compat.jl.
>
> On Sunday, August 7, 2016 at 10:02:44 PM UTC+2, Andreas Noack wrote:
>>
>> It would be great with an entry for this in Compat.jl, e.g. something like
>>
>> cholfact(A::HermOrSym, args...) = cholfact(A.data, A.uplo, args...)
>>
It would be great with an entry for this in Compat.jl, e.g. something like
cholfact(A::HermOrSym, args...) = cholfact(A.data, A.uplo, args...)
On Sun, Aug 7, 2016 at 2:44 PM, Chris <7hunderstr...@gmail.com> wrote:
> mmh, could you explain your comment a little more?
>
> David, thanks for the
mail.com
> > wrote:
>
>> Andreas, thanks for the investigation. I'll use 0.5 for now, and hope the
>> real problems I encounter are within the capabilities of ARPACK.
>>
>> It's embarrassing to be bested by Matlab...
>>
>> On Fri, Aug 5, 2016 at 9:23 P
I've looked a bit into this. I believe there is a bug in the Julia wrappers
on 0.4. The good news is that the bug appears to be fixed on 0.5. The bad
news is that ARPACK seems to have a hard time with the problem. I get
julia> eigs(A,C,nev = 1, which = :LR)[1]
ERROR:
Yes. We are stricter now. LAPACK doesn't check for symmetry at all which we
used to mimic. It might seem pedantic but it will capture programming
errors and it is more in line with how we check things elsewhere and it's in
line with the behavior for the sparse Cholesky. For now, you'd need to use
A couple of things go wrong here. Right now, Julia tries to use the QR
factorization to solve underdetermined systems. If things had worked the
way I'd planned, your matrix would have been promoted to floating point
elements (I know that it's not what you want so keep reading). The
promotion
For Cholesky and sparse LDLt there are methods for reusing an existing
symbolic factorization. They are cholfact!/ldltfact!(Factorization,Matrix)
so you can e.g. do
```
julia> A = sprandn(100,100,0.1) + 10I;
julia> F = cholfact(Symmetric(A));
julia> cholfact!(F, Symmetric(A - I));
```
We could consider not throwing in cholfact but only when the factorization
is applied. This is what we do for LU and BunchKaufman.
On Saturday, June 11, 2016 at 5:02:55 PM UTC-4, Tony Kelman wrote:
>
> For now, you can just manually make the same cholmod calls that cholfact
> does, then use the
To start with the conclusion, the easiest solution here is to use
lufact, i.e. lufact(sparse(A))\ones(3). The explanation is that when a
sparse matrix is symmetric, we first try to use a sparse Cholesky
factorization and when that fails we try a sparse LDLt factorization. Both
use Cholmod,
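A sketch of the workaround in modern spelling (the thread-era call was `lufact(sparse(A)) \ ones(3)`; the matrix here is an arbitrary symmetric example):

```julia
using SparseArrays, LinearAlgebra

# Go straight to sparse LU and skip the Cholesky/LDLt attempts entirely
A = [ 2.0 1.0 0.0;
      1.0 2.0 1.0;
      0.0 1.0 2.0 ]
x = lu(sparse(A)) \ ones(3)   # UMFPACK-based sparse LU solve
```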
Yes. The way our Ls are constructed will ensure that they have 1s on the
diagonal.
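This is easy to check directly (modern syntax; the thread-era call was `lufact(A)` with `F[:L]` indexing):

```julia
using LinearAlgebra

A = rand(5, 5)
F = lu(A)                 # lufact(A) in the 0.4/0.5-era API
all(diag(F.L) .== 1)      # true: L is unit lower triangular by construction
```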
On Tuesday, June 7, 2016 at 6:20:26 PM UTC-4, Gabriel Goh wrote:
>
> A simple question as posed in the title, this is guaranteed by LAPACK
>
> The factorization has the form
> A = P * L * U
> where
tot_xy += x[i]*y[i]
> end
> b1 = (tot_xy - mx*tot_y)/tot_dx # a 1\n cancels in the top and bottom
> b0 = tot_y/n - b1*mx
> return [b0, b1]
> end
>
> If anyone has any comments on what could be made better.
>
> Thanks,
> Gabriel
>
> On Mon
return [b0, b1]
> end
>
> Which I find speeds up around 3x, or do you mean writing a custom cov
> function that is smarter about memory? (I am returning an array as I like
> to be able to do vector math on the coefficients ... but if I return a
> tuple it isn't much faster for me)
I don't think that linreg has received much attention over the years. Most
often it is almost as simple to use \. If you take a look at linreg then
I'd suggest that you try to write in terms of cov and var and return a
tuple instead of an Array. That will speed up the computation already now
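A sketch of what that rewrite might look like; the name and the example data are my own, not the Base implementation:

```julia
using Statistics

# Simple linear regression y ≈ b0 + b1*x, written via cov/var as suggested,
# returning a tuple instead of an Array
function linreg_sketch(x, y)
    b1 = cov(x, y) / var(x)
    b0 = mean(y) - b1 * mean(x)
    return b0, b1
end

x = collect(1.0:10.0)
y = 3 .+ 2 .* x
linreg_sketch(x, y)   # ≈ (3.0, 2.0)
```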
The Hackathon on Saturday is quite flexible so if somebody volunteers to
organize such session then I'm sure we can make it happen. There should be
people around that are knowledgeable about the topics.
On Tuesday, May 17, 2016 at 10:26:00 AM UTC-4, Josef Sachs wrote:
>
> Will there be any
I think there'll necessarily be some overhead in the decompression of the
packed Booleans in a BitArray. The difference between Bool and Int8 is that
the Int8 is promoted to a Float64 whereas a Bool is not. It appears that
the Bool multiplication
in
has co-designed the programming language Scheme, which has greatly
influenced the design of Julia, as well as languages such as Fortress and
Java.
Andreas Noack
JuliaCon 2016 Local Chair
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute
It's to ensure that the return type doesn't depend on the value of x. If x and
y are integers then the return type of hypot1 will be Int if x==0 and Float64
otherwise.
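The point can be reconstructed with a pair of definitions along these lines (`hypot_bad` and `hypot_stable` are hypothetical names for illustration):

```julia
# Type-unstable: hypot_bad(0, 3) returns an Int, other Int inputs a Float64,
# so the return type depends on the value of x
hypot_bad(x, y) = x == 0 ? y : abs(x) * sqrt(1 + (y/x)^2)

# Type-stable: converting up front makes both branches return Float64
function hypot_stable(x, y)
    x, y = float(x), float(y)
    return x == 0 ? abs(y) : abs(x) * sqrt(1 + (y/x)^2)
end

typeof(hypot_bad(0, 3))      # Int
typeof(hypot_stable(0, 3))   # Float64
```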
> CONFLICT (content): Merge conflict in src/MixedModels.jl
> Auto-merging src/GLMM/PIRLS.jl
> CONFLICT (content): Merge conflict in src/GLMM/PIRLS.jl
> Automatic merge failed; fix conflicts and then commit the result.
>
>
>
>
> On Tuesday, March 8, 2016 at 2:29:08
Provided that you have pushed all your changes, I think the easiest
solution is to "remove" the commit in which you add the chksqr fix. You can
do that with
git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit
right before the chksqr changes
git cherry-pick
the .so)
>
> We currently have a C++ project with Python bindings done using
> Boost.Python, which also uses the latter approach, so translating this is
> more natural using CppWrapper.
>
> Maybe I should clarify this in the docs?
>
> Cheers,
>
> Bart
>
> On
Hi Bart,
Are you aware of https://github.com/Keno/Cxx.jl? What are the reasons for a
separate package?
Best
Andreas
On Sunday, February 14, 2016 at 6:11:07 PM UTC-5, Bart Janssens wrote:
>
> Hi all,
>
> The CppWrapper package is meant to facilitate exposing C++ libraries to
> Julia, allowing
Eventually, we should support multiplying with Q in the same way it is
possible with dense QR. It will require that we introduce a Q type for
sparse QR for which we can overload the usual matrix multiplication
functions by appropriate calls to qmult. Notice however that qmult is only
defined for
Sorry for the confusion here. We have moved the CHOLMOD module to
SparseMatrix.CHOLMOD in 0.4 and SparseArrays.CHOLMOD in 0.5. However,
update! will probably be an exported function in 0.5 so this should become
much easier.
On Tuesday, December 1, 2015 at 7:08:12 AM UTC-5, Matthew Pearce
Please file this issue at DistributedArrays.jl
On Monday, November 23, 2015 at 8:13:06 AM UTC-5, Antonio Suriano wrote:
>
> addprocs(3)
>
> @everywhere using DistributedArrays
>
> function tony(N)
> return sum(drandn(N,N))
> end
>
>
> function pesante(N)
> a=zeros(N,N)
> for i = 1:N
> for j=1:N
>
The order of operations is from left to right so the parentheses can be
important here. We have discussed ways of executing more efficiently for
matrix products, see https://github.com/JuliaLang/julia/issues/12065, but
so far nothing has been implemented. In that issue, you can also see the
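The cost difference is easy to see for a matrix-matrix-vector chain (sizes are arbitrary):

```julia
n = 1000
A = rand(n, n); B = rand(n, n); v = rand(n)

# Left-to-right evaluation: (A*B)*v forms an n×n product first — O(n^3) work
w1 = (A * B) * v

# Reassociated: A*(B*v) only does two matrix-vector products — O(n^2) work
w2 = A * (B * v)

w1 ≈ w2   # same result up to rounding, very different cost
```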
Hi Michael
Author of Civecm.jl here. The repo has code that I developed during my PhD
and I haven't spent much time on it lately, but please ask questions if
you'd like to use it.
Best
Andreas
On Friday, October 16, 2015 at 5:59:44 PM UTC-4, Michael Wang wrote:
>
> Cool, thank you! I will
I think you are right that we should simply remove the mean keyword
argument from cov and cor. If users want the efficient versions with user
provided means then they can use corm and covm. Right now they are not
exported, but we could consider doing it, although I'm in doubt if it is
really
Take a look in StatsBase.jl
On Sunday, August 9, 2015 at 8:17:02 AM UTC-4, paul analyst wrote:
*Is some Autocorrelation function in Julia?Paul*
For the example you describe, you can simply use the tools in base, but
unfortunately I don't think our reader can handle continental style decimal
comma yet. However, that is easy to search/replace with a dot. Something
like
cov(diff(log(readdlm(prices.csv, ';'
should then do the job.
You can use sub or slice for this, e.g. W = slice(Wall, 2:size(Wall, 1),
2:size(Wall, 2))
Den onsdag den 15. juli 2015 kl. 06.14.14 UTC-4 skrev Ferran Mazzanti:
Hi folks,
I have a little mess with the way arrays are being handled in Julia. I
come from C and fortran95 and I know I can do
No such BLAS routine exists, but for larger matrices the calculation will
be dominated by the final matrix-matrix product anyway.
Den tirsdag den 7. juli 2015 kl. 18.24.34 UTC-4 skrev Matthieu:
Thanks, this is what I currently do :)
However, I'd like to find a solution that is both memory
Hi Jared
The short answer is yes. Different algorithms are used in `svdvals` and
`svd`/`svdfact`. In both cases, we are using the divide and conquer routine
xGESDD from LAPACK, but internally the routine uses two different
algorithms depending on whether the singular vectors are requested or not.
You could, but unless the matrices are small, it would be slower because it
wouldn't use optimized matrix multiplication.
2015-07-08 10:36 GMT-04:00 Josh Langsfeld jdla...@gmail.com:
Maybe I'm missing something obvious, but couldn't you easily write your
own 'cross' function that uses a couple
8, 2015 at 10:39 AM, Andreas Noack
andreasnoackjen...@gmail.com wrote:
You could, but unless the matrices are small, it would be slower because
it wouldn't use optimized matrix multiplication.
2015-07-08 10:36 GMT-04:00 Josh Langsfeld jdla...@gmail.com:
Maybe I'm missing something obvious
Hi Ivan
This is fixed on 0.4 but needs a backport to 0.3. I'll take a look.
Den tirsdag den 7. juli 2015 kl. 08.41.37 UTC-4 skrev Ivan Slapnicar:
In [1]:
Z=givens(1.0,2.0,1,3,3)
Z, Z', transpose(Z)
Out[1]:
(
3x3 Givens{Float64}:
0.447214 0.0 0.894427
0.0 1.0 0.0
:
This is very interesting !
So UMFPACK is more robust and this is why I am not having any issues with
the same matrix.
Thanks.
On Wednesday, May 27, 2015 at 6:15:57 PM UTC-3, Andreas Noack wrote:
It could happen if a pivot is zero. CHOLMOD's ldlt is only making
permutations in the symbolic
The convert methods for Date.Period, Complex and Rational are inferred to
give Any. The problem in Period is because of the use of the value method
in line 4 of periods.jl. It extracts a field from an abstract type so even
though all subtypes in base have the specified field and have it defined
I think the chosen matrix has very good convergence properties for
iterative methods, but I agree that iterative methods are very useful to
have in Julia. There are already quite a few implementations in
https://github.com/JuliaLang/IterativeSolvers.jl
I'm not sure if these methods cover the
You can use try/catch, e.g.
julia> try
           cholfact(A)
       catch e
           e.info
       end
1
In 0.3, you can construct a Triangular matrix with Triangular(B, :L) and in
0.4 with LowerTriangular(B).
Den onsdag den 27. maj 2015 kl. 18.12.51 UTC-4 skrev Roy Wang:
Is there an easy way to
confused with this error.
On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:
In 0.3 the sparse LDLt and Cholesky factorizations are both in the
cholfact function. If the matrix is symmetric, but not positive definite
the result of cholfact will be an LDLt factorization. In 0.4
In 0.3 the sparse LDLt and Cholesky factorizations are both in the
cholfact function. If the matrix is symmetric, but not positive definite
the result of cholfact will be an LDLt factorization. In 0.4 the
factorizations have been split into cholfact and ldltfact.
Den onsdag den 27. maj 2015
27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:
Funny... I dont have CHOLMOD installed...but I am using the official
windows installer.. I will try to make a fresh install.
Thanks Andreas !
On Wednesday, May 27, 2015 at 2:59:30 PM UTC-3, Andreas Noack wrote:
What do you get when you type
On Wednesday, May 27, 2015 at 5:54:20 PM UTC-3, Andreas Noack wrote:
As I wrote in the first reply: in 0.3 the cholfact function returns the
LDLt when the matrix is symmetric but not positive definite, e.g.
julia> A = sprandn(5,5, 0.5);
julia> A = A + A';
julia> b = A*ones(5);
julia>
avaliable in a regular
windows install.
Thanks for your help Andreas !
On Wednesday, May 27, 2015 at 5:37:53 PM UTC-3, Andreas Noack wrote:
You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?
2015-05-27 16:33 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:
Just to make it clear
1. In Julia fft(A) is the 2d DFT of A. You can get MATLAB's behavior with
fft(A, 1)
2. I might not understand what you are trying to do, but it appears to me
that you can just apply the DFT to the full vector and then sample the
elements of the vector.
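The two points as code; in the era of this thread `fft` lived in Base, while on modern Julia it is provided by the FFTW.jl package:

```julia
using FFTW   # Base.fft at the time of this thread

A = rand(4, 4)
F2 = fft(A)      # 2-d DFT: transforms along both dimensions
F1 = fft(A, 1)   # DFT along dimension 1 only, i.e. column-wise (MATLAB's default)
```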
2015-05-10 22:32 GMT-04:00 Edward Chen
and
not the covariance function.
2015-05-08 17:05 GMT-04:00 JPi pin...@gmail.com:
Yes, the variance.
But that doesn't explain why you can't get the covariance matrix of an
array of vectors.
On Friday, May 8, 2015 at 4:51:13 PM UTC-4, Andreas Noack wrote:
Calculating the covariance requires two
Calculating the covariance requires two sequences of data points. Either
from two vectors or between the columns of a matrix. The mean is different
as it requires one sequence. What did you expect to get from the covariance
function of a vector? The variance?
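The distinction in code (modern Statistics stdlib syntax):

```julia
using Statistics

x = randn(100); y = randn(100)

cov(x, y)          # covariance between two vectors (two sequences)
X = randn(100, 3)
cov(X)             # 3×3 covariance matrix between the columns

# For a single vector, the analogous one-sequence quantity is the variance
cov(x, x) ≈ var(x)   # true
```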
2015-05-08 16:01 GMT-04:00 JPi
As I'm writing this, I'm running Julia on a pretty new 90 node cluster. I
don't know if that counts as medium size cluster, but recently it was
reported on the mailing list that Julia was running on
http://www.top500.org/system/178451
which I think counts as a supercomputer.
2015-04-28 19:58
I like the idea of something like factorize(MyType,...), but it is not
without problems for generic programming. Right now cholfact(Matrix) and
cholfact(SparseMatrixCSC) return different types, i.e. LinAlg.Cholesky and
SparseMatrix.CHOLMOD.Factor. The reason is that internally, they are very
, Andreas Noack wrote:
If B and D are different then why is it not okay to calculate x = C\D and
then B'x afterwards?
2015-04-27 15:24 GMT-04:00 matt katz...@gmail.com:
I would like to compute multiple quadratic forms B'*C^(-1)*D, where B,
C, and D are sparse, and C is always the same symmetric
at a time and deflate A by removing the
corresponding eigenvector?
Thanks, Mladen
On Tuesday, April 21, 2015 at 9:09:29 AM UTC-5, Andreas Noack wrote:
I'm not sure what the best solution is here because I don't fully
understand your objective. If A has low rank then the solution
If B and D are different then why is it not okay to calculate x = C\D and
then B'x afterwards?
2015-04-27 15:24 GMT-04:00 matt katzf...@gmail.com:
I would like to compute multiple quadratic forms B'*C^(-1)*D, where B, C,
and D are sparse, and C is always the same symmetric positive matrix. For
Hej Valentin
There are a couple of simple examples. At least this
http://acooke.org/cute/FiniteFiel1.html
and I did one in this
http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb
notebook. The arithmetic definitions are simpler for GF(2), but should be
simple
This problem is quite common in the LinAlg code. We have two types of
definitions to handle the conversion of the element types. First an idea
due to Jeff as implemented in
https://github.com/JuliaLang/julia/blob/237cdab7100b29a6769313391b1f8d2563ada06e/base/linalg/triangular.jl#L20
and
The reason is that it is not exported so you can either use the full path
bar(Base.LinAlg.CHOLMOD.CholmodFactor{Float64,Int64}) = 1
or use the type first
using Base.LinAlg.CHOLMOD.CholmodFactor
bar(CholmodFactor{Float64,Int64}) = 1
2015-04-21 10:27 GMT-04:00 andreas scheidegge...@gmail.com:
I'm not sure what the best solution is here because I don't fully
understand your objective. If A has low rank then the solution is not
unique and if it has almost low rank, the solution is very ill conditioned.
A solution could be our new shift argument to our complex Cholesky
factorization. This
Hi Michela
It is easier to help if your example is complete such that it can just be
pasted into the terminal. The variable gmax is not defined in your example,
but I guess it is equal to length(SUC_C). It is also useful to provide the
exact error message.
That said, I think the root of the
We mainly use SymTridiagonal for eigenvalue problems and therefore it is not
necessary to allow for complex matrices because the Hermitian eigenvalue
problem can be reduced to a real symmetric problem. It might be easier to
specify the problem in Hermitian form, so we might change this. What is
This has been fixed on master. I've just backported the fix to the release
branch so it should be okay in 0.3.8.
2015-04-16 4:05 GMT-04:00 Rasmus Brandt rasmus...@gmail.com:
Hey everyone,
I just stumbled over this behaviour in Julia-0.3.7, which seems a bit
unintuitive to me:
julia> pinv(0)
The notebook is now available from
http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb
Note that it is based on master so some parts of the code might fail on
Julia release.
2015-04-11 15:21 GMT-04:00 Andreas Noack andreasnoackjen...@gmail.com:
I've been in transit back
at
Stanford is pleased to have Andreas Noack and Jiahoa Chen speaking
in our Linear Algebra and Optimization seminar this Thursday and next.
Today's talk will be livestreamed via YouTube starting at 4:15pm PDT.
Livestream link: https://www.youtube.com/watch?v=a_bFB1BZbvI
(Videos will also
Which pca?
2015-04-06 6:53 GMT-07:00 Steven Sagaert steven.saga...@gmail.com:
does pca() center the input output data or do you have to do that
yourself?
There is no pca in Julia Base
2015-04-06 9:16 GMT-07:00 Steven Sagaert steven.saga...@gmail.com:
the one from the standard lib
On Monday, April 6, 2015 at 4:01:00 PM UTC+2, Andreas Noack wrote:
Which pca?
2015-04-06 6:53 GMT-07:00 Steven Sagaert steven@gmail.com:
does pca() center
I think that you could use sub(A,:,2:2:4) for BLAS, but not sub(A,:,[2,4])
because the indexing has to be with ranges for BLAS to be able to extract
the right elements of the matrix.
2015-03-29 17:34 GMT-04:00 Dominique Orban dominique.or...@gmail.com:
Sorry if this is another [:] kind of
supposed to do, it just doesn't update C (which is awkward
for a ! function, though I realize there's something else going on here).
On Sunday, March 29, 2015 at 5:56:27 PM UTC-4, Andreas Noack wrote:
I think that you could use sub(A,:,2:2:4) for BLAS, but not
sub(A,:,[2,4]) because the indexing
:00 Dominique Orban dominique.or...@gmail.com:
Unfortunately, my idx will be computed on the fly and there's zero chance
that it would be a range. Is there a plan to support more general indexing
in subarrays and/or ArrayView?
On Sunday, March 29, 2015 at 6:13:16 PM UTC-4, Andreas Noack wrote
Distributed reduce is already implemented, so maybe these are slightly simpler
with e.g. sum(A::DArray) = reduce(Base.AddFun(), A)
2015-03-26 8:41 GMT-04:00 Jameson Nash vtjn...@gmail.com:
`eval` (typically) isn't allowed to handle `import` and `export`
statements. those must be written explicitly
I think that countmap in StatsBase does that
Den torsdag den 26. marts 2015 skrev DumpsterDoofus
peter.richter@gmail.com:
In Mathematica, there is a function called Tally which takes a list and
returns a list of the unique elements of the input list, along with their
multiplicities. I.e,
It has caused a lot of frustration. See #9118. I think the easiest right
now is
for p in procs()
@spawnat p blas_set_num_threads(k)
end
2015-03-17 23:19 GMT-04:00 Sheehan Olver dlfivefi...@gmail.com:
Hi,
I've created the following to test the performance of parallel processing
on our
new_array = vcat(data...)
2015-03-17 20:59 GMT-04:00 Christopher Fisher fishe...@miamioh.edu:
Hi all-
pmap outputs the results as an array of arrays and I am trying to find a
flexible way to change it into a one dimensional array. I can hardcode the
results as new_array =
I've tried to make the package that Jiahao mentioned usable. I think it
works, but it probably still has some rough edges. You can find it here
https://github.com/andreasnoack/TSVD.jl
and there is a help entry for tsvd that explains the arguments.
For a 2000x2000 dense non-symmetric complex
On 0.3.x there is a very expensive error bounds calculation in the
triangular solve which is the reason for the surprisingly slow calculation.
This is not acceptable and we have therefore removed the error bounds
calculation in 0.4. On my machine I get
julia> @time L\B;
elapsed time: 2.535437796
Good to hear that. I've filed an issue to figure out how to make this
more consistent
https://github.com/JuliaLang/julia/issues/10520
2015-03-14 21:06 GMT-04:00 Kristoffer Carlsson kcarlsso...@gmail.com:
On Sunday, March 15, 2015 at 1:02:11 AM UTC+1, Andreas Noack wrote:
You can get
You can get around this by specifying that the matrix is symmetric. This
can be done with
cholfact(Symmetric(A, :L))
which then bypasses the test for symmetry and cholfact only looks at the
lower triangle.
However, the error you got is not consistent with the way cholfact works
for dense
@elapsed is what you are looking for
2015-03-11 7:43 GMT-04:00 Patrick Kofod Mogensen patrick.mogen...@gmail.com
:
I am testing the run times of two different algorithms, solving the same
problem. I know there is the @time macro, but I cannot seem to wrap my head
around how I should save the
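A sketch of why @elapsed fits here better than @time (the workloads are arbitrary stand-ins for the two algorithms):

```julia
# @time prints its measurement; @elapsed returns the wall time as a Float64
# that can be stored and compared
times = Float64[]
for alg in (sort, sort!)
    t = @elapsed alg(rand(10^6))
    push!(times, t)
end
times   # two timings, ready to compare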
It is more helpful if you can provide a self contained example that we can
run. However, I think you've been bitten by our white space concatenation.
When you define f, the second element is written
-w^2*sin(x)-u*w^2*cos(x) -2γ*y
but I think that is getting parsed as
Hi Weijian
This is a great functionality. It seems that you are using MAT.jl to read
in the sparse matrices. You could consider using the the MatrixMarket
reader in Base e.g.
A = sparse(Base.SparseMatrix.CHOLMOD.Sparse(matrix.mtx))
It will also have the benefit of using the Symmetric matrix
to this reader when Julia
v0.4 is released.
Best,
Weijian
On Friday, 6 March 2015 20:35:43 UTC, Andreas Noack wrote:
Hi Weijian
This is a great functionality. It seems that you are using MAT.jl to read
in the sparse matrices. You could consider using the MatrixMarket
reader in Base
UTC+1, Andreas Noack wrote:
I don't think it is possible right now. We have been discussing more
flexible solutions, but so far nothing has been done.
2015-03-04 9:22 GMT-05:00 Simone Ulzega simone...@gmail.com:
Is it possible to construct a DArray with unevenly distributed chunks?
For example
I don't think it is possible right now. We have been discussing more
flexible solutions, but so far nothing has been done.
2015-03-04 9:22 GMT-05:00 Simone Ulzega simoneulz...@gmail.com:
Is it possible to construct a DArray with unevenly distributed chunks?
For example, I want to create a
I don't see an obvious reason for this so please try to post this as an
issue on GLM.jl.
2015-02-27 10:47 GMT-05:00 Andrew Newman andrew.brudev...@gmail.com:
Hi Julia-users,
I am trying to run a few simple regressions on simulated data. I had no
problem with a logit and was able to run it
Steve, I don't think that method works. The mapping between the argument to
srand and the internal state of the MT is quite complicated. We are calling
a seed function in the library we are using that maps an integer to a state
vector so srand(1) and srand(2) end up as two quite different streams.
vanaf mijn iPhone
Op 27-feb.-2015 om 21:27 heeft Andreas Noack andreasnoackjen...@gmail.com
het volgende geschreven:
I think it is fine that the type of the argument determines the behavior
here. Having type in the name would be a bit like having
`fabs(x::Float64)`.
2015-02-27 15:21 GMT-05:00
I'd like to have something like this.
2015-02-27 15:02 GMT-05:00 Jutho juthohaege...@gmail.com:
Or in this particular case, maybe there should be some functionality like
that in Base, or at least in Base.LinAlg, where is often necessary to mix
complex variables and real variables of the same
type. Maybe something like realtype , or typereal if we want
to go with the other type... functions.
Op vrijdag 27 februari 2015 21:18:34 UTC+1 schreef Andreas Noack:
I'd like to have something like this.
2015-02-27 15:02 GMT-05:00 Jutho juthoh...@gmail.com:
Or in this particular case
@everywhere srand(seed) would give reproducibility, but it would probably
not be a good idea since the exact same random variates will be generated
on each process. Maybe something like
for p in workers()
@spawnat p srand(seed + p)
end
However, our RNG gives no guarantees about independence of