Re: Orientation? was: Re: [julia-users] different Eigenvalue results Julia vs Matlab

2016-09-10 Thread Stuart Brorson

I don't think you can define that in a continuous way.
In general, if your application relies on a certain property of the basis,
you should just normalize it that way. If you don't have such a requirement,
then you shouldn't worry about it.


Thanks for the thoughts.

I did a little more thinking and Googling.  My conclusion:

Yes, you can define the concept of orientation very clearly.  The
orientation of a vector space is given by the determinant of the
matrix formed by its basis set (eigenvectors).  The determinant will
be either +1 or -1 for an orthonormal set of basis vectors.  The sign
of the determinant divides the orthonormal bases into two equivalence
classes, one with + orientation, the other with -.

I guess the concept of eigenvector/eigenvalue is more general than the
concept of orientation.  Put another way, you can have a basis set
(eigenvectors) without defining the orientation property, but you
can't have orientation without first having a basis set.

The OP's results from Matlab and Julia belong to opposite
equivalence classes, which is what bothers me.  However,
from the standpoint of getting a basis, the concept of orientation is
not required.  One would need to impose some additional constraint on
the eigenvalue decomposition algorithm in order to always get an
eigenvector set with the same orientation, and I guess that additional
constraint isn't important for the problems usually solved by
eigendecomposition.
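
If one actually wanted a consistent orientation, imposing it after the
fact is easy.  Here is a minimal sketch (my own hypothetical helper,
not anything in Base): since negating a single column negates the
determinant, flipping one eigenvector is enough.

# Hypothetical helper: force det(V) > 0 by flipping one column.
# Negating any single column negates the determinant but leaves
# the eigendecomposition valid.
function orient!(V)
    if det(V) < 0
        V[:, 1] = -V[:, 1]
    end
    return V
end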

Thanks for making me think about this,

Stuart



On Sat, 10 Sep 2016, Yichao Yu wrote:


On Sat, Sep 10, 2016 at 12:20 PM, Stuart Brorson  wrote:


Just a question from a non-mathematician.  Also, this is a math
question, not a Julia/Matlab question.

I agree that Matlab and Julia are both correct -- within the
definitions of eigenvector and eigenvalue they compute, it's OK that
one eigenvector differs between the two by a factor of -1.  They're
probably calling the same library anyway.

However, from a physics perspective, this bugs me.  In physics, an
important feature of a space is its orientation.  For example,
physics and engineering undergrads learn early on about the
"right-hand rule" for taking cross products of vectors.  This type of
product imposes an orientation (or chirality) on the vector space
under consideration.  Real mathematicians may be familiar with
the wedge product in exterior algebra, which (I believe) also imposes
an orientation on a vector space.  Wikipedia says:

https://en.wikipedia.org/wiki/Orientation_(vector_space)

The problem I have with Julia and Matlab returning a set of
eigenvectors where only one eigenvector is the negative of the other
is that the two eigenvector sets span spaces of opposite
orientation, so the returns are -- in some sense -- not the same.

So my question:  Is there anything like a set of "oriented"
eigenvectors?  Is this even possible?  Or am I totally wrong, and
the two different sets of eigenvectors span the same space independent
of concerns about orientation?




I don't think you can define that in a continuous way.
In general, if your application relies on a certain property of the basis,
you should just normalize it that way. If you don't have such a requirement,
then you shouldn't worry about it.




Stuart




On Sat, 10 Sep 2016, Tracy Wadleigh wrote:

Looks good to me. The eigenvalues look the same up to the precision shown
and eigenvectors are only unique up to a scalar multiple.

On Sep 10, 2016 8:11 AM, "Dennis Eckmeier" <
dennis.eckme...@neuro.fchampalimaud.org> wrote:

Hi!


Here is a simple example:

*Matlab:*
(all types are double)
 covX(1:4,1:4)

  1.0e+006 *

3.8626    3.4157    2.4049    1.2403
3.4157    3.7375    3.3395    2.3899
2.4049    3.3395    3.7372    3.4033
1.2403    2.3899    3.4033    3.8548

[V,D] = eig(covX(1:4,1:4))

V =

 0.2583    0.5402   -0.6576    0.4571
-0.6622   -0.4518   -0.2557    0.5404
 0.6565   -0.4604    0.2552    0.5403
-0.2525    0.5404    0.6611    0.4551


D =

  1.0e+007 *

0.0006         0         0         0
     0    0.0197         0         0
     0         0    0.3010         0
     0         0         0    1.1978



*Julia:*
(all types are Float64)
*D = *
*5759.7016...*
*197430.4962...*
*3.0104794 ... e6*
*1.1978 ... e7*

V =
 0.2583    0.5402   -0.6576   *-0.4571*
-0.6622   -0.4518   -0.2557   *-0.5404*
 0.6565   -0.4604    0.2552   *-0.5403*
-0.2525    0.5404    0.6611   *-0.4551*

cheers,

D


On Saturday, September 10, 2016 at 12:48:17 PM UTC+1, Michele Zaffalon
wrote:



You mean they are not simply permuted? Can you report the MATLAB and
Julia results in the two cases for a small covX matrix?

On Sat, Sep 10, 2016 at 1:34 PM, Dennis Eckmeier <
dennis@neuro.fchampalimaud.org> wrote:

Hi,


I am new to Julia and rather a layman in math. For practice, I am translating
a Matlab script (written by somebody else) and comparing the result of each
step in Matlab and Julia.

Orientation? was: Re: [julia-users] different Eigenvalue results Julia vs Matlab

2016-09-10 Thread Stuart Brorson

Just a question from a non-mathematician.  Also, this is a math
question, not a Julia/Matlab question.

I agree that Matlab and Julia are both correct -- within the
definitions of eigenvector and eigenvalue they compute, it's OK that
one eigenvector differs between the two by a factor of -1.  They're
probably calling the same library anyway.

However, from a physics perspective, this bugs me.  In physics, an
important feature of a space is its orientation.  For example,
physics and engineering undergrads learn early on about the
"right-hand rule" for taking cross products of vectors.  This type of
product imposes an orientation (or chirality) on the vector space
under consideration.  Real mathematicians may be familiar with
the wedge product in exterior algebra, which (I believe) also imposes
an orientation on a vector space.  Wikipedia says:

https://en.wikipedia.org/wiki/Orientation_(vector_space)

The problem I have with Julia and Matlab returning a set of
eigenvectors where only one eigenvector is the negative of the other
is that the two eigenvector sets span spaces of opposite
orientation, so the returns are -- in some sense -- not the same.

So my question:  Is there anything like a set of "oriented"
eigenvectors?  Is this even possible?  Or am I totally wrong, and
the two different sets of eigenvectors span the same space independent
of concerns about orientation?

Stuart




On Sat, 10 Sep 2016, Tracy Wadleigh wrote:


Looks good to me. The eigenvalues look the same up to the precision shown
and eigenvectors are only unique up to a scalar multiple.
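
One common convention for pinning the sign down (a sketch only --
neither library promises this -- using the OP's covX and 0.4-era
names) is to make the largest-magnitude component of each eigenvector
positive:

(D, V) = eig(covX)
for j in 1:size(V, 2)
    i = indmax(abs(V[:, j]))    # index of the largest-magnitude entry
    V[:, j] *= sign(V[i, j])    # flip the column if that entry is negative
end

Applying that to both results should make the two V matrices agree.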

On Sep 10, 2016 8:11 AM, "Dennis Eckmeier" <
dennis.eckme...@neuro.fchampalimaud.org> wrote:


Hi!

Here is a simple example:

*Matlab:*
(all types are double)
 covX(1:4,1:4)

  1.0e+006 *

3.8626    3.4157    2.4049    1.2403
3.4157    3.7375    3.3395    2.3899
2.4049    3.3395    3.7372    3.4033
1.2403    2.3899    3.4033    3.8548

[V,D] = eig(covX(1:4,1:4))

V =

 0.2583    0.5402   -0.6576    0.4571
-0.6622   -0.4518   -0.2557    0.5404
 0.6565   -0.4604    0.2552    0.5403
-0.2525    0.5404    0.6611    0.4551


D =

  1.0e+007 *

0.0006         0         0         0
     0    0.0197         0         0
     0         0    0.3010         0
     0         0         0    1.1978



*Julia:*
(all types are Float64)
*D = *
*5759.7016...*
*197430.4962...*
*3.0104794 ... e6*
*1.1978 ... e7*

V =
 0.2583    0.5402   -0.6576   *-0.4571*
-0.6622   -0.4518   -0.2557   *-0.5404*
 0.6565   -0.4604    0.2552   *-0.5403*
-0.2525    0.5404    0.6611   *-0.4551*

cheers,

D


On Saturday, September 10, 2016 at 12:48:17 PM UTC+1, Michele Zaffalon
wrote:


You mean they are not simply permuted? Can you report the MATLAB and
Julia results in the two cases for a small covX matrix?

On Sat, Sep 10, 2016 at 1:34 PM, Dennis Eckmeier <
dennis@neuro.fchampalimaud.org> wrote:


Hi,

I am new to Julia and rather a layman in math. For practice, I am translating
a Matlab script (written by somebody else) and comparing the result of each
step in Matlab and Julia.

The Matlab script uses  [V,D] = eig(covX);

which I translated to Julia as: (D,V) = eig(covX)

However, the outcomes don't match, although covX is the same.

D.









Re: [julia-users] Re: Julia equivalent to Matlab's [Q, R, P] = qr(A)?

2016-09-07 Thread Stuart Brorson

Thank you, Michele & Chris for your suggestion to use qrfact.  Good
idea.  Unfortunately, I neglected to mention that my matrix is a
BigFloat array, and qrfact with pivoting fails like this:

julia> Z = qrfact(A, true)
ERROR: MethodError: `qrfact` has no method matching
qrfact(::Array{BigFloat,2}, ::Bool)

Hmmm.  I guess I'll need to figure out how to implement the
QR-with-pivot algorithm in Golub and Van Loan (ugh!).  Or is there a
clear reference to the algorithm Julia uses?

Stuart


On Tue, 6 Sep 2016, Chris Rackauckas wrote:


Use qrfact() (or the in-place version). It doesn't return [Q,R,P] because
Julia is smarter than that. It returns a specially designed type for the QR
return. It holds all of those parts so you can get Q, R, and P out, but
most of the time you won't need to. For example, if A is the return of
qrfact, then A\b will automatically dispatch to do the fast algorithm for a
QR-factored matrix (which is the most common use for accessing Q and R).
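
For instance (0.4-era syntax; a quick sketch of pulling the pieces out
when you do want them):

A = rand(3, 3)
F = qrfact(A, Val{true})         # column-pivoted QR
Q, R, p = F[:Q], F[:R], F[:p]    # p is the permutation vector
norm(Q*R - A[:, p])              # ~1e-16: Q*R reproduces the permuted A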

On Tuesday, September 6, 2016 at 6:37:59 PM UTC-7, Stuart Brorson wrote:


Hello Julia users,

Matlab has a variant of the QR decomposition invoked like this:

[Q,R,P] = qr(A)

This variant of qr() returns matrix R where the diagonal elements are
sorted from largest to smallest magnitude as you go from upper left to
lower right.  The matrix P is the permutation matrix which permutes
the rows/cols of A to give this ordering of the diagonal elements of
R.  That is, Q*R = A*P.

I tried doing the naive, analogous thing in Julia, but get an error:

julia> A = rand(3,3)
3x3 Array{Float64,2}:
  0.243071  0.454947   0.89657
  0.112843  0.802457   0.375417
  0.154241  0.0182734  0.992542

julia> Q,R,P = qr(A)
ERROR: BoundsError: attempt to access (
3x3 Array{Float64,2}:
  -0.786117   0.0985642  -0.610168
  -0.364946  -0.870763    0.329523
  -0.498833   0.481723    0.720492,

3x3 Array{Float64,2}:
  -0.309204  -0.659611  -1.33693
   0.0   -0.645106   0.2396
   0.0        0.0        0.29177)
   at index [3]
  in indexed_next at tuple.jl:21

My question:  What's the best way to get the equivalent of Matlab's
[Q,R,P] = qr(A) in Julia?  Should I write my own qr() (not too
difficult)?  Or just do some row/col permutation of the output of
regular qr()?

Thanks for any advice,

Stuart

p.s.  I am using Version 0.4.3-pre+6 (2015-12-11 00:38 UTC)





[julia-users] Julia equivalent to Matlab's [Q, R, P] = qr(A)?

2016-09-06 Thread Stuart Brorson

Hello Julia users,

Matlab has a variant of the QR decomposition invoked like this:

[Q,R,P] = qr(A)

This variant of qr() returns matrix R where the diagonal elements are
sorted from largest to smallest magnitude as you go from upper left to
lower right.  The matrix P is the permutation matrix which permutes
the rows/cols of A to give this ordering of the diagonal elements of
R.  That is, Q*R = A*P.

I tried doing the naive, analogous thing in Julia, but get an error:

julia> A = rand(3,3)
3x3 Array{Float64,2}:
 0.243071  0.454947   0.89657
 0.112843  0.802457   0.375417
 0.154241  0.0182734  0.992542

julia> Q,R,P = qr(A)
ERROR: BoundsError: attempt to access (
3x3 Array{Float64,2}:
 -0.786117   0.0985642  -0.610168
 -0.364946  -0.870763   0.329523
 -0.498833   0.481723   0.720492,

3x3 Array{Float64,2}:
 -0.309204  -0.659611  -1.33693
  0.0   -0.645106   0.2396
  0.0       0.0       0.29177)
  at index [3]
 in indexed_next at tuple.jl:21

My question:  What's the best way to get the equivalent of Matlab's
[Q,R,P] = qr(A) in Julia?  Should I write my own qr() (not too
difficult)?  Or just do some row/col permutation of the output of
regular qr()?

Thanks for any advice,

Stuart

p.s.  I am using Version 0.4.3-pre+6 (2015-12-11 00:38 UTC)


Re: [julia-users] Newbie Question : Averaging neighbours for every element

2016-02-01 Thread Stuart Brorson

Why don't you just write a set of nested "for" loops and iterate over
each element?  One of Julia's strengths is that "for" loops are fast,
so you don't need to worry about twisting your logic or your code
around in order to vectorize it as you do in Matlab or other such
languages.  If your logic is naturally expressed using "for" loops,
then write it out that way.
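
A sketch of what I mean, assuming for illustration a radius-1 cube of
neighbours rather than a true sphere (adjust the inner loops to taste):

function neighbor_average(A::Array{Float64,3})
    B = similar(A)
    ni, nj, nk = size(A)
    for k in 1:nk, j in 1:nj, i in 1:ni
        s = 0.0
        n = 0
        # visit the 3x3x3 block around (i,j,k), clipped at the borders
        for dk in -1:1, dj in -1:1, di in -1:1
            ii, jj, kk = i + di, j + dj, k + dk
            if 1 <= ii <= ni && 1 <= jj <= nj && 1 <= kk <= nk
                s += A[ii, jj, kk]
                n += 1
            end
        end
        B[i, j, k] = s / n
    end
    return B
end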

Stuart


On Sun, 31 Jan 2016, Omkar Patil wrote:


Hello,

I have a 3D array of Float64 elements, and I need to average the values of
the neighbouring elements in a sphere for every element.
Map can run a function on every element, but how can I send the indices of
the current element so that the function knows the neighbouring elements for
each element?

I have just started studying Julia. Sorry for my naivety.

Thanks and regards,
Omkar





[julia-users] Creating and destroying a matrix in a loop

2016-01-03 Thread Stuart Brorson

Hello again Julia users,

Sorry for the noise today, but I have another question which a quick
search didn't answer.

I am working on a function with a loop which creates an enlarged copy
of the same matrix at every iteration.  Let's say it looks similar to
this:

for i = 1:N
  A = zeros(BigFloat, 2^i, 2^i)
  # Now do something with A...
end

I will guess that this is bad performance-wise since Julia needs to
allocate a new array at each iteration through the loop.  Also, the GC
needs to come along occasionally and clean up all my old, discarded
arrays.  Meanwhile, I have accumulated a large number of old, unused
copies of A.  Am I right about the performance problem?

In C I would do something like this:

for (i = 1; i <= N; i++) {
  BigFloat* Ar = malloc(((size_t)1 << i) * ((size_t)1 << i) * sizeof(BigFloat));
  /* Please forgive the obvious problem with pointers here... */
  /* Here I use the array */
  free(Ar);
}

That would release the memory at the end of each iteration, and I
could allocate a new Ar at the start of the next iteration.

What's the right way to do this in Julia?  Is it OK to invoke a GC run
at the end of each loop, or should I stop worrying and let Julia take
care of memory allocation/deallocation?
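
For concreteness, here is the shape of option one (a sketch; I have no
idea yet whether the explicit gc() actually buys anything):

for i = 1:N
    A = zeros(BigFloat, 2^i, 2^i)
    # ... use A ...
    A = 0      # drop the only reference to the big array
    gc()       # request a collection right away (possibly unnecessary)
end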

Thanks,

Stuart


Re: [julia-users] Question about type inference and profiling

2016-01-03 Thread Stuart Brorson

Duh!


*  If I need to nail down my variable types better (to eliminate type
  instability and get better performance), what is the recommended
  way to identify variables or locations in the code where types
  should be better defined?  A pointer to blog posts or documentation
  is a fine way to answer this question.  But I'm wondering if there
  are "lint"-type tools which identify uncertain types in Julia code?


OK, I just re-read the documentation and found  @code_warntype.  I will
play with that.  Sorry for the noise.
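
For anyone else searching the archives, a tiny example of the kind of
thing it flags (my own toy function, nothing more):

julia> f(flag) = flag ? 1 : 1.0    # returns Int64 or Float64 depending on flag
f (generic function with 1 method)

julia> @code_warntype f(true)      # the Union{Float64,Int64} return is flagged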

Stuart


[julia-users] Question about type inference and profiling

2016-01-03 Thread Stuart Brorson

Hello Julia users,

I have been using the profiler in an attempt to identify where my code
spends most of its time.  When I run the profiler, I get a large
number of calls into inference.jl.  Here are the first (approximately)
10 lines of my profile report:

julia> Profile.print(format=:flat)

 Count  File            Function            Line
    24  ./base.jl       finalizer             72
    55  ./base.jl       finalizer             75
     1  ./inference.jl  abstract_apply       811
     1  ./inference.jl  abstract_call        848
    72  ./inference.jl  abstract_call        879
     2  ./inference.jl  abstract_call        896
     3  ./inference.jl  abstract_call_gf     676
    69  ./inference.jl  abstract_call_gf     737
    83  ./inference.jl  abstract_eval        961
    11  ./inference.jl  abstract_eval_call   906
    72  ./inference.jl  abstract_eval_call   934

The profile report goes on from there.  The report shows I have many
more inference calls, as well as calls to other functions in my
program.

My questions:

*  Based on the above report, I guess that the compiler is doing a lot
   of guessing about types I am using in my program.  Is that correct?
   Or does a well-typed program also have lots of calls into
   inference.jl?

*  If I need to nail down my variable types better (to eliminate type
   instability and get better performance), what is the recommended
   way to identify variables or locations in the code where types
   should be better defined?  A pointer to blog posts or documentation
   is a fine way to answer this question.  But I'm wondering if there
   are "lint"-type tools which identify uncertain types in Julia code?

Thanks for your thoughts,

Stuart




Re: [julia-users] Re: How to display in string?

2015-12-27 Thread Stuart Brorson

Thanks for everybody's help.  I got something working which satisfies
me.

Regarding Cedric's question, I am trying to compute the Feigenbaum
number delta to arbitrary high precision.

https://en.wikipedia.org/wiki/Feigenbaum_constants

I am using a brute-force method to compute delta.  The basic idea is
that each bifurcation point may be found by solving a polynomial
system of order N+1, where N is the number of periods.  Delta is then
found in the limit via a ratio of bifurcation points.  I'm using
Newton's method to solve the system giving the bifurcation points.  My
method is not particularly clever, but it's direct and easily
understandable for a math-dud like me.

Julia has two features which make this computational method possible:
1. I can generate code on the fly using metaprogramming.  That's how I
generate a function implementing my N+1st order system (and the
associated Jacobian) which I can then call from Newton's method
at each bifurcation point.  2.  Ability to easily use BigFloat as my
variable type, giving arbitrary precision.  I do a lot of work in
Matlab, and you can't do either of these things in Matlab.

Regarding generating the code, I fiddled around with directly
generating the AST briefly.  Then I decided that it was a little too
tricky for my knowledge of Julia at this point.  I decided that I
liked the approach where I generate a string, and then parse and eval
it.  The reason is that I can easily inspect and verify a textual
representation of my program since it is simply human-readable Julia
code.  The AST is one level of indirection away, and I'm not an expert
enough to be able to read it as easily as Julia code itself.

Here's what I eventually did.  This builds the textual representation
of my polynomial system:

function build_f(N)

  mymod(x, y) = mod(x-1, y)+1

  P = string("function f(X::Vector)\n")

  P = string(P, "z = zeros(X);\n")

  for i in 1:N
P = string(P, "x$i = X[$i];\n");
  end
  P = string(P, "mu = X[$(N+1)];\n")

  for i in 1:N
ip1 = mymod(i+1, N)
P = string(P, "z[$i] = mu*x$i*(1-x$i) - x$ip1;\n")
  end

  Q = "mu"
  for i = 2:N
Q = string(Q, "*mu")
  end
  P = string(P, "mu$N = $Q;\n")

  for i in 1:N
P = string(P, "xx$i = 1-2*x$i;\n");
  end

  Q = "mu$N"
  for i in 1:N
Q = string(Q, "*xx$i")
  end
  P = string(P, "t = $Q;\n")
  P = string(P, "z[$(N+1)] = t*t-1;\n")

  P = string(P, "return z;\n");
  P = string(P, "end;\n")

  return P

end

And this is the result of calling it for N=4:

julia> y = build_f(4)
"function f(X::Vector)\nz = zeros(X);\nx1 = X[1];\nx2 = X[2];\nx3 =
X[3];\nx4 = X[4];\nmu = X[5];\nz[1] = mu*x1*(1-x1) - x2;\nz[2] =
mu*x2*(1-x2) - x3;\nz[3] = mu*x3*(1-x3) - x4;\nz[4] = mu*x4*(1-x4) -
x1;\nmu4 = mu*mu*mu*mu;\nxx1 = 1-2*x1;\nxx2 = 1-2*x2;\nxx3 =
1-2*x3;\nxx4 = 1-2*x4;\nt = mu4*xx1*xx2*xx3*xx4;\nz[5] =
t*t-1;\nreturn z;\nend;\n"

julia> println(y)
function f(X::Vector)
z = zeros(X);
x1 = X[1];
x2 = X[2];
x3 = X[3];
x4 = X[4];
mu = X[5];
z[1] = mu*x1*(1-x1) - x2;
z[2] = mu*x2*(1-x2) - x3;
z[3] = mu*x3*(1-x3) - x4;
z[4] = mu*x4*(1-x4) - x1;
mu4 = mu*mu*mu*mu;
xx1 = 1-2*x1;
xx2 = 1-2*x2;
xx3 = 1-2*x3;
xx4 = 1-2*x4;
t = mu4*xx1*xx2*xx3*xx4;
z[5] = t*t-1;
return z;
end;

I'm very pleased I can inspect the code in this format and make sure I
haven't made any syntax errors.

To solve the system, I pass the fcns to Newton's method like this:

# Create fcn whose roots we want to find.
y = build_f(N);
Pf = eval(parse(y));
# Build Jacobean of fcn.
y = build_J(N);
PJ = eval(parse(y));

# Do N dimensional Newton's method.
Xstar = newtonND(f, J, X0, tol);

I probably don't need to assign the result of eval(parse()) to a
variable Pf since eval(parse()) drops the name of the function into my
current namespace, right?

The other question is: does this approach create performance-degrading
global objects?  If so, how do I mitigate that problem?

At this point I am struggling with getting Newton's method to converge
properly for high N.  However, that has to do with selecting the
correct initial condition -- i.e. a typical numerical analysis problem
-- and is unrelated to the specifics of Julia.  What's cool is that I
have spent only 1 1/2 days on this project, and am quite far along on
completing it.  If I had to stitch everything together using, say, C++
or Mathematica, I'd probably still be struggling with the question of
how to create the system I need to solve.

Stuart



On Sat, 26 Dec 2015, Cedric St-Jean wrote:





That is, I'm solving a set of N polynomial equations, where N is a
parameter.



That sounds interesting, could you post some more details? At this stage,
it's not clear if you really need to generate a new program 

[julia-users] Sparse matrices and BigFloats?

2015-12-26 Thread Stuart Brorson

Hi again --

Another question.  I want to create a sparse matrix whose non-zero
elements are BigFloats.  Neither of the two approaches I tried (below)
seem to work.

julia> Asp = spzeros(4,4)
4x4 sparse matrix with 0 Float64 entries:

julia> Asp[1,1] = BigFloat(3.4)
3.399999999999999911182158029987476766109466552734375000

julia> Asp
4x4 sparse matrix with 1 Float64 entries:
[1, 1]  =  3.4

julia> Asp = BigFloat(spzeros(4,4))
ERROR: MethodError: `convert` has no method matching convert(::Type{BigFloat}, 
::SparseMatrixCSC{Float64,Int64})
This may have arisen from a call to the constructor BigFloat(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert{T<:Real}(::Type{T<:Real}, ::Complex{T<:Real})
  convert(::Type{BigFloat}, ::Rational{T<:Integer})
  ...
 in call at essentials.jl:56


I tried fiddling around with using SparseMatrixCSC{Tv,Ti<:Integer} as
a constructor, but no joy there since it probably is a type, and can't
be used as a constructor.

What's the easiest way to create a sparse matrix whose entries are not
doubles?
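
One thing that does seem to work (a sketch, building from coordinate
lists so the element type is taken from the value vector):

I = [1]
J = [1]
V = [BigFloat(3.4)]
Asp = sparse(I, J, V, 4, 4)    # 4x4 SparseMatrixCSC{BigFloat,Int64}

But I'd still like to know if there's something more direct.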

Thanks,

Stuart


Re: [julia-users] Re: How to display in string?

2015-12-26 Thread Stuart Brorson

Cool.  Thanks for the code example.  I will play with it to see what I
can do with it.

Thanks,

Stuart


On Sat, 26 Dec 2015, Josh Langsfeld wrote:


As Yichao is hinting, you may find a macro to be a cleaner way of making
your functions instead of constructing and parsing a string.

macro return_fcn(N)
   xexprs = Expr[:($(symbol(:x,i)) = X[$i]) for i=1:N]
   return esc(:(
   function $(symbol(:f,N))(X::Vector)
   $(xexprs...)
   mu = X[$(N+1)]
   end
   ))
end

julia> macroexpand(:(@return_fcn(2)))
:(function f2(X::Vector)
   x1 = X[1]
   x2 = X[2]
   mu = X[3]
   end)

julia> macroexpand(:(@return_fcn(4)))
:(function f4(X::Vector)
   x1 = X[1]
   x2 = X[2]
   x3 = X[3]
   x4 = X[4]
   mu = X[5]
   end)



On Saturday, December 26, 2015 at 2:03:56 PM UTC-5, Stuart Brorson wrote:


Julia users,

I'm fiddling around with Julia's strings & metaprogramming.  I am
constructing a function by concatenating a bunch of strings together
to create my function, like this:

function return_fcn(N)
   P = string("function f$N(X::Vector)\n")
   for i in 1:N
 P = string(P, "x$i = X[$i];\n");
   end
   P = string(P, "mu = X[$(N+1)];\n")
   etc

When I execute this code, I get:

julia> y = return_fcn(2)
"function f2(X::Vector)\nx1 = X[1];\nx2 = X[2];\nmu = X[3];\n"

However, what I really want to see is

function f2(X::Vector)
x1 = X[1];
x2 = X[2];
mu = X[3];

"show(y)" doesn't seem to do what I want.  Later, when I do

eval(parse(y))

then I get a function which executes correctly.  My problem is simply
that I can't get Julia to give me a string I can read easily.  This
will be a very big issue for me when N -> 1024, 2048, etc

Questions:

1.  How can I escape the \n to get a real newline in my displayed
string?

2.  Is this the optimal way to construct a program for later execution
(i.e. metaprogramming)?

Thanks for all wisdom you have to offer.

Stuart





Re: [julia-users] How to display in string?

2015-12-26 Thread Stuart Brorson

2.  Is this the optimal way to construct a program for later execution
(i.e. metaprogramming)?


No. Construct the AST directly. The metaprogramming section of the doc
already covers this.


Thanks for your response.  I'm probably the dumbest person on this
list, so if I have misunderstood something, forgive me.

I did spend a good deal of time reading and re-reading the
metaprogramming section of the online docs, and it wasn't clear to me
how to construct the AST directly.  There are some one-line examples
in the doc, but my program will eventually be a couple of hundred
lines long, and it wasn't clear how to scale out given the examples in
the doc.

Actually, I plan to have a set of programs whose internal structure is
controlled by a parameter.  (That is, I'm solving a set of N
polynomial equations, where N is a parameter.)  Since I'm quite used
to using Perl (and Python) to generate mid-sized text documents, I
opted for the string-based approach, but am happy to use the AST
directly if that's a viable approach.

I looked through the examples included in the Julia distribution, but
nothing struck me as applicable (however, I'm a newbie).  Are there
scaled-out examples of metaprogramming available on the web somewhere?
Examples where the structure of generated program itself may be
controlled by a parameter?

Thanks,

Stuart


[julia-users] How to display in string?

2015-12-26 Thread Stuart Brorson

Julia users,

I'm fiddling around with Julia's strings & metaprogramming.  I am
constructing a function by concatenating a bunch of strings together
to create my function, like this:

function return_fcn(N)
  P = string("function f$N(X::Vector)\n")
  for i in 1:N
P = string(P, "x$i = X[$i];\n");
  end
  P = string(P, "mu = X[$(N+1)];\n")
  etc

When I execute this code, I get:

julia> y = return_fcn(2)
"function f2(X::Vector)\nx1 = X[1];\nx2 = X[2];\nmu = X[3];\n"

However, what I really want to see is

function f2(X::Vector)
x1 = X[1];
x2 = X[2];
mu = X[3];

"show(y)" doesn't seem to do what I want.  Later, when I do

eval(parse(y))

then I get a function which executes correctly.  My problem is simply
that I can't get Julia to give me a string I can read easily.  This
will be a very big issue for me when N -> 1024, 2048, etc

Questions:

1.  How can I escape the \n to get a real newline in my displayed
string?

2.  Is this the optimal way to construct a program for later execution
(i.e. metaprogramming)?

Thanks for all wisdom you have to offer.

Stuart


Re: [julia-users] A trivial multiplication and Julia is giving wrong output. Why ?

2015-12-23 Thread Stuart Brorson

Julia is assuming your inputs are 32 bit integers.  The result of
multiplying them is larger than 2^32-1.  Therefore, the integer
multiplication is performed modulo 2^32, which is what the hardware
does naturally.

Try doing typeof(1234567) to see what type Julia thinks your numbers
are.

Read more about it here: 
http://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/
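
For instance, on a 32-bit build:

julia> typeof(1234567)             # literals default to Int32 on 32-bit Julia
Int32

julia> 1234567 * 12345678          # wraps modulo 2^32
-1272282078

julia> Int64(1234567) * 12345678   # promote first and the overflow goes away
15241566651426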


Stuart



On Wed, 23 Dec 2015, Ümit Özden wrote:




Folks,
I am using Julia 0.4.1 on Windows 10. I did the following trivial
multiplication and Julia is giving the wrong output. Is there any
problem with my installation of Julia?

In [2]:

1234567*12345678

Out[2]:

-1272282078


The correct output should be: 15241566651426

Thank you.



Re: [julia-users] higher rank sparse matrices

2015-09-12 Thread Stuart Brorson

On Sat, 12 Sep 2015, Frank Kampas wrote:


Is it possible to create sparse matrices with a rank other than 2?


I'll let the Julia experts answer this question (I suspect the answer
is no).  However, I'm curious -- what is your use-case?  Do you have a
problem requiring sparse arrays of dimension 3 or more?  What is the
problem?

From my limited knowledge, sparse matrices arise from a small set of
typical problems:

*  Differentiation operators in finite difference problems.
*  Finite element analysis 
*  Representation of graphs


What kinds of problems would require sparse arrays of more than 2
dimensions?

Stuart



Re: [julia-users] ANN: support for the original role of quiet NaNs

2015-08-03 Thread Stuart Brorson

On Sun, 2 Aug 2015, Jeffrey Sarnoff wrote:


Quiet NaNs (QNaNs) were introduced into the Floating Point Standard as a
tool for applied numerical work.  That's why there are so many of them
(Float64s have nearly 2^52 of them, Float32s have nearly 2^23 and Float16s
have nearly 2^10 QNaNs).  AFAIK Julia and most other languages use one or
two of each in most circumstances.  Half of the QNaNs are in some sense
positive and the other half negative (their sign bits can be queried, even
though they are not magnitudes).  While QNaNs are unordered by definition,
they each have an embedded *payload:* an embedded integer value that exists
to be set with information of reflective value. And then to carry it,
propagating through the rest of the numerical work so it becomes available
for use by the designer or investigator.


A logical application for the many different quiet NaNs is to encode
different types of meta-numeric value.  Of course, there is the basic
NaN, for example 0/0 => NaN.  However using a different payload in the
NaN might be used to signal NA (i.e. missing).  One can think of many
other fault states which arise in numerical computing with real data,
such as "value out of bounds", "invalid value", etc.  All these
different states might be encoded using NaNs of different payloads.
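
To make that concrete, here is a bit-twiddling sketch of stuffing and
recovering a payload in a quiet Float64 NaN (nothing in Julia's
standard library does this for you, as far as I know):

payload = UInt64(42)
bits = 0x7ff8000000000000 | payload    # all-ones exponent + quiet bit + payload
x = reinterpret(Float64, bits)
isnan(x)                               # true: it is still just a NaN
reinterpret(UInt64, x) & 0x0007ffffffffffff   # recovers the payload, 42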

The devil is in the details, however.  For example, the missing value
NA propagates differently from the standard NaN.  Consider:

mean([1 2 NA 4 5]) => 3
mean([1 2 NaN 4 5]) => NaN.

Therefore, the function "mean()" needs to know how to treat the NaN
differently from the NA.

Moreover, I believe that to make use of the different NaN payloads,
hardware makers would need to build knowledge of the different NaN
types (and propagation rules) into their floating point ALUs.  Is this
right?

One can implement this scheme in software,
but the problem is that one needs to match the NaN payload in
software, which degrades floating point performance in a big way.
Therefore, standardization and hardware support are important.

My question:  Have any hardware makers ever looked into utilizing the
different NaN payloads for the above scheme?  How about
standardization bodies?

Stuart


Re: [julia-users] Re: Functions in degrees

2015-03-03 Thread Stuart Brorson

On Tue, 3 Mar 2015, Steven G. Johnson wrote:

On Monday, March 2, 2015 at 7:57:10 PM UTC-5, MA Laforge wrote:


The real question is how much programming overhead is required to use
these types (assuming the compiler does the grunt work reducing the
*performance* overhead)



No, the real question is whether the benefit of using special types for
radians vs. degrees would outweigh the costs of retraining every single
user from every single previous mainstream programming language.   To me,
there is no contest -- doing angles differently from every other programming
language would impose enormous costs in teaching, documentation, and
support (imagine the never-ending flood of mailing-list questions) that
would completely overwhelm the benefit of eliminating a "d" suffix on a few
functions and automating a few angle conversions.


+1

One of the reasons the language from the MathWorks company is so
popular is that it lifts the burden of worrying about types, memory
allocation, and a host of other issues from the backs of people who
only want to do math.  Requiring users to specify a type in order to
take a sin or cos (or sind or cosd) will make Julia very unpopular.


From what I understand, Julia uses types sparingly, and for two
reasons:

  *  When needed for high performance.
  *  As a way to do something like object-oriented programming.

As I understand it, types are not systematically required as a basic
part of writing a program, so ordinary users can become effective
at solving their math problems quickly.  This is opposed to, say, C++,
which requires strict typing from the get-go, and is accordingly
difficult for non-programmers to use for doing math.

Since types should be used sparingly, I don't think it is a good idea to
replace convenience functions like sind and cosd with a type-driven
invocation mechanism -- ordinary users will be confused.

Stuart


Re: [julia-users] Re: Functions in degrees

2015-03-01 Thread Stuart Brorson

On Tuesday, February 4, 2014 at 17:57:16 UTC+1, Hans W Borchers wrote:


I was astonished to see the following functions in the Julia Standard
Library,
functions that accept (or return) degrees instead of radians:

sind    asind    secd    asecd
cosd    acosd    cscd    acscd
tand    atand    cotd    acotd

I didn't find these function names in any other technical computing system.


Actually, these are present in Matlab.  Two examples:

http://www.mathworks.com/help/matlab/ref/sind.html
http://www.mathworks.com/help/matlab/ref/acotd.html

Many people come to Julia from Matlab, and may expect to find those
functions present.  One man's namespace clutter is another man's
convenience function.

I'm not arguing for or against inclusion of these fcns in Julia, but I
am pointing out that they have precedent in other, popular computing
systems.

Stuart


Re: [julia-users] ANN: IndexedArrays.jl

2015-02-27 Thread Stuart Brorson

Hi --

Not a criticism, but rather a question.  You put your Julia functions
into IndexedArrays.jl, but put one macro into delegate.jl.  Why?
Is this a stylistic thing (i.e. logically separate macro definitions
from functions)?  Do other Julia developers do this too?  I haven't
seen it before (but I'm no expert).

Thanks,

Stuart


On Thu, 26 Feb 2015, Jim Garrison wrote:

I made an `IndexedArrays` package with an `IndexedArray` data structure. This 
acts like a `Vector` of unique elements, but also maintains the reverse 
mapping in a dictionary that is updated in sync with the vector, allowing for 
quick lookup of the index of any element.  A simple example is probably more 
useful:


   julia> using IndexedArrays

   julia> ia = IndexedArray(["cat", "dog", "mouse"])
   3-element IndexedArray{ASCIIString}:
"cat"
"dog"
"mouse"

   julia> ia[1]
   "cat"

   julia> findfirst(ia, "dog") # executes quickly via a dictionary lookup, not sequential search
   2

More details at https://github.com/garrison/IndexedArrays.jl

In particular, I am looking for

- A link if something similar already exists
- Obligatory bikeshedding on the chosen package name
- Any thoughts/comments/suggestions!

Jim



Re: [julia-users] Rounding and the IEEE rule

2014-12-27 Thread Stuart Brorson

On Sat, 27 Dec 2014, Tamas Papp wrote:


The only reasonably cogent argument I know is mentioned in #8750
(unbiasedness). I am glad that the issue is now closed, since that means
no more electrons are wasted on it, but I am still curious about
practical examples where rounding mode makes a difference in numerical
code. All I have seen rely on ill-conditioning, which is a separate
issue.


My understanding is that the ability to round up or down is useful for
examining your computation's sensitivity to round off errors.  This
use case assumes you can manually set the rounding mode to one of the
flavors supported by IEEE-754.  For example, you'd set rounding to
"round towards +inf", do your computation and get your result.  Then
you'd set rounding to "round towards -inf", and do the same thing.
Then compare the two results.  If they're "close enough", then your
computation is robust.  If the results differ wildly, then you have a
problem -- probably your computation is ill conditioned, as you say.

FWIW, this is similar to (but not the same as) "interval arithmetic",
another way to quantify your computation's sensitivity to round-off
errors.

http://en.wikipedia.org/wiki/Interval_arithmetic

I don't know if Julia supports the ability to change rounding mode on
the fly.  However, I believe that the Intel C compiler does support
changing rounding modes at compile time, albeit using compiler flags
-- not as convenient as issuing a library call or typing a command at
a command prompt.
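
For what it's worth, here is a sketch of the sensitivity experiment
using the with_rounding function that I believe recent Julia versions
provide:

# Round the same computation toward +Inf and toward -Inf, then compare.
hi = with_rounding(Float64, RoundUp) do
    1.0/3.0 + 1.0/7.0
end
lo = with_rounding(Float64, RoundDown) do
    1.0/3.0 + 1.0/7.0
end
hi - lo    # a few ulps if the computation is well behaved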

Stuart


Re: [julia-users] Best approach to transform equations?

2014-11-18 Thread Stuart Brorson

This type of problem is usually handed by employing a standardized
file format tailored to the problem at hand.  For linear programming
problems, the old, commonly-used format is MPS:

http://en.wikipedia.org/wiki/MPS_(format)

I don't know if there is a format which might be better/newer (if any)
since I don't pretend to be an expert in this area.

Nonetheless, I think your best bet would be to find your preferred
file format for LP problems and write an importer for it (or find one
which somebody has already created).  Don't try to write a parser for
your own format since it's a pain to do, and the problem has already
been solved a number of times.

Best of luck,

Stuart



On Tue, 18 Nov 2014, Bodo Kaiser wrote:


In university we have a course where you define a "linear program" to
find out which product or product combination is best to produce to
have maximum revenue.

The basic approach is to define a target equation:

z = 4x + 3y


x := amount of product 1
y := amount of product 2

which we maximize under some side conditions:

1. 5x + 10y ≤ 100
2. x ≥ 20


The first case could mean that we have 100h of working hours as capacity
where it takes 5h to produce product 1 and 10h to produce product 2.
The second case could mean that we have to supply the market with a minimum
amount of 20 units of product 1.

You can put this into a matrix:

A = [5 10 1 0 100; 1 0 0 1 20; 4 3 0 0 -1]


and apply the "Simplex" algorithm to find the value of x and y where z
reaches maximum without breaking side conditions.

To cut a long story short, to apply the simplex algorithm we need to
transform all conditions into a "canonical form" (e.g. convert
greater-than constraints to less-than constraints).
If I now want to write a Julia script which solves linear
programs for me, I need to "read" the equation and change it.
Do I need to write some sort of string-based equation parser, or is there
some package which would do this for me?

Best,
Bo



Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stuart Brorson

Just to follow up on this topic, here's what Matlab does:


2*10.97 + 23.9985


ans =

   45.9385


2*10.97 + 23.9985 == 45.9385


ans =

 0


format long
2*10.97 + 23.9985


ans =

  45.938500000000005


2*10.97 + 23.9985 == 45.938500000000005


ans =

 1

Note that Matlab's display value is 45.9385, but the actual result is
45.938500000000005.  You only see all the digits if you set the
display format accordingly ("format long").

Stuart



On Wed, 5 Nov 2014, K Leo wrote:


julia> 2*10.97 + 23.9985
45.938500000000005

julia> 2*10.97 + 23.9985 == 45.938500000000005
true

Amazing.  I never expected this.  Is floating point comparison going to be 
guaranteed?


On 2014-11-05 08:48, Stefan Karpinski wrote:
Some systems round their answers as John said but it's easy to check that 
it's a lie:


R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does this 
too.


On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan wrote:


Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
wrote:

Hi Neil,

Julia does math the same way that all computers do math.
You're probably coming from another language where a lot of
effort is invested into pretending that computers offer a
closer approximation to abstract mathematics than they
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan
 wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
>
> julia> f(10.97,23.9985)
> 45.938500000000005
>
> The above method execution of function f returns an answer
that I cannot understand.  Can someone clarify?
>
> Thank you.







Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stuart Brorson

Don't know what you mean by "guaranteeing a floating point comparison".

In any event, you should never check equality when comparing floating
point numbers (except perhaps in special cases).  Instead, use a
tolerance:

tol = 1e-12;
if (abs(a-b) < tol)
  # Close enough
else
  # not equal
end

The problems you see in other languages are due to the fact they round
the values they display (but the underlying values are not rounded).
The return you got from Juila was not rounded for display.

Stuart


On Wed, 5 Nov 2014, K Leo wrote:


julia> 2*10.97 + 23.9985
45.938500000000005

julia> 2*10.97 + 23.9985 == 45.938500000000005
true

Amazing.  I never expected this.  Is floating point comparison going to be 
guaranteed?


On 2014-11-05 08:48, Stefan Karpinski wrote:
Some systems round their answers as John said but it's easy to check that 
it's a lie:


R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does this 
too.


On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan wrote:


Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
wrote:

Hi Neil,

Julia does math the same way that all computers do math.
You're probably coming from another language where a lot of
effort is invested into pretending that computers offer a
closer approximation to abstract mathematics than they
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan
 wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
>
> julia> f(10.97,23.9985)
> 45.938500000000005
>
> The above method execution of function f returns an answer
that I cannot understand.  Can someone clarify?
>
> Thank you.







Re: [julia-users] simultaneous sin and cos: sincos in julia from openlibm?

2014-07-28 Thread Stuart Brorson

Yup, you certainly don't want to use CORDIC.  My e-mail is just a
historical note.

The standard (fdlibm) implementation of sin & cos involves
summing a polynomial with 6 or 8 coefficients (after folding down to
the first quadrant).  Here's the kernel:

http://www.netlib.org/fdlibm/k_sin.c

This impl seems pretty quick, IMO.  I'd wager that
there's probably not much room for improvement over the existing,
separate implementations.

Stuart


On Mon, 28 Jul 2014, Stefan Karpinski wrote:


This CORDIC explanation seems quite plausible. However, it seems like we
probably don't want to use this algorithm, which means that we're not at
this point able to get any kind of speedup for sincos(x) relative to
sin(x), cos(x).


On Mon, Jul 28, 2014 at 10:13 AM, Stuart Brorson  wrote:


A bell rang in the back of my head as I was on my way to work this
morning.  I was thinking about sincos again, and remembered something
about CORDIC algorithms from the distant past.  These are add and
shift algorithms used to compute certain trig and other elementary
functions.  They were very popular for scientific calculators back when
hand-held calculators were new since they are easily implementable in
hardware, and don't require floating point multiply.  A couple of
references:

http://en.wikipedia.org/wiki/CORDIC
http://ww1.microchip.com/downloads/en/AppNotes/01061A.pdf

It appears that a common CORDIC computation will produce sin & cos
together.  I'll bet dollars to doughnuts (without actually knowing)
that the x87 assembly instruction mentioned below was doing a CORDIC
computation, and it made sense to return both sin & cos since
they were computed together.

The paper by Jeannerod & JourdanLu refers to CORDIC methods, but is
apparently an extension, as far as I can tell.

Stuart




On Mon, 28 Jul 2014, Simon Byrne wrote:

 I've often wondered this myself. As I understand it, the purpose of the

sincos function was to call the FSINCOS assembly instruction for x87 FPU.
On modern processors however, it is generally acknowledged that calling a
well-written math library compiled to use SSE instructions is typically
faster (and can be more accurate) than using x87 trig instructions. See
this discussion:
http://stackoverflow.com/questions/12485190/calling-fsincos-instruction-in-llvm-slower-than-calling-libc-sin-cos-functions

Calling sincos using Isaiah's method seems to be about 9 times slower than
calling the sin and cos separately:
https://gist.github.com/734dcacde1f107397b3b.git
though a lot of this seems to be due to the overhead in creating and
destroying the arrays for return values.


On Monday, 28 July 2014 13:34:53 UTC+1, Kevin Squire wrote:



This paper seems relevant, though possibly only for 32-bit:


http://hal.archives-ouvertes.fr/docs/00/67/23/27/PDF/Jeannerod-JourdanLu.pdf

Cheers,
   Kevin

On Monday, July 28, 2014, Stuart Brorson wrote:

 On Sun, 27 Jul 2014, Viral Shah wrote:


 Is sincos a standard libm function?





Out of curiosity I looked into sincos since I had never heard of it.
A quick check shows there's no sincos in fdlibm
(on netlib).  However, a little Googling reveals an old Sun math
library libsunmath seems to implement it.

I did find a couple of libm variants which implemented sincos.
However, they simply called sin & cos separately.  As Stefan says
upthread, no performance improvement.

As far as I know, sin & cos are usually computed using mod to fold
the input x down to the first quadrant, and then using a power series
(needs only 6 or 8 terms IIRC) to compute the function.  Perhaps
libsunmath computed e.g. cos first, and then did sin = sqrt(1 -
cos^2)?  Taking the sqrt seems non-performant compared to evaluating a
short power series, but maybe they had a reason?  Another thought: sin
and cos are reflections of each other (over the line x = pi/4) in the
first quadrant.  Perhaps there some other clever way to get sin from
cos?  I couldn't think if any in the short time I spent considering
it.

Stuart


On Sun, 27 Jul 2014, Viral Shah wrote:

 Is sincos a standard libm function?



Also, I wonder if creating the one entry array is too expensive, and if
we
should just call sin and cos separately. The vectorized version may be
able
to benefit from calling sincos directly.

-viral

On Monday, July 28, 2014 1:02:06 AM UTC+5:30, Isaiah wrote:



It doesn't appear to be wrapped, but you can call it yourself like
this:

julia> sincos(x) = begin psin = Cdouble[0]; pcos = Cdouble[0];
ccall(:sincos, Void, (Cdouble, Ptr{Cdouble}, Ptr{Cdouble}), x, psin,
pcos);
(psin[1], pcos[1]); end
sincos (generic function with 1 method)

julia> sincos(pi)
(1.2246467991473532e-16,-1.0)

Feel free to open an issue or pull request if you think it should be
exported - might have just been an oversight.


On Sun, Jul 27, 2014 at 3:25 PM, Ken B 
wrote:

 Hi,



I want to calculate sine and cosine together of the same angle.

Re: [julia-users] simultaneous sin and cos: sincos in julia from openlibm?

2014-07-28 Thread Stuart Brorson

A bell rang in the back of my head as I was on my way to work this
morning.  I was thinking about sincos again, and remembered something
about CORDIC algorithms from the distant past.  These are add and
shift algorithms used to compute certain trig and other elementary
functions.  They were very popular for scientific calculators back when
hand-held calculators were new since they are easily implementable in
hardware, and don't require floating point multiply.  A couple of
references:

http://en.wikipedia.org/wiki/CORDIC
http://ww1.microchip.com/downloads/en/AppNotes/01061A.pdf

It appears that a common CORDIC computation will produce sin & cos
together.  I'll bet dollars to doughnuts (without actually knowing)
that the x87 assembly instruction mentioned below was doing a CORDIC
computation, and it made sense to return both sin & cos since
they were computed together.

The paper by Jeannerod & JourdanLu refers to CORDIC methods, but is
apparently an extension, as far as I can tell.
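
For the curious, here is a toy rotation-mode CORDIC in Julia -- purely
a sketch (real implementations use fixed point and table lookups, and
this one only converges for |theta| up to about 1.74 rad):

function cordic_sincos(theta, iters = 40)
    angles = [atan(2.0^-i) for i in 0:iters-1]    # table of atan(2^-i)
    K = prod([1 / sqrt(1 + 2.0^(-2i)) for i in 0:iters-1])   # gain correction
    x, y, z = 1.0, 0.0, theta
    for i in 0:iters-1
        d = z >= 0 ? 1.0 : -1.0       # rotate to drive the residual angle to 0
        x, y = x - d * y * 2.0^-i, y + d * x * 2.0^-i
        z -= d * angles[i+1]
    end
    return (K * y, K * x)             # (sin(theta), cos(theta))
end

With 40 iterations, cordic_sincos(0.5) matches (sin(0.5), cos(0.5)) to
roughly 1e-12 -- both values falling out of the same loop.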

Stuart



On Mon, 28 Jul 2014, Simon Byrne wrote:


I've often wondered this myself. As I understand it, the purpose of the
sincos function was to call the FSINCOS assembly instruction for x87 FPU.
On modern processors however, it is generally acknowledged that calling a
well-written math library compiled to use SSE instructions is typically
faster (and can be more accurate) than using x87 trig instructions. See
this discussion:
http://stackoverflow.com/questions/12485190/calling-fsincos-instruction-in-llvm-slower-than-calling-libc-sin-cos-functions

Calling sincos using Isaiah's method seems to be about 9 times slower than
calling the sin and cos separately:
https://gist.github.com/734dcacde1f107397b3b.git
though a lot of this seems to be due to the overhead in creating and
destroying the arrays for return values.


On Monday, 28 July 2014 13:34:53 UTC+1, Kevin Squire wrote:


This paper seems relevant, though possibly only for 32-bit:


http://hal.archives-ouvertes.fr/docs/00/67/23/27/PDF/Jeannerod-JourdanLu.pdf

Cheers,
   Kevin

On Monday, July 28, 2014, Stuart Brorson wrote:


On Sun, 27 Jul 2014, Viral Shah wrote:

 Is sincos a standard libm function?




Out of curiosity I looked into sincos since I had never heard of it.
A quick check shows there's no sincos in fdlibm
(on netlib).  However, a little Googling reveals an old Sun math
library libsunmath seems to implement it.

I did find a couple of libm variants which implemented sincos.
However, they simply called sin & cos separately.  As Stefan says
upthread, no performance improvement.

As far as I know, sin & cos are usually computed using mod to fold
the input x down to the first quadrant, and then using a power series
(needs only 6 or 8 terms IIRC) to compute the function.  Perhaps
libsunmath computed e.g. cos first, and then did sin = sqrt(1 -
cos^2)?  Taking the sqrt seems non-performant compared to evaluating a
short power series, but maybe they had a reason?  Another thought: sin
and cos are reflections of each other (over the line x = pi/4) in the
first quadrant.  Perhaps there some other clever way to get sin from
cos?  I couldn't think if any in the short time I spent considering
it.

Stuart


On Sun, 27 Jul 2014, Viral Shah wrote:

 Is sincos a standard libm function?


Also, I wonder if creating the one entry array is too expensive, and if
we
should just call sin and cos separately. The vectorized version may be
able
to benefit from calling sincos directly.

-viral

On Monday, July 28, 2014 1:02:06 AM UTC+5:30, Isaiah wrote:



It doesn't appear to be wrapped, but you can call it yourself like this:

julia> sincos(x) = begin psin = Cdouble[0]; pcos = Cdouble[0];
ccall(:sincos, Void, (Cdouble, Ptr{Cdouble}, Ptr{Cdouble}), x, psin,
pcos);
(psin[1], pcos[1]); end
sincos (generic function with 1 method)

julia> sincos(pi)
(1.2246467991473532e-16,-1.0)

Feel free to open an issue or pull request if you think it should be
exported - might have just been an oversight.


On Sun, Jul 27, 2014 at 3:25 PM, Ken B 
wrote:

 Hi,


I want to calculate sine and cosine together of the same angle. I saw
this function is implemented in openlibm, but is it available in Julia,
and how?


https://github.com/JuliaLang/openlibm/blob/18f475de56ec7b478b9220a5f28eb9a23cb51d96/src/s_sincos.c

Thanks!
Ken







Re: [julia-users] simultaneous sin and cos: sincos in julia from openlibm?

2014-07-28 Thread Stuart Brorson

On Sun, 27 Jul 2014, Viral Shah wrote:


Is sincos a standard libm function?


Out of curiosity I looked into sincos since I had never heard of it.
A quick check shows there's no sincos in fdlibm
(on netlib).  However, a little Googling reveals an old Sun math
library libsunmath seems to implement it.

I did find a couple of libm variants which implemented sincos.
However, they simply called sin & cos separately.  As Stefan says
upthread, no performance improvement.

As far as I know, sin & cos are usually computed using mod to fold
the input x down to the first quadrant, and then using a power series
(needs only 6 or 8 terms IIRC) to compute the function.  Perhaps
libsunmath computed e.g. cos first, and then did sin = sqrt(1 -
cos^2)?  Taking the sqrt seems non-performant compared to evaluating a
short power series, but maybe they had a reason?  Another thought: sin
and cos are reflections of each other (over the line x = pi/4) in the
first quadrant.  Perhaps there is some other clever way to get sin from
cos?  I couldn't think if any in the short time I spent considering
it.

Stuart


On Sun, 27 Jul 2014, Viral Shah wrote:


Is sincos a standard libm function?

Also, I wonder if creating the one entry array is too expensive, and if we
should just call sin and cos separately. The vectorized version may be able
to benefit from calling sincos directly.

-viral

On Monday, July 28, 2014 1:02:06 AM UTC+5:30, Isaiah wrote:


It doesn't appear to be wrapped, but you can call it yourself like this:

julia> sincos(x) = begin psin = Cdouble[0]; pcos = Cdouble[0];
ccall(:sincos, Void, (Cdouble, Ptr{Cdouble}, Ptr{Cdouble}), x, psin, pcos);
(psin[1], pcos[1]); end
sincos (generic function with 1 method)

julia> sincos(pi)
(1.2246467991473532e-16,-1.0)

Feel free to open an issue or pull request if you think it should be
exported - might have just been an oversight.


On Sun, Jul 27, 2014 at 3:25 PM, Ken B  wrote:


Hi,

I want to calculate sine and cosine together of the same angle. I saw
this function is implemented in openlibm, but is it available in Julia,
and how?


https://github.com/JuliaLang/openlibm/blob/18f475de56ec7b478b9220a5f28eb9a23cb51d96/src/s_sincos.c

Thanks!
Ken






Re: [julia-users] Re: Functions in degrees

2014-02-04 Thread Stuart Brorson

These functions are included in Matlab as convenience functions.  They
do get used frequently by folks working with real-world measurements.
It is true that one can always do rad = (x/180)*pi, but this
introduces the possibility of one ULP error in the calling arg.  Not
big, admittedly, but try doing sind(180) vs. sin(pi) and see what the
results are.
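
For instance:

julia> sind(180)
0.0

julia> sin(pi)
1.2246467991473532e-16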

In any event, keeping them around as convenience functions is useful
for folks coming from (or porting code from) at Matlab background.

Just my opinion.

Stuart


On Tue, 4 Feb 2014, Johan Sigfrids wrote:


I agree with this. It seems like a lot of extra namespace usage. Besides,
if degrees2radians() is too long, you could always define a constant and use
that:

const DEGREE = pi/180
sin(90DEGREE)



On Tuesday, February 4, 2014 6:57:16 PM UTC+2, Hans W Borchers wrote:


I was astonished to see the following functions in the Julia Standard
Library,
functions that accept (or return) degrees instead of radians:

sind    asind    secd    asecd
cosd    acosd    cscd    acscd
tand    atand    cotd    acotd

I didn't find these function names in any other technical computing system.
Each of these functions can easily be reconstructed by the user applying
the
functions degrees2radians() and radians2degrees().

I feel these function names clutter the namespace of Julia Base without
being
of any real value. Therefore, I would like to vote for deprecating use of
these
functions in the next version of Julia.

Hans Werner