[julia-users] option to return success flag in chol() and constructor for Triangular

2015-05-27 Thread Roy Wang
Is there an easy way to return the LAPACK success flag of a Cholesky 
decomposition? I am currently calling 
(B, success_flag) = Base.LinAlg.LAPACK.potrf!(my_inputs) to get it, but I 
suspect this is not the best approach.

Is there a constructor for the Triangular type? I wish to make B into a 
Triangular type.
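
A minimal sketch of the flag-checking route mentioned above, assuming the 
0.3/0.4-era LAPACK wrapper signature; the example matrix is made up, and the 
UpperTriangular comment refers to the 0.4-era wrapper (an assumption about 
that API):

A = [4.0 2.0; 2.0 3.0]                             # made-up symmetric positive definite matrix
B, info = Base.LinAlg.LAPACK.potrf!('U', copy(A))  # info == 0 signals success
if info != 0
    warn("potrf! failed: info = $info")            # info > 0: matrix not positive definite
end
# On 0.4-era Julia, UpperTriangular(B) should wrap the factor without copying (assumption).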


[julia-users] questions about the format of function arguments in the documentation

2014-07-29 Thread Roy Wang
Hi everyone, I started using Julia last Saturday for my PhD work. My 
background in programming is mostly C++, C, and some 
non-performance-oriented MATLAB. I have never used R. Despite my lack of 
experience with modern programming languages, I was able to convert some of 
my previous prototype code from MATLAB within hours. Thank you all for your 
hard work in contributing to this language! 

I have some questions about the format of the documentation. Consider the 
*sparse(I, J, V[, m, n, combine])* function. After some trial-and-error at 
3am, I finally figured out that I just add *m* and *n* to the argument list 
of *sparse()* if I wish to specify the size of the sparse matrix with this 
constructor function. 
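
For example, a small made-up sketch of the two call forms (variable names 
are illustrative; they play the roles of I, J, V):

rows = [1, 2, 3]; cols = [1, 2, 3]; vals = [10.0, 20.0, 30.0]
S1 = sparse(rows, cols, vals)          # size inferred from the largest indices: 3x3
S2 = sparse(rows, cols, vals, 5, 5)    # the optional m, n arguments force a 5x5 matrix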

1) Does the notation [, blah, blah2, ...] in the documentation mean that 
"blah, blah2, ..." are all optional arguments? 

2) If this is a standard notation across other modern object-oriented 
languages, would it be possible to add a Wikipedia link for this notation 
and put it in the tutorial or on the documentation page? If this isn't a 
standard notation, would it be possible to include a short example in the 
documentation?


[julia-users] Re: questions about the format of function arguments in the documentation

2014-07-29 Thread Roy Wang

Thanks, Ivar! I will review this thread. I wish the documentation alerted 
readers that [, a, b, ...] means optional parameters. 


[julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang
I have some ideas from my experience in C++11, but I'd like to learn the 
proper "Julian" way :)

My goal is to implement a function that computes an Array. I do this if I 
want speed:

function computestuff!(A::Array{FloatingPoint,1})
    fill!(A, 0); # reset array values in place
    # modify the contents of A
end

# in the calling routine...
A::Array{FloatingPoint,1} = Array(FloatingPoint, 5);
computestuff!(A);

Question 1: Is there an even faster way in Julia?


Question 2: I wish to hide the details inside that function (i.e. allocate 
A to the right size inside the function) without sacrificing speed. This 
way, in the calling routine, I can just write

#in the calling routine...
A=computestuff(5);


Is this possible?
I think any new memory allocated inside that function will be 
undefined/freed once the function exits. If the pointer A is assigned to 
this memory, I'd get undefined results. Julia probably guards against this 
kind of situation and assigns a deep copy instead, which would slow things down.

Thanks!
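
A minimal sketch of the allocate-inside-and-return version in question 
(names are illustrative):

function computestuff(n::Int)
    A = zeros(Float64, n)    # allocated inside the function
    # ... fill in the contents of A ...
    return A                 # arrays live on the heap; returning one passes a
end                          # reference, not a copy, and the GC keeps it alive

A = computestuff(5)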




Re: [julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang

Thanks Patrick. So to clarify: *FloatingPoint* is not a concrete type, so 
explicitly typing variables or function inputs with it will not speed 
things up. Instead, I should use *Float64*, *Float32*, etc. 

Is *Int* an abstract type as well? I'm wondering if I should go back and 
rename everything *my_var::Int* to *my_var::Int32*.


On Monday, 25 August 2014 14:54:14 UTC-4, Patrick O'Leary wrote:
>
> On Monday, August 25, 2014 12:28:00 PM UTC-5, John Myles White wrote:
>>
>> Array{FloatingPoint} isn't related to Array{Float64}. Julia's type system 
>> always employs invariance for parametric types: 
>> https://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)
>>  
>> 
>>
>
> To underline this point a bit, it's even a bit worse than that: 
> Array{FloatingPoint} will work just fine for a lot of things, but it stores 
> all elements as heap pointers, so array-like operations (such as linear 
> algebra routines) will often be extremely slow.
>
> As a rule, you almost never use an abstract type as the type parameter of 
> a parametric type for this reason. Where you wish to be generic over a 
> specific family of types under an abstract type, you can use type 
> constraints:
>
> function foo{T<:FloatingPoint}(src::Array{T,1})
>  ...
> end
>
> But often type annotations can be omitted completely.
>
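
A quick check of the invariance point quoted above (0.3/0.4-era type names; 
FloatingPoint was later renamed AbstractFloat):

Float64 <: FloatingPoint                      # true: the element types are related
Array{Float64,1} <: Array{FloatingPoint,1}    # false: parametric types are invariant

# the constrained signature quoted above accepts any concrete float vector:
bar{T<:FloatingPoint}(src::Array{T,1}) = eltype(src)
bar([1.0, 2.0])        # Float64
bar([1.0f0, 2.0f0])    # Float32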


Re: [julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang

Thanks guys. So to clarify: FloatingPoint is not a concrete type, so 
explicitly typing variables or function inputs with it will not speed 
things up. Instead, I should use Float64, Float32, etc.

Is Int an abstract type as well? I'm wondering if I should go back and 
rename everything my_var::Int to my_var::Int32.

John: I couldn't find the mutate!() function in the Julia Standard Library 
v0.3. Do you mean my own function that mutates the source array?



Re: [julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang
Thanks Tom. Phew, that's what I suspected. 

I glanced at boot.jl, and it doesn't seem that Julia has a typealias for 
doubles. I'll define my own to check for 32- vs. 64-bit systems.

On Monday, 25 August 2014 15:10:30 UTC-4, Tomas Lycken wrote:
>
> Actually, Int (and UInt) are aliases to the “native size integer”, so if 
> you specify Int you will get Int32 on a 32-bit system and Int64 on a 
> 64-bit system. So no, don’t change my_var::Int to my_var::Int32 - that’ll 
> make your code *worse* on 64-bit systems ;)
>
> // T
>
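
A quick way to check this on a given machine (the comments assume a 64-bit 
session):

Int === Int64    # true on a 64-bit system; on 32-bit, Int === Int32 instead
sizeof(Int)      # 8 on a 64-bit system, 4 on a 32-bit system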


Re: [julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang
I didn't know Float64 and Float32 are the same on 32-bit and 64-bit systems. Thanks.

On Monday, 25 August 2014 15:48:30 UTC-4, Tobias Knopp wrote:
>
> That's for a reason: Float64 and Float32 are the same on 64- and 32-bit 
> computers. It's only for the integer types that the word size matters.
>

Re: [julia-users] Question about returning an Array from a function

2014-08-25 Thread Roy Wang
Okay guys, thanks!

On Monday, 25 August 2014 17:59:39 UTC-4, John Myles White wrote:
>
> Yes, I meant for mutate! to be your mutating implementation of the 
> function in question.
>
>  -- John
>

[julia-users] copy assignment operation on existing and valid destination arrays

2014-10-02 Thread Roy Wang
I often need a "copy assignment" type of operation on an existing 
destination array of the exact same element type and size. Let's only talk 
about arrays of concrete types, like a multi-dimensional array of floats. 
This is useful when I write optimization solvers, and I need to store 
vectors or matrices from the previous step. I usually pre-allocate a pair 
of arrays of the same type and size, *x* and *x_next*, then do:

*x_next = x;*
 
at the end of each iteration of my solver.

At first, I thought using *copy()* (a shallow copy) on them is fine to make 
sure they are separate entities, since floating-point numbers and integers 
are concrete types in Julia. While I verified this is true (at least on 
arrays of Float64s), I looked at abstractarray.jl (around line 202 at the 
time of this post), and *copy()* seems to call *copy!(similar(a), a)*. To my 
understanding, this allocates a new destination array, fills it with the 
corresponding values from the source array, then assigns the pointer of 
this new destination array to *x_next*, and the garbage collector removes 
the old array that *x_next* was pointing to. This is a lot of work when I 
just want to traverse *x_next* and assign it the corresponding values 
from *x*. Please correct me if my understanding is wrong!

This is a really common operation. I'd appreciate it if someone can advise 
me whether there is already an existing method for doing this (or a better 
solution) before I write my own.

Cheers,

Roy


[julia-users] Re: copy assignment operation on existing and valid destination arrays

2014-10-02 Thread Roy Wang

This kind of routine is what I'm talking about...

# copy assignment for vectors
function copyassignment!(a::Vector, b::Vector)
    @assert length(a) == length(b)
    for n = 1:length(a)
        a[n] = b[n];
    end
end

My questions:
1) Is there a standard function that does this?
2) Is there a better way to do this so it'll handle any type of 
multi-dimensional array of integers and floats without performance penalty?




[julia-users] Re: copy assignment operation on existing and valid destination arrays

2014-10-02 Thread Roy Wang

Oops, add 
@assert eltype(a) == eltype(b)
to the checks too, otherwise there might be an InexactError when mixing 
integers and floats.




Re: [julia-users] Re: copy assignment operation on existing and valid destination arrays

2014-10-02 Thread Roy Wang
Hey John,

Ah geez, copy!() was only 2 lines lower than copy() in abstractarray.jl. 
Thanks!


On Thursday, 2 October 2014 18:25:22 UTC-4, John Myles White wrote:
>
> Why not use copy!
>
>   -- John
>
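
A minimal sketch of the copy! pattern suggested above (the array size is 
illustrative):

x      = rand(10^6)
x_next = similar(x)
copy!(x_next, x)    # overwrite x_next with x's values in place; no new array is allocated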

Re: [julia-users] Re: copy assignment operation on existing and valid destination arrays

2014-10-02 Thread Roy Wang
Is there a version of copy!() that uses @parallel? My x and x_next are 
usually huge.



Re: [julia-users] Re: copy assignment operation on existing and valid destination arrays

2014-10-21 Thread Roy Wang
This is great, thanks!

I verified that pointer(x) after running your line of code matches what 
pointer(x_next) was before, and that pointer(x_next) afterwards matches 
what pointer(x) was before.

I did not know Julia could do this!

On Saturday, 4 October 2014 11:05:33 UTC-4, David P. Sanders wrote:
>
> Hi, 
>
> Are you sure that you need a copy operation? If I understand correctly 
> what you are doing, you just need access to the x from the previous 
> iteration. 
>
> Could you not just swap x and x_next and avoid the copy:
>
> x, x_next = x_next, x
>
> (so you create them once only). 
>
>
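
A sketch of the swap idea above inside a solver loop (the update step is a 
stand-in):

x      = zeros(5)
x_next = similar(x)
for iter = 1:100
    # ... compute the new iterate into x_next, reading from x ...
    x, x_next = x_next, x    # swap the bindings; no element data is copied
end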

[julia-users] question about storing large read-only structures inside a composite type

2014-10-26 Thread Roy Wang
Hi everyone, I'm wondering how to best store potentially very large 
structures in a composite type. These structures won't be modified after 
initialization. My primary concern right now is performance rather than the 
possibility of accidentally overwriting the entries of these large 
structures. I'm experimenting with defining constants and immutable types. 
Here is a concrete example of my approach using immutable types:

immutable GPDataType
  data_X::Matrix; # could be huge
  data_y::Vector; # could be huge
  # ... and other things
end

type GPType
  data::GPDataType;
  choiceofalgorithm::Function;
  # ... and other things
end

I initialize by doing
my_GP=GPType(GPDataType(X,y,...etc), onewaytoprocessGP, ... etc);


I frequently call the function stored in the choiceofalgorithm field, 
which does a lot of this at the beginning to save the time it takes to 
dereference the structure pointers (a habit I have from C++):
function onewaytoprocessGP(my_GP::GPType)
  data = my_GP.data;
  # ... and other routines that use data in a read-only fashion.
end

My concern with this approach is that the documentation says immutable 
types are passed by value in function calls and assignments. If my 
understanding is correct, then there would be a lot of copies of the huge 
structure each time I do data = my_GP.data;

***
Here is a concrete example of my approach using a constant object of type 
GPDataType:
type GPDataType
  data_X::Matrix; # could be huge
  data_y::Vector; # could be huge
  # ... and other things
end

type GPType
  data::GPDataType;
  choiceofalgorithm::Function;
  # ... and other things
end

I initialize by doing
const tmp=GPDataType(X,y,...etc);
my_GP=GPType(tmp, onewaytoprocessGP, ... etc);

I use the same onewaytoprocessGP(my_GP::GPType) as before.

My concern with this approach is that I'm unsure if this actually creates 
any performance savings over the non-constant object case.

Any constructive advice is greatly appreciated :)

Roy
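
One way to check the copying concern above is to compare pointers after a 
field access; a minimal sketch with small made-up data and a hypothetical 
stand-in type with concrete field types:

immutable GPDataCheck               # hypothetical, small-scale stand-in for GPDataType
    data_X::Matrix{Float64}
    data_y::Vector{Float64}
end

d  = GPDataCheck(rand(3, 3), rand(3))
dX = d.data_X
pointer(dX) == pointer(d.data_X)    # true: field access returns a reference to the
                                    # same array; the matrix contents are not copied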


[julia-users] Looking for suggestions to speed up function calls that have @cpp ccall()

2015-01-26 Thread Roy Wang
Hey everyone,

I'm trying to use @cpp ccall() to call myfunc() in a custom shared library 
I wrote in C++, and I noticed there is some overhead if I put the @cpp 
ccall inside a function.

The relevant portion of my Julia code:

# mypath is a constant string that specifies the path of my custom shared library
function usecppfunc!(out, a, b)
    @cpp ccall((:myfunc, mypath), Void,
               (Ptr{Float64}, Ptr{Float64}, Ptr{Float64}, Int32),
               out, a, b, int32(length(a)));
    return nothing;
end

@time @cpp ccall((:myfunc, mypath), Void,
                 (Ptr{Float64}, Ptr{Float64}, Ptr{Float64}, Int32),
                 out, a, b, int32(length(a)));
@time usecppfunc!(out2, a, b);

The output:

> elapsed time: 1.0682e-5 seconds (48 bytes allocated)
> elapsed time: 0.002791817 seconds (59664 bytes allocated)
>

The @cpp ccall() wrapped in a function is much slower than the one I called 
from the Julia REPL. I suspect there is some kind of garbage collection 
overhead at the end of the usecppfunc!() call. Your help is greatly 
appreciated!


[julia-users] Re: Looking for suggestions to speed up function calls that have @cpp ccall()

2015-01-26 Thread Roy Wang
Thanks guys,

I think that was it.



On Monday, 26 January 2015 16:59:54 UTC-5, Ivar Nesje wrote:
>
> Did you time your function twice? The first time you call a function it 
> needs to be compiled, and for very small tests, that process will dominate.
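
The usual pattern following this point: run the function once so it 
compiles, then time it. The function below is only an illustrative stand-in 
for the wrapped ccall:

f(x) = sum(x)      # stand-in for usecppfunc!
x = rand(10^6)
f(x)               # the first call triggers compilation
@time f(x)         # later calls measure only the actual runtime cost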



[julia-users] quick type-stability question about using similar()

2015-02-19 Thread Roy Wang
I'd appreciate it if someone can help me out with two questions:

I think this function is not a type-stable function. This is because the 
type of out depends on the input A. If it is type-stable, why?
function myfunc(A)
    out = similar(A)

    # performs some computation that is then stored in out
    return out
end


I think this function is a type-stable function. If it isn't, why?
function myfunc!(out, A)

    # performs some computation that mutates out
    return nothing
end

If my conclusions are correct, would you say it is better programming 
practice to use the second function as much as possible?




Re: [julia-users] quick type-stability question about using similar()

2015-02-19 Thread Roy Wang
Thanks for your explanation, Milan!

On Thursday, 19 February 2015 10:29:10 UTC-5, Milan Bouchet-Valat wrote:
>
> On Thursday, 19 February 2015 at 07:10 -0800, Roy Wang wrote: 
> > I'd appreciate it if someone can help me out with two questions: 
> > 
> > I think this function is not a type-stable function. This is because 
> > the type of out depends on the input A. If it is type-stable, why? 
> No, it *is* type stable, precisely because the type of the output 
> depends only on the *type* of the input (or at least it seems it works 
> that way, but without more code it's hard to tell). Type-instability is 
> when the type of the output depends on the *value* of the input. 
>
> > function myfunc(A) 
> > out=similar(A) 
> > 
> > #performs some computation that is then stored in out 
> > return out 
> > 
> > end 
> > 
> > 
> > 
> > I think this function is a type-stable function. If it isn't, why? 
> > function myfunc!(out,A) 
> > 
> > #performs some computation that mutates out 
> > return nothing 
> > 
> > end 
> > 
> > 
> > If my conclusions are correct, would you say it is better programming 
> > practice to use the second function as much as possible? 
> The second function is also type stable (AFAICT). The difference is that 
> it does not create a copy of the input, i.e. it works in place. This can 
> be better programming practice as it allocates less memory. The 
> convention in Julia is to provide both myfunc!() and a convenience 
> wrapper called myfunc() which calls myfunc!(similar(A), A). 
>
>
> Regards 
>
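
A minimal sketch of the convention described above, with an illustrative 
in-place worker and a convenience wrapper that allocates:

function myfunc!(out, A)
    for i = 1:length(A)
        out[i] = 2 * A[i]     # stand-in computation
    end
    return out
end

myfunc(A) = myfunc!(similar(A), A)   # convenience wrapper: allocates, then mutates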