[julia-users] Re: anyway to get immutable constants?

2015-02-19 Thread Petr Krysl
Could not functions be used for this purpose?

Let us say you wish to get a constant array: how about calling a function 
that gives you that array by CONSTRUCTING it on the fly? Since the logic of 
the function is immutable, you effectively get a constant array.
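
A minimal sketch of that idea (reusing the z0/foo names from the example 
below; illustrative only):

z0() = [0, 0]   # the "constant" is constructed afresh on every call

function foo(x)
    y = z0()    # each caller gets its own array
    if x != 0
        y[1] = x
    end
    return y
end

foo(1)  # returns [1, 0]
foo(0)  # returns [0, 0] -- unaffected by the previous call

Alternatively, keeping the const binding and doing y = copy(z0) inside foo 
avoids the surprise as well, at the cost of one copy per call.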

P

On Wednesday, February 18, 2015 at 4:37:08 PM UTC-8, Zouhair Mahboubi wrote:
>
> I think I read somewhere in the docs that constants in Julia can be 
> modified, but only as long as the type is kept the same.
>
> This is not ideal, but at least there’s a warning message about it. Except 
> that if the type is an array, and one modifies an entry in it, or modifies 
> a reference to this const, there is no warning…
>
> const z = [0,0]
> z = [1, 1]; #produces warning
> z[1] = 2; #does not produce warning
> y = z;
> y[1] = 3; #does not produce warning
>
> this has caused me some pain as having code like this can lead to 
> confusing behavior:
>
>
> const z0 = [0,0]
> function foo(x)
> y = z0
> if x != 0
>   y[1] = x
> end
> return y
> end
> foo(0) #returns [0,0]
> foo(1) #returns [1,0]
> foo(0) #returns [1,0]
>
> And of course at no point do I get a warning that z0 was changed…
>
> I guess my question is how do I get around this? do I have to do a 
> copy(z0)? or is there a way to enforce immutability of constants?
>
> Thanks,
> -z
> ​
>


[julia-users] Re: Automatic triangulation package: testing?

2015-02-17 Thread Petr Krysl
Apparently it works fine (Thanks very much, Rob!).

Enjoy, and if you run into any trouble please feel free to let me know.

Petr

On Tuesday, February 17, 2015 at 12:20:22 PM UTC-8, Petr Krysl wrote:
>
> Hi guys,
>
> I have uploaded to Github a Julia package that wraps an automatic 2D 
> triangulation tool (binary). I only have access to Windows and Linux, so I 
> was wondering if anyone would be willing to take it for a spin on the Mac?
>
> https://github.com/PetrKryslUCSD/Targe2.jl
>
> Thanks!
>
> Petr
>


[julia-users] Automatic triangulation package: testing?

2015-02-17 Thread Petr Krysl
Hi guys,

I have uploaded to Github a Julia package that wraps an automatic 2D 
triangulation tool (binary). I only have access to Windows and Linux, so I 
was wondering if anyone would be willing to take it for a spin on the Mac?

https://github.com/PetrKryslUCSD/Targe2.jl
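
To take it for a spin, cloning straight from the repository should be enough 
on a 0.3-era Julia (the module name Targe2 is assumed from the repository 
name):

julia> Pkg.clone("https://github.com/PetrKryslUCSD/Targe2.jl")
julia> using Targe2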

Thanks!

Petr


[julia-users] Re: [ANN] Differential Equation solver test suite

2015-02-03 Thread Petr Krysl
Mauro,

I would be curious to learn details of what you did.

How did you define the reference solution? I presume it is the value of 
the solution at some point in time?
Did you use extrapolation to the limit?

How did you measure the number of significant digits? In particular, for 
the fixed-step integrators one would not expect the increase in accuracy, 
measured in significant digits, to be a nice and smooth function of the 
number of steps (as a proxy for the wall-clock time).

I know I could probably find out all of this by rooting around in your 
code, but I think it's going to be more efficient, if you have the time, to 
simply ask for an explanation.

Thanks a lot,

Petr


On Friday, January 30, 2015 at 6:03:24 AM UTC-8, Mauro wrote:
>
> Hi all, 
>
> those of you who solve initial value problems (IVP) of ordinary and 
> algebraic differential equations (ODE/DAE) might be interested in: 
> https://github.com/mauro3/IVPTestSuite.jl It provides several test cases 
> for ODE and DAE solvers based on previous, well-known test sets.  It can 
> easily be adapted to run with your solvers. 
>
> I ran the tests for the three ODE-solver packages ODE.jl, DASSL.jl and 
> Sundials.jl.  Results are here: 
> https://github.com/mauro3/IVPTestSuite.jl/blob/master/results/results.md 
>
> I found: 
> - DASSL.jl seems to be as capable as the Sundials solvers if sometimes 
>   slower but also sometimes faster. 
> - For large(-ish) systems, e.g. stemming from PDEs, DASSL.jl seems to 
>   be the best and fastest choice at the moment because of the patchy 
> support of 
>   sparse Jacobians in Sundials. (please correct me if wrong). 
> - ODE.ode23s does ok too but is generally a lot slower than DASSL, 
>   presumably because of its lower order. 
>
> However, take those results with a grain of salt as I haven't spent too 
> much time optimising yet. 
>
> -- Mauro 
>


Re: [julia-users] Re: workflow recommendation/tutorial

2015-01-20 Thread Petr Krysl
I think it would be worthwhile to point out that enclosing the code of 
one's "scratch" file in a module has a number of advantages.

1. The global workspace is not polluted with too many variables, and 
conflicts are avoided.
2. The variables defined within that module are accessible from the REPL 
as if the variables were defined at the global level.

Example:

module m1

using JFinEALE

t0 = time()

rho=1.21*1e-9;# mass density
c =345.0*1000;# millimeters per second
bulk= c^2*rho;
Lx=1900.0;# length of the box, millimeters
Ly=800.0; # length of the box, millimeters

fens,fes = Q4block(Lx,Ly,3,2); # Mesh
show(fes.conn)

end

julia> include("./module_env.jl")
Warning: replacing module m1
[1 2 6 5
 5 6 10 9
 2 3 7 6
 6 7 11 10
 3 4 8 7
 7 8 12 11]
julia> m1. # I hit the tab key at this point, and I got this list of 
variables that I can access
Lx    Ly    bulk    c    eval    fens    fes    rho    t0

I'm sure this is no news to the Julian wizards, but to a newbie like me it 
is useful information. (I haven't seen this in the documentation. Perhaps 
it is there, but if that is not the case I would be all for adding it in.)

Petr

On Tuesday, January 20, 2015 at 1:45:02 PM UTC-8, Jameson wrote:
>
> My workflow is very similar. I'll add that I'll make a throwaway module 
> ("MyModuleTests") so that I can use "using" in the test file. Doing this at 
> the REPL (defining a new module directly at the prompt) is also a nice way 
> of encapsulating a chunk of code to isolate it from existing definitions 
> (including old using statements). It's also similar to how I'll use a large 
> begin/end block to group a large chunk of initialization code so that I can 
> iterate and rerun it easily. 
> On Tue, Jan 20, 2015 at 4:34 PM Tim Holy > 
> wrote:
>
>> My workflow (REPL-based, Juno in particular is probably different):
>> - Open a file ("MyModule.jl") that will consist of a single module, and
>> contains types & code
>> - Open a 2nd file ("mymodule_tests.jl") that will be the tests file for 
>> the
>> module. Inside of this file, say `import MyModule` rather than `using
>> MyModule`; you'll have to scope all calls, but that's a small price to 
>> pay for
>> the ability to `reload("MyModule")` and re-run your tests.
>> - Open a julia REPL
>> - Start playing with ideas/code in the REPL. Paste the good ones into the
>> files. And sometimes vice-versa, when it's easier to type straight into 
>> the
>> files.
>> - When enough code is in place, restart the repl. Cycle through
>> reload("MyModule")
>> include("mymodule_tests.jl")
>> 
>> until things actually work.
>>
>> --Tim
>>
>> On Tuesday, January 20, 2015 01:09:01 PM Viral Shah wrote:
>> > This is pretty much the workflow a lot of people use, with a few julia
>> > restarts to deal with the issues a) and b). I often maintain a script as
>> > part of my iterative/exploratory work, so that I can easily get to the
>> > desired state when I have to restart.
>> >
>> > -viral
>> >
>> > On Tuesday, January 20, 2015 at 4:15:13 PM UTC+5:30, Tamas Papp wrote:
>> > > Hi,
>> > >
>> > > I am wondering what the best workflow is for iterative/exploratory
>> > > programming (as opposed to, say, library development).  I feel that my
>> > > questions below all have solutions, it's just that I am not 
>> experienced
>> > > enough in Julia to figure them out.
>> > >
>> > > The way I have been doing it so far:
>> > > 1. open a file in the editor,
>> > > 2. start `using` some libraries,
>> > > 3. write a few functions, load data, plot, analyze
>> > > 4. rewrite functions, repeat 2-4 until satisfied.
>> > >
>> > > I usually end up with a bunch of functions, followed by the actual
>> > > runtime code.
>> > >
>> > > However, I run into the following issues (or, rather, inconveniences)
>> > > with nontrivial code:
>> > >
>> > > a. If I redefine a function, then I have to recompile dependent
>> > > functions, which is tedious and occasionally a source of bugs
>> > > (cf. https://github.com/JuliaLang/julia/issues/265 )
>> > >
>> > > b. I can't redefine types.
>> > >
>> > > I can solve both by restarting (`workspace()`), but then I have to
>> > > reload & recompile everything.
>> > >
>> > > I am wondering if there is a more organized way of doing this --- eg 
>> put
>> > > some stuff in a module in a separate file and just keep reloading 
>> that,
>> > > etc. Any advice, or pointers to tutorials would be appreciated.
>> > >
>> > > I am using Emacs/ESS.
>> > >
>> > > Also, is there a way to unintern symbols (a la CL) that would solve 
>> the
>> > > type redefinition issue?
>> > >
>> > > Best,
>> > >
>> > > Tamas
>>
>>  



[julia-users] Re: What's your favourite editor?

2015-01-19 Thread Petr Krysl
Does anyone use Sublime with a touch screen? I've tried to teach Sublime 
to select with a finger, but no luck so far…
And actually the same thing with the Atom editor: I cannot select with a 
finger; it always results in dragging of the entire pane.

Thanks,

Petr

On Monday, January 19, 2015 at 10:00:17 AM UTC-8, Ariel Keselman wrote:
>
> I'm using sublime, considering atom



[julia-users] SLOW sortrows?

2015-01-17 Thread Petr Krysl
Hi guys,

This one has me scratching my head. 

Matlab code:

function Out = myunique(A)
sA = sort(A,2);
[sA,rix] = sortrows(sA);
d = sA(1:end-1,:)~=sA(2:end,:);
ad = [true; any(d,2)];
iu = find((ad&[ad(2:end);true])==true);
Out = A(rix(iu),:);
end

was rewritten in Julia. Since some of the functionality is different (or 
slower) as written, I had to rewrite a bit. The surprise was that even 
after the rewrite the Julia code runs in around 24 seconds, whereas the 
Matlab code gets it done in two seconds. As I went poking around to find 
what is slow, I found that 95% of the time was spent in the sort() and 
sortrows() functions.

Any idea of what could cause this slowness? By the way, @code_warntype 
gives the code a clean bill of health... So are those two functions somehow 
slow by themselves?

The Julia version of this  is posted at 
https://gist.github.com/PetrKryslUCSD/cde67dfa0f1b0a1f98ac
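
As an aside, a hash-based variant that avoids row sorting altogether might 
look something like the sketch below (illustrative only, not the code in 
the gist; it keeps the first occurrence of each row, treating two rows as 
equal up to a permutation of their entries):

function myunique_hashed(A::AbstractMatrix)
    seen = Set{Vector{eltype(A)}}()
    keep = Int[]
    for i in 1:size(A, 1)
        key = sort(vec(A[i, :]))   # canonical form of the row
        if !(key in seen)
            push!(seen, key)
            push!(keep, i)
        end
    end
    return A[keep, :]   # note: rows come out in original order, not sorted
end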

Thanks,

Petr


Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-15 Thread Petr Krysl
Thanks. Cool ideas!

P

On Thursday, January 15, 2015 at 12:08:05 PM UTC-8, Mauro wrote:
>
> Have you read this recent thread and its examples? 
> https://groups.google.com/d/msg/julia-dev/JEiH96ofclY/_amm9Cah6YAJ 
>
> M 
>
> On Thu, 2015-01-15 at 20:50, Petr Krysl > 
> wrote: 
> > So the "call" syntax on a type works, but using the implementation 
> > referenced above, it is SLOW. 
> > 
> > # Slow, function callback 
> > function test1(f, n) 
> > summ=0 
> > for i=1:n 
> > summ=summ+f(i) 
> > end 
> > return summ 
> > end 
> > 
> > g(i::Int)=i 
> > 
> > # Fast, hardwired function g() 
> > function test2(n) 
> > summ=0 
> > for i=1:n 
> > summ=summ+g(i) 
> > end 
> > return summ 
> > end 
> > 
> > # Fast, using method defined for an object 
> > function test4{T}(o::T, n) 
> > summ=0 
> > for i=1:n 
> > summ=summ+get(o,i) 
> > end 
> > return summ 
> > end 
> > 
> > type MyFun1 
> > end 
> > function get(o::MyFun1,i::Int) 
> > return g(i) 
> > end 
> > 
> > # Slow, using "call" on a singleton 
> > type MyFun3 end 
> > call(::Type{MyFun3},i::Int)=g(i)::Int 
> > 
> > n=10002345; 
> > @time println("test1(g, n) = $( test1(g, n) )") 
> > @time println("test2(n) = $( test2(n) )") 
> > @time println("test4(MyFun1(), n) = $( test4(MyFun1(), n) )")   
> > @time println("test1(MyFun3, n) = $( test1(MyFun3, n) )") 
> > 
> > julia> include("fun.jl") 
> > test1(g, n) = 50023457750685 
> > elapsed time: 0.610016329 seconds (480173520 bytes allocated, 27.80% gc 
> > time) 
> > test2(n) = 50023457750685 
> > elapsed time: 0.003086749 seconds (75304 bytes allocated) 
> > test4(MyFun1(), n) = 50023457750685 
> > elapsed time: 0.004098207 seconds (90436 bytes allocated) 
> > test1(MyFun3, n) = 50023457750685 
> > elapsed time: 0.665630908 seconds (480184232 bytes allocated, 23.21% gc 
> > time) 
>
>

Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-15 Thread Petr Krysl
Aha: 

# Fast, got to declare the object!
function test5{T}(o::Type{T}, n)
    summ = 0
    for i = 1:n
        summ = summ + o(i)
    end
    return summ
end

julia> include("fun.jl")
test1(g, n) = 50023457750685
elapsed time: 0.619951473 seconds (480173520 bytes allocated, 27.32% gc 
time)
test2(n) = 50023457750685
elapsed time: 0.002951912 seconds (75304 bytes allocated)
test4(MyFun1(), n) = 50023457750685
elapsed time: 0.003362795 seconds (90436 bytes allocated)
test1(MyFun3, n) = 50023457750685
elapsed time: 0.661085013 seconds (480184232 bytes allocated, 23.39% gc 
time)
test5(MyFun3, n) = 50023457750685
elapsed time: 0.003583276 seconds (116768 bytes allocated)



Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-15 Thread Petr Krysl
So the "call" syntax on a type works, but using the implementation 
referenced above, it is SLOW.

# Slow, function callback
function test1(f, n)
    summ = 0
    for i = 1:n
        summ = summ + f(i)
    end
    return summ
end

g(i::Int) = i

# Fast, hardwired function g()
function test2(n)
    summ = 0
    for i = 1:n
        summ = summ + g(i)
    end
    return summ
end

# Fast, using method defined for an object
function test4{T}(o::T, n)
    summ = 0
    for i = 1:n
        summ = summ + get(o, i)
    end
    return summ
end

type MyFun1
end
function get(o::MyFun1, i::Int)
    return g(i)
end

# Slow, using "call" on a singleton
type MyFun3 end
call(::Type{MyFun3}, i::Int) = g(i)::Int

n=10002345;
@time println("test1(g, n) = $( test1(g, n) )")
@time println("test2(n) = $( test2(n) )") 
@time println("test4(MyFun1(), n) = $( test4(MyFun1(), n) )")  
@time println("test1(MyFun3, n) = $( test1(MyFun3, n) )") 

julia> include("fun.jl")
test1(g, n) = 50023457750685
elapsed time: 0.610016329 seconds (480173520 bytes allocated, 27.80% gc 
time)
test2(n) = 50023457750685
elapsed time: 0.003086749 seconds (75304 bytes allocated)
test4(MyFun1(), n) = 50023457750685
elapsed time: 0.004098207 seconds (90436 bytes allocated)
test1(MyFun3, n) = 50023457750685
elapsed time: 0.665630908 seconds (480184232 bytes allocated, 23.21% gc 
time)


Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-15 Thread Petr Krysl
Thank you to all who responded re: the support for call methods in 0.4.

Petr

On Thursday, January 15, 2015 at 7:28:06 AM UTC-8, Steven G. Johnson wrote:
>
> See also:
>
>
> http://docs.julialang.org/en/latest/manual/methods/#call-overloading-and-function-like-objects
>
> On Thursday, January 15, 2015 at 4:29:57 AM UTC-5, Mauro wrote:
>>
>> > Is there an example of  the use  of this new syntax  for  0.4? 
>>
>> It's in the manual: 
>> http://docs.julialang.org/en/latest/stdlib/base/?highlight=call#Base.call 
>>
>> http://docs.julialang.org/en/latest/manual/constructors/?highlight=call#constructors-call-and-conversion
>>  
>>
>> > Thanks, 
>> > 
>> > Petr 
>> > 
>> > On Sunday, January 11, 2015 at 12:20:39 PM UTC-8, Tobias Knopp wrote: 
>> >> 
>> >> The call syntax is part of 0.4 
>> >> 
>> >> Cheers 
>> >> 
>> >> Tobi 
>> >> 
>> >> Am Sonntag, 11. Januar 2015 19:35:06 UTC+1 schrieb Petr Krysl: 
>> >>> 
>> >>> On Sunday, January 11, 2015 at 10:22:05 AM UTC-8, Tim Holy wrote: 
>> >>>> 
>> >>>> 
>> >>>> Already implement in julia 0.4. 
>> >>>> 
>> >>>   
>> >>> Very cool! Just checking: are you saying that  the solution  I 
>> proposed   
>> >>> with the  empty  types  would actually address the problem of 
>>  functions 
>> >>> being inefficient  as when passed in as callbacks, and that this is 
>> now 
>> >>> part of 0.4? Or rather that the syntactic sugar of being able to call 
>> on 
>> >>> object is part of 0.4? 
>> >>> 
>> >>> Thanks, 
>> >>> 
>> >>> P 
>> >>> 
>> >> 
>>
>>

Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-14 Thread Petr Krysl
Hi Tobias,

Is there an example of the use of this new syntax for 0.4?

Thanks,

Petr

On Sunday, January 11, 2015 at 12:20:39 PM UTC-8, Tobias Knopp wrote:
>
> The call syntax is part of 0.4
>
> Cheers
>
> Tobi
>
> Am Sonntag, 11. Januar 2015 19:35:06 UTC+1 schrieb Petr Krysl:
>>
>> On Sunday, January 11, 2015 at 10:22:05 AM UTC-8, Tim Holy wrote:
>>>
>>>
>>> Already implement in julia 0.4. 
>>>
>>  
>> Very cool! Just checking: are you saying that  the solution  I proposed  
>> with the  empty  types  would actually address the problem of  functions 
>> being inefficient  as when passed in as callbacks, and that this is now 
>> part of 0.4? Or rather that the syntactic sugar of being able to call on 
>> object is part of 0.4?
>>
>> Thanks,
>>
>> P
>>
>

Re: [julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
Very true.
P

On Tuesday, January 13, 2015 at 6:17:30 PM UTC-8, ele...@gmail.com wrote:
>
>
>
> On Wednesday, January 14, 2015 at 11:34:55 AM UTC+10, Petr Krysl wrote:
>>
>> By the way: what is wrong with having 
>>
>> length(Boat)
>> length(Vacation)
>> length(Membership)
>> length(Stay)
>> length(Nose)
>>
>
> Hi Peter,
>
> This makes sense because length() is generally a single concept, but for 
> example bark() is a very different concept when applied to trees compared 
> to when applied to dogs.
>
> Currently there is nothing in Julia to indicate when a generic function 
> (which has only a name like bark) refers to the same concept.  So the 
> compiler can't be told when it is intended that two generic functions 
> defined in different modules refer to the same concept and so the methods 
> are intended to be combined.  You have to define the methods on the same 
> generic function to combine them, eg Base.length() or the method Milan 
> proposed.
>
> Choosing a default (combining and dropping method clashes, combining and 
> erroring method clashes, not combining and dropping both functions, other) 
> is never going to be "correct" in all cases.  The best the Julia devs can 
> do is choose a default which causes module users minimal risk and ensures 
> everybody else is equally unhappy, until it becomes possible to specify 
> combine or not.
>
> Cheers
> Lex
>  
>
>>
>> ? I think the answer is NOTHING. So why should length() take on a 
>> definite (and fixed) meaning in the Base module?
>> Julia already SUPPORTS this, so why limit this when it could be a good 
>> thing?
>>
>> P
>>
>>>
>>>>

Re: [julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
By the way: what is wrong with having 

length(Boat)
length(Vacation)
length(Membership)
length(Stay)
length(Nose)

? I think the answer is NOTHING. So why should length() take on a definite 
(and fixed) meaning in the Base module?
Julia already SUPPORTS this, so why limit this when it could be a good 
thing?

P

On Tuesday, January 13, 2015 at 4:06:30 PM UTC-8, Petr Krysl wrote:
>
> Mauro,
>
> On Tuesday, January 13, 2015 at 1:45:36 PM UTC-8, Mauro wrote:
>>
>>
>> I don't think it is good practice to add unrelated methods to generic 
>> functions in Base.  A guide-line could be to check whether your methods 
>> matches up with the help text, if so, add it to Base.count otherwise 
>> not.  If not, either give it a different name or do not export it, so 
>> the user has to fully qualify it MyMod.count. 
>>
>
> I generally agree  with this sentiment, but my libertarian core  rebels  
> against the notion that the writer  of the software that I use gets to 
> decide the meaning of the words that are used as identifiers of functions  
> and such. Many common concepts,  such as length, angle, domain,… will 
> eventually be taken and therefore unavailable to future software 
> developers. (Of course I realize that  they are still available as 
> qualified names.)
>
> More importantly, since the possibility of writing totally unrelated  
> methods for a given generic function cannot be eliminated, I believe that 
> even the double import  of a generic function  resulting in merged methods  
> should be allowed. It would provide for uniformity, as  even this 
> restriction  can be easily bypassed by defining a dummy generic function in 
> the environment that "uses" two modules that export  methods  for this 
> function.  Not allowing  repeated imports does not accomplish  anything 
> except that it gets in the way.
>
> Petr
>  
>
>> it. 
>>
>

[julia-users] Re: How to combine portable Julia with Portable Python under Windows7 for e.g. PyPlot?

2015-01-13 Thread Petr Krysl
I think I've had the same problem. I was given the advice of installing 
the Anaconda distribution of Python. That worked.
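
If memory serves, the way to point PyCall at a particular Python 
installation (Anaconda, WinPython, ...) is to set the PYTHON environment 
variable and rebuild; a sketch, with an obviously machine-specific path:

julia> ENV["PYTHON"] = "C:\\path\\to\\Anaconda\\python.exe"
julia> Pkg.build("PyCall")

and then restart Julia before trying "using PyPlot" again.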

Petr

On Tuesday, January 13, 2015 at 11:29:41 AM UTC-8, Manfred wrote:
>
> As Julia user I  am below novice level which is due to the fact that I 
> have not yet managed to complete the Installation of my Julia Environment 
> successfully. My main Problem is to bring PyPlot to work from within a 
> portable Julia "Installation" along with a portable "Python" Installation.
>
> Basic stuff with Julia alone and Juno in combination with Julia work, but 
> so far, I get the following command/error message Dialog when trying to 
> plot with PyPlot:
>
> julia> using PyPlot
> Warning: error initializing module PyPlot:
> ErrorException("could not load module python: The specified module could 
> not be
> found.
> ")
>
> *Conditions:*
>
> *Computer/OS:* 
> Window7 Enterprise
> Service pack 1
> Intel(R) Core(TM) i5-3320M CPU @2.6 GHz, 4 GB RAM
> 64 bit OS
>
> no ADMIN rights, thus, the need for portable use!
>
> * JULIA*
> Version 0.3.4 (2014-12-26 10:42 UTC)
>  Official http://julialang.org/ release
>  x86_64-w64-mingw32
>
> installed as part of the JUNO-JULIA bundle as downloaded on 
> 10-Jan-2015 from 
> https://junolab.s3.amazonaws.com/release/1.0.0/juno-windows-x64.zip 
>
> Installation Directory: C:\Daten\portable\juno_julia_windows64
>
> C:\Daten\portable\juno_julia_windows64>dir
>  Volume in drive C has no label.
>  Volume Serial Number is 3261-EC6B
>  Directory of C:\Daten\portable\juno_julia_windows64
> 12.01.2015  20:53  .
> 12.01.2015  20:53  ..
> 30.12.2014  16:4210,244 .DS_Store
> 22.12.2014  23:38   814,944 .ninja_deps
> 22.12.2014  23:38   188,141 .ninja_log
> 13.12.2014  21:4256,639,488 chromiumcontent.dll
> 13.12.2014  21:42   215,705 content_resources_200_percent.pak
> 13.12.2014  21:42 8,682,170 content_shell.pak
> 13.12.2014  21:42 3,231,832 d3dcompiler_46.dll
> 13.12.2014  21:42 1,712,128 ffmpegsumo.dll
> 13.12.2014  21:4210,490,576 icudtl.dat
> 22.12.2014  23:38 7,341,056 juno.exe
> 11.12.2014  17:40   370,070 juno.ico
> 13.12.2014  21:4247,616 libEGL.dll
> 13.12.2014  21:42 1,324,032 libGLESv2.dll
> 12.01.2015  13:54  locales
> 13.12.2014  21:42   455,328 msvcp120.dll
> 13.12.2014  21:42   970,912 msvcr120.dll
> 12.01.2015  13:55  resources
> 13.12.2014  21:42   197,960 ui_resources_200_percent.pak
> 13.12.2014  21:42   247,984 vccorlib120.dll
> 13.12.2014  21:4281,768 xinput1_3.dll
>   18 File(s) 93,021,954 bytes
>4 Dir(s)  84,276,662,272 bytes free
>
>
> julia> Pkg.status()
> 2 required packages:
>  - Jewel 1.0.4
>  - PyPlot1.5.0
> 15 additional packages:
>  - Color 0.3.15
>  - Compat0.2.9
>  - Compose   0.3.10
>  - DataStructures0.3.5
>  - Dates 0.3.2
>  - FactCheck 0.2.5
>  - FixedPointNumbers 0.0.6
>  - Iterators 0.1.7
>  - JSON  0.4.0
>  - JuliaParser   0.6.1
>  - LNR   0.0.1
>  - LaTeXStrings  0.1.2
>  - Lazy  0.8.3
>  - PyCall0.7.3
>  - Requires  0.1.1
>
>
> *Python:*
>
> *Winpython (portable)*
> Python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)] 
> on win
> 32
>
> downloaded from http://winpython.sourceforge.net/ 
>
> "Installation Directory": C:\Daten\portable\WinPython-64bit-2.7.9.1
>
>  Directory of C:\Daten\portable\WinPython-64bit-2.7.9.1
> 11.01.2015  12:47  .
> 11.01.2015  12:47  ..
> 13.12.2014  10:0566,192 IDLE (Python GUI).exe
> 13.12.2014  10:05   148,128 IPython Notebook.exe
> 13.12.2014  10:05   148,133 IPython Qt Console.exe
> 11.01.2015  12:47  python-2.7.9.amd64
> 13.12.2014  10:05   157,822 Qt Assistant.exe
> 13.12.2014  10:05   147,107 Qt Demo.exe
> 13.12.2014  10:05   150,144 Qt Designer.exe
> 13.12.2014  10:05   155,775 Qt Linguist.exe
> 11.01.2015  12:47  scripts
> 11.01.2015  14:25  settings
> 13.12.2014  10:05   146,075 Spyder (light).exe
> 13.12.2014  10:05   147,094 Spyder.exe
> 11.01.2015  12:47  tools
> 13.12.2014  10:0578,979 WinPython Command Prompt.exe
> 13.12.2014  10:05   134,789 WinPython Control Panel.exe
> 13.12.2014  10:0566,171 WinPython Interpreter.exe
>   12 File(s)  1,546,409 bytes
>6 Dir(s)  84,273,041,408 bytes free
>
>
> In standalone mode and accessed via Spider, Basic Python functions work 
> and I am abl

Re: [julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
Mauro,

On Tuesday, January 13, 2015 at 1:45:36 PM UTC-8, Mauro wrote:
>
>
> I don't think it is good practice to add unrelated methods to generic 
> functions in Base.  A guide-line could be to check whether your methods 
> matches up with the help text, if so, add it to Base.count otherwise 
> not.  If not, either give it a different name or do not export it, so 
> the user has to fully qualify it MyMod.count. 
>

I generally agree with this sentiment, but my libertarian core rebels 
against the notion that the writer of the software that I use gets to 
decide the meaning of the words that are used as identifiers of functions 
and such. Many common concepts, such as length, angle, domain, ... will 
eventually be taken and therefore unavailable to future software 
developers. (Of course I realize that they are still available as 
qualified names.)

More importantly, since the possibility of writing totally unrelated 
methods for a given generic function cannot be eliminated, I believe that 
even the double import of a generic function resulting in merged methods 
should be allowed. It would provide for uniformity, as even this 
restriction can be easily bypassed by defining a dummy generic function in 
the environment that "uses" two modules that export methods for this 
function. Not allowing repeated imports does not accomplish anything 
except that it gets in the way.

Petr
 

> it. 
>


Re: [julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
I would argue that this would be prohibited by the compiler as it could not 
distinguish between
 foo(x::Number) = 2 
and
foo(x::Float64) = 3 

But, it would be perfectly safe to have
foo(a::TypeA)
and
foo(b::TypeB)

Different functions in that case.

In the example I gave, count was allowed to be exported from my two modules 
just by being defined by Base. Yet the two count's have different meanings 
from each other and from the Base.count. On the other hand steadystate() 
was not merged even though the end-result is the SAME: one would have two 
methods that work on different types of arguments and do different things.

P 


Re: [julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
I think your explanation makes sense. But:

Consider that the two modules may be due to two independent developers. A 
third guy (me) wishes to use both modules. Why should it be prohibited to 
have the two "using" define a single generic function foo with two methods 
(provided they can be disambiguated)?

I think the discussion at 4345 was about the tricky nature of "merging" the 
definitions of the generic functions. But it seems to me it is only tricky 
when the arguments do not allow for the compiler to distinguish between 
them. In that case the compiler should complain or reject the definitions 
(whichever is practicable), but otherwise it should be done for uniformity.

See, when A defines a foo, and B and C import A.foo and re-export it, there 
are no guarantees that the foo's are somehow related. They could still 
implement entirely different (separate) concepts, which could both be 
different from the concept that the original foo was designed to handle.

P

On Tuesday, January 13, 2015 at 12:06:47 PM UTC-8, Mauro wrote:
>
> Is this problem not more related to issue 
> https://github.com/JuliaLang/julia/issues/2327 ? 
>
> The way it works is thus:  If you make a method in a module, say 
>
> module A 
> export foo 
> foo() = 1 
> end 
>
> This will create a new generic function foo with one method in it. 
> Now do the same in another module: 
>
> module B 
> export foo 
> foo(x) = x 
> end 
>
> This defines another generic function, of the same name foo, with a 
> method in it.  Now doing 
>
> using A 
> using B 
>
> Will result in the generic function foo from module B being 'used', 
> i.e. the binding of the symbol foo points to B.foo and not to A.foo 
> anymore.  So, essentially generic functions do not get merged which is 
> what I think you expect to happen. 
>
> The way to solve your problem is to define foo in some other module and 
> import it to A and B 
>
> module U 
> export foo 
> foo() = error("to be specified later") 
>
> module A 
> import ..U 
> U.foo() = 1 
> end 
> module B 
> import ..U 
> U.foo(x) = x 
> end 
>
> end 
>
> using U 
> methods(foo) 
>
> now shows both methods: 
>
> # 2 methods for generic function "foo": 
> foo() at none:7 
> foo(x) at none:11 
>
>
> On Tue, 2015-01-13 at 17:22, Petr Krysl > 
> wrote: 
> > I have a trouble following the reasoning in the 4345 issues trail. 
> > 
> > If a module defines a method (let us say "count") and brings in into an 
> > environment that already has a method for the function count(), both are 
> > available provided there signatures allow for the compiler to 
> distinguish 
> > between them to decide which one to call. 
> > 
> > I already have a situation like this in my code: I have two modules that 
> > define the function count(), and  they export  that function  count() 
>  and 
> > in the main there are already two methods  count().. 
> > 
> > julia> methods(count) 
> > # 4 methods for generic function "count": 
> > count(pred::Union(Function,Func{1}),a::AbstractArray{T,N}) at 
> reduce.jl:436 
> > count(pred::Union(Function,Func{1}),itr) at reduce.jl:426 
> > count{T<:FESet}(me::T<:FESet) at 
> > C:\Users\pkrysl\Documents\GitHub\JFinEALE.jl\src\FESetModule.jl:32 
> > count(self::FENodeSet) at 
> > C:\Users\pkrysl\Documents\GitHub\JFinEALE.jl\src\FENodeSetModule.jl:47 
> > 
> > The two methods that failed to get both exported are (it appears)  in 
> > precisely the same  situation, except  that the main module does not 
> have 
> > any definition  of a method with the same name: 
> > 
> > julia> methods(JFinEALE.HeatDiffusionAlgorithmModule.steadystate) 
> > # 1 method for generic function "steadystate": 
> > 
> steadystate(algo::HeatDiffusionAlgorithm,modeldata::Dict{ASCIIString,Any}) 
> > at C:\Users\pkrysl\Documents\GitHub 
> > \JFinEALE.jl\src\HeatDiffusionAlgorithmModule.jl:83 
> > 
> > julia> methods(JFinEALE.AcousticsAlgorithmModule.steadystate) 
> > # 1 method for generic function "steadystate": 
> > steadystate(algo::AcousticsAlgorithm,modeldata::Dict{ASCIIString,Any}) 
> at 
> > C:\Users\pkrysl\Documents\GitHub\JFi 
> > nEALE.jl\src\AcousticsAlgorithmModule.jl:85 
> > 
> > As you can see, the compiler  should be able to decide which of these 
> > methods to call  as they get passed arguments of different types.. 
> > 
> > So,,  this succeeds: 
> > 
> > include("FESetModule.jl") 
> > using JFinEALE.FESetModule 
> > ... 
> > export

[julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
I realized that one reason the discussion at 
https://github.com/JuliaLang/julia/issues/4345 was not addressing my 
viewpoint was that there the issue is methods that implement the SAME 
concept embodied by a given generic function.

What I would like to see addressed are DIFFERENT concepts addressed by a 
function name. There was just a discussion like this here:
https://groups.google.com/forum/?fromgroups=#!topic/julia-users/CejvtDWUIjk

For instance, take PDEs: a number of PDEs are "steady-state" (Poisson, 
Helmholtz, harmonic vibration, ...). The solver may be quite reasonably 
called "steadystate()". The logic will be certainly very different inside 
each solver, and the argument to the solver would also indicate that 

steadystate(p::Poisson)
steadystate(p::Helmholtz)
steadystate(p::HarmonicVib)

An economics package might also wish to export a name like this. Even if we 
were to argue that somehow the above are similar, there are certainly 
legitimate uses for function names meaning very different things.

Should this be disallowed, as suggested in 
https://github.com/JuliaLang/julia/issues/4345?

It seems to me that situations like this might only lead to trouble if the 
compiler encounters an ambiguous choice as to which function to call, no? 
Such as when there are two functions f() (no arguments), the compiler 
cannot possibly know which is intended and such a double definition should 
not be allowed.

P

On Monday, January 12, 2015 at 11:24:07 PM UTC-8, Ivar Nesje wrote:
>
> New method definitions will replace the previous definition. 
>
> If you put the function in a module and bring them into you scope with 
> using/importall, you'll run into 
> https://github.com/JuliaLang/julia/issues/4345, 
> which can be considered either a bug or a missing feature. 



[julia-users] Re: Later methods definition replaces the previous one?

2015-01-13 Thread Petr Krysl
I have trouble following the reasoning in the trail of issue 4345.

If a module defines a method (let us say "count") and brings it into an 
environment that already has a method for the function count(), both are 
available, provided their signatures allow the compiler to distinguish 
between them and decide which one to call.

I already have a situation like this in my code: I have two modules that 
define the function count(), and they export that function count(), and 
in the main module there are already two methods count():

julia> methods(count)
# 4 methods for generic function "count":
count(pred::Union(Function,Func{1}),a::AbstractArray{T,N}) at reduce.jl:436
count(pred::Union(Function,Func{1}),itr) at reduce.jl:426
count{T<:FESet}(me::T<:FESet) at 
C:\Users\pkrysl\Documents\GitHub\JFinEALE.jl\src\FESetModule.jl:32
count(self::FENodeSet) at 
C:\Users\pkrysl\Documents\GitHub\JFinEALE.jl\src\FENodeSetModule.jl:47

The two methods that failed to get both exported are (it appears) in 
precisely the same situation, except that the main module does not have 
any definition of a method with the same name:

julia> methods(JFinEALE.HeatDiffusionAlgorithmModule.steadystate)
# 1 method for generic function "steadystate":
steadystate(algo::HeatDiffusionAlgorithm,modeldata::Dict{ASCIIString,Any}) 
at C:\Users\pkrysl\Documents\GitHub
\JFinEALE.jl\src\HeatDiffusionAlgorithmModule.jl:83

julia> methods(JFinEALE.AcousticsAlgorithmModule.steadystate)
# 1 method for generic function "steadystate":
steadystate(algo::AcousticsAlgorithm,modeldata::Dict{ASCIIString,Any}) at 
C:\Users\pkrysl\Documents\GitHub\JFi
nEALE.jl\src\AcousticsAlgorithmModule.jl:85

As you can see, the compiler should be able to decide which of these 
methods to call, as they get passed arguments of different types.

So, this succeeds:

include("FESetModule.jl")
using JFinEALE.FESetModule
...
export count


include("FENodeSetModule.jl")
using JFinEALE.FENodeSetModule
...
export count

and this fails

include("AcousticsAlgorithmModule.jl")
using JFinEALE.AcousticsAlgorithmModule
export AcousticsAlgorithm
export steadystate

include("HeatDiffusionAlgorithmModule.jl")
using JFinEALE.HeatDiffusionAlgorithmModule
export HeatDiffusionAlgorithm
export steadystate

I find this strange and inconsistent. Could someone please explain whether 
this is something I should fix up at my end, or whether this is a problem 
with the logic of the programming language.
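
(For reference, the fix suggested elsewhere in this thread, i.e. defining 
the methods on the existing Base generic function rather than creating a 
fresh count in each module, would look roughly like this sketch; the type 
and body are placeholders, not the JFinEALE code:

module FESetModule
import Base: count
type FESet
    conn::Matrix{Int}   # connectivity, one row per element
end
count(fes::FESet) = size(fes.conn, 1)
end

Defined this way, count(fes::FESet) is just one more method of Base.count, 
and there is no clash with the count() from reduce.jl or from other 
modules.)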

Petr

On Monday, January 12, 2015 at 11:24:07 PM UTC-8, Ivar Nesje wrote:
>
> New method definitions will replace the previous definition. 
>
> If you put the function in a module and bring them into you scope with 
> using/importall, you'll run into 
> https://github.com/JuliaLang/julia/issues/4345, which can be considered 
> either a bug or a missing feature. 



[julia-users] Later methods definition replaces the previous one?

2015-01-12 Thread Petr Krysl
I have two nested modules; both of them export a function of the same name.
The functions have arguments of different types.
Both functions are also exported from the parent module. At least that is 
the intent, but it fails.
Only the latest definition is visible outside (unqualified).

module JFinEALE



include("AcousticsAlgorithmModule.jl")
using JFinEALE.AcousticsAlgorithmModule
export AcousticsAlgorithm
export steadystate

include("HeatDiffusionAlgorithmModule.jl")
using JFinEALE.HeatDiffusionAlgorithmModule
export HeatDiffusionAlgorithm
export steadystate # <=== only this one is visible

end

Is this a bug?  I don't  know if it's supposed to work like this, but I 
suspect not.

Petr




Re: [julia-users] Package structure

2015-01-11 Thread Petr Krysl
Hi Rob,

I can't  keep up with you. Your code is a moving target all the time. :)


On Sunday, January 11, 2015 at 11:14:04 AM UTC-8, Rob J Goedman wrote:
>
> Hi Petr,
>
>
> 1. I'm not sure I like to the levels of modules.  This way a user has to 
> know which 2nd level modules he/she should be using.
>
> Is there a reason you choose this setup?
>

I thought that it was going to be easier to control access to the methods 
defined within individual modules. In fact, now I can see that you have 
transitioned your fork to a single module. And, as expected, you ran into 
some difficulties with exporting.

Both the Acoustics Algorithm module and the Heat Diffusion Algorithm 
module define a method called steadystate(). When the method is used by 
qualification from within the module, things are fine, but if you try to 
export it from the single-layer module, the naming convention breaks and 
you have to disambiguate the methods by prefixes.

Also problematic is deciding at the top level what is going to be 
exported. When these decisions are made where the methods are defined, it 
seems a little bit more natural. In this case the export happens at a 
place that is different from where the methods are defined. But it could 
be that a central location for all the exports has an advantage.


> I typically prefer that the end user only needs a single 'using JFinEALE' 
> statement. That requires some additional work, so wanted to have your views 
> first before going there. Even then there are a few ways of how we can 
> implement that, e.g. remove the 2nd layer of modules (make them all 
> methods) or export more in JFinEALE.
>
>
Two layers of modules  would actually not be too bad.
 

> 2. For now I've added *.vtk to .gitignore. 
>
> I prefer to have examples in their own directory and .vtk files can be 
> stored there as well. Do you have a preference?
>

The vtk files are not part  of my repository  as they are generated 
on-the-fly  when the examples run.

>
> 3. I have not Julia-normalized the Paraview paths lines in the examples. 
> In most cases, for now, I've made them @windows_only ...
>
> We could check if Paraview is available, e.g. by looking for an 
> environment variable or a definition in .juliarc.jl.
>

Sure, what is in my repository is an ugly hack  which is not going to work 
on other people's machines. 
Your solution would be a good one.
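
Something along those lines would do, perhaps (a sketch; PARAVIEW_EXE is a 
made-up variable name, not anything the package defines today):

function paraview_path()
    haskey(ENV, "PARAVIEW_EXE") && return ENV["PARAVIEW_EXE"]
    return "paraview"   # fall back to hoping it is on the PATH
end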
 

>
> 4. I do export 'phun' right now in JFinEALE.
>
> Let me know what your preference is.
>

By the way, last time I checked you had the basic datatypes defined 
twice, once in the foundation module and once in the JFinEALE module.


> You should be able to Pkg.clone("https://github.com/goedman/JFinEALE.jl") 
> after Pkg.rm("JFinEALE") first. 
> Once this version is installed, you can run Pkg.test("JFinEALE"), it 
> should run all examples (twice in fact, to see if first time is slower than 
> subsequent times).
>

The package test should actually not run all the examples. It should run 
only the examples instrumented to perform tests, such as the ones I 
instrumented for the acoustics.
 

>
> Or we could make this version a branch or I could generate a PR first.
>

Perhaps you could enable "issues" in your github repository so that we 
could  carry on this conversation there?


Regards,

Petr


> Regards,
> Rob J. Goedman
> goe...@mac.com 
>
>
>
>
>  
> On Jan 11, 2015, at 9:30 AM, Petr Krysl > 
> wrote:
>
> Sounds like a good idea.
> P
>
> On Saturday, January 10, 2015 at 7:23:25 PM UTC-8, Rob J Goedman wrote:
>>
>> Petr, 
>>
>> Would you like me to prepare a simple pull request tomorrow? 
>>
>> Regards, 
>> Rob 
>>
>> Sent from my iPhone 
>>
>> > On Jan 10, 2015, at 6:22 PM, Petr Krysl  wrote: 
>> > 
>> > Hello everybody, 
>> > 
>> > I wonder if I could get some advice on how to structure a package?  I 
>> could see  how the SRC and TEST folders  had a place in the package 
>> structure, but so far I could not discern a systematic way for access to 
>> examples that use the package. In particular, when I create a package 
>>  (clone()),, and the package has an EXAMPLES folder, none of the stuff in 
>> that folder is  actually visible at the REPL... 
>> > 
>> > How should this be handled? 
>> > 
>> > Thanks, 
>> > 
>> > P 
>>
>
>

Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-11 Thread Petr Krysl
On Sunday, January 11, 2015 at 10:22:05 AM UTC-8, Tim Holy wrote:
>
>
> Already implement in julia 0.4. 
>
 
Very cool! Just checking: are you saying that  the solution  I proposed  
with the  empty  types  would actually address the problem of  functions 
being inefficient  as when passed in as callbacks, and that this is now 
part of 0.4? Or rather that the syntactic sugar of being able to call on 
object is part of 0.4?

Thanks,

P


Re: [julia-users] Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-11 Thread Petr Krysl
Here is a kludgy sort of solution. Tim and the other wizards will probably 
know if it was suggested before.

https://gist.github.com/PetrKryslUCSD/7b96cd937e2e6cdb58db

Measurements:

elapsed time: 0.440441867 seconds (320050384 bytes allocated, 31.29% gc 
time)
elapsed time: 0.001976364 seconds (62724 bytes allocated)
elapsed time: 1.001955839 seconds (320046408 bytes allocated, 13.32% gc 
time)
elapsed time: 0.002634663 seconds (76500 bytes allocated)
elapsed time: 0.002585548 seconds (81116 bytes allocated)

The last solution on the list could be much improved syntactically if one 
could define a "call" syntax for an object. A very recent post addresses 
this need:
https://groups.google.com/forum/?fromgroups=#!topic/julia-users/blreJXpp-FQ

Petr




Re: [julia-users] Package structure

2015-01-11 Thread Petr Krysl
Sounds like a good idea.
P

On Saturday, January 10, 2015 at 7:23:25 PM UTC-8, Rob J Goedman wrote:
>
> Petr, 
>
> Would you like me to prepare a simple pull request tomorrow? 
>
> Regards, 
> Rob 
>
> Sent from my iPhone 
>
> > On Jan 10, 2015, at 6:22 PM, Petr Krysl  > wrote: 
> > 
> > Hello everybody, 
> > 
> > I wonder if I could get some advice on how to structure a package?  I 
> could see  how the SRC and TEST folders  had a place in the package 
> structure, but so far I could not discern a systematic way for access to 
> examples that use the package. In particular, when I create a package 
>  (clone()),, and the package has an EXAMPLES folder, none of the stuff in 
> that folder is  actually visible at the REPL... 
> > 
> > How should this be handled? 
> > 
> > Thanks, 
> > 
> > P 
>


Re: [julia-users] Re: FE solver in Julia fast and faster

2015-01-11 Thread Petr Krysl
I'm not SURE, but the toolkit name (and the toolkit itself) is my 
invention. First used in 2012.
This is not part of the standard Matlab distribution.

Petr

On Sunday, January 11, 2015 at 2:04:36 AM UTC-8, Milan Bouchet-Valat wrote:
>
> Le samedi 10 janvier 2015 à 18:10 -0800, Petr Krysl a écrit : 
> > Correction: The repository  has been now updated to look like a true 
> > Julia package. 
> > It can be cloned and there's even a rudimentary test. 
> > 
> > https://github.com/PetrKryslUCSD/JFinEALE.jl 
> This sounds great. I was just wondering: are you sure the name FinEALE 
> hasn't been trademarked by MATLAB? Or is it a standard mathematical/CS 
> term? If you become a serious competitor, they won't hesitate to sue you 
> if they can. Better anticipate before setting the name in stone. 
>
>
> Regards 
>
> > P 
> > 
> > On Saturday, January 10, 2015 at 11:57:48 AM UTC-8, Petr Krysl wrote: 
> > The optimized toolkit JFinEALE is now available on github: 
> > 
> > https://github.com/PetrKryslUCSD/JFinEALE 
> > 
> > Petr 
> > 
>
>

[julia-users] Package structure

2015-01-10 Thread Petr Krysl
Hello everybody,

I wonder if I could get some advice on how to structure a package? I could 
see how the SRC and TEST folders had a place in the package structure, but 
so far I could not discern a systematic way to give access to examples that 
use the package. In particular, when I create a package (clone()), and the 
package has an EXAMPLES folder, none of the stuff in that folder is 
actually visible at the REPL...

How should this be handled?
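
(One can of course build the path by hand, e.g. with a made-up example file 
name

include(joinpath(Pkg.dir("JFinEALE"), "examples", "some_example.jl"))

but that hardly feels systematic.)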

Thanks,

P


[julia-users] Re: FE solver in Julia fast and faster

2015-01-10 Thread Petr Krysl
Correction: the repository has now been updated to look like a true Julia 
package.
It can be cloned, and there is even a rudimentary test.

https://github.com/PetrKryslUCSD/JFinEALE.jl

P

On Saturday, January 10, 2015 at 11:57:48 AM UTC-8, Petr Krysl wrote:
>
> The optimized toolkit JFinEALE is now available on github:
>
> https://github.com/PetrKryslUCSD/JFinEALE
>
> Petr
>


[julia-users] Re: FE solver in Julia fast and faster

2015-01-10 Thread Petr Krysl
Simon,

The commercial solver was Comsol 4.4. I suppose it is implemented in C 
or C++. Some of the user interface is probably in Java.

I will certainly consider your suggestion about the types. Frankly, I did 
not know a better way to provide both type stability and flexibility. 
Hopefully I will learn as I go. If you have a concrete way of managing that 
in mind, do let me know (or implement it ;).
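
For the record, the style Simon describes would look roughly like the 
sketch below (a made-up function, not JFinEALE code): program against the 
abstract types and let the concrete element type be a method parameter, so 
each call still compiles to specialized code.

function weighted_sum{T<:FloatingPoint}(w::AbstractVector{T}, v::AbstractVector{T})
    s = zero(T)
    for i = 1:length(w)
        s += w[i]*v[i]
    end
    return s
end

A caller can then pass Vector{Float64}, Vector{Float32}, or any other 
array type without the library having to be edited.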

Petr


On Saturday, January 10, 2015 at 12:24:06 PM UTC-8, Simon Danisch wrote:
>
> Wow =)
> Do you have more details about the commercial solver? Is it implemented in 
> C/C++?
> If yes, this might become my favorite example to show off Julia's 
> strengths! Commercial C/C++ software looses against non commercial high 
> level code makes up a pretty good story =)
>
> One thing I notice while scanning through your code:
> You're extensively using your own type(alias). I guess you have your 
> reasons for that, but it made it a little harder for me to read your code 
> and reminds me a lot of C/C++.
> Like this, if I want to use another array type instead of your defined 
> type, I'd need to go into your JFFoundationModule and change your code. 
> If it would be programmed against the abstract interfaces of 
> FloatingPoint/AbstractArray/etc..., together with type stable functions, 
> your library would be more flexible to use, as the types would simply be 
> defined by the input types.
> I don't know if you're just used to C/C++ style programming, or if your 
> case is a little more complicated than I imagine.
>
> Anyway, thanks for this great package! :)
>
> Am Samstag, 10. Januar 2015 03:42:24 UTC+1 schrieb Petr Krysl:
>>
>> Hello all,
>>
>> Big thanks to  Tim Holy and Andreas Noack. The FE solver implemented in 
>> Julia as described previously (
>> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl/julia-users/dgNqrJBZx5U/oGeXMZFjToMJ)
>>  
>> has  been further optimized with their helpful hints and pointers.   The 
>> comparison problem now runs  in around 9.6 seconds, compared to  16 seconds 
>> solve phase achieved with the commercial FEA software .
>>
>> Not bad, Julia!
>>
>> Petr
>>
>

[julia-users] Re: FE solver in Julia fast and faster

2015-01-10 Thread Petr Krysl
Jeff,

Not sure this is answering your question: both the commercial software and 
the Julia solver were run on the same machine  (i7 with 16 GB of memory  
running Windows 7).

On an equivalent Linux laptop the Julia solver is even faster. Comsol is 
not available on that laptop.

P

On Saturday, January 10, 2015 at 12:35:18 PM UTC-8, Jeff Waller wrote:
>
> Hmm, not knowing a lot about this, just the keyword I have the question. 
>  Is it possible to make it 3x faster yet and/or what hardware is that 
> benchmark obtained from?
>


[julia-users] Re: FE solver in Julia fast and faster

2015-01-10 Thread Petr Krysl
The optimized toolkit JFinEALE is now available on github:

https://github.com/PetrKryslUCSD/JFinEALE

Petr


Re: [julia-users] How to define help text for a module function so that it displays in REPL

2015-01-10 Thread Petr Krysl
Hi,

I had a look at your package: very nice indeed!  It is not clear to me 
where the actual documentation is though.  I followed some links to source 
code and there was no documentation in there…

Petr

On Saturday, January 10, 2015 at 8:35:24 AM UTC-8, tshort wrote:
>
> Docile.jl works great. With Lexicon (or base Julia 0.4dev), you get nice 
> REPL help.
>
> Mkdocs (http://mkdocs.org) works great for taking Markdown from Docile 
> output along with other Markdown files to make online documentation. Mkdocs 
> is super easy and works great with Github Pages. It also works with 
> readthedocs (but I haven't tried that).
>
> Here is an example of package documentation developed with Docile and 
> Mkdocs (still a work in progress):
>
> https://tshort.github.io/Sims.jl/
>
>
>
> On Fri, Jan 9, 2015 at 3:41 AM, Ján Dolinský  > wrote:
>
>> Hi,
>>
>> I would like to know how to write a help (comment) text e.g. for a module 
>> function so that once module is loaded it is displayed if in REPL help mode.
>> E.g. for "sum" function
>> help?>sum
>> displays brief help text for "sum"
>>
>> Thanks,
>> Jan 
>>
>
>
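
(For reference, the question at the bottom of this thread, getting help?> 
text for one's own functions, is handled as sketched below with the 
documentation system that landed in base Julia 0.4; Docile.jl plays a 
similar role on 0.3, with slightly different syntax. The names are made up.)

module MyModule

export greet

"""
greet(name) prints a greeting; this text shows up under help?> greet
once MyModule is loaded.
"""
greet(name) = println("Hello, ", name)

end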

[julia-users] FE solver in Julia fast and faster

2015-01-09 Thread Petr Krysl
Hello all,

Big thanks to Tim Holy and Andreas Noack. The FE solver implemented in 
Julia as described previously 
(https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl/julia-users/dgNqrJBZx5U/oGeXMZFjToMJ)
has been further optimized with their helpful hints and pointers. The 
comparison problem now runs in around 9.6 seconds, compared to the 
16-second solve phase achieved with the commercial FEA software.

Not bad, Julia!

Petr


Re: [julia-users] Why is memory allocated to run loop?

2015-01-09 Thread Petr Krysl
Thanks Andreas, that was helpful. See message to Tim in this thread.

P

On Friday, January 9, 2015 at 11:55:20 AM UTC-8, Andreas Noack wrote:
>
> Sometimes Julia can have problems inferring the type of fields of types 
> which is what conn is. Using @code_warntype as suggested by Tim could give 
> an indication if it is the case.
>
> 2015-01-09 14:51 GMT-05:00 Petr Krysl >:
>
>> The size() function is from the base. fes.conn is a 2D array of ints.
>>
>> P
>>
>>
>> On Friday, January 9, 2015 at 11:25:34 AM UTC-8, Tim Holy wrote:
>>>
>>> I wonder if your `size` function isn't returning an Int, or that the 
>>> compiler 
>>> can't infer that. Did you try this on julia 0.4 with @code_warntype? 
>>>
>>> --Tim 
>>>
>>> On Friday, January 09, 2015 08:06:43 AM Petr Krysl wrote: 
>>> > Actually, I realized that  the example shows the same problem, but for 
>>> a 
>>> > different function. This time the allocation occurs  in the function 
>>> > CALLING getconn!. 
>>> > 
>>> > P 
>>> > 
>>> > On Friday, January 9, 2015 at 8:03:00 AM UTC-8, Petr Krysl wrote: 
>>> > > Tim, 
>>> > > 
>>> > > Your explanation is very helpful.  There is a complication though: 
>>> > > The function is actually called from another  function (from a chain 
>>> of 
>>> > > 
>>> > > functions). Here is the measurement from a simplified situation: 
>>> > > - using JFFoundationModule 
>>> > > - using FESetModule 
>>> > > - 
>>> > > - n=100; 
>>> > > - function test(n) 
>>> > > 0 fes=FESetH8(conn=rand(JFInt,n, 
>>> > > 
>>> > > 8)) 
>>> > > 
>>> > >   112 conn1::JFIntMat=zeros(JFInt,nfense(fes),1); 
>>> > >   
>>> > >   6383704 for j=1:size(fes.conn,1) 
>>> > >   
>>> > > 0 getconn!(fes,conn1,j); 
>>> > > - end 
>>> > > 0 return true 
>>> > > - end 
>>> > > - 
>>> > > - test(n) 
>>> > > - clear_malloc_data() 
>>> > > - n=10; 
>>> > > - test(n) 
>>> > > 
>>> > > As you can see the loop over the rows of the array fes.conn 
>>> allocates 
>>> > > substantial amount of memory. 
>>> > > 
>>> > > Here is the type: 
>>> > > https://gist.github.com/PetrKryslUCSD/794f521a8e5b057e5e4e 
>>> > > 
>>> > > Petr 
>>>
>>>
>

Re: [julia-users] Why is memory allocated to run loop?

2015-01-09 Thread Petr Krysl
Thanks, that was very helpful. 

The problem was eliminated by parameterizing wrt the type S<:FESet. The 
lesson I learned was that knowing the abstract type is not enough: the 
compiler needs to know the concrete type. That is something I was not used 
to and it tripped me up again this time. I hope I will remember from now on.
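
In sketch form (the names are only loosely modeled on the real types), the 
change amounts to taking the concrete subtype of FESet as a method 
parameter rather than accepting the abstract type directly:

abstract FESet

type FESetH8 <: FESet
    conn::Matrix{Int}   # connectivity, one row per element
end

function sumfirst{S<:FESet}(fes::S)
    total = 0
    for j = 1:size(fes.conn, 1)
        total += fes.conn[j, 1]
    end
    return total
end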

Thanks again,

Petr

On Friday, January 9, 2015 at 12:08:58 PM UTC-8, Tim Holy wrote:
>
> On Friday, January 09, 2015 11:51:15 AM Petr Krysl wrote: 
> > The size() function is from the base. fes.conn is a 2D array of ints. 
>
> No it's not, it's a JFIntMat :-). But seriously, _you_ might know that 
> corresponds to a 2D array of ints, but the key question is if you've given 
> enough information in your declaration of JFIntMat for the _compiler_ to 
> know 
> that. 
>
> Bottom line is that the behavior you're observing isn't normal. It's also 
> extremely unlikely to be a bug in julia, because we do this kind of stuff 
> all 
> over julia without the kind of allocations you're observing. So it's 
> almost 
> certainly something in your code. @code_warntype is your best way of 
> finding 
> out. 
>
> --Tim 
>
> > 
> > P 
> > 
> > On Friday, January 9, 2015 at 11:25:34 AM UTC-8, Tim Holy wrote: 
> > > I wonder if your `size` function isn't returning an Int, or that the 
> > > compiler 
> > > can't infer that. Did you try this on julia 0.4 with @code_warntype? 
> > > 
> > > --Tim 
> > > 
> > > On Friday, January 09, 2015 08:06:43 AM Petr Krysl wrote: 
> > > > Actually, I realized that  the example shows the same problem, but 
> for a 
> > > > different function. This time the allocation occurs  in the function 
> > > > CALLING getconn!. 
> > > > 
> > > > P 
> > > > 
> > > > On Friday, January 9, 2015 at 8:03:00 AM UTC-8, Petr Krysl wrote: 
> > > > > Tim, 
> > > > > 
> > > > > Your explanation is very helpful.  There is a complication though: 
> > > > > The function is actually called from another  function (from a 
> chain 
> > > 
> > > of 
> > > 
> > > > > functions). Here is the measurement from a simplified situation: 
> > > > > - using JFFoundationModule 
> > > > > - using FESetModule 
> > > > > - 
> > > > > - n=100; 
> > > > > - function test(n) 
> > > > > 0 fes=FESetH8(conn=rand(JFInt,n, 
> > > > > 
> > > > > 8)) 
> > > > > 
> > > > >   112 conn1::JFIntMat=zeros(JFInt,nfense(fes),1); 
> > > > >   
> > > > >   6383704 for j=1:size(fes.conn,1) 
> > > > >   
> > > > > 0 getconn!(fes,conn1,j); 
> > > > > - end 
> > > > > 0 return true 
> > > > > - end 
> > > > > - 
> > > > > - test(n) 
> > > > > - clear_malloc_data() 
> > > > > - n=10; 
> > > > > - test(n) 
> > > > > 
> > > > > As you can see the loop over the rows of the array fes.conn 
> allocates 
> > > > > substantial amount of memory. 
> > > > > 
> > > > > Here is the type: 
> > > > > https://gist.github.com/PetrKryslUCSD/794f521a8e5b057e5e4e 
> > > > > 
> > > > > Petr 
>
>

Re: [julia-users] Why is memory allocated to run loop?

2015-01-09 Thread Petr Krysl
The size() function is from the base. fes.conn is a 2D array of ints.

P

On Friday, January 9, 2015 at 11:25:34 AM UTC-8, Tim Holy wrote:
>
> I wonder if your `size` function isn't returning an Int, or that the 
> compiler 
> can't infer that. Did you try this on julia 0.4 with @code_warntype? 
>
> --Tim 
>
> On Friday, January 09, 2015 08:06:43 AM Petr Krysl wrote: 
> > Actually, I realized that  the example shows the same problem, but for a 
> > different function. This time the allocation occurs  in the function 
> > CALLING getconn!. 
> > 
> > P 
> > 
> > On Friday, January 9, 2015 at 8:03:00 AM UTC-8, Petr Krysl wrote: 
> > > Tim, 
> > > 
> > > Your explanation is very helpful.  There is a complication though: 
> > > The function is actually called from another  function (from a chain 
> of 
> > > 
> > > functions). Here is the measurement from a simplified situation: 
> > > - using JFFoundationModule 
> > > - using FESetModule 
> > > - 
> > > - n=100; 
> > > - function test(n) 
> > > 0 fes=FESetH8(conn=rand(JFInt,n, 
> > > 
> > > 8)) 
> > > 
> > >   112 conn1::JFIntMat=zeros(JFInt,nfense(fes),1); 
> > >   
> > >   6383704 for j=1:size(fes.conn,1) 
> > >   
> > > 0 getconn!(fes,conn1,j); 
> > > - end 
> > > 0 return true 
> > > - end 
> > > - 
> > > - test(n) 
> > > - clear_malloc_data() 
> > > - n=10; 
> > > - test(n) 
> > > 
> > > As you can see the loop over the rows of the array fes.conn allocates 
> > > substantial amount of memory. 
> > > 
> > > Here is the type: 
> > > https://gist.github.com/PetrKryslUCSD/794f521a8e5b057e5e4e 
> > > 
> > > Petr 
>
>

Re: [julia-users] Why is memory allocated to run loop?

2015-01-09 Thread Petr Krysl
Actually, I realized that  the example shows the same problem, but for a 
different function. This time the allocation occurs  in the function 
CALLING getconn!.

P

On Friday, January 9, 2015 at 8:03:00 AM UTC-8, Petr Krysl wrote:
>
> Tim,
>
> Your explanation is very helpful.  There is a complication though:
> The function is actually called from another  function (from a chain of 
> functions). Here is the measurement from a simplified situation:
>
> - using JFFoundationModule 
> - using FESetModule 
> -  
> - n=100; 
> - function test(n) 
> 0 fes=FESetH8(conn=rand(JFInt,n,
> 8)) 
>   112 conn1::JFIntMat=zeros(JFInt,nfense(fes),1); 
>   6383704 for j=1:size(fes.conn,1) 
> 0 getconn!(fes,conn1,j); 
> - end 
> 0 return true 
> - end 
> -  
> - test(n) 
> - clear_malloc_data() 
> - n=10; 
> - test(n) 
>
> As you can see the loop over the rows of the array fes.conn allocates 
> substantial amount of memory.
>
> Here is the type: 
> https://gist.github.com/PetrKryslUCSD/794f521a8e5b057e5e4e
>
> Petr
>
>

Re: [julia-users] Why is memory allocated to run loop?

2015-01-09 Thread Petr Krysl
Tim,

Your explanation is very helpful.  There is a complication though:
The function is actually called from another  function (from a chain of 
functions). Here is the measurement from a simplified situation:

- using JFFoundationModule 
- using FESetModule 
-  
- n=100; 
- function test(n) 
0 fes=FESetH8(conn=rand(JFInt,n,8)) 
  112 conn1::JFIntMat=zeros(JFInt,nfense(fes),1); 
  6383704 for j=1:size(fes.conn,1) 
0 getconn!(fes,conn1,j); 
- end 
0 return true 
- end 
-  
- test(n) 
- clear_malloc_data() 
- n=10; 
- test(n) 

As you can see the loop over the rows of the array fes.conn allocates 
substantial amount of memory.

Here is the type: https://gist.github.com/PetrKryslUCSD/794f521a8e5b057e5e4e

Petr



Re: [julia-users] Why is memory allocated to run loop?

2015-01-08 Thread Petr Krysl
Sorry:  I forgot that  this function got called thousands of times.  I 
guess in that case calling the size()  function  might result in some 
allocation which would then get magnified by the number of calls.  (Am I on 
the right track?)

On Thursday, January 8, 2015 at 7:19:52 PM UTC-8, Petr Krysl wrote:
>
> Andreas,,
>
> Good point about the global scope. What I was  really trying to understand 
> was  this observation:
>
> - function getconn!{T<:FESet}(self::T,conn::JFIntMat,j::JFInt)
>   5959808 for i=1:size(self.conn,2)
> 0 conn[i]=self.conn[j,i];
> - end
> 0 return self
> - end
>
> Isn't  the size  here in  the local context, and also  of a definite type?
>
> Petr
>
> On Thursday, January 8, 2015 at 7:09:35 PM UTC-8, Andreas Noack wrote:
>>
>> The reason is the usual global variable problem. Because n is global the 
>> type is not inferred well and the loop is therefore not well optimized. 
>> This doesn't happen in local scope e.g.
>>
>> julia> let
>>
>>m = 100
>>
>>@allocated for i = 1:m end
>>
>>end
>>
>> 0
>>
>> and it can also be avoided if you specify the type in the loop range
>>
>> julia> @allocated for i = 1:n::Int
>>
>>end
>>
>> 0
>>
>> 2015-01-08 22:02 GMT-05:00 Petr Krysl :
>>
>>> julia> a=rand(100);
>>>
>>> julia> n=length(a);
>>>
>>> julia> @allocated for j=1:n
>>>end
>>> 63983688
>>>
>>> 64 MB to run this loop?  Is that expected?
>>>
>>> Thanks for any insight.
>>>
>>> Petr
>>>
>>
>>

Re: [julia-users] Why is memory allocated to run loop?

2015-01-08 Thread Petr Krysl
Andreas,,

Good point about the global scope. What I was  really trying to understand 
was  this observation:

- function getconn!{T<:FESet}(self::T,conn::JFIntMat,j::JFInt)
  5959808 for i=1:size(self.conn,2)
0 conn[i]=self.conn[j,i];
- end
0 return self
- end

Isn't  the size  here in  the local context, and also  of a definite type?
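
A minimal sketch (synthetic types, not the FinEALE ones) of how @code_warntype answers this kind of question: if the field conn is declared with an abstract type, self.conn[j,i] is inferred as Any and every assignment in the loop boxes; a concrete field type makes the loop allocation-free.

type LooseSet
    conn::Matrix              # element type left unspecified: abstract
end
type TightSet
    conn::Matrix{Int}         # concrete
end
function getconn!(self, conn, j)
    for i = 1:size(self.conn,2)
        conn[i] = self.conn[j,i]
    end
    return self
end
# On 0.4:
# @code_warntype getconn!(LooseSet(rand(1:10, 4, 8)), zeros(Int,8), 1)   # Any shows up in the loop body
# @code_warntype getconn!(TightSet(rand(1:10, 4, 8)), zeros(Int,8), 1)   # comes out clean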

Petr

On Thursday, January 8, 2015 at 7:09:35 PM UTC-8, Andreas Noack wrote:
>
> The reason is the usual global variable problem. Because n is global the 
> type is not inferred well and the loop is therefore not well optimized. 
> This doesn't happen in local scope e.g.
>
> julia> let
>
>m = 100
>
>@allocated for i = 1:m end
>
>end
>
> 0
>
> and it can also be avoided if you specify the type in the loop range
>
> julia> @allocated for i = 1:n::Int
>
>end
>
> 0
>
> 2015-01-08 22:02 GMT-05:00 Petr Krysl >:
>
>> julia> a=rand(100);
>>
>> julia> n=length(a);
>>
>> julia> @allocated for j=1:n
>>end
>> 63983688
>>
>> 64 MB to run this loop?  Is that expected?
>>
>> Thanks for any insight.
>>
>> Petr
>>
>
>

[julia-users] Why is memory allocated to run loop?

2015-01-08 Thread Petr Krysl
julia> a=rand(100);

julia> n=length(a);

julia> @allocated for j=1:n
   end
63983688

64 MB to run this loop?  Is that expected?

Thanks for any insight.

Petr
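
A minimal sketch of the fix explained in the replies above: n is a non-constant global, so the range 1:n has no inferable type; wrapping the loop in a function (or declaring n const) removes the allocation.

a = rand(100)

function emptyloop(n)
    for j = 1:n
    end
end

emptyloop(length(a))              # first call compiles the function
@allocated emptyloop(length(a))   # 0 (or close to it), not 64 MB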


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-08 Thread Petr Krysl
Oh no, got that wrong.  (Responding to email while rushing to class is not 
a good idea.)

The FIRST statement was incorrect. Parallel to the TIME axis means w(0,t) 
or w(L,t) for all t in range (i.e. between initial and final time).

Parallel to the space axis means w(x,t) for all 0 <= x <= L at a fixed time t, 
i.e. the initial conditions.
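
In symbols (w is the field, L the spatial extent; the final time T and the fixed time t_0 are only notation here, not taken from the book):

\text{boundary conditions (parallel to the time axis):}\quad w(0,t),\ w(L,t), \qquad 0 \le t \le T
\text{initial condition (parallel to the space axis):}\quad w(x,t_0), \qquad 0 \le x \le L,\ t_0 \text{ fixed}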
> These are the boundary conditions
>
> Parallel to the space axis is for all 0 <= x <= L for a fixed time, i.e. 
> initial conditions.
>  
>
>> Would you prefer that I open an errata issue on 
>> https://github.com/PetrKryslUCSD/FinEALE and add these there?
>>
>
>
>>  



[julia-users] Re: Tips and tricks for figuring out where allocation occurs

2015-01-08 Thread Petr Krysl
Sorry, just noticed a typo. The second statement should read:

Parallel to the TIME axis is for all 0 <= x <= L for a fixed time, i.e. 
initial conditions.

On Monday, January 5, 2015 8:48:04 PM UTC-8, Petr Krysl wrote:
>
> Hi guys,
>
> How does one figure out where allocation  of memory occurs?   When I use 
> the @time  macro it tells me there's a lot of memory allocation and 
> deallocation going on.  Just looking at the code I'm at a loss: I can't see 
> the reasons for it there.
>
> So, what are the tips and tricks for the curious?  How do I debug the 
> memory allocation issue?  I looked at the lint, the type check, and the 
> code_typed().  Perhaps I don't know where to look, but  these didn't seem 
> to be of much help.
>
> Thanks a bunch,
>
> Petr
>


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-08 Thread Petr Krysl
Rob,

Thanks!

On Wednesday, January 7, 2015 3:37:47 PM UTC-8, Rob J Goedman wrote:
>
>
> I believe the next 2 are not correct:
>
> Section 1.4, 1st sentence: time => space, or parallel => orthogonal
> Section 1.5, 1st sentence: space => time
>
>
Parallel to the space axis means w(0,t) or w(L,t) for all t in range.
These are the boundary conditions.

Parallel to the space axis is for all 0 <= x <= L for a fixed time, i.e. 
initial conditions.
 

> Would you prefer that I open an errata issue on 
> https://github.com/PetrKryslUCSD/FinEALE and add these there?
>

That would certainly work fine. Thanks!
 

>
> Is your intention over time to make jFineale equivalent in functionality 
> to Fineale? Is it intended as a successor to Faesor?
>

FinEALE is a successor to FAESOR. It does not have the same functionality, 
but close. My intention is to eventually wean myself off of Matlab for my 
own research and program my algorithms in Julia. So FinEALE is eventually 
going to be re-written in some way in Julia...  In particular I am 
interested in solid shell elements at this point, and I also have a novel 
high-performance hex element in the current FinEALE. That will be ported 
soon, after I sort out how to avoid performance gotchas in Julia.
 
Best regards,

P


> Rob J. Goedman
> goe...@mac.com 
>
>  
> On Jan 6, 2015, at 6:39 PM, Petr Krysl > 
> wrote:
>
> Oh, in that case I am tickled pink.
>
> Please do let me know if you find any typos or mistakes.
>
> Best regards,
>
> Petr
>
> On Tuesday, January 6, 2015 3:36:34 PM UTC-8, Rob J Goedman wrote:
>>
>> Hi Petr,
>>
>> It’s your book, I used this name for the time being while working my way 
>> through the first 6 or 7 chapters using Julia (and Mathematica 
>> occasionally, don’t have Matlab).
>>
>> If you would prefer that, I can easily change the name, I have no 
>> intention to ever register the package.
>>
>> Just trying to figure out a good way to replace my current (Fortran) 
>> FEM/R program with a Julia equivalent.
>>
>> Regards,
>> Rob J. Goedman
>> goe...@mac.com
>>
>>
>>
>>
>>  
>> On Jan 6, 2015, at 3:01 PM, Petr Krysl  wrote:
>>
>> Rob,
>>
>> Thanks. I did find some .mem files (see above). Not for my own source 
>> files though.
>>
>> Petr
>>
>> PS: You have a "fineale" book? Interesting... I thought no one else had 
>> claimed that name for a software project before...
>>
>> On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
>>>
>>> Petr,
>>>
>>> Not sure if this helps you, but below sequence creates the .mem file.
>>>
>>> ProjDir is set in Ex07.jl and is the directory that contains the .mem 
>>> file
>>>
>>> Regards,
>>> Rob J. Goedman
>>> goe...@mac.com
>>>
>>>
>>> Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
>>>
>>>                _
>>>    _       _ _(_)_     |  A fresh approach to technical computing
>>>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>>>    _ _   _| |_  __ _   |  Type "help()" for help.
>>>   | | | | | | |/ _` |  |
>>>   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
>>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>>> |__/                   |  x86_64-apple-darwin13.4.0
>>>
>>> *julia> *
>>> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>>>
>>> *julia> **cd(ProjDir)*
>>>
>>> *julia> **clear_malloc_data()*
>>>
>>> *julia> *
>>> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>>>
>>>  *shell> **ls*
>>> Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
>>> Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl Section2.4.nb
>>>
>>> On Jan 6, 2015, at 2:15 PM, Petr Krysl  wrote:
>>>
>>> I did this as suggested. The code  executed as shown below, preceded by 
>>> the command line.
>>> The process completes,  but there are no .mem files anywhere. Should I 
>>> ask for them specifically?
>>>
>>> # "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
>>> --track-allocation=all memory_debugging.jl
>>> cd( "C:/Users/pkrysl/Documents/GitHub/j

[julia-users] How does one read this declaration please?

2015-01-07 Thread Petr Krysl
What does this mean  please?

function myfunction{f}(::Type{f}, args...)
# Do stuff using f just like an ordinary functionend

I'm sorry  to have to ask this, it would be  much better  to just go to the 
manual, but the manual search does not yield anything for "Type{".

Thanks a lot,

Petr




Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
I filed the report. Identical results with versions 0.3.4 and 0.4.  Windows 
7.

Petr

On Tuesday, January 6, 2015 3:06:25 PM UTC-8, Tim Holy wrote:
>
> Sounds like a bug; I'll be curious to find out what platform you're on. 
> But 
> don't do it here: please file an issue (see 
>
> https://github.com/JuliaLang/julia/blob/master/CONTRIBUTING.md#how-to-file-a-bug-report).
>  
>
>
> Also note that there were some recent improvements in the ability to 
> distinguish user-code and base (
> https://github.com/JuliaLang/julia/pull/9581 
> ), but they're not available in 0.3 yet. So if you're using 0.3, there may 
> be 
> some inaccuracies. 
>
> --Tim 
>
> On Tuesday, January 06, 2015 02:59:11 PM Petr Krysl wrote: 
> > Actually, correction: for 0.3.4 the _system_ *.mem files are in the 
> julia 
> > folders. For _my_ source files the .mem files cannot be located. 
> > 
> > P 
> > 
> > On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote: 
> > > I did this as suggested. The code  executed as shown below, preceded 
> by 
> > > the command line. 
> > > The process completes,  but there are no .mem files anywhere. Should I 
> ask 
> > > for them specifically? 
> > > 
> > > # "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
> > > --track-allocation=all memory_debugging.jl 
> > > cd( "C:/Users/pkrysl/Documents/GitHub/jfineale"); 
> include("JFinEALE.jl"); 
> > > include("examples/acoustics/sphere_scatterer_example.jl") 
> > > Profile.clear_malloc_data() 
> > > include("examples/acoustics/sphere_scatterer_example.jl") 
> > > quit() 
> > > 
> > > On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat 
> wrote: 
> > >> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
> > >> > Hi guys, 
> > >> > 
> > >> > How does one figure out where allocation  of memory occurs?   When 
> I 
> > >> > use the @time  macro it tells me there's a lot of memory allocation 
> > >> > and deallocation going on.  Just looking at the code I'm at a loss: 
> I 
> > >> > can't see the reasons for it there. 
> > >> > 
> > >> > So, what are the tips and tricks for the curious?  How do I debug 
> the 
> > >> > memory allocation issue?  I looked at the lint, the type check, and 
> > >> > the code_typed().  Perhaps I don't know where to look, but  these 
> > >> > didn't seem to be of much help. 
> > >> 
> > >> See this: 
> > >> 
> > >> 
> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-ana 
> > >> lysis 
> > >> 
> > >> (Would probably be good to backport to the 0.3 manual...) 
> > >> 
> > >> 
> > >> Regards 
>
>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Thanks for the update.   I'm not getting the process to work myself.  I 
suspect it is the Windows platform that is somehow causing the trouble.  I 
will try later  on Linux. (I believe you were successful on the Mac?)

Petr

On Tuesday, January 6, 2015 3:59:45 PM UTC-8, Rob J Goedman wrote:
>
> Petr,
>
> I ran the Poisson_FE_example_model in REPL as shown below and find the 
> .mem files in the src directory and in the top-level directory.
>
> You were running a different example though.
>
> Rob J. Goedman
> goe...@mac.com 
>
> *julia> **cd(Pkg.dir(homedir(), 
> "Projects/Julia/Rob/jfineale_for_trying_out"))*
>
> *julia> *
> *include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")*
> Heat conduction example described by Amuthan A. Ramabathiran
>
> http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
> :
> Unit square, with known temperature distribution along the boundary, 
> and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
> in a grid of N x N edges. 
> This version uses the JFinEALE algorithm module.
>
> Total time elapsed = 2.8418619632720947s
>
> *julia> **clear_malloc_data()*
>
> *shell> **ls src*
> AssemblyModule.jl HeatDiffusionAlgorithmModule.jl 
> MeshQuadrilateralModule.jl
> FEMMBaseModule.jl IntegRuleModule.jl MeshSelectionModule.jl
> FEMMHeatDiffusionModule.jl JFFoundationModule.jl MeshTriangleModule.jl
> FENodeSetModule.jl MaterialHeatDiffusionModule.jl NodalFieldModule.jl
> FESetModule.jl MeshExportModule.jl PropertyHeatDiffusionModule.jl
> ForceIntensityModule.jl MeshModificationModule.jl
>
> *shell> **ls*
> JFinEALE.jl Poisson_FE_example_model.jl src
> Poisson_FE_Q4_example.jl README.md tests
> Poisson_FE_example.jl annulus_Q4_example.jl
>
> *julia> *
> *include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")*
> Heat conduction example described by Amuthan A. Ramabathiran
>
> http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
> :
> Unit square, with known temperature distribution along the boundary, 
> and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
> in a grid of N x N edges. 
> This version uses the JFinEALE algorithm module.
>
> Total time elapsed = 0.017609119415283203s
>
> *julia> CTRL-D*
>
>
>
> Robs-MacBook-Pro:jfineale_for_trying_out rob$ 
>
> Robs-MacBook-Pro:~ rob$ pwd
> /Users/rob
> Robs-MacBook-Pro:~ rob$ cd Projects/Julia/Rob/jfineale_for_trying_out/
> Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls src
> AssemblyModule.jl ForceIntensityModule.jl MeshModificationModule.jl
> AssemblyModule.jl.mem ForceIntensityModule.jl.mem 
> MeshQuadrilateralModule.jl
> FEMMBaseModule.jl HeatDiffusionAlgorithmModule.jl MeshSelectionModule.jl
> FEMMBaseModule.jl.mem HeatDiffusionAlgorithmModule.jl.mem 
> MeshSelectionModule.jl.mem
> FEMMHeatDiffusionModule.jl IntegRuleModule.jl MeshTriangleModule.jl
> FEMMHeatDiffusionModule.jl.mem IntegRuleModule.jl.mem 
> MeshTriangleModule.jl.mem
> FENodeSetModule.jl JFFoundationModule.jl NodalFieldModule.jl
> FENodeSetModule.jl.mem MaterialHeatDiffusionModule.jl 
> NodalFieldModule.jl.mem
> FESetModule.jl MaterialHeatDiffusionModule.jl.mem 
> PropertyHeatDiffusionModule.jl
> FESetModule.jl.mem MeshExportModule.jl PropertyHeatDiffusionModule.jl.mem
> Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls
> JFinEALE.jl Poisson_FE_example_model.jl annulus_Q4_example.jl
> Poisson_FE_Q4_example.jl Poisson_FE_example_model.jl.mem src
> Poisson_FE_example.jl README.md tests
> Robs-MacBook-Pro:jfineale_for_trying_out rob$ 
>
>
>
>  
> On Jan 6, 2015, at 3:01 PM, Petr Krysl > 
> wrote:
>
> Rob,
>
> Thanks. I did find some .mem files (see above). Not for my own source 
> files though.
>
> Petr
>
> PS: You have a "fineale" book? Interesting... I thought no one else had 
> claimed that name for a software project before...
>
> On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
>>
>> Petr,
>>
>> Not sure if this helps you, but below sequence creates the .mem file.
>>
>> ProjDir is set in Ex07.jl and is the directory that contains the .mem file
>>
>> Regards,
>> Rob J. Goedman
>> goe...@mac.com
>>
>>
>> Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
>>
>>                _
>>    _       _ _(_)_     |  A fresh approach to technical computing
>>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>>    _ _

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Oh, in that case I am tickled pink.

Please do let me know if you find any typos or mistakes.

Best regards,

Petr

On Tuesday, January 6, 2015 3:36:34 PM UTC-8, Rob J Goedman wrote:
>
> Hi Petr,
>
> It’s your book, I used this name for the time being while working my way 
> through the first 6 or 7 chapters using Julia (and Mathematica 
> occasionally, don’t have Matlab).
>
> If you would prefer that, I can easily change the name, I have no 
> intention to ever register the package.
>
> Just trying to figure out a good way to replace my current (Fortran) FEM/R 
> program with a Julia equivalent.
>
> Regards,
> Rob J. Goedman
> goe...@mac.com 
>
>
>
>
>  
> On Jan 6, 2015, at 3:01 PM, Petr Krysl > 
> wrote:
>
> Rob,
>
> Thanks. I did find some .mem files (see above). Not for my own source 
> files though.
>
> Petr
>
> PS: You have a "fineale" book? Interesting... I thought no one else had 
> claimed that name for a software project before...
>
> On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
>>
>> Petr,
>>
>> Not sure if this helps you, but below sequence creates the .mem file.
>>
>> ProjDir is set in Ex07.jl and is the directory that contains the .mem file
>>
>> Regards,
>> Rob J. Goedman
>> goe...@mac.com
>>
>>
>> Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
>>
>>                _
>>    _       _ _(_)_     |  A fresh approach to technical computing
>>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>>    _ _   _| |_  __ _   |  Type "help()" for help.
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>> |__/                   |  x86_64-apple-darwin13.4.0
>>
>> *julia> *
>> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>>
>> *julia> **cd(ProjDir)*
>>
>> *julia> **clear_malloc_data()*
>>
>> *julia> *
>> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>>
>>  *shell> **ls*
>> Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
>> Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl Section2.4.nb
>>
>> On Jan 6, 2015, at 2:15 PM, Petr Krysl  wrote:
>>
>> I did this as suggested. The code  executed as shown below, preceded by 
>> the command line.
>> The process completes,  but there are no .mem files anywhere. Should I 
>> ask for them specifically?
>>
>> # "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
>> --track-allocation=all memory_debugging.jl
>> cd( "C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
>> include("examples/acoustics/sphere_scatterer_example.jl")
>> Profile.clear_malloc_data()
>> include("examples/acoustics/sphere_scatterer_example.jl")
>> quit()
>>
>>
>>
>> On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
>>>
>>> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
>>> > Hi guys, 
>>> > 
>>> > How does one figure out where allocation  of memory occurs?   When I 
>>> > use the @time  macro it tells me there's a lot of memory allocation 
>>> > and deallocation going on.  Just looking at the code I'm at a loss: I 
>>> > can't see the reasons for it there. 
>>> > 
>>> > So, what are the tips and tricks for the curious?  How do I debug the 
>>> > memory allocation issue?  I looked at the lint, the type check, and 
>>> > the code_typed().  Perhaps I don't know where to look, but  these 
>>> > didn't seem to be of much help. 
>>> See this: 
>>>
>>> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
>>>  
>>>
>>> (Would probably be good to backport to the 0.3 manual...) 
>>>
>>>
>>> Regards 
>>>
>>>
>>
>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Rob,

Thanks. I did find some .mem files (see above). Not for my own source files 
though.

Petr

PS: You have a "fineale" book? Interesting... I thought no one else had 
claimed that name for a software project before...

On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
>
> Petr,
>
> Not sure if this helps you, but below sequence creates the .mem file.
>
> ProjDir is set in Ex07.jl and is the directory that contains the .mem file
>
> Regards,
> Rob J. Goedman
> goe...@mac.com 
>
>
> Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
>
>                _
>    _       _ _(_)_     |  A fresh approach to technical computing
>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>    _ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/                   |  x86_64-apple-darwin13.4.0
>
> *julia> *
> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>
> *julia> **cd(ProjDir)*
>
> *julia> **clear_malloc_data()*
>
> *julia> *
> *include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")*
>
>  *shell> **ls*
> Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
> Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl Section2.4.nb
>
> On Jan 6, 2015, at 2:15 PM, Petr Krysl > 
> wrote:
>
> I did this as suggested. The code  executed as shown below, preceded by 
> the command line.
> The process completes,  but there are no .mem files anywhere. Should I ask 
> for them specifically?
>
> # "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
> --track-allocation=all memory_debugging.jl
> cd( "C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
> include("examples/acoustics/sphere_scatterer_example.jl")
> Profile.clear_malloc_data()
> include("examples/acoustics/sphere_scatterer_example.jl")
> quit()
>
>
>
> On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
>>
>> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
>> > Hi guys, 
>> > 
>> > How does one figure out where allocation  of memory occurs?   When I 
>> > use the @time  macro it tells me there's a lot of memory allocation 
>> > and deallocation going on.  Just looking at the code I'm at a loss: I 
>> > can't see the reasons for it there. 
>> > 
>> > So, what are the tips and tricks for the curious?  How do I debug the 
>> > memory allocation issue?  I looked at the lint, the type check, and 
>> > the code_typed().  Perhaps I don't know where to look, but  these 
>> > didn't seem to be of much help. 
>> See this: 
>>
>> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
>>  
>>
>> (Would probably be good to backport to the 0.3 manual...) 
>>
>>
>> Regards 
>>
>>
>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Actually, correction: for 0.3.4 the _system_ *.mem files are in the julia 
folders. For _my_ source files the .mem files cannot be located.

P

On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote:
>
> I did this as suggested. The code  executed as shown below, preceded by 
> the command line.
> The process completes,  but there are no .mem files anywhere. Should I ask 
> for them specifically?
>
> # "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
> --track-allocation=all memory_debugging.jl
> cd( "C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
> include("examples/acoustics/sphere_scatterer_example.jl")
> Profile.clear_malloc_data()
> include("examples/acoustics/sphere_scatterer_example.jl")
> quit()
>
>
>
> On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
>>
>> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
>> > Hi guys, 
>> > 
>> > How does one figure out where allocation  of memory occurs?   When I 
>> > use the @time  macro it tells me there's a lot of memory allocation 
>> > and deallocation going on.  Just looking at the code I'm at a loss: I 
>> > can't see the reasons for it there. 
>> > 
>> > So, what are the tips and tricks for the curious?  How do I debug the 
>> > memory allocation issue?  I looked at the lint, the type check, and 
>> > the code_typed().  Perhaps I don't know where to look, but  these 
>> > didn't seem to be of much help. 
>> See this: 
>>
>> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
>>  
>>
>> (Would probably be good to backport to the 0.3 manual...) 
>>
>>
>> Regards 
>>
>>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
I did this as suggested. The code  executed as shown below, preceded by the 
command line.
The process completes,  but there are no .mem files anywhere. Should I ask 
for them specifically?

# "C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe" 
--track-allocation=all memory_debugging.jl
cd( "C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
include("examples/acoustics/sphere_scatterer_example.jl")
Profile.clear_malloc_data()
include("examples/acoustics/sphere_scatterer_example.jl")
quit()



On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
>
> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
> > Hi guys, 
> > 
> > How does one figure out where allocation  of memory occurs?   When I 
> > use the @time  macro it tells me there's a lot of memory allocation 
> > and deallocation going on.  Just looking at the code I'm at a loss: I 
> > can't see the reasons for it there. 
> > 
> > So, what are the tips and tricks for the curious?  How do I debug the 
> > memory allocation issue?  I looked at the lint, the type check, and 
> > the code_typed().  Perhaps I don't know where to look, but  these 
> > didn't seem to be of much help. 
> See this: 
>
> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
>  
>
> (Would probably be good to backport to the 0.3 manual...) 
>
>
> Regards 
>
>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Thanks very much. This is useful.
P

On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
>
> Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
> > Hi guys, 
> > 
> > How does one figure out where allocation  of memory occurs?   When I 
> > use the @time  macro it tells me there's a lot of memory allocation 
> > and deallocation going on.  Just looking at the code I'm at a loss: I 
> > can't see the reasons for it there. 
> > 
> > So, what are the tips and tricks for the curious?  How do I debug the 
> > memory allocation issue?  I looked at the lint, the type check, and 
> > the code_typed().  Perhaps I don't know where to look, but  these 
> > didn't seem to be of much help. 
> See this: 
>
> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
>  
>
> (Would probably be good to backport to the 0.3 manual...) 
>
>
> Regards 
>
>

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Splendid! Thanks a lot.

P

On Tuesday, January 6, 2015 8:57:59 AM UTC-8, Tim Holy wrote:
>
> I also recently added some docs on how to analyze the results; that PR is 
> not 
> yet merged, but you can look at 
> https://github.com/IainNZ/Coverage.jl/pull/36/files 
>
> --Tim 
>
> On Tuesday, January 06, 2015 10:49:54 AM Milan Bouchet-Valat wrote: 
> > Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
> > > Hi guys, 
> > > 
> > > How does one figure out where allocation  of memory occurs?   When I 
> > > use the @time  macro it tells me there's a lot of memory allocation 
> > > and deallocation going on.  Just looking at the code I'm at a loss: I 
> > > can't see the reasons for it there. 
> > > 
> > > So, what are the tips and tricks for the curious?  How do I debug the 
> > > memory allocation issue?  I looked at the lint, the type check, and 
> > > the code_typed().  Perhaps I don't know where to look, but  these 
> > > didn't seem to be of much help. 
> > 
> > See this: 
> > 
> http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analys 
> > is 
> > 
> > (Would probably be good to backport to the 0.3 manual...) 
> > 
> > 
> > Regards 
>
>

[julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-05 Thread Petr Krysl
Hi guys,

How does one figure out where allocation  of memory occurs?   When I use 
the @time  macro it tells me there's a lot of memory allocation and 
deallocation going on.  Just looking at the code I'm at a loss: I can't see 
the reasons for it there.

So, what are the tips and tricks for the curious?  How do I debug the 
memory allocation issue?  I looked at the lint, the type check, and the 
code_typed().  Perhaps I don't know where to look, but  these didn't seem 
to be of much help.

Thanks a bunch,

Petr
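
A condensed sketch of the --track-allocation workflow spelled out in the replies above; runme.jl is a hypothetical script containing the code of interest:

# start Julia as:  julia --track-allocation=user
include("runme.jl")              # run once so everything is compiled
Profile.clear_malloc_data()      # reset the per-line counters
include("runme.jl")              # run again; only this run is recorded
quit()                           # the *.jl.mem files are written on exit, next to the source files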


[julia-users] Re: Puzzled by conflicting imports

2014-12-26 Thread Petr Krysl
Having read the issues (https://github.com/JuliaLang/julia/issues/4345), I 
think I have a better understanding of the complexity involved in importing 
functions of the same name. While I think some legitimate uses are 
eliminated if such imports are prohibited, the potential tripping points 
are probably too many and a clean solution may not have been possible.

Thinking about the two-methods problem for a while in fact led to  the 
recognition of a better solution  (re-factoring,  and elimination  of the 
duplicated method names).
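
For the record, a minimal sketch (hypothetical module and type names) of the other standard resolution: both modules extend one generic function owned by a common module, so the two distribloads methods belong to the same function and importing both modules no longer clashes.

module FEMMBase
    distribloads(self, args...) = error("no distribloads for $(typeof(self))")
end

module HeatModule
    import FEMMBase.distribloads    # extend, do not shadow (on 0.3 this may need Main.FEMMBase, depending on where the modules live)
    type FEMMHeatDiffusion end
    distribloads(self::FEMMHeatDiffusion, args...) = "heat loads"
end

module AcousticsModule
    import FEMMBase.distribloads
    type FEMMAcoustics end
    distribloads(self::FEMMAcoustics, args...) = "acoustic loads"
end

FEMMBase.distribloads(HeatModule.FEMMHeatDiffusion())    # "heat loads"
FEMMBase.distribloads(AcousticsModule.FEMMAcoustics())   # "acoustic loads"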

Thanks for your help,

Petr

On Friday, December 26, 2014 5:58:35 PM UTC-8, ele...@gmail.com wrote:
>
> IIUC import collisions are on name only, not by signature.
>
> From https://github.com/JuliaLang/julia/issues/4345 I understand that 
> this is because the two `distribloads` methods are methods of different 
> functions, one function defined in FEMMHeatDiffusionModule and one 
> in FEMMAcousticsModule.  Its the functions that clash. The issues suggests 
> that combining the functions is hard/risky/brittle/error prone and in 
> general a bad idea (tm).
>
> Cheers
> Lex
>
> On Saturday, December 27, 2014 5:27:31 AM UTC+10, Petr Krysl wrote:
>>
>> Here are two methods in two different modules:
>>
>> julia> methods(FEMMHeatDiffusionModule.distribloads)
>> # 1 method for generic function "distribloads":
>>
>> distribloads(self::FEMMHeatDiffusion,assembler,geom::NodalField{Float64},temp::N
>> odalField{Float64},fi::ForceIntensity{Float64},m::Int64) at 
>> C:\Users\pkrysl\Docu
>> ments\GitHub\jfineale\./src/FEMMHeatDiffusionModule.jl:201
>>
>> julia> methods(FEMMAcousticsModule.distribloads)
>> # 1 method for generic function "distribloads":
>>
>> distribloads{T<:Number}(self::FEMMAcoustics,assembler,geom::NodalField{T<:Number
>> },P::NodalField{T<:Number},fi::ForceIntensity{T<:Number},m::Int64) at 
>> C:\Users\p
>> krysl\Documents\GitHub\jfineale\./src/FEMMAcousticsModule.jl:180
>>
>> I cannot do "importall" on both modules at the same time: Julia complains 
>> about conflicting imports.
>> I have a trouble making sense of that: The functions clearly have 
>> different signatures, so shouldn't the compiler be able to make sense of 
>> them?
>>
>> Any ideas?  Thanks!
>>
>> Petr
>>
>>
>>

[julia-users] Puzzled by conflicting imports

2014-12-26 Thread Petr Krysl
Here are two methods in two different modules:

julia> methods(FEMMHeatDiffusionModule.distribloads)
# 1 method for generic function "distribloads":
distribloads(self::FEMMHeatDiffusion,assembler,geom::NodalField{Float64},temp::N
odalField{Float64},fi::ForceIntensity{Float64},m::Int64) at 
C:\Users\pkrysl\Docu
ments\GitHub\jfineale\./src/FEMMHeatDiffusionModule.jl:201

julia> methods(FEMMAcousticsModule.distribloads)
# 1 method for generic function "distribloads":
distribloads{T<:Number}(self::FEMMAcoustics,assembler,geom::NodalField{T<:Number
},P::NodalField{T<:Number},fi::ForceIntensity{T<:Number},m::Int64) at 
C:\Users\p
krysl\Documents\GitHub\jfineale\./src/FEMMAcousticsModule.jl:180

I cannot do "importall" on both modules at the same time: Julia complains 
about conflicting imports.
I have trouble making sense of that: the functions clearly have different 
signatures, so shouldn't the compiler be able to tell them apart?

Any ideas?  Thanks!

Petr




[julia-users] Re: Comparison of FE codes; Julia versus commercial FE solver

2014-12-17 Thread Petr Krysl
Great. Normally I have no use for Gauss rules beyond order 4 (at most cubic 
finite elements in the library). But it is good to have access to arbitrary 
order.
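
A minimal sketch of using the rule suggested in the reply quoted below: a 4-point Gauss rule integrates polynomials up to degree 7 exactly on the reference interval [-1, 1].

order = 4
param_coords, weights = Base.QuadGK.gauss(Float64, order)
f(xi) = xi^6                                        # integral over [-1,1] is 2/7
println(dot(weights, map(f, param_coords)) - 2/7)   # ~ machine precision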

Thanks!

P

On Wednesday, December 17, 2014 1:35:34 PM UTC-8, Steven G. Johnson wrote:
>
> In particular, it looks like you just need:
>
> param_coords, weights = Base.QuadGK.gauss(Float64, order)
>
> in your GaussRule function.
>


[julia-users] Re: Comparison of FE codes; Julia versus commercial FE solver

2014-12-17 Thread Petr Krysl
Excellent! Missed that one...
P

On Wednesday, December 17, 2014 1:14:37 PM UTC-8, Alex Ames wrote:
>
> Hi Petr,
>
> In case you weren't aware, there's a Julia package for computation of 
> arbitrary-order Gauss quadrature rules: 
> https://github.com/billmclean/GaussQuadrature.jl
>
> -Alex
>
> On Tuesday, December 16, 2014 11:31:01 PM UTC-6, Petr Krysl wrote:
>>
>> Hello everybody,
>>
>> In case you're interested, here is the implementation so far.  Switch to 
>> the top folder, and "include" one of the examples.
>> If you have paraview, you could check the graphics output (enable 
>> graphics export in the source).
>>
>> As always, I'm keen on getting feedback, so don't hesitate please.
>>
>> https://github.com/PetrKryslUCSD/jfineale_for_trying_out
>>
>> Petr
>>
>>

[julia-users] Re: Comparison of FE codes; Julia versus commercial FE solver

2014-12-16 Thread Petr Krysl
Hello everybody,

In case you're interested, here is the implementation so far.  Switch to 
the top folder, and "include" one of the examples.
If you have paraview, you could check the graphics output (enable graphics 
export in the source).

As always, I'm keen on getting feedback, so don't hesitate please.

https://github.com/PetrKryslUCSD/jfineale_for_trying_out

Petr



[julia-users] Re: Comparison of FE codes; Julia versus commercial FE solver

2014-12-16 Thread Petr Krysl
If there proves to be an interest in that, I will certainly make this 
initial implementation available.

Petr

On Tuesday, December 16, 2014 9:09:59 AM UTC-8, Valentin Churavy wrote:
>
> Petr,
>
> Congratulations for achieving this. Is J FinEALE available somewhere? 
> Maybe as as package?
>
> Valentin
>
> On Tuesday, 16 December 2014 17:52:29 UTC+1, Petr Krysl wrote:
>>
>> I have made some progress in my effort to gain insight into  Julia 
>> performance in finite element solvers.
>>
>> I have compared my solver, J FinEALE, (compute and assemble global FE  
>> matrices and vectors, then solve the sparse system with Julia's sparse 
>> matrix  facilities) with the commercial FEA software Comsol 4.4.
>>
>> If anyone is interested, I could document this in a more comprehensive  
>> fashion, possibly with the paper.  But here goes a real quick rundown:
>>
>> Heat conduction, nonzero essential boundary conditions, nonzero internal 
>> heat generation rate.   
>> Mesh of 2 million linear triangles.  1,000,000 degrees of freedom.
>> The mesh generation and kickoff of the simulation (assignment of boundary 
>> conditions and such) is done by the solver in a separate step, so that is 
>> not counted towards the total time spent  computing in J FinEALE  either.
>>
>> J FinEALE: 20.6 seconds
>> Comsol 4.4 w/ PARDISO: 16 seconds
>> Comsol 4.4 w/ MUMPS: 22 seconds
>> Comsol 4.4 w/ SPOOLES: 37 seconds
>>
>> Just for contrast: 
>> Matlab FinEALE: 810 seconds.
>>
>> What I found particularly interesting was what it took  to reduce the 
>> time  for this solution from the initial  rewrite from Matlab (around 86 
>> seconds) to the present performance: identify  features that degrade 
>> performance (declare types!),  code critical bottlenecks with ad hoc 
>> functions.   The critical feature of Julia that made this possible  was the 
>> speed with which it can execute  loops  written entirely in Julia.   There 
>> is no need  to call outside help (ala mex-files, which by the way would be 
>> of no help in this case).
>>
>> Briefly, Julia DELIVERS! Bravo to the development team.
>>
>> Petr
>>
>>
>>

[julia-users] Comparison of FE codes; Julia versus commercial FE solver

2014-12-16 Thread Petr Krysl
I have made some progress in my effort to gain insight into  Julia 
performance in finite element solvers.

I have compared my solver, J FinEALE (which computes and assembles the global 
FE matrices and vectors, then solves the sparse system with Julia's sparse 
matrix facilities), with the commercial FEA software Comsol 4.4.

If anyone is interested, I could document this in a more comprehensive  
fashion, possibly with the paper.  But here goes a real quick rundown:

Heat conduction, nonzero essential boundary conditions, nonzero internal 
heat generation rate.   
Mesh of 2 million linear triangles.  1,000,000 degrees of freedom.
The mesh generation and kickoff of the simulation (assignment of boundary 
conditions and such) is done by the solver in a separate step, so that is 
not counted towards the total time spent  computing in J FinEALE  either.

J FinEALE: 20.6 seconds
Comsol 4.4 w/ PARDISO: 16 seconds
Comsol 4.4 w/ MUMPS: 22 seconds
Comsol 4.4 w/ SPOOLES: 37 seconds

Just for contrast: 
Matlab FinEALE: 810 seconds.

What I found particularly interesting was what it took  to reduce the time  
for this solution from the initial  rewrite from Matlab (around 86 seconds) 
to the present performance: identify  features that degrade performance 
(declare types!),  code critical bottlenecks with ad hoc functions.   The 
critical feature of Julia that made this possible  was the speed with which 
it can execute  loops  written entirely in Julia.   There is no need  to 
call outside help (ala mex-files, which by the way would be of no help in 
this case).
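
A schematic sketch (not the actual FinEALE code) of what "code critical bottlenecks with ad hoc functions" means here: the element conductivity matrix is accumulated with explicit loops instead of gradN*(kappa_bar*(Jac*w))*gradN', which would allocate temporaries at every quadrature point.

function addconductivity!(Ke, gradN, kappa, JacW)
    Kedim, sdim = size(gradN)
    for nx = 1:Kedim, kx = 1:sdim, px = 1:sdim, mx = 1:Kedim
        Ke[mx,nx] += gradN[mx,px] * (JacW*kappa[px,kx]) * gradN[nx,kx]
    end
    return Ke
end

Ke    = zeros(8, 8)      # hypothetical 8-node element in 3D
gradN = rand(8, 3)
kappa = eye(3)
addconductivity!(Ke, gradN, kappa, 0.125)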

Briefly, Julia DELIVERS! Bravo to the development team.

Petr




[julia-users] Difference between 0.3 and 0.4 – memory allocation running amok

2014-12-14 Thread Petr Krysl
Screen  snapshot from 0.3:

  | | |_| | | | (_| |  |  Version 0.3.2+2 (2014-10-22 01:21 UTC)
 _/ |\__'_|_|_|\__'_|  |  release-0.3/6babc84* (fork: 180 commits, 127 days)
|__/   |  x86_64-w64-mingw32

julia> cd( "C:/Users/pkrysl/Documents/Research/Software folder/FEA software 
folder/jfineale")

julia>  include("run.jl")
elapsed time: 25.926594999 seconds (4718450148 bytes allocated, 4.67% gc 
time)
elapsed time: 1.38183297 seconds (447862032 bytes allocated, 17.41% gc time)
elapsed time: 4.41040355 seconds (994818988 bytes allocated, 8.41% gc time)
elapsed time: 6.205788229 seconds (1033497984 bytes allocated, 3.36% gc 
time)
elapsed time: 1.640335364 seconds (9178772 bytes allocated)
Total time elapsed = 47.1576484985 [s]
Error =[9.898712337585991e-6]
true

Screen snapshot from 0.4:

  | | |_| | | | (_| |  |  Version 0.4.0-dev+2070 (2014-12-11 10:19 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit f2b5a11 (3 days old master)
|__/   |  x86_64-w64-mingw32

julia> cd( "C:/Users/pkrysl/Documents/Research/Software folder/FEA software 
folder/jfineale")

julia>  include("run.jl")
elapsed time: 49.525998429 seconds (12776199732 bytes allocated, 9.74% gc 
time)
elapsed time: 1.437797557 seconds (480349188 bytes allocated, 18.38% gc 
time)
elapsed time: 3.891789679 seconds (994714084 bytes allocated, 10.87% gc 
time)
elapsed time: 5.735681081 seconds (1020271524 bytes allocated, 3.69% gc 
time)
elapsed time: 0.226641073 seconds (9742992 bytes allocated)
Total time elapsed = 70.6905722046 [s]
Error =[9.898714081968407e-6]
true

Version 0.4 allocates three times as much memory in the first operation. 
Most of the work in that operation is done in this loop

   
   for j=1:npts # Loop over quadrature points
       #FESetModule.Jacobianmatrix!(self.femmbase.fes, J, x, gradNparams[j]); # calculate the Jacobian matrix
       At_mul_B!(J, x, gradNparams[j]); # calculate the Jacobian matrix
       Jac = FESetModule.Jacobianvolume(self.femmbase.fes, conn, Ns[j], J, x);
       FEMMBaseModule.getRm!(self.femmbase, Rm, Ns[j]'*x, J, labels[i]); # Material orientation matrix
       #gradN = gradNparams[j]/(Rm'*J); # gradient WRT material coordinates
       At_mul_B!(RmTJ, Rm, J); # local Jacobian matrix
       gradN = gradNparams[j]/RmTJ; # gradient WRT material coordinates
       # Ke = Ke + gradN*(kappa_bar*(Jac*w[j]))*gradN' ; # this product is also implemented in steps below
       # This is slightly faster than the vectorized code above:
       for nx=1:Kedim
           for kx=1:sdim
               for px=1:sdim
                   for mx=1:Kedim
                       Ke[mx,nx] = Ke[mx,nx] + gradN[mx,px]*((Jac*w[j])*kappa_bar[px,kx])*gradN[nx,kx]
                   end
               end
           end
       end
   end # Loop over quadrature points
   ...

Petr


Re: [julia-users] What is the use of J[:,:] = K*M please?

2014-12-14 Thread Petr Krysl
Ahhh. Now, that made sense (I did not know Julia actually had a function 
with capitals and underscores in its name ;). 

Thanks.  Much obliged.

Petr
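
A minimal sketch contrasting the three forms discussed in the reply quoted below (0.3-era names):

K = rand(3,3); M = rand(3,3); J = zeros(3,3)

J2 = K*M           # binds a brand-new array to the name J2
J[:,:] = K*M       # fills the existing J, but K*M still allocates a temporary
A_mul_B!(J, K, M)  # writes the product straight into J, no temporary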

On Sunday, December 14, 2014 6:01:04 PM UTC-8, Andreas Noack wrote:
>
> The function K*M allocates a new array for the result, but if you write 
> J[:,:]=K*M then J is updated with the values from the new array. This 
> matter if e.g. J is input to a function
>
> function f1(J)
> J = K*M
> end
>
> function f2(J)
> J[:,:] = K*M
> end
>
> f1 will make a local variable J storing the result which will keep the 
> input array J unaffected whereas f2 will update the input J. However, they 
> will both allocate a new array.
>
> If you want to avoid allocation, you'll have to use either A_mul_B!(C,A,B) 
> where C stores the result or BLAS.gemm!.
>
> 2014-12-14 20:12 GMT-05:00 Petr Krysl >:
>>
>> ??? 
>>
>> Could I have that again please? I don't follow.
>>
>>  In-place in my  usage of the word here means that the result of the 
>> multiplication is immediately stored  in the matrix J,, without a temporary 
>> being created  and then assigned  to J.
>>
>> Thanks,
>>
>> Petr
>>
>> On Sunday, December 14, 2014 5:00:40 PM UTC-8, John Myles White wrote:
>>>
>>> Assigning in-place and creating temporaries are actually totally 
>>> orthogonal. 
>>>
>>> One is concerned with mutating J. This is contrasted with writing, 
>>>
>>> J = K * M 
>>>
>>> The other is concerned with the way that K * M gets computed before any 
>>> assignment operation or mutation can occur. This is contrasted with 
>>> something like A_mul_B. 
>>>
>>>  -- John 
>>>
>>> Sent from my iPhone 
>>>
>>> > On Dec 14, 2014, at 7:48 PM, Petr Krysl  wrote: 
>>> > 
>>> > Hello everybody, 
>>> > 
>>> > I hope someone knows this:  What is the use of writing 
>>> > 
>>> > J[:,:] = K*M 
>>> > 
>>> > where all of these quantities are matrices? I thought I'd seen 
>>> somewhere that it was assigning to the matrix "in-place"  instead of 
>>> creating a temporary.   Is that so? 
>>> > I couldn't find it in the documentation   for 0.3. 
>>> > 
>>> > Thanks, 
>>> > 
>>> > Petr 
>>>
>>

Re: [julia-users] What is the use of J[:,:] = K*M please?

2014-12-14 Thread Petr Krysl
??? 

Could I have that again please? I don't follow.

 In-place in my  usage of the word here means that the result of the 
multiplication is immediately stored  in the matrix J,, without a temporary 
being created  and then assigned  to J.

Thanks,

Petr

On Sunday, December 14, 2014 5:00:40 PM UTC-8, John Myles White wrote:
>
> Assigning in-place and creating temporaries are actually totally 
> orthogonal. 
>
> One is concerned with mutating J. This is contrasted with writing, 
>
> J = K * M 
>
> The other is concerned with the way that K * M gets computed before any 
> assignment operation or mutation can occur. This is contrasted with 
> something like A_mul_B. 
>
>  -- John 
>
> Sent from my iPhone 
>
> > On Dec 14, 2014, at 7:48 PM, Petr Krysl  > wrote: 
> > 
> > Hello everybody, 
> > 
> > I hope someone knows this:  What is the use of writing 
> > 
> > J[:,:] = K*M 
> > 
> > where all of these quantities are matrices? I thought I'd seen somewhere 
> that it was assigning to the matrix "in-place"  instead of creating a 
> temporary.   Is that so? 
> > I couldn't find it in the documentation   for 0.3. 
> > 
> > Thanks, 
> > 
> > Petr 
>


[julia-users] What is the use of J[:,:] = K*M please?

2014-12-14 Thread Petr Krysl
Hello everybody,

I hope someone knows this:  What is the use of writing

J[:,:] = K*M

where all of these quantities are matrices? I thought I'd seen somewhere 
that it was assigning to the matrix "in-place"  instead of creating a 
temporary.   Is that so?
I couldn't find it in the documentation   for 0.3.

Thanks,

Petr


[julia-users] @profile -> don't wait for me computation?

2014-12-14 Thread Petr Krysl
Version 0.3.2+2 (2014-10-22 01:21 UTC)
release-0.3/6babc84* (fork: 180 commits, 126 days)
x86_64-w64-mingw32

w/o @profile 0.68 seconds
w @profile not finished 15 minutes later

Anyone else had this trouble?

Petr



Re: [julia-users] Re: Are some matrices marked as special?

2014-12-13 Thread Petr Krysl
Dear Andreas,

J is diagonal for half of the elements. So if Rm is also diagonal, the 
LU factorization will be avoided in those cases.
(Provided  what you say for \ also applies to the /.)

Bravo.   And many thanks.  It was driving me nuts.

Petr
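
A minimal sketch (synthetic 2x2 matrices) of the effect in question; whether / takes the same shortcut as \ on a given version is exactly what is asked above, so the timing comparison is the test:

gradNparams = rand(8, 2)
Jd = [2.0 0.0; 0.0 3.0]     # diagonal, like Rm'*J when Rm is diagonal
Jf = [2.0 0.5; 0.5 3.0]     # general
@time for k = 1:10^5; gradNparams / Jd; end
@time for k = 1:10^5; gradNparams / Jf; end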


On Saturday, December 13, 2014 4:13:50 PM UTC-8, Andreas Noack wrote:
>
> I don't know how J looks, but could it be that it is diagonal? If so, I 
> have an idea for the reason for this. If you look here
>
> https://github.com/JuliaLang/julia/blob/master/base/linalg/dense.jl#L412
>
> you can see the \ checks if the matrix is diagonal. If it is, the solver 
> uses the \(Diagonal, Matrix) method which avoid the LU factorization that 
> \(Matrix, Matrix) uses.
>
> Do you think that is the reason?
>
> 2014-12-13 17:56 GMT-05:00 Petr Krysl >:
>>
>> You're absolutely right, sorry!  I got distracted and I forgot to post 
>> instructions.  I've also tried to simplify things, and I think I've gotten 
>> it down to just a few lines of code. Unfortunately, the rest of the code 
>> still needs to be there for declarations and such (https://github.com/
>> PetrKryslUCSD/Test_jfineale).
>>
>> The rotation matrix gets used in one place only. Refer to lines 68ff in 
>> conductivity() from FEMMHeatDiffusionModule.jl.
>>
>> # Here is that rotation matrix Rm
>> FEMMBaseModule.getRm!(self.femmbase,Rm,Ns[j]'*x,J,labels[i]); 
>> # Material orientation matrix
>> # And here is the only place it gets used
>> gradN = gradNparams[j]/(Rm'*J); # gradient WRT material 
>> coordinates
>> #
>>
>> conductivity() is the last thing called in the driver. Picking Rm on line 
>> 41  of the driver (Poisson_FE_example.jl) leads to different  computation 
>> CPU times.
>>
>> Petr
>>
>> On Saturday, December 13, 2014 1:37:36 PM UTC-8, Steven G. Johnson wrote:
>>>
>>> Unfortunately, it's not really reasonable for people on a mailing list 
>>> to dissect such a large body of code; you need to boil it down to a smaller 
>>> example that illustrates the behavior you mentioned.
>>>
>>

[julia-users] Re: Are some matrices marked as special?

2014-12-13 Thread Petr Krysl
You're absolutely right, sorry!  I got distracted and I forgot to post 
instructions.  I've also tried to simplify things, and I think I've gotten 
it down to just a few lines of code. Unfortunately, the rest of the code 
still needs to be there for declarations and such (
https://github.com/PetrKryslUCSD/Test_jfineale).

The rotation matrix gets used in one place only. Refer to lines 68ff in 
conductivity() from FEMMHeatDiffusionModule.jl.

# Here is that rotation matrix Rm
FEMMBaseModule.getRm!(self.femmbase,Rm,Ns[j]'*x,J,labels[i]); # 
Material orientation matrix
# And here is the only place it gets used
gradN = gradNparams[j]/(Rm'*J); # gradient WRT material 
coordinates
#

conductivity() is the last thing called in the driver. Picking Rm on line 
41  of the driver (Poisson_FE_example.jl) leads to different  computation 
CPU times.

Petr

On Saturday, December 13, 2014 1:37:36 PM UTC-8, Steven G. Johnson wrote:
>
> Unfortunately, it's not really reasonable for people on a mailing list to 
> dissect such a large body of code; you need to boil it down to a smaller 
> example that illustrates the behavior you mentioned.
>


[julia-users] Re: Are some matrices marked as special?

2014-12-13 Thread Petr Krysl
https://github.com/PetrKryslUCSD/Test_jfineale

The runner is Poisson_FE_example.jl.
I am running the test with


Version 0.3.2+2 (2014-10-22 01:21 UTC)
release-0.3/6babc84* (fork: 180 commits, 125 days)
x86_64-w64-mingw32

Petr

On Saturday, December 13, 2014 9:48:42 AM UTC-8, Steven G. Johnson wrote:
>
> Without posting a gist or something of the code you are benchmarking, it 
> is impossible to say more, except that neither Julia nor BLAS take 
> advantage of 0's in a matrix for simple operations like matrix 
> multiplication and addition.
>


[julia-users] Re: Are some matrices marked as special?

2014-12-13 Thread Petr Krysl
Naturally.  I'm working on it.  It is really a more or less complete heat 
conduction FE program, consisting of several modules. I will post the 
whole thing on github.

Thanks for the info! It's become  slightly more mysterious then...

Petr

On Saturday, December 13, 2014 9:48:42 AM UTC-8, Steven G. Johnson wrote:
>
> Without posting a gist or something of the code you are benchmarking, it 
> is impossible to say more, except that neither Julia nor BLAS take 
> advantage of 0's in a matrix for simple operations like matrix 
> multiplication and addition.
>


[julia-users] Re: Are some matrices marked as special?

2014-12-12 Thread Petr Krysl
Steven,

I also thought of  floating-point exceptions.   However, the results are 
perfectly fine. Not identical,  only within numerical precision, but pretty 
good nonetheless.

The matrix passed in is the material orientation (rotation) matrix. Since the 
material conductivity matrix is an identity, the rotation matrix makes 
absolutely no difference to the results. In fact, its transpose multiplied by 
the matrix itself should reduce to the identity.

Petr


On Friday, December 12, 2014 6:23:45 PM UTC-8, Steven G. Johnson wrote:
>
> If some of the matrix values lead to floating point exceptions (NaN or 
> Inf) in your benchmark code, that would slow it down (because of how the 
> CPU works, not because of Julia). Or maybe you are doing some other 
> computation whose computation steps depend on the values. What is your 
> benchmark?



[julia-users] Re: Are some matrices marked as special?

2014-12-12 Thread Petr Krysl
Well, I can see no difference either.

But here is the tale of four matrices.  They are each fed in turn to the 
same operation as input: everything else is precisely the same. The timings 
of that operation are as follows:

Rm1=eye(2,2) # 25 seconds
Rm2=[-0.668742918086684  -0.743493718540316;  -0.743493718540316   
0.668742918086684]; # 46 seconds
Rm3=[1.0 0.0; 0.0 1.0] # 25 seconds
Rm4=[0.0 1.0; -1.0 0.0] # 39 seconds

So  are some of these matrices more matrixy  than others?

Petr


[julia-users] Are some matrices marked as special?

2014-12-12 Thread Petr Krysl
Are matrices initialized as all zeros, or identity matrices, perhaps marked 
as special behind the scenes, so that no actual processing is done when they 
are used, for instance in multiplication?

Thanks,
Petr


Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-12 Thread Petr Krysl
I have now been able to remove the need for all declarations except one: 
Rm still needs to be declared as returning an array of floats.

Just before the loop  I look at the type of the function that retrieves Rm  
and I get


{:($(Expr(:lambda, {:XYZ,:tangents,:fe_label},
   {{}, {{:XYZ,Array{Float64,2},0},{:tangents,Array{Float64,2},0},{:fe_label,Int64,0}}, {{:eRm,Array{Float64,2},19}}},
   :(begin  # C:\Users\pkrysl\Documents\Research\Software folder\FEA software folder\jfineale\FEMMBaseModule.jl, line 82:
         return eRm::Array{Float64,2}
     end::Array{Float64,2}}

This seems to be saying that the function returns a matrix of the correct 
type. Am I wrong?
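
A minimal sketch (synthetic code, not the FinEALE getter) of the two cures for a call whose return type the compiler cannot infer: assert the type at the call site, as done for Rm above, or move the hot loop behind a function barrier as Tim suggested.

getRm(label) = label == 0 ? eye(2) : [0.0 1.0; -1.0 0.0]
getters = Function[getRm]        # dynamically chosen getter: result type not inferable

function hotloop(getters, n)
    Rm = getters[1](0)::Array{Float64,2}   # (1) the assertion restores inference
    s = 0.0
    for i = 1:n
        s += Rm[1,1]
    end
    return s
end

# (2) function barrier: the untyped lookup happens once, and the inner
# function is compiled for the concrete argument type.
innerloop(Rm::Array{Float64,2}, n) = sum(Rm)*n
barrier(getters, n) = innerloop(getters[1](0), n)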

Thanks,

Petr

On Friday, December 12, 2014 9:17:41 AM UTC-8, Petr Krysl wrote:
>
> Stefan,
>
> If you mean  by the original code Ke1() and Ke2(), those were contrived  
> examples that unfortunately did not even represent the problems in  my 
> actual code (conductivity(), 
> https://gist.github.com/PetrKryslUCSD/ae4a0f218fe50abe370f). 
>  
> That code had no globals whatsoever.
>
>
> The solution (https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923) 
> was to declare anything and everything. Now I've had more time to 
> investigate,, and I've turned off the declarations selectively  to see 
> which ones were actually needed.  The result: I actually need  four 
> declarations. Some of it may have to do with my inexperience with the 
> language  – I may not have  made it possible for  the compiler to discover 
> the types.  Some of it was addressed by  John above (dynamically called 
> function, with stable type, but  not enough information for the compiler). 
> The code (note the comments): 
> https://gist.github.com/PetrKryslUCSD/4f57d4d62758ecc5bf20
>
> Petr
>
> PS: Concerning my attempt to formulate a moral for myself: I stand 
> corrected, I'm beginning to get the idea that Julia does not need 
> everything declared.
>
>
> On Friday, December 12, 2014 7:26:28 AM UTC-8, Stefan Karpinski wrote:
>>
>> Yikes, that's much harder to read than the original. Making the globals 
>> const is a simple change and the fact that this improves the performance so 
>> much shows that all those type annotations are not necessary. The only 
>> issue with the original code was non-constant globals. We can also change 
>> the globals to function arguments and the problem goes away:
>>
>> julia> @time Ke1(N);
>> elapsed time: 0.237183958 seconds (131200224 bytes allocated, 34.34% gc 
>> time)
>>
>> julia> @time Ke2(N);
>> elapsed time: 0.021313495 seconds (400 bytes allocated)
>>
>> The gist has both the const and function-arg versions:
>>
>> https://gist.github.com/StefanKarpinski/e1d7d8804e373cc1de07
>>
>>
>> On Fri, Dec 12, 2014 at 10:12 AM, Andreas Noack  
>> wrote:
>>
>>> Stefan, please consider the updated version
>>>
>>> https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923
>>>
>>> which has no global variables.
>>>
>>> 2014-12-12 10:07 GMT-05:00 Stefan Karpinski :
>>>
>>> From the original version of the code I see this timing after code gen:
>>>>
>>>> julia> @time Ke1(N);
>>>> elapsed time: 0.251365495 seconds (134400208 bytes allocated, 30.13% gc 
>>>> time)
>>>>
>>>> julia> @time Ke2(N);
>>>> elapsed time: 3.923532621 seconds (996800384 bytes allocated, 13.97% gc 
>>>> time)
>>>>
>>>> After making all the globals const, I see this:
>>>>
>>>> julia> @time Ke1(N);
>>>> elapsed time: 0.273683599 seconds (131200208 bytes allocated, 28.52% gc 
>>>> time)
>>>>
>>>> julia> @time Ke2(N);
>>>> elapsed time: 0.026985097 seconds (384 bytes allocated)
>>>>
>>>> Type annotations everywhere are neither necessary nor recommended. It 
>>>> is recommended not to use lots of non-constant globals.
>>>>
>>>>
>>>> On Fri, Dec 12, 2014 at 9:36 AM, Tim Holy  wrote:
>>>>
>>>>> On Thursday, December 11, 2014 08:10:38 PM Petr Krysl wrote:
>>>>> > The moral of this story is: If you can't or  won't  declare every 
>>>>> single
>>>>> > variable, don't do loops. They are likely to be a losing proposition.
>>>>>
>>>>> Just to follow up further, this is not at all the right moral to 
>>>>> absorb from
>>>>> this. A better set of morals is:
>>>>> - use code_typed or TypeCheck or Lint to diagnose problems
>>>>> - remember that julia optimizes functions as a unit. Therefore, in a 
>>>>> loop with
>>>>> a type issue, one "brute force" solution is to create a separate 
>>>>> function just
>>>>> for running that loop. The types will be known when that function gets
>>>>> compiled (even if you don't annotate anything with their types), and 
>>>>> so your
>>>>> type problem should evaporate.
>>>>>
>>>>> --Tim
>>>>>
>>>>>
>>>>
>>>
>>

Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-12 Thread Petr Krysl
Stefan,

If you mean  by the original code Ke1() and Ke2(), those were contrived  
examples that unfortunately did not even represent the problems in  my 
actual code (conductivity(), 
https://gist.github.com/PetrKryslUCSD/ae4a0f218fe50abe370f 
<https://www.google.com/url?q=https%3A%2F%2Fgist.github.com%2FPetrKryslUCSD%2Fae4a0f218fe50abe370f&sa=D&sntz=1&usg=AFQjCNHJQsa8ICKiXobesCkCSCKvhdW9dQ>).
 
That code had no globals whatsoever.


The solution (https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923) 
was to declare anything and everything. Now I've had more time to 
investigate, and I've turned off the declarations selectively to see 
which ones were actually needed.  The result: I actually need  four 
declarations. Some of it may have to do with my inexperience with the 
language  – I may not have  made it possible for  the compiler to discover 
the types.  Some of it was addressed by  John above (dynamically called 
function, with stable type, but  not enough information for the compiler). 
The code (note the comments): 
https://gist.github.com/PetrKryslUCSD/4f57d4d62758ecc5bf20

Petr

PS: Concerning my attempt to formulate a moral for myself: I stand 
corrected, I'm beginning to get the idea that Julia does not need 
everything declared.


On Friday, December 12, 2014 7:26:28 AM UTC-8, Stefan Karpinski wrote:
>
> Yikes, that's much harder to read than the original. Making the globals 
> const is a simple change and the fact that this improves the performance so 
> much shows that all those type annotations are not necessary. The only 
> issue with the original code was non-constant globals. We can also change 
> the globals to function arguments and the problem goes away:
>
> julia> @time Ke1(N);
> elapsed time: 0.237183958 seconds (131200224 bytes allocated, 34.34% gc 
> time)
>
> julia> @time Ke2(N);
> elapsed time: 0.021313495 seconds (400 bytes allocated)
>
> The gist has both the const and function-arg versions:
>
> https://gist.github.com/StefanKarpinski/e1d7d8804e373cc1de07
>
>
> On Fri, Dec 12, 2014 at 10:12 AM, Andreas Noack  > wrote:
>
>> Stefan, please consider the updated version
>>
>> https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923
>>
>> which has no global variables.
>>
>> 2014-12-12 10:07 GMT-05:00 Stefan Karpinski > >:
>>
>> From the original version of the code I see this timing after code gen:
>>>
>>> julia> @time Ke1(N);
>>> elapsed time: 0.251365495 seconds (134400208 bytes allocated, 30.13% gc 
>>> time)
>>>
>>> julia> @time Ke2(N);
>>> elapsed time: 3.923532621 seconds (996800384 bytes allocated, 13.97% gc 
>>> time)
>>>
>>> After making all the globals const, I see this:
>>>
>>> julia> @time Ke1(N);
>>> elapsed time: 0.273683599 seconds (131200208 bytes allocated, 28.52% gc 
>>> time)
>>>
>>> julia> @time Ke2(N);
>>> elapsed time: 0.026985097 seconds (384 bytes allocated)
>>>
>>> Type annotations everywhere are neither necessary nor recommended. It is 
>>> recommended not to use lots of non-constant globals.
>>>
>>>
>>> On Fri, Dec 12, 2014 at 9:36 AM, Tim Holy >> > wrote:
>>>
>>>> On Thursday, December 11, 2014 08:10:38 PM Petr Krysl wrote:
>>>> > The moral of this story is: If you can't or  won't  declare every 
>>>> single
>>>> > variable, don't do loops. They are likely to be a losing proposition.
>>>>
>>>> Just to follow up further, this is not at all the right moral to absorb 
>>>> from
>>>> this. A better set of morals is:
>>>> - use code_typed or TypeCheck or Lint to diagnose problems
>>>> - remember that julia optimizes functions as a unit. Therefore, in a 
>>>> loop with
>>>> a type issue, one "brute force" solution is to create a separate 
>>>> function just
>>>> for running that loop. The types will be known when that function gets
>>>> compiled (even if you don't annotate anything with their types), and so 
>>>> your
>>>> type problem should evaporate.
>>>>
>>>> --Tim
>>>>
>>>>
>>>
>>
>

Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Clever.   Thanks!

Petr


On Thursday, December 11, 2014 9:26:34 PM UTC-8, John Myles White wrote:
>
> Petr, 
>
> You should be able to do something like the following: 
>
> function foo(n::Integer) 
> if iseven(n) 
> return 1.0 
> else 
> return 1 
> end 
> end 
>
> function bar1() 
> x = foo(1) 
> return x 
> end 
>
> function bar2() 
> x = foo(1)::Int 
> return x 
> end 
>
> julia> code_typed(bar1, ()) 
> 1-element Array{Any,1}: 
>  :($(Expr(:lambda, Any[], 
> Any[Any[:x],Any[Any[:x,Union(Float64,Int64),18]],Any[]], :(begin  # none, 
> line 2: 
> x = foo(1)::Union(Float64,Int64) # line 3: 
> return x::Union(Float64,Int64) 
> end::Union(Float64,Int64) 
>
> julia> code_typed(bar2, ()) 
> 1-element Array{Any,1}: 
>  :($(Expr(:lambda, Any[], Any[Any[:x],Any[Any[:x,Int64,18]],Any[]], 
> :(begin  # none, line 2: 
> x = (top(typeassert))(foo(1)::Union(Float64,Int64),Int)::Int64 # 
> line 3: 
> return x::Int64 
> end::Int64 
>
> In the output of code_typed, note how the strategically placed type 
> declaration at the point at which a type-unstable function is called 
> resolves the type inference problem completely when you move downstream 
> from the point of ambiguity. 
>
>  -- John 
>
> On Dec 12, 2014, at 12:20 AM, Petr Krysl > 
> wrote: 
>
> > John, 
> > 
> > I hear you.   I agree with you  that type instability is not very 
> helpful,  and indicates  problems with program design. 
> > However, I believe that  provided the program cannot resolve  the types 
> properly  (as it couldn't in the original design of my program, because I 
> haven't provided  declarations  of  variables where they were getting used, 
>  only in their data structure  types that the compiler apparently couldn't 
> see), the optimization  for loop performance  cannot be successful. It 
> certainly wasn't successful in this case. 
> > 
> > How would you solve  the problem with  storing a function and at the 
> same time allowing the compiler to deduce what  values it returns?  In my 
> case I store a function that always returns a floating-point array.   
> However, it may return a constant value supplied as input to the 
> constructor,  or it may return the value provided by another function (that 
> the  user of the type supplied). 
> > 
> > So, the type  of the return value is  stable, but I haven't found a way 
> of informing the compiler that it is so. 
> > 
> > Petr 
> > 
> > 
> > 
> > 
> > On Thursday, December 11, 2014 8:20:20 PM UTC-8, John Myles White wrote: 
> > > The moral of this story is: If you can't or  won't  declare every 
> single variable, don't do loops. They are likely to be a losing 
> proposition. 
> > 
> > I don't think this lesson will serve most people well. It doesn't 
> reflect my experiences using Julia at all. 
> > 
> > My experience is that code that requires variable type declarations 
> usually suffers from a deeper problem that the variable declarations 
> suppress without solving: either (a) there's some insoluble source of 
> ambiguity in the program (as occurs when calling a function that's passed 
> around as a value and therefore not amenable to static analysis) or (b) 
> there's some subtle source of type instability, as happens sometimes when 
> mixing integers and floating point numbers. 
> > 
> >  -- John 
> > 
>
>

Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
John,

I hear you.  I agree that type instability is not very helpful, and that it 
indicates problems with program design.
However, I believe that as long as the compiler cannot resolve the types 
properly (as it could not in the original design of my program, because I had 
not provided declarations of the variables where they were used, only in the 
data-structure types, which the compiler apparently could not see), the 
optimization of the loops cannot succeed.  It certainly did not succeed in 
this case.

How would you solve the problem of storing a function while still allowing 
the compiler to deduce what it returns?  In my case I store a function that 
always returns a floating-point array.  However, it may return a constant 
matrix supplied as input to the constructor, or it may return the value 
computed by another function (supplied by the user of the type).

So, the type  of the return value is  stable, but I haven't found a way of 
informing the compiler that it is so.
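
To make the setup concrete, here is a sketch (made-up names, not the actual 
FinEALE code) of what I am describing:

type MaterialOrientation
    Rmfun::Function               # always returns Array{Float64,2}
end
# either a constant matrix wrapped in a closure...
MaterialOrientation(Rm::Array{Float64,2}) = MaterialOrientation(XYZ -> Rm)
# ...or a user-supplied function with the same contract.

function use_Rm(mo::MaterialOrientation, XYZ)
    Rm = mo.Rmfun(XYZ)   # all the compiler knows here is ::Function,
    return Rm*Rm'        # so Rm, and everything downstream, ends up loosely typed
end

mo = MaterialOrientation(eye(2,2))
use_Rm(mo, [0.0 0.0])    # works, but the call through Rmfun defeats inference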

Petr




On Thursday, December 11, 2014 8:20:20 PM UTC-8, John Myles White wrote:
>
> > The moral of this story is: If you can't or  won't  declare every single 
> variable, don't do loops. They are likely to be a losing proposition. 
>
> I don't think this lesson will serve most people well. It doesn't reflect 
> my experiences using Julia at all. 
>
> My experience is that code that requires variable type declarations 
> usually suffers from a deeper problem that the variable declarations 
> suppress without solving: either (a) there's some insoluble source of 
> ambiguity in the program (as occurs when calling a function that's passed 
> around as a value and therefore not amenable to static analysis) or (b) 
> there's some subtle source of type instability, as happens sometimes when 
> mixing integers and floating point numbers. 
>
>  -- John 
>
>

[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Everybody:

Thank you very much for your  advice. I believe that I have gotten the 
loops  working  now.

The method conductivity(), 2 million triangles
With loops expanded: 25.294999069 seconds (5253831296 bytes
With matrix operations: 29.744513256 seconds (6564763296

So now the loops  win by about 15%  of time consumed.

The key was apparently  to have  every single variable  declared. I'm not  
quite sure  how the compiler is able to pursue the tree of  interdependent 
modules to ferret out  the types, but I think it must have missed a few 
(the modules  properly declared  the types, but I'm not sure if the 
compiler  saw them). Therefore I believe  the loops were losing heavily  
compared to matrix operations.

So I went in and  declared  everything I could get my hands on. That made 
the loops competitive.
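
By "declared everything" I mean explicit type annotations on the locals used 
in the hot loops, roughly like this sketch (not the actual conductivity() 
code):

function assemble_sketch(n::Int)
    Jac::Float64 = 1.0                    # annotate the locals...
    gradN::Array{Float64,2} = zeros(3,2)
    Ke::Array{Float64,2} = zeros(3,3)
    for i = 1:n
        Ke += gradN*gradN'*Jac            # ...so the loop body is fully typed
    end
    return Ke
end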

The moral of this story is: If you can't or  won't  declare every single 
variable, don't do loops. They are likely to be a losing proposition.

The updated code is at 
https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923

Petr

PS: One particular problem was  that I was calling the function  stored  
as  type  Function. Being new to the language,  I don't know if I can 
declare the function  to have a type  that would indicate the type of the 
return value. (Is it possible?) Without this information,  the compiler is 
apparently stuck with  data whose types need to be determined  at runtime. 
Unfortunately, the compiler  does not throw fits  when it does not know the 
type, and  then  bad things can happen  as in the present example  where 
expanding the loops  was  twice as slow  as doing a matrix operation.



[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Robert,

This is very nice. Basically  it confirms that if  every single variable is 
properly declared  and the compiler can make  all its optimizations,  then 
the loops  have a chance of working.

I got a bit lost  in the follow-up discussion: I think the message chain  
might have been broken.

Petr


On Thursday, December 11, 2014 2:05:40 PM UTC-8, Robert Gates wrote:
>
> Hi Petr,
>
> I just tried the devectorized problem, although I did choose to go a bit 
> of a different route: 
> https://gist.github.com/rleegates/2d99e6251fe246b017ac   
> I am not sure that this is what you intended, however, using the 
> vectorized code as a reference, I do obtain the same results up to machine 
> epsilon.
>
> Anyways, I got:
>
> In  [4]: keTest(200_000)
> Vectorized:
> elapsed time: 0.426404203 seconds (140804768 bytes allocated, 22.42% gc 
> time)
> DeVectorized:
> elapsed time: 0.078519349 seconds (128 bytes allocated)
> DeVectorized InBounds:
> elapsed time: 0.032812311 seconds (128 bytes allocated)
> Error norm deVec: 0.0
> Error norm inBnd: 0.0
>
> On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>>
>> Acting upon the advice that replacing matrix-matrix multiplications in 
>> vectorized form with loops would help with performance, I chopped out a 
>> piece of code from my finite element solver (
>> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
>> tests with the following results:
>>
>> Vectorized code:
>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
>> time)
>>
>> Loops code:
>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
>> time) 
>>
>> SLOWER and using MORE memory?!
>>
>> I must be doing something terribly wrong.
>>
>> Petr
>>
>>

Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
I experimented with it a little bit before (mx innermost loop): does not 
make a difference.
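
For reference, the cache-friendly ordering Peter describes below (first 
subscript varied fastest) looks like this sketch:

A = zeros(3, 1000)
for j = 1:size(A,2)        # columns in the outer loop
    for i = 1:size(A,1)    # rows (first subscript) in the inner loop
        A[i,j] = i + j     # column-major friendly access pattern
    end
end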

On Thursday, December 11, 2014 9:55:46 AM UTC-8, Peter Simon wrote:
>
> One thing I noticed after a quick glance:  The ordering of your nested 
> loops is very cache-unfriendly.  Julia stores arrays in column-major order 
> (same as Fortran) so that nested loops should arrange that the first 
> subscripts of multidimensional arrays are varied most rapidly.
>
> --Peter
>
> On Thursday, December 11, 2014 9:47:33 AM UTC-8, Petr Krysl wrote:
>>
>> One more note: I conjectured that perhaps the compiler was not able to 
>> infer correctly the type of the matrices,  so I hardwired (in the actual FE 
>> code)
>>
>> Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment
>>
>> About 10% less memory used, runtime about the same.  So, no effect 
>> really. Loops are still slower than the vectorized code by a factor of two.
>>
>> Petr
>>
>>
>>

Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
One more note: I conjectured that perhaps the compiler was not able to 
infer correctly the type of the matrices,  so I hardwired (in the actual FE 
code)

Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment

About 10% less memory used, runtime about the same.  So, no effect really. 
Loops are still slower than the vectorized code by a factor of two.

Petr




Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Dear Andreas,

Thank you very much. True, I had not noticed that. I put the definitions 
of the arrays outside of the two functions so that their results could be 
compared.

What I'm trying to do here is write a simple chunk of code that would 
reproduce the conditions in the FE package.
There the vectorized code and the loops only see local variables, declared 
above the major loop.  So in my opinion the conditions then are the same as 
in the corrected fragment from the gist (only local variables).

Now I can see that the fragment for some reason did not reproduce the 
conditions from the full code.  Indeed, as you predicted the loop 
implementation is almost 10 times faster than the vectorized version. 
 However, in the FE code the loops run twice as slow and consume more 
memory.

Just in case you, Andreas, or anyone else are curious,  here is the full FE 
code that displays the weird behavior of loops being slower than vectorized 
code.
https://gist.github.com/PetrKryslUCSD/ae4a0f218fe50abe370f

Thanks again,

Petr

On Thursday, December 11, 2014 9:02:00 AM UTC-8, Andreas Noack wrote:
>
> See the comment in the gist.
>
> 2014-12-11 11:47 GMT-05:00 Petr Krysl >:
>
>> Acting upon the advice that replacing matrix-matrix multiplications in 
>> vectorized form with loops would help with performance, I chopped out a 
>> piece of code from my finite element solver (
>> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
>> tests with the following results:
>>
>> Vectorized code:
>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
>> time)
>>
>> Loops code:
>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
>> time) 
>>
>> SLOWER and using MORE memory?!
>>
>> I must be doing something terribly wrong.
>>
>> Petr
>>
>>
>

[julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Acting upon the advice that replacing matrix-matrix multiplications in 
vectorized form with loops would help with performance, I chopped out a 
piece of code from my finite element solver 
(https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some tests 
with the following results:

Vectorized code:
elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
time)

Loops code:
elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
time) 

SLOWER and using MORE memory?!

I must be doing something terribly wrong.

Petr



Re: [julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-11 Thread Petr Krysl
Valentin,

Thank you so much for this valuable information. 

I would define decent, as does the Julia community with respect to Julia 
itself, as being within a small factor of the best performance.  As Tim 
Holy demonstrated with his example, these two implementations run about a 
factor of 30 apart:

Fe += Ns * (f * Jac); #30 times slower than

for k=1:length(Fe)
  Fe[k]+=Ns[k]*(f * Jac);#this expanded loop
end
   
So that seems like a lot.  As you point out, when the choice is between two 
pains (losing readable code, or losing a lot of performance), one might be 
willing to write it like this if the loops win by a lot.

Petr

On Thursday, December 11, 2014 12:18:15 AM UTC-8, Valentin Churavy wrote:
>
> Dear Petr,
>
> I thought the point of having the array objects and the associated 
>> manipulation functions was to hide the loops while delivering decent 
>> performance...
>
>
> This assumption is very true in Matlab. Matlab spend an enormous 
> engineering amount on optimizing code that uses vectorization and array 
> objects (and it is still slow). There has been ongoing discussions over on 
> Github [1] on how to improve the current situation in Julia. One of the 
> major selling points of Julia for me is the fact that it is quite 
> transparent on which kind of optimizations it requires. I can write in a 
> very dynamic style with a lot of matrix operations like in Matlab and still 
> get decent performance or I can go in and identify with tools like @profile 
> [2] what the pains point in my program are. 
>
> The point of using vectorized operations in Matlab is that is the one 
> reliable way to get good performance in matlab, because all underlying 
> functions are written in C. In Julia most underlying functions are written 
> in Julia (note that most mathematical operations call out to C libraries. 
> No need to reinvent the wheel). There is a Julia package you can use for 
> porting over Matlab code that devectorizes you code [3], but if the 
> operation is occurring in a hot loop it still will be a good and necessary 
> optimization to unroll the vectorized code. Then you no longer get just 
> decent performance, but excellent performance instead. 
>
> Best Valentin
>
> PS:
> You can also use @which 1 .* [1] and @edit to see where the operation is 
> defined and how it is implemented. The .* operation is implemented by 
> allocating an output array and the running * element wise over the array. 
> No magic is going on there.
>
> [1] https://github.com/JuliaLang/julia/issues/8450 
> [2] http://docs.julialang.org/en/release-0.3/stdlib/profile/
> [3] https://github.com/lindahua/Devectorize.jl
>
> On Thursday, 11 December 2014 06:23:29 UTC+1, Petr Krysl wrote:
>>
>> Thanks.  Now my head is really spinning!
>>
>> See, before I posted the original question I tried expanding the loop in 
>> the actual FE code, and the code was SLOWER and was using MORE memory:
>>
>> With the expression 
>> Fe += Ns[j] * (f * Jac * w[j]); :
>> 6.223416655 seconds (1648832052 bytes
>>
>> With the expanded loop 
>> for kx=1:length(Fe) # alternative (devectorization)
>>  Fe[kx] += Ns[j][kx] * (f * Jac * w[j]); 
>> end 
>>  7.340272676 seconds (1776971604 bytes allocated,
>>
>> In addition, your argument clearly demonstrates how to avoid the 
>> temporary array for doit1(), but doit2() adds to the 3 x 1 one additional 
>> 1x1 temporary (it seems to me), yet it is about 14 times slower. Why is 
>> that?
>>
>> Finally, if the only way I can get decent performance with lines like
>>
>>  Fe += Ns[j] * (f * Jac * w[j]);  # Fe and Ns[j] arrays
>>
>> is to manually write out all the loops, that would be terrible news 
>> indeed.  Not only is that a lot of work when rewriting loads of Matlab code 
>> (with several matrices concatenated in many many expressions), but the 
>> legibility and maintainability tank. I thought the point of having the 
>> array objects and the associated manipulation functions was to hide the 
>> loops while delivering decent performance...
>>
>> Petr
>>
>>
>> On Wednesday, December 10, 2014 8:28:31 PM UTC-8, Tim Holy wrote:
>>>
>>> Multiplying two Float64s yields another Float64; most likely, this will 
>>> be 
>>> stored in the CPU's registers. In contrast,  [f]*Jac creates an array, 
>>> on each 
>>> iteration, that has to be stored on the heap. 
>>>
>>> A faster variant devectorizes: 
>>>function doit1a(N) 
>>>Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
>>>for i=1:N 
>>>

Re: [julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
Thanks.  Now my head is really spinning!

See, before I posted the original question I tried expanding the loop in 
the actual FE code, and the code was SLOWER and was using MORE memory:

With the expression 
Fe += Ns[j] * (f * Jac * w[j]); :
6.223416655 seconds (1648832052 bytes

With the expanded loop 
for kx=1:length(Fe) # alternative (devectorization)
 Fe[kx] += Ns[j][kx] * (f * Jac * w[j]); 
end 
 7.340272676 seconds (1776971604 bytes allocated,

In addition, your argument clearly demonstrates how to avoid the temporary 
array for doit1(), but doit2() adds (it seems to me) only one additional 1x1 
temporary on top of the 3x1, yet it is about 14 times slower. Why is that?

Finally, if the only way I can get decent performance with lines like

 Fe += Ns[j] * (f * Jac * w[j]);  # Fe and Ns[j] arrays

is to manually write out all the loops, that would be terrible news indeed. 
 Not only is that a lot of work when rewriting loads of Matlab code (with 
several matrices concatenated in many many expressions), but the legibility 
and maintainability tank. I thought the point of having the array objects 
and the associated manipulation functions was to hide the loops while 
delivering decent performance...

Petr


On Wednesday, December 10, 2014 8:28:31 PM UTC-8, Tim Holy wrote:
>
> Multiplying two Float64s yields another Float64; most likely, this will be 
> stored in the CPU's registers. In contrast,  [f]*Jac creates an array, on 
> each 
> iteration, that has to be stored on the heap. 
>
> A faster variant devectorizes: 
>function doit1a(N) 
>Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
>for i=1:N 
>tmp = f*Jac 
>for j = 1:length(Fe) 
>Fe[j] += Ns[j]*tmp 
>end 
>end 
>Fe 
>end 
>
> julia> @time doit1(N); 
> elapsed time: 0.810270399 seconds (384000320 bytes allocated, 61.23% gc 
> time) 
>
> julia> @time doit1a(N); 
> elapsed time: 0.022118726 seconds (320 bytes allocated) 
>
> Note the tiny allocations in the second case. 
>
> --Tim 
>
>
> On Wednesday, December 10, 2014 07:54:00 PM Petr Krysl wrote: 
> > The code is really short: 
> > 
> > N=200 
> > 
> > function doit1(N) 
> > Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
> > for i=1:N 
> > Fe += Ns *  (f * Jac); 
> > end 
> > Fe 
> > end 
> > 
> > function doit2(N) 
> > Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
> > for i=1:N 
> > Fe += Ns .*  ([f] * Jac); 
> > end 
> > Fe 
> > end 
> > 
> > function doit3(N) 
> > Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
> > fs=[1.0] 
> > for i=1:N 
> > fs= ([f] * Jac );   Fe += Ns .*  fs; 
> > end 
> > Fe 
> > end 
> > 
> > function doit4(N) 
> > Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0; 
> > fs=[1.0] 
> > for i=1:N   # 
> > fs= [f]; fs *= Jac;   Fe += Ns .*  fs; 
> > end 
> > Fe 
> > end 
> > # 
> > @time doit1(N) 
> > @time doit2(N) 
> > @time doit3(N) 
> > @time doit4(N) 
> > 
> > Here are the measurements on my machine: 
> > 
> > elapsed time: 0.461173619 seconds (384129944 bytes allocated, 44.45% gc 
> > time) 
> > elapsed time: 6.905249901 seconds (1904151036 bytes allocated, 12.84% gc 
> > time) 
> > elapsed time: 6.862259146 seconds (1904166072 bytes allocated, 13.23% gc 
> > time) 
> > elapsed time: 6.842678542 seconds (1904166256 bytes allocated, 12.81% gc 
> > time) 
> > 
> > I'll be grateful for any pointers as to how to structure the code so 
> that 
> > the array operations not incur this horrendous penalty. 
> > 
> > Petr 
> > 
> > On Wednesday, December 10, 2014 7:06:43 PM UTC-8, Tim Holy wrote: 
> > > Can you post short but complete code examples in a gist? 
> > > https://gist.github.com/ 
> > > That would make it easier to follow than chasing code examples across 
> long 
> > > email chains. 
> > > 
> > > --Tim 
> > > 
> > > On Wednesday, December 10, 2014 02:35:38 PM Petr Krysl wrote: 
> > > > I don't know if this is correct, but here is a guess: 
> > > > 
> > > > Option 3 still requires a temp array ( to calculate the result of 
> the 
> > > > paren fs= ([f] * Jac * w[j]); ), and option 4 eliminates that temp. 
> The 
> > > > cost of the temp over the 2 millio

Re: [julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
https://gist.github.com/anonymous/054f6fd269f97a4442db

On Wednesday, December 10, 2014 7:06:43 PM UTC-8, Tim Holy wrote:
>
> Can you post short but complete code examples in a gist? 
> https://gist.github.com/ 
> That would make it easier to follow than chasing code examples across long 
> email chains. 
>
> --Tim 
>
>
> On Wednesday, December 10, 2014 02:35:38 PM Petr Krysl wrote: 
> > I don't know if this is correct, but here is a guess: 
> > 
> > Option 3 still requires a temp array ( to calculate the result of the 
> > paren fs= ([f] * Jac * w[j]); ), and option 4 eliminates that temp. The 
> > cost of the temp over the 2 million loops is ~200MB and 0.6 sec CPU 
> time. 
> > So WHY is the difference between 1 and 2 so HUUUGE? 
> > 
> > I think this calls for someone who wrote the compiler. Guys? 
> > 
> > Thanks a bunch, 
> > 
> > P 
> > 
> > On Wednesday, December 10, 2014 12:51:26 PM UTC-8, Petr Krysl wrote: 
> > > Actually: option (4) was also tested: 
> > > # 16.333766821 seconds (3008899660 bytes 
> > > fs[1]= f; fs *= (Jac * w[j]); 
> > > 
> > > Fe += Ns[j] .*  fs; 
> > > 
> > > So, allocation of memory was reduced somewhat, runtime not so much. 
> > > 
> > > On Wednesday, December 10, 2014 12:45:20 PM UTC-8, Petr Krysl wrote: 
> > >> Well,  temporary array was also on my mind.  However,  things are I 
> > >> believe a little bit more complicated. 
> > >> 
> > >> Here is the code with three timed options.  As you can see, the first 
> two 
> > >> options are the fast one (multiplication with a scalar) and the slow 
> one 
> > >> (multiplication with a one by one matrix).   In the third option I 
> tried 
> > >> to 
> > >> avoid the creation of an  ad hoc temporary by allocating a variable 
> > >> outside 
> > >> of the loop.  The effect unfortunately is nil. 
> > >> 
> > >> fs=[0.0]# Used only for option (3) 
> > >> # Now loop over all fes in the block 
> > >> for i=1:size(conns,1) 
> > >> 
> > >> ... 
> > >> for j=1:npts 
> > >> 
> > >>... 
> > >>   
> > >>   # Option (1): 7.193767019 seconds (1648850568 bytes 
> > >>   # Fe += Ns[j] *  (f * Jac * w[j]); # 
> > >>   
> > >># Option (2): 17.301214583 seconds (3244458368 bytes 
> > >>   
> > >>   #Fe += Ns[j] .*  ([f] * Jac * w[j]); # 
> > >>   
> > >># Option (3): 16.943314075 seconds (3232879120 
> > >>   
> > >>   fs= ([f] * Jac * w[j]);   Fe += Ns[j] .*  fs; 
> > >> 
> > >> end 
> > >> ... 
> > >> 
> > >> end 
> > >> 
> > >> What do you think?   Why is the code still getting hit with a big 
> > >> performance/memory penalty? 
> > >> 
> > >> Thanks, 
> > >> 
> > >> Petr 
> > >> 
> > >> On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy wrote: 
> > >>> I would think that when f is a 1x1 matrix Julia is allocating a new 
> 1x1 
> > >>> matrix to store the result. If it is a scalar that allocation can be 
> > >>> skipped. When this part of the code is now in a hot loop it might 
> happen 
> > >>> that you allocate millions of very small short-lived objects and 
> that 
> > >>> taxes 
> > >>> the GC quite a lot. 
>
>

Re: [julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
The code is really short:

N=200

function doit1(N)
Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0;
for i=1:N
Fe += Ns *  (f * Jac); 
end
Fe
end

function doit2(N)
Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0;
for i=1:N
Fe += Ns .*  ([f] * Jac); 
end
Fe
end

function doit3(N)
Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0;
fs=[1.0]
for i=1:N
fs= ([f] * Jac );   Fe += Ns .*  fs; 
end
Fe
end

function doit4(N)
Fe=zeros(3,1);Ns=zeros(3,1)+1.0;f=-6.0;Jac=1.0;
fs=[1.0]
for i=1:N   # 
fs= [f]; fs *= Jac;   Fe += Ns .*  fs; 
end
Fe
end
# 
@time doit1(N)
@time doit2(N)
@time doit3(N)
@time doit4(N)

Here are the measurements on my machine:

elapsed time: 0.461173619 seconds (384129944 bytes allocated, 44.45% gc 
time)
elapsed time: 6.905249901 seconds (1904151036 bytes allocated, 12.84% gc 
time)
elapsed time: 6.862259146 seconds (1904166072 bytes allocated, 13.23% gc 
time)
elapsed time: 6.842678542 seconds (1904166256 bytes allocated, 12.81% gc 
time)

I'll be grateful for any pointers as to how to structure the code so that 
the array operations not incur this horrendous penalty.

Petr

On Wednesday, December 10, 2014 7:06:43 PM UTC-8, Tim Holy wrote:
>
> Can you post short but complete code examples in a gist? 
> https://gist.github.com/ 
> That would make it easier to follow than chasing code examples across long 
> email chains. 
>
> --Tim 
>
>
> On Wednesday, December 10, 2014 02:35:38 PM Petr Krysl wrote: 
> > I don't know if this is correct, but here is a guess: 
> > 
> > Option 3 still requires a temp array ( to calculate the result of the 
> > paren fs= ([f] * Jac * w[j]); ), and option 4 eliminates that temp. The 
> > cost of the temp over the 2 million loops is ~200MB and 0.6 sec CPU 
> time. 
> > So WHY is the difference between 1 and 2 so HUUUGE? 
> > 
> > I think this calls for someone who wrote the compiler. Guys? 
> > 
> > Thanks a bunch, 
> > 
> > P 
> > 
> > On Wednesday, December 10, 2014 12:51:26 PM UTC-8, Petr Krysl wrote: 
> > > Actually: option (4) was also tested: 
> > > # 16.333766821 seconds (3008899660 bytes 
> > > fs[1]= f; fs *= (Jac * w[j]); 
> > > 
> > > Fe += Ns[j] .*  fs; 
> > > 
> > > So, allocation of memory was reduced somewhat, runtime not so much. 
> > > 
> > > On Wednesday, December 10, 2014 12:45:20 PM UTC-8, Petr Krysl wrote: 
> > >> Well,  temporary array was also on my mind.  However,  things are I 
> > >> believe a little bit more complicated. 
> > >> 
> > >> Here is the code with three timed options.  As you can see, the first 
> two 
> > >> options are the fast one (multiplication with a scalar) and the slow 
> one 
> > >> (multiplication with a one by one matrix).   In the third option I 
> tried 
> > >> to 
> > >> avoid the creation of an  ad hoc temporary by allocating a variable 
> > >> outside 
> > >> of the loop.  The effect unfortunately is nil. 
> > >> 
> > >> fs=[0.0]# Used only for option (3) 
> > >> # Now loop over all fes in the block 
> > >> for i=1:size(conns,1) 
> > >> 
> > >> ... 
> > >> for j=1:npts 
> > >> 
> > >>... 
> > >>   
> > >>   # Option (1): 7.193767019 seconds (1648850568 bytes 
> > >>   # Fe += Ns[j] *  (f * Jac * w[j]); # 
> > >>   
> > >># Option (2): 17.301214583 seconds (3244458368 bytes 
> > >>   
> > >>   #Fe += Ns[j] .*  ([f] * Jac * w[j]); # 
> > >>   
> > >># Option (3): 16.943314075 seconds (3232879120 
> > >>   
> > >>   fs= ([f] * Jac * w[j]);   Fe += Ns[j] .*  fs; 
> > >> 
> > >> end 
> > >> ... 
> > >> 
> > >> end 
> > >> 
> > >> What do you think?   Why is the code still getting hit with a big 
> > >> performance/memory penalty? 
> > >> 
> > >> Thanks, 
> > >> 
> > >> Petr 
> > >> 
> > >> On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy wrote: 
> > >>> I would think that when f is a 1x1 matrix Julia is allocating a new 
> 1x1 
> > >>> matrix to store the result. If it is a scalar that allocation can be 
> > >>> skipped. When this part of the code is now in a hot loop it might 
> happen 
> > >>> that you allocate millions of very small short-lived objects and 
> that 
> > >>> taxes 
> > >>> the GC quite a lot. 
>
>

[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
I don't know if this is correct, but here is a guess:

Option 3 still requires a temp array ( to calculate the result of the 
paren fs= ([f] * Jac * w[j]); ), and option 4 eliminates that temp. The 
cost of the temp over the 2 million loops is ~200MB and 0.6 sec CPU time. 
So WHY is the difference between 1 and 2 so HUUUGE?

I think this calls for someone who wrote the compiler. Guys?

Thanks a bunch,

P

On Wednesday, December 10, 2014 12:51:26 PM UTC-8, Petr Krysl wrote:
>
> Actually: option (4) was also tested:
> # 16.333766821 seconds (3008899660 bytes
> fs[1]= f; fs *= (Jac * w[j]); 
> Fe += Ns[j] .*  fs; 
>
> So, allocation of memory was reduced somewhat, runtime not so much.
>
> On Wednesday, December 10, 2014 12:45:20 PM UTC-8, Petr Krysl wrote:
>
>> Well,  temporary array was also on my mind.  However,  things are I 
>> believe a little bit more complicated.
>>
>> Here is the code with three timed options.  As you can see, the first two 
>> options are the fast one (multiplication with a scalar) and the slow one 
>> (multiplication with a one by one matrix).   In the third option I tried to 
>> avoid the creation of an  ad hoc temporary by allocating a variable outside 
>> of the loop.  The effect unfortunately is nil.
>>
>> fs=[0.0]# Used only for option (3)
>> # Now loop over all fes in the block
>> for i=1:size(conns,1)
>> ...
>> for j=1:npts
>>...
>>   # Option (1): 7.193767019 seconds (1648850568 bytes
>>   # Fe += Ns[j] *  (f * Jac * w[j]); #
>># Option (2): 17.301214583 seconds (3244458368 bytes
>>   #Fe += Ns[j] .*  ([f] * Jac * w[j]); #
>># Option (3): 16.943314075 seconds (3232879120
>>   fs= ([f] * Jac * w[j]);   Fe += Ns[j] .*  fs; 
>> end
>> ...
>> end
>>
>> What do you think?   Why is the code still getting hit with a big 
>> performance/memory penalty?
>>
>> Thanks,
>>
>> Petr
>>
>> On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy wrote:
>>
>>> I would think that when f is a 1x1 matrix Julia is allocating a new 1x1 
>>> matrix to store the result. If it is a scalar that allocation can be 
>>> skipped. When this part of the code is now in a hot loop it might happen 
>>> that you allocate millions of very small short-lived objects and that taxes 
>>> the GC quite a lot.  
>>>
>>>
>>>>

[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
Actually: option (4) was also tested:
# 16.333766821 seconds (3008899660 bytes
fs[1]= f; fs *= (Jac * w[j]); 
Fe += Ns[j] .*  fs; 

So, allocation of memory was reduced somewhat, runtime not so much.

On Wednesday, December 10, 2014 12:45:20 PM UTC-8, Petr Krysl wrote:

> Well,  temporary array was also on my mind.  However,  things are I 
> believe a little bit more complicated.
>
> Here is the code with three timed options.  As you can see, the first two 
> options are the fast one (multiplication with a scalar) and the slow one 
> (multiplication with a one by one matrix).   In the third option I tried to 
> avoid the creation of an  ad hoc temporary by allocating a variable outside 
> of the loop.  The effect unfortunately is nil.
>
> fs=[0.0]# Used only for option (3)
> # Now loop over all fes in the block
> for i=1:size(conns,1)
> ...
> for j=1:npts
>...
>   # Option (1): 7.193767019 seconds (1648850568 bytes
>   # Fe += Ns[j] *  (f * Jac * w[j]); #
># Option (2): 17.301214583 seconds (3244458368 bytes
>   #Fe += Ns[j] .*  ([f] * Jac * w[j]); #
># Option (3): 16.943314075 seconds (3232879120
>   fs= ([f] * Jac * w[j]);   Fe += Ns[j] .*  fs; 
> end
> ...
> end
>
> What do you think?   Why is the code still getting hit with a big 
> performance/memory penalty?
>
> Thanks,
>
> Petr
>
> On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy wrote:
>
>> I would think that when f is a 1x1 matrix Julia is allocating a new 1x1 
>> matrix to store the result. If it is a scalar that allocation can be 
>> skipped. When this part of the code is now in a hot loop it might happen 
>> that you allocate millions of very small short-lived objects and that taxes 
>> the GC quite a lot.  
>>
>>
>>>

[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-10 Thread Petr Krysl
Well, a temporary array was also on my mind.  However, things are, I 
believe, a little bit more complicated.

Here is the code with three timed options.  As you can see, the first two 
options are the fast one (multiplication with a scalar) and the slow one 
(multiplication with a one by one matrix).   In the third option I tried to 
avoid the creation of an  ad hoc temporary by allocating a variable outside 
of the loop.  The effect unfortunately is nil.

fs=[0.0]# Used only for option (3)
# Now loop over all fes in the block
for i=1:size(conns,1)
...
for j=1:npts
   ...
  # Option (1): 7.193767019 seconds (1648850568 bytes
  # Fe += Ns[j] *  (f * Jac * w[j]); #
   # Option (2): 17.301214583 seconds (3244458368 bytes
  #Fe += Ns[j] .*  ([f] * Jac * w[j]); #
   # Option (3): 16.943314075 seconds (3232879120
  fs= ([f] * Jac * w[j]);   Fe += Ns[j] .*  fs; 
end
...
end

What do you think?   Why is the code still getting hit with a big 
performance/memory penalty?

Thanks,

Petr

On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy wrote:

> I would think that when f is a 1x1 matrix Julia is allocating a new 1x1 
> matrix to store the result. If it is a scalar that allocation can be 
> skipped. When this part of the code is now in a hot loop it might happen 
> that you allocate millions of very small short-lived objects and that taxes 
> the GC quite a lot.  
>
>
>>

[julia-users] THANKS to Julia core developers!

2014-12-08 Thread Petr Krysl
I've been playing in Julia for the past week or so, but already the results 
are convincing.  This language is GREAT.   I've coded hundreds of thousands 
of lines in Fortran, C, C++, Matlab, and this is the first language that 
feels good. And it is precisely what I envision for my project.

So, THANKS! 

Petr Krysl


[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Petr Krysl
Hi Robert,

At this point I am really impressed with Julia.  Consequently, I am tempted 
to rewrite my Matlab toolkit FinEALE 
(https://github.com/PetrKryslUCSD/FinEALE) in Julia and to implement 
further research ideas in this new environment. 

What did you have in mind?

Petr
pkr...@ucsd.edu

On Monday, December 8, 2014 6:41:02 PM UTC-8, Robert Gates wrote:
>
> Hi Petr,
>
> just on a side-note: are you planning on implementing a complete FE 
> package in Julia? In that case we could pool our efforts.
>
>
>

[julia-users] Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Petr Krysl
I posted a message yesterday about a simple port of fragments of a finite 
element code to solve the heat conduction equation.  The Julia port was 
compared to the original Matlab toolkit FinEALE and the Julia code 
presented last year by Amuthan:

https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl%7Csort:relevance/julia-users/3tTljDSQ6cs/-8UCPnNmzn4J

I have now speeded the code up some more, so that we have the table (on my 
laptop, i7, 16 gig of memory):

Amuthan's 29 seconds
J FinEALE 58 seconds
FinEALE 810 seconds

So, given that Amuthan reports his code to be slower than a general-purpose 
C++ code by a factor of around 1.36, J FinEALE is presumably slower than an 
equivalent FE solver coded in C++ by a factor of about 2.7.  So far so good.

The curious thing is that @time reports huge amounts of memory allocated 
(and something like 10% of the time spent in GC).
One particular source of wild memory allocation was this line (executed in 
this case 2 million times)

Fe += Ns[j] .*  (f * Jac * w[j]);

where
Fe = 3x1 matrix
Ns[j] = 3x1 matrix
f = 1x1 matrix
Jac, w[j] = scalars
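
In isolation, with those shapes, the line looks like this sketch:

Fe  = zeros(3,1)                 # 3x1
Nsj = zeros(3,1) + 1.0           # 3x1, standing in for Ns[j]
f   = reshape([-6.0], 1, 1)      # 1x1 matrix, not a scalar
Jac = 1.0; wj = 1.0
Fe += Nsj .* (f * Jac * wj)      # every term on the right allocates a fresh array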

The cost of the operation that encloses this line (among many others):
19.835263928 seconds (4162094480 bytes allocated, 16.79% gc time)

Changing the one by one matrix f into a scalar (and replacing .*)

Fe += Ns[j] *  (f * Jac * w[j]);

changed the cost quite drastically:
10.105620394 seconds (2738120272 bytes allocated, 21.33% gc time)
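
Concretely, "changing f into a scalar" just means pulling out the single 
entry before the loop; continuing the sketch above:

fscalar = f[1,1]                    # a Float64 now, not a 1x1 matrix
Fe += Nsj * (fscalar * Jac * wj)    # the parenthesized product is a plain scalar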

Any ideas, you Julian wizards?

Thanks,

Petr


Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-08 Thread Petr Krysl
Thanks!

On Monday, December 8, 2014 2:09:27 AM UTC-8, Mauro wrote:
>
> There was one talk about FEM at JuliaCon: 
> https://www.youtube.com/watch?v=8wx6apk3xQs 
>
> On Mon, 2014-12-08 at 04:48, Petr Krysl > 
> wrote: 
> > Bit more optimization: 
> > 
> > Amuthan's code: 29 seconds 
> > J FinEALE: 54 seconds 
> > Matlab FinEALE: 810 seconds 
> > 
> > Petr 
> > 
> > 
> > On Sunday, December 7, 2014 1:45:28 PM UTC-8, Petr Krysl wrote: 
> >> 
> >> Sorry: minor correction.  I mistakenly timed also output to a VTK file 
> for 
> >> paraview postprocessing.  This being an ASCII file, it takes a few 
> seconds. 
> >> 
> >> With J FinEALE the timing is now 78 seconds. 
> >> 
> >> Petr 
> >> 
> >> On Sunday, December 7, 2014 10:21:51 AM UTC-8, Petr Krysl wrote: 
> >>> 
> >>> 
> >>> Amuthan's code: 29 seconds 
> >>> 
> >>> J FinEALE: 86 seconds 
> >>> 
> >>> FinEALE: 810 seconds 
> >>> 
> >>> 
> >>> 
>
>

Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl
Bit more optimization:

Amuthan's code: 29 seconds
J FinEALE: 54 seconds
Matlab FinEALE: 810 seconds

Petr


On Sunday, December 7, 2014 1:45:28 PM UTC-8, Petr Krysl wrote:
>
> Sorry: minor correction.  I mistakenly timed also output to a VTK file for 
> paraview postprocessing.  This being an ASCII file, it takes a few seconds.
>
> With J FinEALE the timing is now 78 seconds.
>
> Petr
>
> On Sunday, December 7, 2014 10:21:51 AM UTC-8, Petr Krysl wrote:
>>
>>
>> Amuthan's code: 29 seconds
>>
>> J FinEALE: 86 seconds
>>
>> FinEALE: 810 seconds
>>
>>
>>

Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl
Sorry: minor correction.  I mistakenly timed also output to a VTK file for 
paraview postprocessing.  This being an ASCII file, it takes a few seconds.

With J FinEALE the timing is now 78 seconds.

Petr

On Sunday, December 7, 2014 10:21:51 AM UTC-8, Petr Krysl wrote:
>
>
> Amuthan's code: 29 seconds
>
> J FinEALE: 86 seconds
>
> FinEALE: 810 seconds
>
>
>

Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl
Correct. That is the blog cited by the OP in this thread.

On Sunday, December 7, 2014 1:09:36 PM UTC-8, Rob J Goedman wrote:
>
> Petr,
>
> Are you referring to 
> http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
>  ?
>
> Or is this from another blog?
>
> Rob J. Goedman
> goe...@mac.com 
>
>
>
>
>  
> On Dec 7, 2014, at 10:21 AM, Petr Krysl > 
> wrote:
>
> Hello everybody,
>
>
> I found Amuthan 's blog a while back, but only about two weeks ago I found 
> the time to look seriously at Julia. What I found was very encouraging.
>
>
> For a variety of teaching and research purposes I maintain a Matlab FEA 
> toolkit called FinEALE. It is about 80,000 lines of code with all the 
> examples and tutorials. In the past week I rewrote the bits and pieces that 
> allow me to run a comparison with Amuthan 's code. Here are the results:
>
>
> For 1000 x 1000 grid (2 million triangles):
>
>
> Amuthan's code: 29 seconds
>
>
> J FinEALE: 86 seconds
>
>
> FinEALE: 810 seconds
>
>
> Mind you, we are not doing the same thing in these codes. FinEALE and J 
> FinEALE implement code to solve the heat conduction problem with 
> arbitrarily anisotropic materials. The calculation of the FE space is also 
> not vectorized as in Amuthan's code. The code is written to be legible and 
> general: the same code that calculates the matrices and vectors for a 
> triangle mesh would also work for quadrilaterals, linear and quadratic, 
> both in the pure 2-D and the axially symmetric set up, and tetrahedral and 
> hexahedral elements in 3-D. There is obviously a price to pay for all this 
> generality.
>
>
> Concerning Amuthan 's comparison with the two compiled FEA codes: it 
> really depends how the problem is set up for those codes. I believe that 
> Fenics has a form compiler which can spit out an optimized code that in 
> this case would be entirely equivalent to the simplified calculation 
> (isotropic material with conductivity equal to 1.0), and linear triangles. 
> I'm not sure about freefem++, but since it has a domain-specific language, 
> it can also presumably optimize its operations. So in my opinion it is 
> rather impressive that Amuthan 's code in Julia can do so well.
>
>
> Petr
>  
>
>
>

Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl


Hello everybody,


I found Amuthan 's blog a while back, but only about two weeks ago I found 
the time to look seriously at Julia. What I found was very encouraging.


For a variety of teaching and research purposes I maintain a Matlab FEA 
toolkit called FinEALE. It is about 80,000 lines of code with all the 
examples and tutorials. In the past week I rewrote the bits and pieces that 
allow me to run a comparison with Amuthan 's code. Here are the results:


For 1000 x 1000 grid (2 million triangles):


Amuthan's code: 29 seconds


J FinEALE: 86 seconds


FinEALE: 810 seconds


Mind you, we are not doing the same thing in these codes. FinEALE and J 
FinEALE implement code to solve the heat conduction problem with 
arbitrarily anisotropic materials. The calculation of the FE space is also 
not vectorized as in Amuthan's code. The code is written to be legible and 
general: the same code that calculates the matrices and vectors for a 
triangle mesh would also work for quadrilaterals, linear and quadratic, 
both in the pure 2-D and the axially symmetric set up, and tetrahedral and 
hexahedral elements in 3-D. There is obviously a price to pay for all this 
generality.


Concerning Amuthan 's comparison with the two compiled FEA codes: it really 
depends how the problem is set up for those codes. I believe that Fenics 
has a form compiler which can spit out an optimized code that in this case 
would be entirely equivalent to the simplified calculation (isotropic 
material with conductivity equal to 1.0), and linear triangles. I'm not 
sure about freefem++, but since it has a domain-specific language, it can 
also presumably optimize its operations. So in my opinion it is rather 
impressive that Amuthan 's code in Julia can do so well.


Petr
 


[julia-users] Re: Is everything visible within a module by default?

2014-12-06 Thread Petr Krysl
I see, I missed the bit about the non-exported function p.

But that kind of worries me (am I the only one?).   If I can get at 
anything within a module, nothing is ever protected?   I cannot have a 
private function, shared only by the functions within the module?

In the same vein: there are no private fields within types.

So I guess the Julia way of doing things is TRUST.   There is no 
encapsulation apparently.

 Am I wrong?
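
To be concrete, this is the situation I mean (a minimal sketch):

module M
p() = 1          # not exported; there is no export statement at all
end

M.p()            # works anyway: the qualified name reaches it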

Petr

>


[julia-users] Re: Is everything visible within a module by default?

2014-12-06 Thread Petr Krysl
Yes, but I haven't exported anything (with "export").

On Saturday, December 6, 2014 3:40:04 PM UTC-8, Patrick O'Leary wrote:
>
> Yes. Exporting the name causes it to be available in the global namespace 
> with "using." All names are visible in the fully qualified namespace.
>
> On Saturday, December 6, 2014 5:34:32 PM UTC-6, Petr Krysl wrote:
>>
>>  When I qualify the names of functions or types within a module that I am 
>> "using" with the name of the module, as for instance 
>>
>> fens1=FENodeSetModule.FENodeSet(xyz=[X[:,1] X[:,2]]),
>>
>>  I can access everything, even though the module does not export anything.
>>
>> Is that as expected?
>>
>> Thanks,
>>
>> Petr
>>
>
